Test Report: KVM_Linux_crio 19667

39f19baf3a7e1c810682dda0eb22abd909c6f2ab:2024-09-18:36273

Failed tests (31/312)

Order  Failed test  Duration (s)
33 TestAddons/parallel/Registry 74.44
34 TestAddons/parallel/Ingress 151.88
36 TestAddons/parallel/MetricsServer 327.28
164 TestMultiControlPlane/serial/StopSecondaryNode 141.44
165 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 5.65
166 TestMultiControlPlane/serial/RestartSecondaryNode 6.48
168 TestMultiControlPlane/serial/RestartClusterKeepsNodes 373.91
171 TestMultiControlPlane/serial/StopCluster 141.84
231 TestMultiNode/serial/RestartKeepsNodes 332.17
233 TestMultiNode/serial/StopMultiNode 144.64
240 TestPreload 170.73
248 TestKubernetesUpgrade 408.39
284 TestStartStop/group/old-k8s-version/serial/FirstStart 290.74
285 TestPause/serial/SecondStartNoReconfiguration 91.86
294 TestStartStop/group/no-preload/serial/Stop 138.96
298 TestStartStop/group/default-k8s-diff-port/serial/Stop 139.17
300 TestStartStop/group/embed-certs/serial/Stop 139.03
301 TestStartStop/group/old-k8s-version/serial/DeployApp 0.49
302 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 97.53
303 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.38
305 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.38
306 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.38
311 TestStartStop/group/old-k8s-version/serial/SecondStart 739.31
312 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 544.23
313 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 544.24
314 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 544.13
315 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 543.52
316 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 467.25
317 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 416.94
318 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 346.35
319 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 126.4
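Any entry in the table above can be re-run on its own by selecting the failing test name with the Go test runner. The following is a minimal sketch from a minikube source checkout, assuming out/minikube-linux-amd64 has already been built for the job; the integration build tag and the -minikube-start-args test flag are recalled from the minikube integration suite and are assumptions here, so the exact flags may differ from what this job invoked:

	# re-run a single failed integration test against the kvm2/crio combination (sketch)
	go test -tags=integration -timeout 60m ./test/integration \
	  -run 'TestAddons/parallel/Registry' \
	  -args -minikube-start-args='--driver=kvm2 --container-runtime=crio'
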
TestAddons/parallel/Registry (74.44s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 2.337933ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-96wcm" [170420dc-8ea6-4aba-99c1-9f61d4449fff] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.004144858s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-jwxzj" [5ee21740-39f3-406e-bb72-65a28c5b5dde] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003952118s
addons_test.go:342: (dbg) Run:  kubectl --context addons-815929 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-815929 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Non-zero exit: kubectl --context addons-815929 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.08204933s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:349: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-815929 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:353: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p addons-815929 ip
2024/09/18 19:50:25 [DEBUG] GET http://192.168.39.158:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p addons-815929 addons disable registry --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-815929 -n addons-815929
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-815929 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-815929 logs -n 25: (1.40436442s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | --all                                                                                       | minikube             | jenkins | v1.34.0 | 18 Sep 24 19:38 UTC | 18 Sep 24 19:38 UTC |
	| delete  | -p download-only-228031                                                                     | download-only-228031 | jenkins | v1.34.0 | 18 Sep 24 19:38 UTC | 18 Sep 24 19:38 UTC |
	| start   | -o=json --download-only                                                                     | download-only-226542 | jenkins | v1.34.0 | 18 Sep 24 19:38 UTC |                     |
	|         | -p download-only-226542                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                                                                |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.34.0 | 18 Sep 24 19:38 UTC | 18 Sep 24 19:38 UTC |
	| delete  | -p download-only-226542                                                                     | download-only-226542 | jenkins | v1.34.0 | 18 Sep 24 19:38 UTC | 18 Sep 24 19:38 UTC |
	| delete  | -p download-only-228031                                                                     | download-only-228031 | jenkins | v1.34.0 | 18 Sep 24 19:38 UTC | 18 Sep 24 19:38 UTC |
	| delete  | -p download-only-226542                                                                     | download-only-226542 | jenkins | v1.34.0 | 18 Sep 24 19:38 UTC | 18 Sep 24 19:38 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-930383 | jenkins | v1.34.0 | 18 Sep 24 19:38 UTC |                     |
	|         | binary-mirror-930383                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:32853                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-930383                                                                     | binary-mirror-930383 | jenkins | v1.34.0 | 18 Sep 24 19:38 UTC | 18 Sep 24 19:38 UTC |
	| addons  | enable dashboard -p                                                                         | addons-815929        | jenkins | v1.34.0 | 18 Sep 24 19:38 UTC |                     |
	|         | addons-815929                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-815929        | jenkins | v1.34.0 | 18 Sep 24 19:38 UTC |                     |
	|         | addons-815929                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-815929 --wait=true                                                                | addons-815929        | jenkins | v1.34.0 | 18 Sep 24 19:38 UTC | 18 Sep 24 19:41 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-815929 ssh cat                                                                       | addons-815929        | jenkins | v1.34.0 | 18 Sep 24 19:49 UTC | 18 Sep 24 19:49 UTC |
	|         | /opt/local-path-provisioner/pvc-640ef54b-981f-4e43-8493-c1fa2c048453_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-815929 addons disable                                                                | addons-815929        | jenkins | v1.34.0 | 18 Sep 24 19:49 UTC | 18 Sep 24 19:49 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-815929 addons disable                                                                | addons-815929        | jenkins | v1.34.0 | 18 Sep 24 19:49 UTC | 18 Sep 24 19:49 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-815929        | jenkins | v1.34.0 | 18 Sep 24 19:49 UTC | 18 Sep 24 19:49 UTC |
	|         | -p addons-815929                                                                            |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-815929        | jenkins | v1.34.0 | 18 Sep 24 19:49 UTC | 18 Sep 24 19:49 UTC |
	|         | addons-815929                                                                               |                      |         |         |                     |                     |
	| addons  | addons-815929 addons disable                                                                | addons-815929        | jenkins | v1.34.0 | 18 Sep 24 19:50 UTC | 18 Sep 24 19:50 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-815929        | jenkins | v1.34.0 | 18 Sep 24 19:50 UTC | 18 Sep 24 19:50 UTC |
	|         | addons-815929                                                                               |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-815929        | jenkins | v1.34.0 | 18 Sep 24 19:50 UTC | 18 Sep 24 19:50 UTC |
	|         | -p addons-815929                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-815929 addons                                                                        | addons-815929        | jenkins | v1.34.0 | 18 Sep 24 19:50 UTC | 18 Sep 24 19:50 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-815929 addons                                                                        | addons-815929        | jenkins | v1.34.0 | 18 Sep 24 19:50 UTC | 18 Sep 24 19:50 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-815929 ip                                                                            | addons-815929        | jenkins | v1.34.0 | 18 Sep 24 19:50 UTC | 18 Sep 24 19:50 UTC |
	| addons  | addons-815929 addons disable                                                                | addons-815929        | jenkins | v1.34.0 | 18 Sep 24 19:50 UTC | 18 Sep 24 19:50 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-815929 addons disable                                                                | addons-815929        | jenkins | v1.34.0 | 18 Sep 24 19:50 UTC |                     |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/18 19:38:53
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0918 19:38:53.118706   15635 out.go:345] Setting OutFile to fd 1 ...
	I0918 19:38:53.118965   15635 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 19:38:53.118975   15635 out.go:358] Setting ErrFile to fd 2...
	I0918 19:38:53.118980   15635 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 19:38:53.119217   15635 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19667-7671/.minikube/bin
	I0918 19:38:53.119878   15635 out.go:352] Setting JSON to false
	I0918 19:38:53.120737   15635 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1277,"bootTime":1726687056,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0918 19:38:53.120834   15635 start.go:139] virtualization: kvm guest
	I0918 19:38:53.123148   15635 out.go:177] * [addons-815929] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0918 19:38:53.124482   15635 notify.go:220] Checking for updates...
	I0918 19:38:53.124492   15635 out.go:177]   - MINIKUBE_LOCATION=19667
	I0918 19:38:53.125673   15635 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 19:38:53.126877   15635 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19667-7671/kubeconfig
	I0918 19:38:53.127987   15635 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19667-7671/.minikube
	I0918 19:38:53.129021   15635 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0918 19:38:53.130051   15635 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0918 19:38:53.131293   15635 driver.go:394] Setting default libvirt URI to qemu:///system
	I0918 19:38:53.163239   15635 out.go:177] * Using the kvm2 driver based on user configuration
	I0918 19:38:53.164302   15635 start.go:297] selected driver: kvm2
	I0918 19:38:53.164318   15635 start.go:901] validating driver "kvm2" against <nil>
	I0918 19:38:53.164342   15635 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 19:38:53.165066   15635 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 19:38:53.165151   15635 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19667-7671/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0918 19:38:53.179993   15635 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0918 19:38:53.180067   15635 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0918 19:38:53.180362   15635 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0918 19:38:53.180395   15635 cni.go:84] Creating CNI manager for ""
	I0918 19:38:53.180443   15635 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0918 19:38:53.180452   15635 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0918 19:38:53.180510   15635 start.go:340] cluster config:
	{Name:addons-815929 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-815929 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 19:38:53.180624   15635 iso.go:125] acquiring lock: {Name:mkbcf4d84f3091567a39802cd5e773cb10b8c545 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 19:38:53.182868   15635 out.go:177] * Starting "addons-815929" primary control-plane node in "addons-815929" cluster
	I0918 19:38:53.183982   15635 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0918 19:38:53.184039   15635 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0918 19:38:53.184052   15635 cache.go:56] Caching tarball of preloaded images
	I0918 19:38:53.184131   15635 preload.go:172] Found /home/jenkins/minikube-integration/19667-7671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0918 19:38:53.184144   15635 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0918 19:38:53.184489   15635 profile.go:143] Saving config to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/addons-815929/config.json ...
	I0918 19:38:53.184512   15635 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/addons-815929/config.json: {Name:mk126f196443338ecc21176132e0fd9e3cc4ae5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 19:38:53.184666   15635 start.go:360] acquireMachinesLock for addons-815929: {Name:mk1b0d68a0b7d9d4bc8204d5d81cb9eb77222526 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 19:38:53.184723   15635 start.go:364] duration metric: took 41.331µs to acquireMachinesLock for "addons-815929"
	I0918 19:38:53.184743   15635 start.go:93] Provisioning new machine with config: &{Name:addons-815929 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-815929 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0918 19:38:53.184805   15635 start.go:125] createHost starting for "" (driver="kvm2")
	I0918 19:38:53.186310   15635 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0918 19:38:53.186442   15635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 19:38:53.186488   15635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:38:53.200841   15635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32951
	I0918 19:38:53.201300   15635 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:38:53.201895   15635 main.go:141] libmachine: Using API Version  1
	I0918 19:38:53.201914   15635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:38:53.202258   15635 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:38:53.202436   15635 main.go:141] libmachine: (addons-815929) Calling .GetMachineName
	I0918 19:38:53.202591   15635 main.go:141] libmachine: (addons-815929) Calling .DriverName
	I0918 19:38:53.202765   15635 start.go:159] libmachine.API.Create for "addons-815929" (driver="kvm2")
	I0918 19:38:53.202793   15635 client.go:168] LocalClient.Create starting
	I0918 19:38:53.202832   15635 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem
	I0918 19:38:53.498664   15635 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem
	I0918 19:38:53.663477   15635 main.go:141] libmachine: Running pre-create checks...
	I0918 19:38:53.663499   15635 main.go:141] libmachine: (addons-815929) Calling .PreCreateCheck
	I0918 19:38:53.663965   15635 main.go:141] libmachine: (addons-815929) Calling .GetConfigRaw
	I0918 19:38:53.664477   15635 main.go:141] libmachine: Creating machine...
	I0918 19:38:53.664493   15635 main.go:141] libmachine: (addons-815929) Calling .Create
	I0918 19:38:53.664656   15635 main.go:141] libmachine: (addons-815929) Creating KVM machine...
	I0918 19:38:53.665882   15635 main.go:141] libmachine: (addons-815929) DBG | found existing default KVM network
	I0918 19:38:53.666727   15635 main.go:141] libmachine: (addons-815929) DBG | I0918 19:38:53.666575   15656 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002111f0}
	I0918 19:38:53.666778   15635 main.go:141] libmachine: (addons-815929) DBG | created network xml: 
	I0918 19:38:53.666798   15635 main.go:141] libmachine: (addons-815929) DBG | <network>
	I0918 19:38:53.666808   15635 main.go:141] libmachine: (addons-815929) DBG |   <name>mk-addons-815929</name>
	I0918 19:38:53.666813   15635 main.go:141] libmachine: (addons-815929) DBG |   <dns enable='no'/>
	I0918 19:38:53.666818   15635 main.go:141] libmachine: (addons-815929) DBG |   
	I0918 19:38:53.666825   15635 main.go:141] libmachine: (addons-815929) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0918 19:38:53.666831   15635 main.go:141] libmachine: (addons-815929) DBG |     <dhcp>
	I0918 19:38:53.666838   15635 main.go:141] libmachine: (addons-815929) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0918 19:38:53.666843   15635 main.go:141] libmachine: (addons-815929) DBG |     </dhcp>
	I0918 19:38:53.666848   15635 main.go:141] libmachine: (addons-815929) DBG |   </ip>
	I0918 19:38:53.666855   15635 main.go:141] libmachine: (addons-815929) DBG |   
	I0918 19:38:53.666859   15635 main.go:141] libmachine: (addons-815929) DBG | </network>
	I0918 19:38:53.666868   15635 main.go:141] libmachine: (addons-815929) DBG | 
	I0918 19:38:53.672175   15635 main.go:141] libmachine: (addons-815929) DBG | trying to create private KVM network mk-addons-815929 192.168.39.0/24...
	I0918 19:38:53.742842   15635 main.go:141] libmachine: (addons-815929) Setting up store path in /home/jenkins/minikube-integration/19667-7671/.minikube/machines/addons-815929 ...
	I0918 19:38:53.742874   15635 main.go:141] libmachine: (addons-815929) DBG | private KVM network mk-addons-815929 192.168.39.0/24 created
	I0918 19:38:53.742891   15635 main.go:141] libmachine: (addons-815929) Building disk image from file:///home/jenkins/minikube-integration/19667-7671/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso
	I0918 19:38:53.742925   15635 main.go:141] libmachine: (addons-815929) DBG | I0918 19:38:53.742793   15656 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19667-7671/.minikube
	I0918 19:38:53.742951   15635 main.go:141] libmachine: (addons-815929) Downloading /home/jenkins/minikube-integration/19667-7671/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19667-7671/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso...
	I0918 19:38:54.002785   15635 main.go:141] libmachine: (addons-815929) DBG | I0918 19:38:54.002609   15656 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/addons-815929/id_rsa...
	I0918 19:38:54.238348   15635 main.go:141] libmachine: (addons-815929) DBG | I0918 19:38:54.238178   15656 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/addons-815929/addons-815929.rawdisk...
	I0918 19:38:54.238378   15635 main.go:141] libmachine: (addons-815929) DBG | Writing magic tar header
	I0918 19:38:54.238388   15635 main.go:141] libmachine: (addons-815929) DBG | Writing SSH key tar header
	I0918 19:38:54.238395   15635 main.go:141] libmachine: (addons-815929) DBG | I0918 19:38:54.238295   15656 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19667-7671/.minikube/machines/addons-815929 ...
	I0918 19:38:54.238406   15635 main.go:141] libmachine: (addons-815929) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/addons-815929
	I0918 19:38:54.238460   15635 main.go:141] libmachine: (addons-815929) Setting executable bit set on /home/jenkins/minikube-integration/19667-7671/.minikube/machines/addons-815929 (perms=drwx------)
	I0918 19:38:54.238483   15635 main.go:141] libmachine: (addons-815929) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19667-7671/.minikube/machines
	I0918 19:38:54.238491   15635 main.go:141] libmachine: (addons-815929) Setting executable bit set on /home/jenkins/minikube-integration/19667-7671/.minikube/machines (perms=drwxr-xr-x)
	I0918 19:38:54.238513   15635 main.go:141] libmachine: (addons-815929) Setting executable bit set on /home/jenkins/minikube-integration/19667-7671/.minikube (perms=drwxr-xr-x)
	I0918 19:38:54.238523   15635 main.go:141] libmachine: (addons-815929) Setting executable bit set on /home/jenkins/minikube-integration/19667-7671 (perms=drwxrwxr-x)
	I0918 19:38:54.238534   15635 main.go:141] libmachine: (addons-815929) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19667-7671/.minikube
	I0918 19:38:54.238548   15635 main.go:141] libmachine: (addons-815929) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19667-7671
	I0918 19:38:54.238559   15635 main.go:141] libmachine: (addons-815929) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0918 19:38:54.238565   15635 main.go:141] libmachine: (addons-815929) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0918 19:38:54.238571   15635 main.go:141] libmachine: (addons-815929) DBG | Checking permissions on dir: /home/jenkins
	I0918 19:38:54.238576   15635 main.go:141] libmachine: (addons-815929) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0918 19:38:54.238581   15635 main.go:141] libmachine: (addons-815929) DBG | Checking permissions on dir: /home
	I0918 19:38:54.238588   15635 main.go:141] libmachine: (addons-815929) DBG | Skipping /home - not owner
	I0918 19:38:54.238597   15635 main.go:141] libmachine: (addons-815929) Creating domain...
	I0918 19:38:54.239507   15635 main.go:141] libmachine: (addons-815929) define libvirt domain using xml: 
	I0918 19:38:54.239529   15635 main.go:141] libmachine: (addons-815929) <domain type='kvm'>
	I0918 19:38:54.239536   15635 main.go:141] libmachine: (addons-815929)   <name>addons-815929</name>
	I0918 19:38:54.239543   15635 main.go:141] libmachine: (addons-815929)   <memory unit='MiB'>4000</memory>
	I0918 19:38:54.239549   15635 main.go:141] libmachine: (addons-815929)   <vcpu>2</vcpu>
	I0918 19:38:54.239553   15635 main.go:141] libmachine: (addons-815929)   <features>
	I0918 19:38:54.239557   15635 main.go:141] libmachine: (addons-815929)     <acpi/>
	I0918 19:38:54.239561   15635 main.go:141] libmachine: (addons-815929)     <apic/>
	I0918 19:38:54.239566   15635 main.go:141] libmachine: (addons-815929)     <pae/>
	I0918 19:38:54.239569   15635 main.go:141] libmachine: (addons-815929)     
	I0918 19:38:54.239574   15635 main.go:141] libmachine: (addons-815929)   </features>
	I0918 19:38:54.239581   15635 main.go:141] libmachine: (addons-815929)   <cpu mode='host-passthrough'>
	I0918 19:38:54.239588   15635 main.go:141] libmachine: (addons-815929)   
	I0918 19:38:54.239596   15635 main.go:141] libmachine: (addons-815929)   </cpu>
	I0918 19:38:54.239608   15635 main.go:141] libmachine: (addons-815929)   <os>
	I0918 19:38:54.239618   15635 main.go:141] libmachine: (addons-815929)     <type>hvm</type>
	I0918 19:38:54.239629   15635 main.go:141] libmachine: (addons-815929)     <boot dev='cdrom'/>
	I0918 19:38:54.239633   15635 main.go:141] libmachine: (addons-815929)     <boot dev='hd'/>
	I0918 19:38:54.239640   15635 main.go:141] libmachine: (addons-815929)     <bootmenu enable='no'/>
	I0918 19:38:54.239643   15635 main.go:141] libmachine: (addons-815929)   </os>
	I0918 19:38:54.239648   15635 main.go:141] libmachine: (addons-815929)   <devices>
	I0918 19:38:54.239652   15635 main.go:141] libmachine: (addons-815929)     <disk type='file' device='cdrom'>
	I0918 19:38:54.239672   15635 main.go:141] libmachine: (addons-815929)       <source file='/home/jenkins/minikube-integration/19667-7671/.minikube/machines/addons-815929/boot2docker.iso'/>
	I0918 19:38:54.239681   15635 main.go:141] libmachine: (addons-815929)       <target dev='hdc' bus='scsi'/>
	I0918 19:38:54.239689   15635 main.go:141] libmachine: (addons-815929)       <readonly/>
	I0918 19:38:54.239699   15635 main.go:141] libmachine: (addons-815929)     </disk>
	I0918 19:38:54.239708   15635 main.go:141] libmachine: (addons-815929)     <disk type='file' device='disk'>
	I0918 19:38:54.239717   15635 main.go:141] libmachine: (addons-815929)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0918 19:38:54.239726   15635 main.go:141] libmachine: (addons-815929)       <source file='/home/jenkins/minikube-integration/19667-7671/.minikube/machines/addons-815929/addons-815929.rawdisk'/>
	I0918 19:38:54.239739   15635 main.go:141] libmachine: (addons-815929)       <target dev='hda' bus='virtio'/>
	I0918 19:38:54.239762   15635 main.go:141] libmachine: (addons-815929)     </disk>
	I0918 19:38:54.239780   15635 main.go:141] libmachine: (addons-815929)     <interface type='network'>
	I0918 19:38:54.239787   15635 main.go:141] libmachine: (addons-815929)       <source network='mk-addons-815929'/>
	I0918 19:38:54.239799   15635 main.go:141] libmachine: (addons-815929)       <model type='virtio'/>
	I0918 19:38:54.239804   15635 main.go:141] libmachine: (addons-815929)     </interface>
	I0918 19:38:54.239809   15635 main.go:141] libmachine: (addons-815929)     <interface type='network'>
	I0918 19:38:54.239815   15635 main.go:141] libmachine: (addons-815929)       <source network='default'/>
	I0918 19:38:54.239819   15635 main.go:141] libmachine: (addons-815929)       <model type='virtio'/>
	I0918 19:38:54.239824   15635 main.go:141] libmachine: (addons-815929)     </interface>
	I0918 19:38:54.239832   15635 main.go:141] libmachine: (addons-815929)     <serial type='pty'>
	I0918 19:38:54.239837   15635 main.go:141] libmachine: (addons-815929)       <target port='0'/>
	I0918 19:38:54.239844   15635 main.go:141] libmachine: (addons-815929)     </serial>
	I0918 19:38:54.239849   15635 main.go:141] libmachine: (addons-815929)     <console type='pty'>
	I0918 19:38:54.239868   15635 main.go:141] libmachine: (addons-815929)       <target type='serial' port='0'/>
	I0918 19:38:54.239879   15635 main.go:141] libmachine: (addons-815929)     </console>
	I0918 19:38:54.239883   15635 main.go:141] libmachine: (addons-815929)     <rng model='virtio'>
	I0918 19:38:54.239889   15635 main.go:141] libmachine: (addons-815929)       <backend model='random'>/dev/random</backend>
	I0918 19:38:54.239893   15635 main.go:141] libmachine: (addons-815929)     </rng>
	I0918 19:38:54.239897   15635 main.go:141] libmachine: (addons-815929)     
	I0918 19:38:54.239901   15635 main.go:141] libmachine: (addons-815929)     
	I0918 19:38:54.239913   15635 main.go:141] libmachine: (addons-815929)   </devices>
	I0918 19:38:54.239925   15635 main.go:141] libmachine: (addons-815929) </domain>
	I0918 19:38:54.239934   15635 main.go:141] libmachine: (addons-815929) 
	I0918 19:38:54.245827   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:cb:c3:cb in network default
	I0918 19:38:54.246274   15635 main.go:141] libmachine: (addons-815929) Ensuring networks are active...
	I0918 19:38:54.246289   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:38:54.246951   15635 main.go:141] libmachine: (addons-815929) Ensuring network default is active
	I0918 19:38:54.247192   15635 main.go:141] libmachine: (addons-815929) Ensuring network mk-addons-815929 is active
	I0918 19:38:54.247672   15635 main.go:141] libmachine: (addons-815929) Getting domain xml...
	I0918 19:38:54.248278   15635 main.go:141] libmachine: (addons-815929) Creating domain...
	I0918 19:38:55.697959   15635 main.go:141] libmachine: (addons-815929) Waiting to get IP...
	I0918 19:38:55.698757   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:38:55.699235   15635 main.go:141] libmachine: (addons-815929) DBG | unable to find current IP address of domain addons-815929 in network mk-addons-815929
	I0918 19:38:55.699284   15635 main.go:141] libmachine: (addons-815929) DBG | I0918 19:38:55.699220   15656 retry.go:31] will retry after 240.136101ms: waiting for machine to come up
	I0918 19:38:55.940564   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:38:55.941063   15635 main.go:141] libmachine: (addons-815929) DBG | unable to find current IP address of domain addons-815929 in network mk-addons-815929
	I0918 19:38:55.941095   15635 main.go:141] libmachine: (addons-815929) DBG | I0918 19:38:55.941001   15656 retry.go:31] will retry after 357.629453ms: waiting for machine to come up
	I0918 19:38:56.300779   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:38:56.301261   15635 main.go:141] libmachine: (addons-815929) DBG | unable to find current IP address of domain addons-815929 in network mk-addons-815929
	I0918 19:38:56.301288   15635 main.go:141] libmachine: (addons-815929) DBG | I0918 19:38:56.301210   15656 retry.go:31] will retry after 307.786585ms: waiting for machine to come up
	I0918 19:38:56.610678   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:38:56.611160   15635 main.go:141] libmachine: (addons-815929) DBG | unable to find current IP address of domain addons-815929 in network mk-addons-815929
	I0918 19:38:56.611191   15635 main.go:141] libmachine: (addons-815929) DBG | I0918 19:38:56.611111   15656 retry.go:31] will retry after 517.569687ms: waiting for machine to come up
	I0918 19:38:57.129855   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:38:57.130252   15635 main.go:141] libmachine: (addons-815929) DBG | unable to find current IP address of domain addons-815929 in network mk-addons-815929
	I0918 19:38:57.130293   15635 main.go:141] libmachine: (addons-815929) DBG | I0918 19:38:57.130200   15656 retry.go:31] will retry after 494.799445ms: waiting for machine to come up
	I0918 19:38:57.626875   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:38:57.627350   15635 main.go:141] libmachine: (addons-815929) DBG | unable to find current IP address of domain addons-815929 in network mk-addons-815929
	I0918 19:38:57.627378   15635 main.go:141] libmachine: (addons-815929) DBG | I0918 19:38:57.627307   15656 retry.go:31] will retry after 626.236714ms: waiting for machine to come up
	I0918 19:38:58.255770   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:38:58.256298   15635 main.go:141] libmachine: (addons-815929) DBG | unable to find current IP address of domain addons-815929 in network mk-addons-815929
	I0918 19:38:58.256317   15635 main.go:141] libmachine: (addons-815929) DBG | I0918 19:38:58.256214   15656 retry.go:31] will retry after 826.525241ms: waiting for machine to come up
	I0918 19:38:59.083830   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:38:59.084379   15635 main.go:141] libmachine: (addons-815929) DBG | unable to find current IP address of domain addons-815929 in network mk-addons-815929
	I0918 19:38:59.084413   15635 main.go:141] libmachine: (addons-815929) DBG | I0918 19:38:59.084316   15656 retry.go:31] will retry after 1.302088375s: waiting for machine to come up
	I0918 19:39:00.388874   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:00.389329   15635 main.go:141] libmachine: (addons-815929) DBG | unable to find current IP address of domain addons-815929 in network mk-addons-815929
	I0918 19:39:00.389357   15635 main.go:141] libmachine: (addons-815929) DBG | I0918 19:39:00.389259   15656 retry.go:31] will retry after 1.82403913s: waiting for machine to come up
	I0918 19:39:02.216192   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:02.216654   15635 main.go:141] libmachine: (addons-815929) DBG | unable to find current IP address of domain addons-815929 in network mk-addons-815929
	I0918 19:39:02.216681   15635 main.go:141] libmachine: (addons-815929) DBG | I0918 19:39:02.216609   15656 retry.go:31] will retry after 2.008231355s: waiting for machine to come up
	I0918 19:39:04.226837   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:04.227248   15635 main.go:141] libmachine: (addons-815929) DBG | unable to find current IP address of domain addons-815929 in network mk-addons-815929
	I0918 19:39:04.227278   15635 main.go:141] libmachine: (addons-815929) DBG | I0918 19:39:04.227201   15656 retry.go:31] will retry after 2.836403576s: waiting for machine to come up
	I0918 19:39:07.065332   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:07.065713   15635 main.go:141] libmachine: (addons-815929) DBG | unable to find current IP address of domain addons-815929 in network mk-addons-815929
	I0918 19:39:07.065748   15635 main.go:141] libmachine: (addons-815929) DBG | I0918 19:39:07.065691   15656 retry.go:31] will retry after 3.279472186s: waiting for machine to come up
	I0918 19:39:10.348133   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:10.348607   15635 main.go:141] libmachine: (addons-815929) DBG | unable to find current IP address of domain addons-815929 in network mk-addons-815929
	I0918 19:39:10.348632   15635 main.go:141] libmachine: (addons-815929) DBG | I0918 19:39:10.348560   15656 retry.go:31] will retry after 3.871116508s: waiting for machine to come up
	I0918 19:39:14.220928   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:14.221295   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has current primary IP address 192.168.39.158 and MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:14.221321   15635 main.go:141] libmachine: (addons-815929) Found IP for machine: 192.168.39.158
	I0918 19:39:14.221331   15635 main.go:141] libmachine: (addons-815929) Reserving static IP address...
	I0918 19:39:14.221782   15635 main.go:141] libmachine: (addons-815929) DBG | unable to find host DHCP lease matching {name: "addons-815929", mac: "52:54:00:11:b1:d6", ip: "192.168.39.158"} in network mk-addons-815929
	I0918 19:39:14.297555   15635 main.go:141] libmachine: (addons-815929) Reserved static IP address: 192.168.39.158
	I0918 19:39:14.297592   15635 main.go:141] libmachine: (addons-815929) DBG | Getting to WaitForSSH function...
	I0918 19:39:14.297601   15635 main.go:141] libmachine: (addons-815929) Waiting for SSH to be available...
	I0918 19:39:14.300410   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:14.300839   15635 main.go:141] libmachine: (addons-815929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b1:d6", ip: ""} in network mk-addons-815929: {Iface:virbr1 ExpiryTime:2024-09-18 20:39:08 +0000 UTC Type:0 Mac:52:54:00:11:b1:d6 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:minikube Clientid:01:52:54:00:11:b1:d6}
	I0918 19:39:14.300870   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined IP address 192.168.39.158 and MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:14.301080   15635 main.go:141] libmachine: (addons-815929) DBG | Using SSH client type: external
	I0918 19:39:14.301103   15635 main.go:141] libmachine: (addons-815929) DBG | Using SSH private key: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/addons-815929/id_rsa (-rw-------)
	I0918 19:39:14.301133   15635 main.go:141] libmachine: (addons-815929) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.158 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19667-7671/.minikube/machines/addons-815929/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0918 19:39:14.301145   15635 main.go:141] libmachine: (addons-815929) DBG | About to run SSH command:
	I0918 19:39:14.301158   15635 main.go:141] libmachine: (addons-815929) DBG | exit 0
	I0918 19:39:14.432076   15635 main.go:141] libmachine: (addons-815929) DBG | SSH cmd err, output: <nil>: 
	I0918 19:39:14.432351   15635 main.go:141] libmachine: (addons-815929) KVM machine creation complete!
	I0918 19:39:14.432733   15635 main.go:141] libmachine: (addons-815929) Calling .GetConfigRaw
	I0918 19:39:14.433533   15635 main.go:141] libmachine: (addons-815929) Calling .DriverName
	I0918 19:39:14.433729   15635 main.go:141] libmachine: (addons-815929) Calling .DriverName
	I0918 19:39:14.433919   15635 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0918 19:39:14.433937   15635 main.go:141] libmachine: (addons-815929) Calling .GetState
	I0918 19:39:14.435144   15635 main.go:141] libmachine: Detecting operating system of created instance...
	I0918 19:39:14.435157   15635 main.go:141] libmachine: Waiting for SSH to be available...
	I0918 19:39:14.435162   15635 main.go:141] libmachine: Getting to WaitForSSH function...
	I0918 19:39:14.435167   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHHostname
	I0918 19:39:14.437837   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:14.438147   15635 main.go:141] libmachine: (addons-815929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b1:d6", ip: ""} in network mk-addons-815929: {Iface:virbr1 ExpiryTime:2024-09-18 20:39:08 +0000 UTC Type:0 Mac:52:54:00:11:b1:d6 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-815929 Clientid:01:52:54:00:11:b1:d6}
	I0918 19:39:14.438173   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined IP address 192.168.39.158 and MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:14.438353   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHPort
	I0918 19:39:14.438525   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHKeyPath
	I0918 19:39:14.438702   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHKeyPath
	I0918 19:39:14.438842   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHUsername
	I0918 19:39:14.439003   15635 main.go:141] libmachine: Using SSH client type: native
	I0918 19:39:14.439223   15635 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.158 22 <nil> <nil>}
	I0918 19:39:14.439238   15635 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0918 19:39:14.543283   15635 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0918 19:39:14.543308   15635 main.go:141] libmachine: Detecting the provisioner...
	I0918 19:39:14.543317   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHHostname
	I0918 19:39:14.545882   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:14.546221   15635 main.go:141] libmachine: (addons-815929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b1:d6", ip: ""} in network mk-addons-815929: {Iface:virbr1 ExpiryTime:2024-09-18 20:39:08 +0000 UTC Type:0 Mac:52:54:00:11:b1:d6 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-815929 Clientid:01:52:54:00:11:b1:d6}
	I0918 19:39:14.546253   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined IP address 192.168.39.158 and MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:14.546395   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHPort
	I0918 19:39:14.546623   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHKeyPath
	I0918 19:39:14.546775   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHKeyPath
	I0918 19:39:14.546892   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHUsername
	I0918 19:39:14.547035   15635 main.go:141] libmachine: Using SSH client type: native
	I0918 19:39:14.547232   15635 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.158 22 <nil> <nil>}
	I0918 19:39:14.547245   15635 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0918 19:39:14.652809   15635 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0918 19:39:14.652895   15635 main.go:141] libmachine: found compatible host: buildroot
	I0918 19:39:14.652905   15635 main.go:141] libmachine: Provisioning with buildroot...
	I0918 19:39:14.652912   15635 main.go:141] libmachine: (addons-815929) Calling .GetMachineName
	I0918 19:39:14.653238   15635 buildroot.go:166] provisioning hostname "addons-815929"
	I0918 19:39:14.653269   15635 main.go:141] libmachine: (addons-815929) Calling .GetMachineName
	I0918 19:39:14.653524   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHHostname
	I0918 19:39:14.656525   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:14.656903   15635 main.go:141] libmachine: (addons-815929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b1:d6", ip: ""} in network mk-addons-815929: {Iface:virbr1 ExpiryTime:2024-09-18 20:39:08 +0000 UTC Type:0 Mac:52:54:00:11:b1:d6 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-815929 Clientid:01:52:54:00:11:b1:d6}
	I0918 19:39:14.656925   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined IP address 192.168.39.158 and MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:14.657113   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHPort
	I0918 19:39:14.657313   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHKeyPath
	I0918 19:39:14.657465   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHKeyPath
	I0918 19:39:14.657637   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHUsername
	I0918 19:39:14.657763   15635 main.go:141] libmachine: Using SSH client type: native
	I0918 19:39:14.657923   15635 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.158 22 <nil> <nil>}
	I0918 19:39:14.657933   15635 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-815929 && echo "addons-815929" | sudo tee /etc/hostname
	I0918 19:39:14.778145   15635 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-815929
	
	I0918 19:39:14.778168   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHHostname
	I0918 19:39:14.782280   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:14.782681   15635 main.go:141] libmachine: (addons-815929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b1:d6", ip: ""} in network mk-addons-815929: {Iface:virbr1 ExpiryTime:2024-09-18 20:39:08 +0000 UTC Type:0 Mac:52:54:00:11:b1:d6 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-815929 Clientid:01:52:54:00:11:b1:d6}
	I0918 19:39:14.782707   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined IP address 192.168.39.158 and MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:14.782911   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHPort
	I0918 19:39:14.783128   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHKeyPath
	I0918 19:39:14.783294   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHKeyPath
	I0918 19:39:14.783416   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHUsername
	I0918 19:39:14.783559   15635 main.go:141] libmachine: Using SSH client type: native
	I0918 19:39:14.783758   15635 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.158 22 <nil> <nil>}
	I0918 19:39:14.783782   15635 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-815929' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-815929/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-815929' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0918 19:39:14.896628   15635 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0918 19:39:14.896658   15635 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19667-7671/.minikube CaCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19667-7671/.minikube}
	I0918 19:39:14.896682   15635 buildroot.go:174] setting up certificates
	I0918 19:39:14.896700   15635 provision.go:84] configureAuth start
	I0918 19:39:14.896715   15635 main.go:141] libmachine: (addons-815929) Calling .GetMachineName
	I0918 19:39:14.896993   15635 main.go:141] libmachine: (addons-815929) Calling .GetIP
	I0918 19:39:14.899455   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:14.899815   15635 main.go:141] libmachine: (addons-815929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b1:d6", ip: ""} in network mk-addons-815929: {Iface:virbr1 ExpiryTime:2024-09-18 20:39:08 +0000 UTC Type:0 Mac:52:54:00:11:b1:d6 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-815929 Clientid:01:52:54:00:11:b1:d6}
	I0918 19:39:14.899848   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined IP address 192.168.39.158 and MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:14.900060   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHHostname
	I0918 19:39:14.902022   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:14.902265   15635 main.go:141] libmachine: (addons-815929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b1:d6", ip: ""} in network mk-addons-815929: {Iface:virbr1 ExpiryTime:2024-09-18 20:39:08 +0000 UTC Type:0 Mac:52:54:00:11:b1:d6 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-815929 Clientid:01:52:54:00:11:b1:d6}
	I0918 19:39:14.902293   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined IP address 192.168.39.158 and MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:14.902392   15635 provision.go:143] copyHostCerts
	I0918 19:39:14.902479   15635 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem (1082 bytes)
	I0918 19:39:14.902600   15635 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem (1123 bytes)
	I0918 19:39:14.902671   15635 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem (1679 bytes)
	I0918 19:39:14.902724   15635 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem org=jenkins.addons-815929 san=[127.0.0.1 192.168.39.158 addons-815929 localhost minikube]
	I0918 19:39:15.027079   15635 provision.go:177] copyRemoteCerts
	I0918 19:39:15.027139   15635 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0918 19:39:15.027161   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHHostname
	I0918 19:39:15.029651   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:15.029950   15635 main.go:141] libmachine: (addons-815929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b1:d6", ip: ""} in network mk-addons-815929: {Iface:virbr1 ExpiryTime:2024-09-18 20:39:08 +0000 UTC Type:0 Mac:52:54:00:11:b1:d6 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-815929 Clientid:01:52:54:00:11:b1:d6}
	I0918 19:39:15.029974   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined IP address 192.168.39.158 and MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:15.030191   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHPort
	I0918 19:39:15.030381   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHKeyPath
	I0918 19:39:15.030555   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHUsername
	I0918 19:39:15.030715   15635 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/addons-815929/id_rsa Username:docker}
	I0918 19:39:15.113743   15635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0918 19:39:15.137366   15635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0918 19:39:15.160840   15635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0918 19:39:15.184268   15635 provision.go:87] duration metric: took 287.554696ms to configureAuth
	I0918 19:39:15.184296   15635 buildroot.go:189] setting minikube options for container-runtime
	I0918 19:39:15.184488   15635 config.go:182] Loaded profile config "addons-815929": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0918 19:39:15.184570   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHHostname
	I0918 19:39:15.187055   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:15.187394   15635 main.go:141] libmachine: (addons-815929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b1:d6", ip: ""} in network mk-addons-815929: {Iface:virbr1 ExpiryTime:2024-09-18 20:39:08 +0000 UTC Type:0 Mac:52:54:00:11:b1:d6 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-815929 Clientid:01:52:54:00:11:b1:d6}
	I0918 19:39:15.187422   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined IP address 192.168.39.158 and MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:15.187614   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHPort
	I0918 19:39:15.187812   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHKeyPath
	I0918 19:39:15.187967   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHKeyPath
	I0918 19:39:15.188117   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHUsername
	I0918 19:39:15.188300   15635 main.go:141] libmachine: Using SSH client type: native
	I0918 19:39:15.188467   15635 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.158 22 <nil> <nil>}
	I0918 19:39:15.188480   15635 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0918 19:39:15.422203   15635 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0918 19:39:15.422228   15635 main.go:141] libmachine: Checking connection to Docker...
	I0918 19:39:15.422236   15635 main.go:141] libmachine: (addons-815929) Calling .GetURL
	I0918 19:39:15.423388   15635 main.go:141] libmachine: (addons-815929) DBG | Using libvirt version 6000000
	I0918 19:39:15.425708   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:15.426166   15635 main.go:141] libmachine: (addons-815929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b1:d6", ip: ""} in network mk-addons-815929: {Iface:virbr1 ExpiryTime:2024-09-18 20:39:08 +0000 UTC Type:0 Mac:52:54:00:11:b1:d6 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-815929 Clientid:01:52:54:00:11:b1:d6}
	I0918 19:39:15.426200   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined IP address 192.168.39.158 and MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:15.426400   15635 main.go:141] libmachine: Docker is up and running!
	I0918 19:39:15.426415   15635 main.go:141] libmachine: Reticulating splines...
	I0918 19:39:15.426421   15635 client.go:171] duration metric: took 22.223621675s to LocalClient.Create
	I0918 19:39:15.426449   15635 start.go:167] duration metric: took 22.22368243s to libmachine.API.Create "addons-815929"
	I0918 19:39:15.426462   15635 start.go:293] postStartSetup for "addons-815929" (driver="kvm2")
	I0918 19:39:15.426475   15635 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0918 19:39:15.426497   15635 main.go:141] libmachine: (addons-815929) Calling .DriverName
	I0918 19:39:15.426717   15635 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0918 19:39:15.426747   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHHostname
	I0918 19:39:15.429165   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:15.429467   15635 main.go:141] libmachine: (addons-815929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b1:d6", ip: ""} in network mk-addons-815929: {Iface:virbr1 ExpiryTime:2024-09-18 20:39:08 +0000 UTC Type:0 Mac:52:54:00:11:b1:d6 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-815929 Clientid:01:52:54:00:11:b1:d6}
	I0918 19:39:15.429493   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined IP address 192.168.39.158 and MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:15.429654   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHPort
	I0918 19:39:15.429831   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHKeyPath
	I0918 19:39:15.429969   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHUsername
	I0918 19:39:15.430118   15635 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/addons-815929/id_rsa Username:docker}
	I0918 19:39:15.514784   15635 ssh_runner.go:195] Run: cat /etc/os-release
	I0918 19:39:15.519847   15635 info.go:137] Remote host: Buildroot 2023.02.9
	I0918 19:39:15.519878   15635 filesync.go:126] Scanning /home/jenkins/minikube-integration/19667-7671/.minikube/addons for local assets ...
	I0918 19:39:15.519966   15635 filesync.go:126] Scanning /home/jenkins/minikube-integration/19667-7671/.minikube/files for local assets ...
	I0918 19:39:15.519998   15635 start.go:296] duration metric: took 93.528833ms for postStartSetup
	I0918 19:39:15.520064   15635 main.go:141] libmachine: (addons-815929) Calling .GetConfigRaw
	I0918 19:39:15.520653   15635 main.go:141] libmachine: (addons-815929) Calling .GetIP
	I0918 19:39:15.523455   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:15.523846   15635 main.go:141] libmachine: (addons-815929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b1:d6", ip: ""} in network mk-addons-815929: {Iface:virbr1 ExpiryTime:2024-09-18 20:39:08 +0000 UTC Type:0 Mac:52:54:00:11:b1:d6 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-815929 Clientid:01:52:54:00:11:b1:d6}
	I0918 19:39:15.523874   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined IP address 192.168.39.158 and MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:15.524124   15635 profile.go:143] Saving config to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/addons-815929/config.json ...
	I0918 19:39:15.524332   15635 start.go:128] duration metric: took 22.339516337s to createHost
	I0918 19:39:15.524360   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHHostname
	I0918 19:39:15.526732   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:15.527041   15635 main.go:141] libmachine: (addons-815929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b1:d6", ip: ""} in network mk-addons-815929: {Iface:virbr1 ExpiryTime:2024-09-18 20:39:08 +0000 UTC Type:0 Mac:52:54:00:11:b1:d6 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-815929 Clientid:01:52:54:00:11:b1:d6}
	I0918 19:39:15.527070   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined IP address 192.168.39.158 and MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:15.527313   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHPort
	I0918 19:39:15.527542   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHKeyPath
	I0918 19:39:15.527709   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHKeyPath
	I0918 19:39:15.527867   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHUsername
	I0918 19:39:15.528155   15635 main.go:141] libmachine: Using SSH client type: native
	I0918 19:39:15.528375   15635 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.158 22 <nil> <nil>}
	I0918 19:39:15.528388   15635 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0918 19:39:15.632644   15635 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726688355.604291671
	
	I0918 19:39:15.632664   15635 fix.go:216] guest clock: 1726688355.604291671
	I0918 19:39:15.632671   15635 fix.go:229] Guest: 2024-09-18 19:39:15.604291671 +0000 UTC Remote: 2024-09-18 19:39:15.524343859 +0000 UTC m=+22.440132340 (delta=79.947812ms)
	I0918 19:39:15.632711   15635 fix.go:200] guest clock delta is within tolerance: 79.947812ms
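(Aside: the clock-sync step above simply compares the guest's "date +%s.%N" output against the host's wall clock and accepts the drift if it stays under a tolerance. Below is a minimal, illustrative Go sketch of that comparison using the values from the log; the 2-second tolerance and the helper name are assumptions, not minikube's actual code.)

package main

import (
	"fmt"
	"time"
)

// withinTolerance returns the absolute skew between the guest and host clocks
// and whether it falls inside the allowed tolerance.
func withinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	// Values taken from the log lines above: 1726688355.604291671 is the
	// guest's "date +%s.%N" output, the host time is the "Remote" timestamp.
	guest := time.Unix(1726688355, 604291671).UTC()
	host := time.Date(2024, time.September, 18, 19, 39, 15, 524343859, time.UTC)
	delta, ok := withinTolerance(guest, host, 2*time.Second) // 2s tolerance is assumed
	fmt.Printf("clock delta %v, within tolerance: %v\n", delta, ok) // prints 79.947812ms, true
}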
	I0918 19:39:15.632716   15635 start.go:83] releasing machines lock for "addons-815929", held for 22.447981743s
	I0918 19:39:15.632734   15635 main.go:141] libmachine: (addons-815929) Calling .DriverName
	I0918 19:39:15.632989   15635 main.go:141] libmachine: (addons-815929) Calling .GetIP
	I0918 19:39:15.635689   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:15.636073   15635 main.go:141] libmachine: (addons-815929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b1:d6", ip: ""} in network mk-addons-815929: {Iface:virbr1 ExpiryTime:2024-09-18 20:39:08 +0000 UTC Type:0 Mac:52:54:00:11:b1:d6 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-815929 Clientid:01:52:54:00:11:b1:d6}
	I0918 19:39:15.636100   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined IP address 192.168.39.158 and MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:15.636232   15635 main.go:141] libmachine: (addons-815929) Calling .DriverName
	I0918 19:39:15.636698   15635 main.go:141] libmachine: (addons-815929) Calling .DriverName
	I0918 19:39:15.636877   15635 main.go:141] libmachine: (addons-815929) Calling .DriverName
	I0918 19:39:15.636982   15635 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0918 19:39:15.637025   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHHostname
	I0918 19:39:15.637083   15635 ssh_runner.go:195] Run: cat /version.json
	I0918 19:39:15.637103   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHHostname
	I0918 19:39:15.639906   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:15.640052   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:15.640306   15635 main.go:141] libmachine: (addons-815929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b1:d6", ip: ""} in network mk-addons-815929: {Iface:virbr1 ExpiryTime:2024-09-18 20:39:08 +0000 UTC Type:0 Mac:52:54:00:11:b1:d6 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-815929 Clientid:01:52:54:00:11:b1:d6}
	I0918 19:39:15.640333   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined IP address 192.168.39.158 and MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:15.640430   15635 main.go:141] libmachine: (addons-815929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b1:d6", ip: ""} in network mk-addons-815929: {Iface:virbr1 ExpiryTime:2024-09-18 20:39:08 +0000 UTC Type:0 Mac:52:54:00:11:b1:d6 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-815929 Clientid:01:52:54:00:11:b1:d6}
	I0918 19:39:15.640449   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHPort
	I0918 19:39:15.640456   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined IP address 192.168.39.158 and MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:15.640658   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHPort
	I0918 19:39:15.640662   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHKeyPath
	I0918 19:39:15.640846   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHUsername
	I0918 19:39:15.640865   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHKeyPath
	I0918 19:39:15.640960   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHUsername
	I0918 19:39:15.640964   15635 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/addons-815929/id_rsa Username:docker}
	I0918 19:39:15.641064   15635 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/addons-815929/id_rsa Username:docker}
	I0918 19:39:15.724678   15635 ssh_runner.go:195] Run: systemctl --version
	I0918 19:39:15.769924   15635 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0918 19:39:15.924625   15635 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0918 19:39:15.930995   15635 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0918 19:39:15.931078   15635 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0918 19:39:15.946257   15635 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0918 19:39:15.946282   15635 start.go:495] detecting cgroup driver to use...
	I0918 19:39:15.946349   15635 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0918 19:39:15.962493   15635 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0918 19:39:15.976970   15635 docker.go:217] disabling cri-docker service (if available) ...
	I0918 19:39:15.977037   15635 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0918 19:39:15.990730   15635 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0918 19:39:16.004287   15635 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0918 19:39:16.120456   15635 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0918 19:39:16.273269   15635 docker.go:233] disabling docker service ...
	I0918 19:39:16.273355   15635 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0918 19:39:16.287263   15635 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0918 19:39:16.300054   15635 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0918 19:39:16.431534   15635 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0918 19:39:16.542730   15635 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0918 19:39:16.556593   15635 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0918 19:39:16.574110   15635 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0918 19:39:16.574168   15635 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 19:39:16.584364   15635 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0918 19:39:16.584433   15635 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 19:39:16.595648   15635 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 19:39:16.605606   15635 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 19:39:16.615817   15635 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0918 19:39:16.625545   15635 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 19:39:16.635288   15635 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 19:39:16.651799   15635 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 19:39:16.662018   15635 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0918 19:39:16.671973   15635 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0918 19:39:16.672038   15635 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0918 19:39:16.684348   15635 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0918 19:39:16.694527   15635 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 19:39:16.806557   15635 ssh_runner.go:195] Run: sudo systemctl restart crio
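(Aside: the sequence above rewrites two keys, pause_image and cgroup_manager, in /etc/crio/crio.conf.d/02-crio.conf via sed and then restarts CRI-O. The following is only a rough Go equivalent of those two rewrites to make the edits explicit; rewriteKey is a hypothetical helper and not minikube code.)

package main

import (
	"os"
	"regexp"
)

// rewriteKey replaces any existing "key = ..." assignment in the config text
// with the given line, mirroring the sed -i 's|^.*key = .*$|...|' calls above.
func rewriteKey(conf []byte, key, line string) []byte {
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	return re.ReplaceAll(conf, []byte(line))
}

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	conf, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	conf = rewriteKey(conf, "pause_image", `pause_image = "registry.k8s.io/pause:3.10"`)
	conf = rewriteKey(conf, "cgroup_manager", `cgroup_manager = "cgroupfs"`)
	if err := os.WriteFile(path, conf, 0o644); err != nil {
		panic(err)
	}
	// After rewriting, the log shows `systemctl daemon-reload` and
	// `systemctl restart crio` being run to pick the changes up.
}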
	I0918 19:39:16.893853   15635 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0918 19:39:16.893979   15635 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0918 19:39:16.898741   15635 start.go:563] Will wait 60s for crictl version
	I0918 19:39:16.898823   15635 ssh_runner.go:195] Run: which crictl
	I0918 19:39:16.903203   15635 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0918 19:39:16.954060   15635 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0918 19:39:16.954193   15635 ssh_runner.go:195] Run: crio --version
	I0918 19:39:16.982884   15635 ssh_runner.go:195] Run: crio --version
	I0918 19:39:17.014729   15635 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0918 19:39:17.016149   15635 main.go:141] libmachine: (addons-815929) Calling .GetIP
	I0918 19:39:17.018519   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:17.018848   15635 main.go:141] libmachine: (addons-815929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b1:d6", ip: ""} in network mk-addons-815929: {Iface:virbr1 ExpiryTime:2024-09-18 20:39:08 +0000 UTC Type:0 Mac:52:54:00:11:b1:d6 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-815929 Clientid:01:52:54:00:11:b1:d6}
	I0918 19:39:17.018881   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined IP address 192.168.39.158 and MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:17.019079   15635 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0918 19:39:17.022910   15635 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0918 19:39:17.034489   15635 kubeadm.go:883] updating cluster {Name:addons-815929 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-815929 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.158 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0918 19:39:17.034619   15635 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0918 19:39:17.034683   15635 ssh_runner.go:195] Run: sudo crictl images --output json
	I0918 19:39:17.066943   15635 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0918 19:39:17.067023   15635 ssh_runner.go:195] Run: which lz4
	I0918 19:39:17.071020   15635 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0918 19:39:17.075441   15635 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0918 19:39:17.075480   15635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0918 19:39:18.279753   15635 crio.go:462] duration metric: took 1.208762257s to copy over tarball
	I0918 19:39:18.279822   15635 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0918 19:39:20.398594   15635 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.118749248s)
	I0918 19:39:20.398620   15635 crio.go:469] duration metric: took 2.11883848s to extract the tarball
	I0918 19:39:20.398627   15635 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0918 19:39:20.434881   15635 ssh_runner.go:195] Run: sudo crictl images --output json
	I0918 19:39:20.475778   15635 crio.go:514] all images are preloaded for cri-o runtime.
	I0918 19:39:20.475806   15635 cache_images.go:84] Images are preloaded, skipping loading
	I0918 19:39:20.475816   15635 kubeadm.go:934] updating node { 192.168.39.158 8443 v1.31.1 crio true true} ...
	I0918 19:39:20.475923   15635 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-815929 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.158
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-815929 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0918 19:39:20.475986   15635 ssh_runner.go:195] Run: crio config
	I0918 19:39:20.519952   15635 cni.go:84] Creating CNI manager for ""
	I0918 19:39:20.519977   15635 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0918 19:39:20.519986   15635 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0918 19:39:20.520005   15635 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.158 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-815929 NodeName:addons-815929 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.158"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.158 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0918 19:39:20.520160   15635 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.158
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-815929"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.158
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.158"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0918 19:39:20.520220   15635 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0918 19:39:20.530115   15635 binaries.go:44] Found k8s binaries, skipping transfer
	I0918 19:39:20.530193   15635 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0918 19:39:20.539110   15635 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0918 19:39:20.554855   15635 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0918 19:39:20.570703   15635 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0918 19:39:20.586047   15635 ssh_runner.go:195] Run: grep 192.168.39.158	control-plane.minikube.internal$ /etc/hosts
	I0918 19:39:20.589512   15635 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.158	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0918 19:39:20.600947   15635 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 19:39:20.714800   15635 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0918 19:39:20.731863   15635 certs.go:68] Setting up /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/addons-815929 for IP: 192.168.39.158
	I0918 19:39:20.731895   15635 certs.go:194] generating shared ca certs ...
	I0918 19:39:20.731916   15635 certs.go:226] acquiring lock for ca certs: {Name:mkf54e0e71ed884c4372f9d3d4cb308ea4600185 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 19:39:20.732126   15635 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.key
	I0918 19:39:20.903635   15635 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt ...
	I0918 19:39:20.903669   15635 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt: {Name:mk5ab9af521edad191e1df188ac5d1ec102df64d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 19:39:20.903847   15635 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19667-7671/.minikube/ca.key ...
	I0918 19:39:20.903857   15635 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/.minikube/ca.key: {Name:mk39487a69c8f19d5c09499199945d3411122eb5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 19:39:20.903924   15635 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.key
	I0918 19:39:21.222001   15635 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.crt ...
	I0918 19:39:21.222033   15635 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.crt: {Name:mk216a92c8e5c2cc109551a33de4057317853d8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 19:39:21.222192   15635 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.key ...
	I0918 19:39:21.222203   15635 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.key: {Name:mk5acd984a1bdd683ae18bb5abd36964f6b7c3c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 19:39:21.222274   15635 certs.go:256] generating profile certs ...
	I0918 19:39:21.222328   15635 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/addons-815929/client.key
	I0918 19:39:21.222353   15635 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/addons-815929/client.crt with IP's: []
	I0918 19:39:21.427586   15635 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/addons-815929/client.crt ...
	I0918 19:39:21.427617   15635 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/addons-815929/client.crt: {Name:mka7942c1a0a773e2c8b5c86112e9c1ca7fd5d08 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 19:39:21.427767   15635 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/addons-815929/client.key ...
	I0918 19:39:21.427782   15635 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/addons-815929/client.key: {Name:mk0bb80ad3a72e414322fa8381dc0c9ca95a04d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 19:39:21.427845   15635 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/addons-815929/apiserver.key.1207c200
	I0918 19:39:21.427862   15635 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/addons-815929/apiserver.crt.1207c200 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.158]
	I0918 19:39:21.547680   15635 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/addons-815929/apiserver.crt.1207c200 ...
	I0918 19:39:21.547712   15635 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/addons-815929/apiserver.crt.1207c200: {Name:mk8a17d4138be2d4aed650c4aadb0e9b8271625f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 19:39:21.547864   15635 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/addons-815929/apiserver.key.1207c200 ...
	I0918 19:39:21.547877   15635 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/addons-815929/apiserver.key.1207c200: {Name:mkca16a53905ed18fa3435c13c0144e57c60188b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 19:39:21.547942   15635 certs.go:381] copying /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/addons-815929/apiserver.crt.1207c200 -> /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/addons-815929/apiserver.crt
	I0918 19:39:21.548029   15635 certs.go:385] copying /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/addons-815929/apiserver.key.1207c200 -> /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/addons-815929/apiserver.key
	I0918 19:39:21.548077   15635 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/addons-815929/proxy-client.key
	I0918 19:39:21.548094   15635 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/addons-815929/proxy-client.crt with IP's: []
	I0918 19:39:21.746355   15635 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/addons-815929/proxy-client.crt ...
	I0918 19:39:21.746391   15635 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/addons-815929/proxy-client.crt: {Name:mk72f125b96fe55f295e7ce9376879b898e47f36 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 19:39:21.746557   15635 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/addons-815929/proxy-client.key ...
	I0918 19:39:21.746567   15635 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/addons-815929/proxy-client.key: {Name:mk6d5f5778449275cb7d437edd936b0c1235f081 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 19:39:21.746748   15635 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem (1675 bytes)
	I0918 19:39:21.746783   15635 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem (1082 bytes)
	I0918 19:39:21.746808   15635 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem (1123 bytes)
	I0918 19:39:21.746830   15635 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem (1679 bytes)
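(Aside: the apiserver profile certificate above is generated with the SAN list [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.158]. The sketch below shows, with Go's standard crypto/x509 package, how such a SAN list goes into a certificate template; it self-signs for brevity rather than signing with the minikube CA, so it is illustrative only and not minikube's actual cert code.)

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"math/big"
	"net"
	"time"
)

func main() {
	// Generate a key for the certificate; 2048 bits keeps the example fast.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	// SAN IPs copied from the apiserver cert log line above.
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"),
			net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.158"),
		},
	}
	// Self-signed here for brevity; the real flow signs with the profile CA key.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	_ = der // would be PEM-encoded and written to apiserver.crt
}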
	I0918 19:39:21.747359   15635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0918 19:39:21.774678   15635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0918 19:39:21.798559   15635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0918 19:39:21.824550   15635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0918 19:39:21.856972   15635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/addons-815929/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0918 19:39:21.881486   15635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/addons-815929/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0918 19:39:21.905485   15635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/addons-815929/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0918 19:39:21.929966   15635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/addons-815929/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0918 19:39:21.954634   15635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0918 19:39:21.979726   15635 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0918 19:39:21.996220   15635 ssh_runner.go:195] Run: openssl version
	I0918 19:39:22.002125   15635 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0918 19:39:22.012616   15635 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0918 19:39:22.016717   15635 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 18 19:39 /usr/share/ca-certificates/minikubeCA.pem
	I0918 19:39:22.016780   15635 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0918 19:39:22.022337   15635 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0918 19:39:22.032855   15635 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0918 19:39:22.039081   15635 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0918 19:39:22.039137   15635 kubeadm.go:392] StartCluster: {Name:addons-815929 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-815929 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.158 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 19:39:22.039203   15635 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0918 19:39:22.039252   15635 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0918 19:39:22.077128   15635 cri.go:89] found id: ""
	I0918 19:39:22.077203   15635 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0918 19:39:22.087133   15635 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0918 19:39:22.096945   15635 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0918 19:39:22.106483   15635 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0918 19:39:22.106519   15635 kubeadm.go:157] found existing configuration files:
	
	I0918 19:39:22.106562   15635 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0918 19:39:22.115601   15635 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0918 19:39:22.115658   15635 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0918 19:39:22.125000   15635 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0918 19:39:22.134204   15635 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0918 19:39:22.134259   15635 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0918 19:39:22.143745   15635 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0918 19:39:22.152804   15635 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0918 19:39:22.152866   15635 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0918 19:39:22.162802   15635 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0918 19:39:22.173020   15635 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0918 19:39:22.173087   15635 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
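
The block above is minikube's stale-kubeconfig cleanup: each kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint, and any file that is missing or points elsewhere is removed before kubeadm init runs. A rough bash sketch of that pattern, for illustration only (the real logic is Go code in minikube's kubeadm.go, not a shell script):

    # Illustrative sketch of the check-and-clean pattern shown in the log above.
    endpoint="https://control-plane.minikube.internal:8443"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      # grep exits non-zero if the file is absent or lacks the endpoint
      if ! sudo grep -q "$endpoint" "/etc/kubernetes/$f"; then
        sudo rm -f "/etc/kubernetes/$f"
      fi
    done
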
	I0918 19:39:22.184200   15635 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0918 19:39:22.239157   15635 kubeadm.go:310] W0918 19:39:22.219472     812 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0918 19:39:22.239864   15635 kubeadm.go:310] W0918 19:39:22.220484     812 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0918 19:39:22.375715   15635 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0918 19:39:32.745678   15635 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0918 19:39:32.745741   15635 kubeadm.go:310] [preflight] Running pre-flight checks
	I0918 19:39:32.745827   15635 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0918 19:39:32.745932   15635 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0918 19:39:32.746038   15635 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0918 19:39:32.746135   15635 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0918 19:39:32.747995   15635 out.go:235]   - Generating certificates and keys ...
	I0918 19:39:32.748120   15635 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0918 19:39:32.748185   15635 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0918 19:39:32.748309   15635 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0918 19:39:32.748397   15635 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0918 19:39:32.748486   15635 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0918 19:39:32.748581   15635 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0918 19:39:32.748667   15635 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0918 19:39:32.748784   15635 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-815929 localhost] and IPs [192.168.39.158 127.0.0.1 ::1]
	I0918 19:39:32.748865   15635 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0918 19:39:32.748977   15635 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-815929 localhost] and IPs [192.168.39.158 127.0.0.1 ::1]
	I0918 19:39:32.749034   15635 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0918 19:39:32.749100   15635 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0918 19:39:32.749149   15635 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0918 19:39:32.749202   15635 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0918 19:39:32.749248   15635 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0918 19:39:32.749300   15635 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0918 19:39:32.749346   15635 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0918 19:39:32.749404   15635 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0918 19:39:32.749451   15635 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0918 19:39:32.749533   15635 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0918 19:39:32.749608   15635 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0918 19:39:32.751199   15635 out.go:235]   - Booting up control plane ...
	I0918 19:39:32.751299   15635 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0918 19:39:32.751390   15635 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0918 19:39:32.751462   15635 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0918 19:39:32.751561   15635 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0918 19:39:32.751639   15635 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0918 19:39:32.751678   15635 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0918 19:39:32.751805   15635 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0918 19:39:32.751940   15635 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0918 19:39:32.751993   15635 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.248865ms
	I0918 19:39:32.752083   15635 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0918 19:39:32.752136   15635 kubeadm.go:310] [api-check] The API server is healthy after 5.5020976s
	I0918 19:39:32.752230   15635 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0918 19:39:32.752341   15635 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0918 19:39:32.752393   15635 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0918 19:39:32.752553   15635 kubeadm.go:310] [mark-control-plane] Marking the node addons-815929 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0918 19:39:32.752613   15635 kubeadm.go:310] [bootstrap-token] Using token: 67qfck.xhy2rt9vuaaqal6w
	I0918 19:39:32.755162   15635 out.go:235]   - Configuring RBAC rules ...
	I0918 19:39:32.755272   15635 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0918 19:39:32.755391   15635 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0918 19:39:32.755583   15635 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0918 19:39:32.755697   15635 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0918 19:39:32.755824   15635 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0918 19:39:32.755931   15635 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0918 19:39:32.756094   15635 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0918 19:39:32.756170   15635 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0918 19:39:32.756238   15635 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0918 19:39:32.756250   15635 kubeadm.go:310] 
	I0918 19:39:32.756306   15635 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0918 19:39:32.756314   15635 kubeadm.go:310] 
	I0918 19:39:32.756394   15635 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0918 19:39:32.756403   15635 kubeadm.go:310] 
	I0918 19:39:32.756429   15635 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0918 19:39:32.756479   15635 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0918 19:39:32.756523   15635 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0918 19:39:32.756530   15635 kubeadm.go:310] 
	I0918 19:39:32.756585   15635 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0918 19:39:32.756595   15635 kubeadm.go:310] 
	I0918 19:39:32.756638   15635 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0918 19:39:32.756643   15635 kubeadm.go:310] 
	I0918 19:39:32.756686   15635 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0918 19:39:32.756750   15635 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0918 19:39:32.756808   15635 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0918 19:39:32.756814   15635 kubeadm.go:310] 
	I0918 19:39:32.756887   15635 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0918 19:39:32.756954   15635 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0918 19:39:32.756960   15635 kubeadm.go:310] 
	I0918 19:39:32.757031   15635 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 67qfck.xhy2rt9vuaaqal6w \
	I0918 19:39:32.757120   15635 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:8ad46bf288e8cc2226f5a929a768e6c95cddecc97ffc4016be27ef0987b1e36f \
	I0918 19:39:32.757151   15635 kubeadm.go:310] 	--control-plane 
	I0918 19:39:32.757157   15635 kubeadm.go:310] 
	I0918 19:39:32.757248   15635 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0918 19:39:32.757257   15635 kubeadm.go:310] 
	I0918 19:39:32.757354   15635 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 67qfck.xhy2rt9vuaaqal6w \
	I0918 19:39:32.757490   15635 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:8ad46bf288e8cc2226f5a929a768e6c95cddecc97ffc4016be27ef0987b1e36f 
	I0918 19:39:32.757501   15635 cni.go:84] Creating CNI manager for ""
	I0918 19:39:32.757507   15635 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0918 19:39:32.760281   15635 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0918 19:39:32.761848   15635 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0918 19:39:32.772978   15635 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
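
The 496-byte file copied above is the bridge CNI config installed at the "Configuring bridge CNI" step. Its exact contents are not printed in the log; the following is a hypothetical minimal conflist of the same shape, shown only to illustrate what a bridge plugin configuration looks like (field values are assumptions, not the file minikube actually writes):

    # Hypothetical example only -- the real /etc/cni/net.d/1-k8s.conflist may differ.
    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        }
      ]
    }
    EOF
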
	I0918 19:39:32.796231   15635 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0918 19:39:32.796347   15635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 19:39:32.796347   15635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-815929 minikube.k8s.io/updated_at=2024_09_18T19_39_32_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=85073601a832bd4bbda5d11fa91feafff6ec6b91 minikube.k8s.io/name=addons-815929 minikube.k8s.io/primary=true
	I0918 19:39:32.810093   15635 ops.go:34] apiserver oom_adj: -16
	I0918 19:39:32.947600   15635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 19:39:33.448372   15635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 19:39:33.947877   15635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 19:39:34.447886   15635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 19:39:34.948598   15635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 19:39:35.448280   15635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 19:39:35.947854   15635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 19:39:36.447710   15635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 19:39:36.948512   15635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 19:39:37.028366   15635 kubeadm.go:1113] duration metric: took 4.232084306s to wait for elevateKubeSystemPrivileges
	I0918 19:39:37.028407   15635 kubeadm.go:394] duration metric: took 14.989273723s to StartCluster
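
The repeated "kubectl get sa default" runs above are minikube polling, roughly every 500ms, until the default service account exists after the minikube-rbac cluster-admin binding is created (the elevateKubeSystemPrivileges step timed at 4.23s). A bash sketch of that wait, assuming the same kubectl binary and kubeconfig paths that appear in the log:

    # Sketch of the poll-until-ready loop reflected by the repeated entries above.
    KUBECTL=/var/lib/minikube/binaries/v1.31.1/kubectl
    until sudo "$KUBECTL" get sa default --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5   # the log shows retries spaced about half a second apart
    done
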
	I0918 19:39:37.028429   15635 settings.go:142] acquiring lock: {Name:mk6ae95a69dcbe00c28846409e9f46945a88de2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 19:39:37.028570   15635 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19667-7671/kubeconfig
	I0918 19:39:37.028921   15635 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/kubeconfig: {Name:mk3a3eecadb0164dfdd5d3c4392b5b473a2a9bb0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 19:39:37.029140   15635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0918 19:39:37.029150   15635 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.158 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0918 19:39:37.029221   15635 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
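
The toEnable map above lists every addon minikube will reconcile for this profile; entries set to true are enabled in the steps that follow. The same toggles can be made manually with the minikube CLI against this profile, for example:

    # Equivalent manual toggles for entries in the map above (profile name taken from the log).
    minikube -p addons-815929 addons enable metrics-server
    minikube -p addons-815929 addons disable volcano
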
	I0918 19:39:37.029346   15635 config.go:182] Loaded profile config "addons-815929": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0918 19:39:37.029362   15635 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-815929"
	I0918 19:39:37.029377   15635 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-815929"
	I0918 19:39:37.029386   15635 addons.go:69] Setting helm-tiller=true in profile "addons-815929"
	I0918 19:39:37.029349   15635 addons.go:69] Setting yakd=true in profile "addons-815929"
	I0918 19:39:37.029407   15635 addons.go:234] Setting addon helm-tiller=true in "addons-815929"
	I0918 19:39:37.029413   15635 addons.go:234] Setting addon yakd=true in "addons-815929"
	I0918 19:39:37.029425   15635 addons.go:69] Setting volcano=true in profile "addons-815929"
	I0918 19:39:37.029450   15635 addons.go:69] Setting default-storageclass=true in profile "addons-815929"
	I0918 19:39:37.029476   15635 addons.go:69] Setting ingress-dns=true in profile "addons-815929"
	I0918 19:39:37.029490   15635 host.go:66] Checking if "addons-815929" exists ...
	I0918 19:39:37.029496   15635 addons.go:234] Setting addon ingress-dns=true in "addons-815929"
	I0918 19:39:37.029460   15635 addons.go:69] Setting ingress=true in profile "addons-815929"
	I0918 19:39:37.029523   15635 host.go:66] Checking if "addons-815929" exists ...
	I0918 19:39:37.029373   15635 addons.go:69] Setting inspektor-gadget=true in profile "addons-815929"
	I0918 19:39:37.029658   15635 addons.go:234] Setting addon inspektor-gadget=true in "addons-815929"
	I0918 19:39:37.029673   15635 host.go:66] Checking if "addons-815929" exists ...
	I0918 19:39:37.029524   15635 addons.go:234] Setting addon ingress=true in "addons-815929"
	I0918 19:39:37.029797   15635 host.go:66] Checking if "addons-815929" exists ...
	I0918 19:39:37.029443   15635 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-815929"
	I0918 19:39:37.029906   15635 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-815929"
	I0918 19:39:37.029440   15635 host.go:66] Checking if "addons-815929" exists ...
	I0918 19:39:37.029984   15635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 19:39:37.030010   15635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:37.030050   15635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 19:39:37.030053   15635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 19:39:37.030084   15635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:37.030095   15635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:37.029415   15635 host.go:66] Checking if "addons-815929" exists ...
	I0918 19:39:37.030352   15635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 19:39:37.030383   15635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 19:39:37.030388   15635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:37.030405   15635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:37.029357   15635 addons.go:69] Setting metrics-server=true in profile "addons-815929"
	I0918 19:39:37.030536   15635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 19:39:37.030543   15635 addons.go:234] Setting addon metrics-server=true in "addons-815929"
	I0918 19:39:37.029434   15635 addons.go:69] Setting gcp-auth=true in profile "addons-815929"
	I0918 19:39:37.030567   15635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:37.030570   15635 mustload.go:65] Loading cluster: addons-815929
	I0918 19:39:37.029447   15635 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-815929"
	I0918 19:39:37.030611   15635 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-815929"
	I0918 19:39:37.029452   15635 addons.go:69] Setting volumesnapshots=true in profile "addons-815929"
	I0918 19:39:37.030625   15635 addons.go:234] Setting addon volumesnapshots=true in "addons-815929"
	I0918 19:39:37.029456   15635 addons.go:234] Setting addon volcano=true in "addons-815929"
	I0918 19:39:37.029457   15635 addons.go:69] Setting registry=true in profile "addons-815929"
	I0918 19:39:37.030642   15635 addons.go:234] Setting addon registry=true in "addons-815929"
	I0918 19:39:37.030669   15635 host.go:66] Checking if "addons-815929" exists ...
	I0918 19:39:37.029458   15635 addons.go:69] Setting cloud-spanner=true in profile "addons-815929"
	I0918 19:39:37.030800   15635 config.go:182] Loaded profile config "addons-815929": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0918 19:39:37.030815   15635 addons.go:234] Setting addon cloud-spanner=true in "addons-815929"
	I0918 19:39:37.030841   15635 host.go:66] Checking if "addons-815929" exists ...
	I0918 19:39:37.031041   15635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 19:39:37.031067   15635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:37.031110   15635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 19:39:37.031114   15635 host.go:66] Checking if "addons-815929" exists ...
	I0918 19:39:37.031133   15635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:37.031187   15635 host.go:66] Checking if "addons-815929" exists ...
	I0918 19:39:37.031267   15635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 19:39:37.031290   15635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:37.029462   15635 addons.go:69] Setting storage-provisioner=true in profile "addons-815929"
	I0918 19:39:37.031351   15635 addons.go:234] Setting addon storage-provisioner=true in "addons-815929"
	I0918 19:39:37.031456   15635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 19:39:37.031479   15635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:37.031509   15635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 19:39:37.029483   15635 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-815929"
	I0918 19:39:37.031530   15635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:37.031597   15635 host.go:66] Checking if "addons-815929" exists ...
	I0918 19:39:37.031865   15635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 19:39:37.031880   15635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:37.031919   15635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 19:39:37.031942   15635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:37.032180   15635 host.go:66] Checking if "addons-815929" exists ...
	I0918 19:39:37.032334   15635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 19:39:37.032367   15635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:37.032458   15635 host.go:66] Checking if "addons-815929" exists ...
	I0918 19:39:37.040881   15635 out.go:177] * Verifying Kubernetes components...
	I0918 19:39:37.042576   15635 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 19:39:37.051516   15635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43999
	I0918 19:39:37.052168   15635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33723
	I0918 19:39:37.052235   15635 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:37.052173   15635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35173
	I0918 19:39:37.052393   15635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39509
	I0918 19:39:37.052668   15635 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:37.052961   15635 main.go:141] libmachine: Using API Version  1
	I0918 19:39:37.052978   15635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:37.053395   15635 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:37.053567   15635 main.go:141] libmachine: Using API Version  1
	I0918 19:39:37.053580   15635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:37.053833   15635 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:37.053907   15635 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:37.054034   15635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 19:39:37.054084   15635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:37.054251   15635 main.go:141] libmachine: Using API Version  1
	I0918 19:39:37.054272   15635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:37.054491   15635 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:37.054656   15635 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:37.055051   15635 main.go:141] libmachine: Using API Version  1
	I0918 19:39:37.055180   15635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:37.055565   15635 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:37.062646   15635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34865
	I0918 19:39:37.064595   15635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 19:39:37.064636   15635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:37.064659   15635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 19:39:37.064700   15635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:37.064716   15635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 19:39:37.064740   15635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:37.064784   15635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 19:39:37.064821   15635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:37.065051   15635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43139
	I0918 19:39:37.065527   15635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 19:39:37.065555   15635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:37.066116   15635 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:37.066219   15635 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:37.066752   15635 main.go:141] libmachine: Using API Version  1
	I0918 19:39:37.066769   15635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:37.067162   15635 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:37.067703   15635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 19:39:37.067726   15635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:37.069000   15635 main.go:141] libmachine: Using API Version  1
	I0918 19:39:37.069018   15635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:37.069473   15635 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:37.070080   15635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 19:39:37.070105   15635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:37.098916   15635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37831
	I0918 19:39:37.099493   15635 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:37.100084   15635 main.go:141] libmachine: Using API Version  1
	I0918 19:39:37.100108   15635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:37.100477   15635 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:37.100643   15635 main.go:141] libmachine: (addons-815929) Calling .GetState
	I0918 19:39:37.103211   15635 host.go:66] Checking if "addons-815929" exists ...
	I0918 19:39:37.103677   15635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 19:39:37.103724   15635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:37.106175   15635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42007
	I0918 19:39:37.106455   15635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38193
	I0918 19:39:37.106629   15635 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:37.106732   15635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44527
	I0918 19:39:37.107318   15635 main.go:141] libmachine: Using API Version  1
	I0918 19:39:37.107333   15635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:37.107356   15635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45109
	I0918 19:39:37.107737   15635 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:37.107821   15635 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:37.107875   15635 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:37.108413   15635 main.go:141] libmachine: Using API Version  1
	I0918 19:39:37.108435   15635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:37.108877   15635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 19:39:37.108909   15635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:37.109176   15635 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:37.109264   15635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38691
	I0918 19:39:37.109861   15635 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:37.109995   15635 main.go:141] libmachine: Using API Version  1
	I0918 19:39:37.110005   15635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:37.110065   15635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40721
	I0918 19:39:37.110320   15635 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:37.110484   15635 main.go:141] libmachine: (addons-815929) Calling .GetState
	I0918 19:39:37.110838   15635 main.go:141] libmachine: Using API Version  1
	I0918 19:39:37.110854   15635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:37.111189   15635 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:37.111701   15635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 19:39:37.111733   15635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:37.112042   15635 main.go:141] libmachine: Using API Version  1
	I0918 19:39:37.112058   15635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:37.112122   15635 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:37.112177   15635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36143
	I0918 19:39:37.112340   15635 main.go:141] libmachine: (addons-815929) Calling .DriverName
	I0918 19:39:37.112872   15635 main.go:141] libmachine: Using API Version  1
	I0918 19:39:37.112893   15635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:37.112958   15635 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:37.112994   15635 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:37.113426   15635 main.go:141] libmachine: Using API Version  1
	I0918 19:39:37.113442   15635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:37.113536   15635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 19:39:37.113555   15635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:37.113791   15635 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:37.113948   15635 main.go:141] libmachine: (addons-815929) Calling .GetState
	I0918 19:39:37.114523   15635 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:37.114567   15635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40673
	I0918 19:39:37.114766   15635 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0918 19:39:37.114880   15635 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:37.115093   15635 main.go:141] libmachine: (addons-815929) Calling .GetState
	I0918 19:39:37.115461   15635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 19:39:37.115486   15635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:37.115987   15635 main.go:141] libmachine: (addons-815929) Calling .DriverName
	I0918 19:39:37.116102   15635 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0918 19:39:37.116125   15635 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0918 19:39:37.116144   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHHostname
	I0918 19:39:37.116423   15635 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:37.116861   15635 main.go:141] libmachine: Using API Version  1
	I0918 19:39:37.116878   15635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:37.117587   15635 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:37.117675   15635 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0918 19:39:37.117765   15635 main.go:141] libmachine: (addons-815929) Calling .GetState
	I0918 19:39:37.118810   15635 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0918 19:39:37.118832   15635 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0918 19:39:37.118853   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHHostname
	I0918 19:39:37.119472   15635 main.go:141] libmachine: (addons-815929) Calling .DriverName
	I0918 19:39:37.120244   15635 main.go:141] libmachine: (addons-815929) Calling .DriverName
	I0918 19:39:37.122036   15635 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0918 19:39:37.122153   15635 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0918 19:39:37.122370   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:37.123115   15635 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0918 19:39:37.123133   15635 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0918 19:39:37.123160   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHHostname
	I0918 19:39:37.123192   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:37.123838   15635 main.go:141] libmachine: (addons-815929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b1:d6", ip: ""} in network mk-addons-815929: {Iface:virbr1 ExpiryTime:2024-09-18 20:39:08 +0000 UTC Type:0 Mac:52:54:00:11:b1:d6 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-815929 Clientid:01:52:54:00:11:b1:d6}
	I0918 19:39:37.123859   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined IP address 192.168.39.158 and MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:37.123881   15635 main.go:141] libmachine: (addons-815929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b1:d6", ip: ""} in network mk-addons-815929: {Iface:virbr1 ExpiryTime:2024-09-18 20:39:08 +0000 UTC Type:0 Mac:52:54:00:11:b1:d6 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-815929 Clientid:01:52:54:00:11:b1:d6}
	I0918 19:39:37.123894   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined IP address 192.168.39.158 and MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:37.124062   15635 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0918 19:39:37.124077   15635 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0918 19:39:37.124093   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHHostname
	I0918 19:39:37.124109   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHPort
	I0918 19:39:37.124224   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHPort
	I0918 19:39:37.124275   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHKeyPath
	I0918 19:39:37.124424   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHUsername
	I0918 19:39:37.124477   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHKeyPath
	I0918 19:39:37.124532   15635 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/addons-815929/id_rsa Username:docker}
	I0918 19:39:37.124835   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHUsername
	I0918 19:39:37.125242   15635 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/addons-815929/id_rsa Username:docker}
	I0918 19:39:37.128252   15635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46023
	I0918 19:39:37.128414   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:37.128663   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:37.128712   15635 main.go:141] libmachine: (addons-815929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b1:d6", ip: ""} in network mk-addons-815929: {Iface:virbr1 ExpiryTime:2024-09-18 20:39:08 +0000 UTC Type:0 Mac:52:54:00:11:b1:d6 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-815929 Clientid:01:52:54:00:11:b1:d6}
	I0918 19:39:37.128728   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined IP address 192.168.39.158 and MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:37.129043   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHPort
	I0918 19:39:37.129183   15635 main.go:141] libmachine: (addons-815929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b1:d6", ip: ""} in network mk-addons-815929: {Iface:virbr1 ExpiryTime:2024-09-18 20:39:08 +0000 UTC Type:0 Mac:52:54:00:11:b1:d6 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-815929 Clientid:01:52:54:00:11:b1:d6}
	I0918 19:39:37.129197   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined IP address 192.168.39.158 and MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:37.129373   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHKeyPath
	I0918 19:39:37.129430   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHPort
	I0918 19:39:37.129662   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHUsername
	I0918 19:39:37.129717   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHKeyPath
	I0918 19:39:37.130003   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHUsername
	I0918 19:39:37.130044   15635 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/addons-815929/id_rsa Username:docker}
	I0918 19:39:37.130291   15635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33081
	I0918 19:39:37.130581   15635 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:37.130635   15635 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/addons-815929/id_rsa Username:docker}
	I0918 19:39:37.131050   15635 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:37.131646   15635 main.go:141] libmachine: Using API Version  1
	I0918 19:39:37.131664   15635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:37.132051   15635 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:37.132594   15635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 19:39:37.132634   15635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:37.133414   15635 main.go:141] libmachine: Using API Version  1
	I0918 19:39:37.133432   15635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:37.133778   15635 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:37.134298   15635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 19:39:37.134332   15635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:37.134555   15635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42457
	I0918 19:39:37.140829   15635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39169
	I0918 19:39:37.140852   15635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38671
	I0918 19:39:37.141363   15635 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:37.141476   15635 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:37.142020   15635 main.go:141] libmachine: Using API Version  1
	I0918 19:39:37.142041   15635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:37.142402   15635 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:37.143109   15635 main.go:141] libmachine: (addons-815929) Calling .GetState
	I0918 19:39:37.143714   15635 main.go:141] libmachine: Using API Version  1
	I0918 19:39:37.143732   15635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:37.144237   15635 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:37.144935   15635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 19:39:37.144977   15635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:37.147981   15635 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-815929"
	I0918 19:39:37.148036   15635 host.go:66] Checking if "addons-815929" exists ...
	I0918 19:39:37.148428   15635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 19:39:37.148465   15635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:37.150809   15635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44217
	I0918 19:39:37.151218   15635 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:37.152360   15635 main.go:141] libmachine: Using API Version  1
	I0918 19:39:37.152379   15635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:37.152751   15635 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:37.152876   15635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36965
	I0918 19:39:37.153170   15635 main.go:141] libmachine: (addons-815929) Calling .GetState
	I0918 19:39:37.153972   15635 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:37.154591   15635 main.go:141] libmachine: Using API Version  1
	I0918 19:39:37.154608   15635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:37.155107   15635 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:37.155379   15635 main.go:141] libmachine: (addons-815929) Calling .GetState
	I0918 19:39:37.155626   15635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45955
	I0918 19:39:37.155835   15635 main.go:141] libmachine: (addons-815929) Calling .DriverName
	I0918 19:39:37.156440   15635 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:37.156559   15635 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:37.157000   15635 main.go:141] libmachine: Using API Version  1
	I0918 19:39:37.157022   15635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:37.157078   15635 main.go:141] libmachine: (addons-815929) Calling .DriverName
	I0918 19:39:37.157468   15635 main.go:141] libmachine: Using API Version  1
	I0918 19:39:37.157923   15635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:37.157782   15635 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:37.158172   15635 main.go:141] libmachine: (addons-815929) Calling .GetState
	I0918 19:39:37.158484   15635 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0918 19:39:37.158806   15635 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:37.159195   15635 main.go:141] libmachine: (addons-815929) Calling .GetState
	I0918 19:39:37.159405   15635 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0918 19:39:37.159842   15635 main.go:141] libmachine: (addons-815929) Calling .DriverName
	I0918 19:39:37.161182   15635 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 19:39:37.161249   15635 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0918 19:39:37.161287   15635 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0918 19:39:37.161304   15635 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0918 19:39:37.161324   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHHostname
	I0918 19:39:37.162704   15635 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0918 19:39:37.162728   15635 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0918 19:39:37.162748   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHHostname
	I0918 19:39:37.163160   15635 main.go:141] libmachine: (addons-815929) Calling .DriverName
	I0918 19:39:37.164031   15635 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0918 19:39:37.164902   15635 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0918 19:39:37.165184   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:37.165515   15635 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0918 19:39:37.165545   15635 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0918 19:39:37.165565   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHHostname
	I0918 19:39:37.166590   15635 main.go:141] libmachine: (addons-815929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b1:d6", ip: ""} in network mk-addons-815929: {Iface:virbr1 ExpiryTime:2024-09-18 20:39:08 +0000 UTC Type:0 Mac:52:54:00:11:b1:d6 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-815929 Clientid:01:52:54:00:11:b1:d6}
	I0918 19:39:37.166613   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:37.166620   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined IP address 192.168.39.158 and MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:37.166869   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHPort
	I0918 19:39:37.166933   15635 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0918 19:39:37.167076   15635 main.go:141] libmachine: (addons-815929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b1:d6", ip: ""} in network mk-addons-815929: {Iface:virbr1 ExpiryTime:2024-09-18 20:39:08 +0000 UTC Type:0 Mac:52:54:00:11:b1:d6 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-815929 Clientid:01:52:54:00:11:b1:d6}
	I0918 19:39:37.167093   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined IP address 192.168.39.158 and MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:37.167133   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHKeyPath
	I0918 19:39:37.167258   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHPort
	I0918 19:39:37.167299   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHUsername
	I0918 19:39:37.167409   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHKeyPath
	I0918 19:39:37.167455   15635 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/addons-815929/id_rsa Username:docker}
	I0918 19:39:37.167541   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHUsername
	I0918 19:39:37.167654   15635 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/addons-815929/id_rsa Username:docker}
	I0918 19:39:37.169351   15635 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0918 19:39:37.169830   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:37.169871   15635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37677
	I0918 19:39:37.170291   15635 main.go:141] libmachine: (addons-815929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b1:d6", ip: ""} in network mk-addons-815929: {Iface:virbr1 ExpiryTime:2024-09-18 20:39:08 +0000 UTC Type:0 Mac:52:54:00:11:b1:d6 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-815929 Clientid:01:52:54:00:11:b1:d6}
	I0918 19:39:37.170343   15635 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:37.170442   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined IP address 192.168.39.158 and MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:37.170594   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHPort
	I0918 19:39:37.170684   15635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43447
	I0918 19:39:37.170837   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHKeyPath
	I0918 19:39:37.170943   15635 main.go:141] libmachine: Using API Version  1
	I0918 19:39:37.170956   15635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:37.171006   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHUsername
	I0918 19:39:37.171021   15635 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:37.171174   15635 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/addons-815929/id_rsa Username:docker}
	I0918 19:39:37.171426   15635 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:37.172178   15635 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0918 19:39:37.172541   15635 main.go:141] libmachine: Using API Version  1
	I0918 19:39:37.172561   15635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:37.173066   15635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46769
	I0918 19:39:37.173090   15635 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:37.173137   15635 main.go:141] libmachine: (addons-815929) Calling .GetState
	I0918 19:39:37.173352   15635 main.go:141] libmachine: (addons-815929) Calling .GetState
	I0918 19:39:37.174570   15635 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0918 19:39:37.175288   15635 main.go:141] libmachine: (addons-815929) Calling .DriverName
	I0918 19:39:37.175665   15635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37609
	I0918 19:39:37.175894   15635 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:37.175993   15635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45511
	I0918 19:39:37.176139   15635 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:37.176458   15635 main.go:141] libmachine: Using API Version  1
	I0918 19:39:37.176473   15635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:37.176509   15635 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:37.176536   15635 main.go:141] libmachine: (addons-815929) Calling .DriverName
	I0918 19:39:37.176688   15635 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:37.176717   15635 main.go:141] libmachine: (addons-815929) Calling .Close
	I0918 19:39:37.176818   15635 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0918 19:39:37.176941   15635 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0918 19:39:37.178051   15635 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0918 19:39:37.178163   15635 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:37.178175   15635 main.go:141] libmachine: (addons-815929) DBG | Closing plugin on server side
	I0918 19:39:37.178206   15635 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:37.178214   15635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 19:39:37.178235   15635 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:37.178250   15635 main.go:141] libmachine: (addons-815929) Calling .Close
	I0918 19:39:37.178254   15635 main.go:141] libmachine: Using API Version  1
	I0918 19:39:37.178294   15635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:37.178261   15635 main.go:141] libmachine: Using API Version  1
	I0918 19:39:37.178333   15635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:37.178542   15635 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0918 19:39:37.178556   15635 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0918 19:39:37.178574   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHHostname
	I0918 19:39:37.178597   15635 main.go:141] libmachine: (addons-815929) Calling .GetState
	I0918 19:39:37.178616   15635 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:37.179193   15635 main.go:141] libmachine: (addons-815929) DBG | Closing plugin on server side
	I0918 19:39:37.179197   15635 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:37.179230   15635 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:37.179243   15635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 19:39:37.179280   15635 main.go:141] libmachine: (addons-815929) Calling .DriverName
	W0918 19:39:37.179328   15635 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0918 19:39:37.179639   15635 main.go:141] libmachine: (addons-815929) Calling .GetState
	I0918 19:39:37.181366   15635 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0918 19:39:37.181535   15635 addons.go:234] Setting addon default-storageclass=true in "addons-815929"
	I0918 19:39:37.181576   15635 host.go:66] Checking if "addons-815929" exists ...
	I0918 19:39:37.181669   15635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37473
	I0918 19:39:37.181924   15635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 19:39:37.181945   15635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:37.181961   15635 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:37.182145   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:37.182275   15635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33577
	I0918 19:39:37.182398   15635 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0918 19:39:37.182418   15635 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0918 19:39:37.182441   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHHostname
	I0918 19:39:37.182531   15635 main.go:141] libmachine: Using API Version  1
	I0918 19:39:37.182548   15635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:37.182977   15635 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:37.183061   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHPort
	I0918 19:39:37.183086   15635 main.go:141] libmachine: (addons-815929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b1:d6", ip: ""} in network mk-addons-815929: {Iface:virbr1 ExpiryTime:2024-09-18 20:39:08 +0000 UTC Type:0 Mac:52:54:00:11:b1:d6 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-815929 Clientid:01:52:54:00:11:b1:d6}
	I0918 19:39:37.183117   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined IP address 192.168.39.158 and MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:37.183231   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHKeyPath
	I0918 19:39:37.183392   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHUsername
	I0918 19:39:37.183461   15635 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:37.183556   15635 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/addons-815929/id_rsa Username:docker}
	I0918 19:39:37.184190   15635 main.go:141] libmachine: Using API Version  1
	I0918 19:39:37.184195   15635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 19:39:37.184205   15635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:37.184232   15635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:37.184619   15635 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:37.184791   15635 main.go:141] libmachine: (addons-815929) Calling .GetState
	I0918 19:39:37.185344   15635 main.go:141] libmachine: (addons-815929) Calling .DriverName
	I0918 19:39:37.186225   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:37.186672   15635 main.go:141] libmachine: (addons-815929) Calling .DriverName
	I0918 19:39:37.186971   15635 main.go:141] libmachine: (addons-815929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b1:d6", ip: ""} in network mk-addons-815929: {Iface:virbr1 ExpiryTime:2024-09-18 20:39:08 +0000 UTC Type:0 Mac:52:54:00:11:b1:d6 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-815929 Clientid:01:52:54:00:11:b1:d6}
	I0918 19:39:37.186997   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined IP address 192.168.39.158 and MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:37.187115   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHPort
	I0918 19:39:37.187255   15635 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0918 19:39:37.187310   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHKeyPath
	I0918 19:39:37.187453   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHUsername
	I0918 19:39:37.187632   15635 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/addons-815929/id_rsa Username:docker}
	I0918 19:39:37.189574   15635 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0918 19:39:37.189599   15635 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0918 19:39:37.189633   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHHostname
	I0918 19:39:37.189708   15635 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0918 19:39:37.191068   15635 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0918 19:39:37.191091   15635 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0918 19:39:37.191118   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHHostname
	I0918 19:39:37.193163   15635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37369
	I0918 19:39:37.193512   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:37.193809   15635 main.go:141] libmachine: (addons-815929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b1:d6", ip: ""} in network mk-addons-815929: {Iface:virbr1 ExpiryTime:2024-09-18 20:39:08 +0000 UTC Type:0 Mac:52:54:00:11:b1:d6 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-815929 Clientid:01:52:54:00:11:b1:d6}
	I0918 19:39:37.193886   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHPort
	I0918 19:39:37.194018   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined IP address 192.168.39.158 and MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:37.194053   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHKeyPath
	I0918 19:39:37.194201   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHUsername
	I0918 19:39:37.194373   15635 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/addons-815929/id_rsa Username:docker}
	I0918 19:39:37.195021   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:37.195342   15635 main.go:141] libmachine: (addons-815929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b1:d6", ip: ""} in network mk-addons-815929: {Iface:virbr1 ExpiryTime:2024-09-18 20:39:08 +0000 UTC Type:0 Mac:52:54:00:11:b1:d6 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-815929 Clientid:01:52:54:00:11:b1:d6}
	I0918 19:39:37.195382   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined IP address 192.168.39.158 and MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:37.195574   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHPort
	I0918 19:39:37.195743   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHKeyPath
	I0918 19:39:37.195909   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHUsername
	I0918 19:39:37.195982   15635 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:37.196141   15635 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/addons-815929/id_rsa Username:docker}
	I0918 19:39:37.196584   15635 main.go:141] libmachine: Using API Version  1
	I0918 19:39:37.196605   15635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:37.197020   15635 main.go:141] libmachine: () Calling .GetMachineName
	W0918 19:39:37.197204   15635 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:46722->192.168.39.158:22: read: connection reset by peer
	I0918 19:39:37.197234   15635 retry.go:31] will retry after 174.790635ms: ssh: handshake failed: read tcp 192.168.39.1:46722->192.168.39.158:22: read: connection reset by peer
	I0918 19:39:37.197279   15635 main.go:141] libmachine: (addons-815929) Calling .GetState
	I0918 19:39:37.198708   15635 main.go:141] libmachine: (addons-815929) Calling .DriverName
	I0918 19:39:37.200881   15635 out.go:177]   - Using image docker.io/registry:2.8.3
	I0918 19:39:37.202072   15635 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0918 19:39:37.203833   15635 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0918 19:39:37.203851   15635 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0918 19:39:37.203875   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHHostname
	I0918 19:39:37.205608   15635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35855
	I0918 19:39:37.206094   15635 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:37.206615   15635 main.go:141] libmachine: Using API Version  1
	I0918 19:39:37.206633   15635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:37.206776   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:37.206913   15635 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:37.207083   15635 main.go:141] libmachine: (addons-815929) Calling .GetState
	I0918 19:39:37.207141   15635 main.go:141] libmachine: (addons-815929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b1:d6", ip: ""} in network mk-addons-815929: {Iface:virbr1 ExpiryTime:2024-09-18 20:39:08 +0000 UTC Type:0 Mac:52:54:00:11:b1:d6 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-815929 Clientid:01:52:54:00:11:b1:d6}
	I0918 19:39:37.207157   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined IP address 192.168.39.158 and MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:37.207364   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHPort
	I0918 19:39:37.207561   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHKeyPath
	I0918 19:39:37.207717   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHUsername
	I0918 19:39:37.207864   15635 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/addons-815929/id_rsa Username:docker}
	I0918 19:39:37.208374   15635 main.go:141] libmachine: (addons-815929) Calling .DriverName
	I0918 19:39:37.209305   15635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33097
	I0918 19:39:37.209766   15635 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:37.210145   15635 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0918 19:39:37.210290   15635 main.go:141] libmachine: Using API Version  1
	I0918 19:39:37.210312   15635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:37.210763   15635 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:37.211277   15635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 19:39:37.211316   15635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:37.212277   15635 out.go:177]   - Using image docker.io/busybox:stable
	I0918 19:39:37.213533   15635 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0918 19:39:37.213560   15635 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0918 19:39:37.213578   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHHostname
	I0918 19:39:37.216384   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:37.216720   15635 main.go:141] libmachine: (addons-815929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b1:d6", ip: ""} in network mk-addons-815929: {Iface:virbr1 ExpiryTime:2024-09-18 20:39:08 +0000 UTC Type:0 Mac:52:54:00:11:b1:d6 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-815929 Clientid:01:52:54:00:11:b1:d6}
	I0918 19:39:37.216738   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined IP address 192.168.39.158 and MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:37.216779   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHPort
	I0918 19:39:37.216952   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHKeyPath
	I0918 19:39:37.217072   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHUsername
	I0918 19:39:37.217179   15635 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/addons-815929/id_rsa Username:docker}
	I0918 19:39:37.228878   15635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40725
	I0918 19:39:37.229369   15635 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:37.230046   15635 main.go:141] libmachine: Using API Version  1
	I0918 19:39:37.230074   15635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:37.230431   15635 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:37.230684   15635 main.go:141] libmachine: (addons-815929) Calling .GetState
	I0918 19:39:37.232228   15635 main.go:141] libmachine: (addons-815929) Calling .DriverName
	I0918 19:39:37.232509   15635 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0918 19:39:37.232528   15635 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0918 19:39:37.232547   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHHostname
	I0918 19:39:37.235855   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:37.236365   15635 main.go:141] libmachine: (addons-815929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b1:d6", ip: ""} in network mk-addons-815929: {Iface:virbr1 ExpiryTime:2024-09-18 20:39:08 +0000 UTC Type:0 Mac:52:54:00:11:b1:d6 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-815929 Clientid:01:52:54:00:11:b1:d6}
	I0918 19:39:37.236401   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined IP address 192.168.39.158 and MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:37.236588   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHPort
	I0918 19:39:37.236786   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHKeyPath
	I0918 19:39:37.236960   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHUsername
	I0918 19:39:37.237110   15635 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/addons-815929/id_rsa Username:docker}
	W0918 19:39:37.240137   15635 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:46756->192.168.39.158:22: read: connection reset by peer
	I0918 19:39:37.240169   15635 retry.go:31] will retry after 192.441386ms: ssh: handshake failed: read tcp 192.168.39.1:46756->192.168.39.158:22: read: connection reset by peer
	I0918 19:39:37.520783   15635 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0918 19:39:37.520783   15635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0918 19:39:37.529703   15635 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0918 19:39:37.533235   15635 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0918 19:39:37.578015   15635 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0918 19:39:37.578038   15635 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0918 19:39:37.582283   15635 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0918 19:39:37.582310   15635 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0918 19:39:37.733970   15635 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0918 19:39:37.770032   15635 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0918 19:39:37.770057   15635 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0918 19:39:37.814514   15635 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0918 19:39:37.814546   15635 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0918 19:39:37.816619   15635 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0918 19:39:37.816636   15635 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0918 19:39:37.817489   15635 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0918 19:39:37.817508   15635 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0918 19:39:37.828765   15635 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0918 19:39:37.831161   15635 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0918 19:39:37.841293   15635 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0918 19:39:37.841341   15635 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0918 19:39:37.866270   15635 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0918 19:39:37.866300   15635 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0918 19:39:37.866300   15635 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0918 19:39:37.866320   15635 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0918 19:39:37.873023   15635 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0918 19:39:37.957968   15635 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0918 19:39:37.960217   15635 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0918 19:39:37.960242   15635 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0918 19:39:37.978264   15635 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0918 19:39:37.978296   15635 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0918 19:39:37.993929   15635 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0918 19:39:37.993959   15635 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0918 19:39:37.994429   15635 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0918 19:39:37.994444   15635 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0918 19:39:38.017387   15635 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0918 19:39:38.017418   15635 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0918 19:39:38.088277   15635 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0918 19:39:38.088303   15635 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0918 19:39:38.131818   15635 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0918 19:39:38.131848   15635 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0918 19:39:38.203126   15635 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0918 19:39:38.203154   15635 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0918 19:39:38.226489   15635 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0918 19:39:38.226526   15635 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0918 19:39:38.250324   15635 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0918 19:39:38.273276   15635 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0918 19:39:38.283323   15635 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0918 19:39:38.332008   15635 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0918 19:39:38.332058   15635 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0918 19:39:38.385633   15635 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0918 19:39:38.385664   15635 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0918 19:39:38.469197   15635 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0918 19:39:38.469230   15635 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0918 19:39:38.472759   15635 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0918 19:39:38.472785   15635 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0918 19:39:38.628857   15635 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0918 19:39:38.628886   15635 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0918 19:39:38.637712   15635 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0918 19:39:38.637741   15635 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0918 19:39:38.656333   15635 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0918 19:39:38.656366   15635 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0918 19:39:38.714144   15635 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0918 19:39:38.714168   15635 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0918 19:39:38.932471   15635 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0918 19:39:38.932511   15635 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0918 19:39:38.964592   15635 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0918 19:39:38.971990   15635 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0918 19:39:39.017042   15635 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0918 19:39:39.017073   15635 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0918 19:39:39.160724   15635 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0918 19:39:39.160756   15635 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0918 19:39:39.194791   15635 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0918 19:39:39.194821   15635 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0918 19:39:39.392439   15635 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0918 19:39:39.392461   15635 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0918 19:39:39.435551   15635 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0918 19:39:39.558272   15635 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0918 19:39:39.558296   15635 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0918 19:39:39.836142   15635 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0918 19:39:39.836167   15635 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0918 19:39:39.990546   15635 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.469638539s)
	I0918 19:39:39.990571   15635 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.469741333s)
	I0918 19:39:39.990600   15635 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
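	The long /bin/bash pipeline completed above edits the coredns ConfigMap in place: it adds a log directive ahead of the errors line and inserts a hosts block ahead of the existing "forward . /etc/resolv.conf" entry so that host.minikube.internal resolves to the host-side gateway (192.168.39.1 in this run). Reconstructed from the sed expressions alone, not read back from the cluster, the injected block looks roughly like:

	        hosts {
	           192.168.39.1 host.minikube.internal
	           fallthrough
	        }
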
	I0918 19:39:39.990604   15635 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.460853163s)
	I0918 19:39:39.990694   15635 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:39.990714   15635 main.go:141] libmachine: (addons-815929) Calling .Close
	I0918 19:39:39.990994   15635 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:39.991007   15635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 19:39:39.991015   15635 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:39.991022   15635 main.go:141] libmachine: (addons-815929) Calling .Close
	I0918 19:39:39.991348   15635 main.go:141] libmachine: (addons-815929) DBG | Closing plugin on server side
	I0918 19:39:39.991365   15635 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:39.991372   15635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 19:39:39.991593   15635 node_ready.go:35] waiting up to 6m0s for node "addons-815929" to be "Ready" ...
	I0918 19:39:40.004733   15635 node_ready.go:49] node "addons-815929" has status "Ready":"True"
	I0918 19:39:40.004757   15635 node_ready.go:38] duration metric: took 13.145596ms for node "addons-815929" to be "Ready" ...
	I0918 19:39:40.004768   15635 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0918 19:39:40.018964   15635 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-lr452" in "kube-system" namespace to be "Ready" ...
	I0918 19:39:40.314801   15635 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0918 19:39:40.509787   15635 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-815929" context rescaled to 1 replicas
	I0918 19:39:41.035157   15635 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.501885691s)
	I0918 19:39:41.035216   15635 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:41.035231   15635 main.go:141] libmachine: (addons-815929) Calling .Close
	I0918 19:39:41.035566   15635 main.go:141] libmachine: (addons-815929) DBG | Closing plugin on server side
	I0918 19:39:41.035605   15635 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:41.035619   15635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 19:39:41.035631   15635 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:41.035643   15635 main.go:141] libmachine: (addons-815929) Calling .Close
	I0918 19:39:41.035883   15635 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:41.035902   15635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 19:39:41.035907   15635 main.go:141] libmachine: (addons-815929) DBG | Closing plugin on server side
	I0918 19:39:42.108696   15635 pod_ready.go:103] pod "coredns-7c65d6cfc9-lr452" in "kube-system" namespace has status "Ready":"False"
	I0918 19:39:43.536656   15635 pod_ready.go:93] pod "coredns-7c65d6cfc9-lr452" in "kube-system" namespace has status "Ready":"True"
	I0918 19:39:43.536690   15635 pod_ready.go:82] duration metric: took 3.517697272s for pod "coredns-7c65d6cfc9-lr452" in "kube-system" namespace to be "Ready" ...
	I0918 19:39:43.536705   15635 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-p6827" in "kube-system" namespace to be "Ready" ...
	I0918 19:39:44.249408   15635 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0918 19:39:44.249450   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHHostname
	I0918 19:39:44.252925   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:44.253362   15635 main.go:141] libmachine: (addons-815929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b1:d6", ip: ""} in network mk-addons-815929: {Iface:virbr1 ExpiryTime:2024-09-18 20:39:08 +0000 UTC Type:0 Mac:52:54:00:11:b1:d6 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-815929 Clientid:01:52:54:00:11:b1:d6}
	I0918 19:39:44.253399   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined IP address 192.168.39.158 and MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:44.253700   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHPort
	I0918 19:39:44.253927   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHKeyPath
	I0918 19:39:44.254121   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHUsername
	I0918 19:39:44.254291   15635 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/addons-815929/id_rsa Username:docker}
	I0918 19:39:44.688107   15635 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0918 19:39:44.805145   15635 addons.go:234] Setting addon gcp-auth=true in "addons-815929"
	I0918 19:39:44.805206   15635 host.go:66] Checking if "addons-815929" exists ...
	I0918 19:39:44.805565   15635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 19:39:44.805610   15635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:44.822607   15635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38481
	I0918 19:39:44.823258   15635 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:44.823818   15635 main.go:141] libmachine: Using API Version  1
	I0918 19:39:44.823842   15635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:44.824190   15635 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:44.824669   15635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 19:39:44.824704   15635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:44.840858   15635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46495
	I0918 19:39:44.841389   15635 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:44.841928   15635 main.go:141] libmachine: Using API Version  1
	I0918 19:39:44.841957   15635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:44.842262   15635 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:44.842449   15635 main.go:141] libmachine: (addons-815929) Calling .GetState
	I0918 19:39:44.844152   15635 main.go:141] libmachine: (addons-815929) Calling .DriverName
	I0918 19:39:44.844416   15635 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0918 19:39:44.844445   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHHostname
	I0918 19:39:44.847034   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:44.847375   15635 main.go:141] libmachine: (addons-815929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b1:d6", ip: ""} in network mk-addons-815929: {Iface:virbr1 ExpiryTime:2024-09-18 20:39:08 +0000 UTC Type:0 Mac:52:54:00:11:b1:d6 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-815929 Clientid:01:52:54:00:11:b1:d6}
	I0918 19:39:44.847408   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined IP address 192.168.39.158 and MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:44.847555   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHPort
	I0918 19:39:44.847716   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHKeyPath
	I0918 19:39:44.847869   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHUsername
	I0918 19:39:44.847967   15635 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/addons-815929/id_rsa Username:docker}
	I0918 19:39:45.554393   15635 pod_ready.go:103] pod "coredns-7c65d6cfc9-p6827" in "kube-system" namespace has status "Ready":"False"
	I0918 19:39:46.370997   15635 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.636984505s)
	I0918 19:39:46.371041   15635 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:46.371051   15635 main.go:141] libmachine: (addons-815929) Calling .Close
	I0918 19:39:46.371140   15635 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (8.542336234s)
	I0918 19:39:46.371200   15635 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:46.371213   15635 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.540023182s)
	I0918 19:39:46.371243   15635 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:46.371261   15635 main.go:141] libmachine: (addons-815929) Calling .Close
	I0918 19:39:46.371218   15635 main.go:141] libmachine: (addons-815929) Calling .Close
	I0918 19:39:46.371285   15635 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.498241115s)
	I0918 19:39:46.371313   15635 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:46.371329   15635 main.go:141] libmachine: (addons-815929) Calling .Close
	I0918 19:39:46.371344   15635 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.413335296s)
	I0918 19:39:46.371375   15635 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:46.371385   15635 main.go:141] libmachine: (addons-815929) Calling .Close
	I0918 19:39:46.371505   15635 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.121141782s)
	I0918 19:39:46.371534   15635 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:46.371545   15635 main.go:141] libmachine: (addons-815929) Calling .Close
	I0918 19:39:46.371623   15635 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.098312186s)
	I0918 19:39:46.371639   15635 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:46.371649   15635 main.go:141] libmachine: (addons-815929) Calling .Close
	I0918 19:39:46.371723   15635 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (8.088369116s)
	I0918 19:39:46.371745   15635 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:46.371754   15635 main.go:141] libmachine: (addons-815929) Calling .Close
	I0918 19:39:46.371830   15635 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.407202442s)
	I0918 19:39:46.371846   15635 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:46.371855   15635 main.go:141] libmachine: (addons-815929) Calling .Close
	I0918 19:39:46.371988   15635 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.399943132s)
	W0918 19:39:46.372035   15635 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0918 19:39:46.372077   15635 retry.go:31] will retry after 252.9912ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
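	This apply fails with the usual CRD ordering race: the VolumeSnapshotClass object is submitted in the same kubectl invocation that creates the snapshot.storage.k8s.io CRDs, and the API server has not yet established the new kinds when the class arrives, hence "no matches for kind VolumeSnapshotClass". The addon manager copes by retrying, and later by re-running the apply with --force (see 19:39:46.625 below). A manual equivalent, sketched with the same manifest paths that appear in this log and with plain kubectl standing in for the versioned binary path used here, would be to create the CRDs first, wait for them to report Established, then apply the class:

	sudo KUBECONFIG=/var/lib/minikube/kubeconfig kubectl apply \
	  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
	  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
	  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig kubectl wait \
	  --for=condition=Established --timeout=60s crd/volumesnapshotclasses.snapshot.storage.k8s.io
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig kubectl apply \
	  -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
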
	I0918 19:39:46.372176   15635 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.936592442s)
	I0918 19:39:46.372198   15635 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:46.372207   15635 main.go:141] libmachine: (addons-815929) Calling .Close
	I0918 19:39:46.374316   15635 main.go:141] libmachine: (addons-815929) DBG | Closing plugin on server side
	I0918 19:39:46.374333   15635 main.go:141] libmachine: (addons-815929) DBG | Closing plugin on server side
	I0918 19:39:46.374334   15635 main.go:141] libmachine: (addons-815929) DBG | Closing plugin on server side
	I0918 19:39:46.374348   15635 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:46.374354   15635 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:46.374357   15635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 19:39:46.374361   15635 main.go:141] libmachine: (addons-815929) DBG | Closing plugin on server side
	I0918 19:39:46.374362   15635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 19:39:46.374370   15635 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:46.374376   15635 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:46.374379   15635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 19:39:46.374385   15635 main.go:141] libmachine: (addons-815929) Calling .Close
	I0918 19:39:46.374389   15635 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:46.374396   15635 main.go:141] libmachine: (addons-815929) Calling .Close
	I0918 19:39:46.374475   15635 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:46.374483   15635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 19:39:46.374492   15635 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:46.374499   15635 main.go:141] libmachine: (addons-815929) Calling .Close
	I0918 19:39:46.374549   15635 main.go:141] libmachine: (addons-815929) DBG | Closing plugin on server side
	I0918 19:39:46.374573   15635 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:46.374582   15635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 19:39:46.374590   15635 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:46.374596   15635 main.go:141] libmachine: (addons-815929) Calling .Close
	I0918 19:39:46.374639   15635 main.go:141] libmachine: (addons-815929) DBG | Closing plugin on server side
	I0918 19:39:46.374657   15635 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:46.374663   15635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 19:39:46.374670   15635 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:46.374676   15635 main.go:141] libmachine: (addons-815929) Calling .Close
	I0918 19:39:46.374713   15635 main.go:141] libmachine: (addons-815929) DBG | Closing plugin on server side
	I0918 19:39:46.374846   15635 main.go:141] libmachine: (addons-815929) DBG | Closing plugin on server side
	I0918 19:39:46.374866   15635 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:46.374873   15635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 19:39:46.374878   15635 main.go:141] libmachine: (addons-815929) DBG | Closing plugin on server side
	I0918 19:39:46.374883   15635 addons.go:475] Verifying addon registry=true in "addons-815929"
	I0918 19:39:46.374923   15635 main.go:141] libmachine: (addons-815929) DBG | Closing plugin on server side
	I0918 19:39:46.374366   15635 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:46.374938   15635 main.go:141] libmachine: (addons-815929) Calling .Close
	I0918 19:39:46.375214   15635 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:46.375231   15635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 19:39:46.375240   15635 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:46.375247   15635 main.go:141] libmachine: (addons-815929) Calling .Close
	I0918 19:39:46.375333   15635 main.go:141] libmachine: (addons-815929) DBG | Closing plugin on server side
	I0918 19:39:46.375384   15635 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:46.375393   15635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 19:39:46.375626   15635 main.go:141] libmachine: (addons-815929) DBG | Closing plugin on server side
	I0918 19:39:46.375651   15635 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:46.375660   15635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 19:39:46.375836   15635 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:46.375848   15635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 19:39:46.375961   15635 main.go:141] libmachine: (addons-815929) DBG | Closing plugin on server side
	I0918 19:39:46.375994   15635 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:46.376000   15635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 19:39:46.376222   15635 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:46.376235   15635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 19:39:46.376264   15635 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:46.376278   15635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 19:39:46.376287   15635 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:46.376294   15635 main.go:141] libmachine: (addons-815929) Calling .Close
	I0918 19:39:46.376316   15635 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:46.376350   15635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 19:39:46.376358   15635 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:46.376365   15635 main.go:141] libmachine: (addons-815929) Calling .Close
	I0918 19:39:46.376224   15635 main.go:141] libmachine: (addons-815929) DBG | Closing plugin on server side
	I0918 19:39:46.376564   15635 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:46.376572   15635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 19:39:46.376579   15635 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:46.376587   15635 main.go:141] libmachine: (addons-815929) Calling .Close
	I0918 19:39:46.376681   15635 main.go:141] libmachine: (addons-815929) DBG | Closing plugin on server side
	I0918 19:39:46.376710   15635 main.go:141] libmachine: (addons-815929) DBG | Closing plugin on server side
	I0918 19:39:46.376716   15635 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:46.376725   15635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 19:39:46.376730   15635 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:46.376737   15635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 19:39:46.376736   15635 addons.go:475] Verifying addon ingress=true in "addons-815929"
	I0918 19:39:46.376903   15635 main.go:141] libmachine: (addons-815929) DBG | Closing plugin on server side
	I0918 19:39:46.376938   15635 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:46.376950   15635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 19:39:46.377421   15635 main.go:141] libmachine: (addons-815929) DBG | Closing plugin on server side
	I0918 19:39:46.377456   15635 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:46.377466   15635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 19:39:46.377475   15635 addons.go:475] Verifying addon metrics-server=true in "addons-815929"
	I0918 19:39:46.379309   15635 out.go:177] * Verifying registry addon...
	I0918 19:39:46.380107   15635 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-815929 service yakd-dashboard -n yakd-dashboard
	
	I0918 19:39:46.380116   15635 out.go:177] * Verifying ingress addon...
	I0918 19:39:46.381716   15635 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0918 19:39:46.382703   15635 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0918 19:39:46.442984   15635 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0918 19:39:46.443008   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:46.444566   15635 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0918 19:39:46.444589   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
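	(The kapi.go lines above show minikube polling pods by label selector until each reports Ready. As a rough illustration only, with hypothetical package and function names rather than minikube's actual kapi helper, a client-go check of that shape might look like the following sketch.)

	package podwait // hypothetical package name, for illustration only

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// podsReady lists pods matching selector in ns and reports whether every
	// matched pod has the Ready condition set to True. A caller would poll
	// this until it returns true or a timeout expires.
	func podsReady(ctx context.Context, c kubernetes.Interface, ns, selector string) (bool, error) {
		pods, err := c.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return false, err
		}
		for _, p := range pods.Items {
			ready := false
			for _, cond := range p.Status.Conditions {
				if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
					ready = true
				}
			}
			if !ready {
				fmt.Printf("pod %q not ready yet (phase %s)\n", p.Name, p.Status.Phase)
				return false, nil
			}
		}
		return len(pods.Items) > 0, nil
	}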
	I0918 19:39:46.448430   15635 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:46.448452   15635 main.go:141] libmachine: (addons-815929) Calling .Close
	I0918 19:39:46.448784   15635 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:46.448805   15635 main.go:141] libmachine: Making call to close connection to plugin binary
	W0918 19:39:46.448896   15635 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
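	(The warning above is a standard optimistic-concurrency conflict: the local-path StorageClass changed between minikube's read and its update, so the API server rejected the stale write. A minimal sketch of how such an update could be retried with client-go's conflict helper, assuming a plain annotation update is acceptable; this is a hypothetical helper, not the storage-provisioner-rancher callback itself.)

	package addons // hypothetical package name, for illustration only

	import (
		"context"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/util/retry"
	)

	// markDefaultStorageClass re-reads the StorageClass on every attempt so the
	// update always carries the latest resourceVersion, and retries automatically
	// on "the object has been modified" conflicts.
	func markDefaultStorageClass(ctx context.Context, c kubernetes.Interface, name string) error {
		return retry.RetryOnConflict(retry.DefaultRetry, func() error {
			sc, err := c.StorageV1().StorageClasses().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return err
			}
			if sc.Annotations == nil {
				sc.Annotations = map[string]string{}
			}
			sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
			_, err = c.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
			return err
		})
	}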
	I0918 19:39:46.455634   15635 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:46.455659   15635 main.go:141] libmachine: (addons-815929) Calling .Close
	I0918 19:39:46.455916   15635 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:46.455934   15635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 19:39:46.625453   15635 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0918 19:39:46.891556   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:46.891905   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:47.249900   15635 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.935039531s)
	I0918 19:39:47.249959   15635 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:47.249978   15635 main.go:141] libmachine: (addons-815929) Calling .Close
	I0918 19:39:47.249996   15635 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.405553986s)
	I0918 19:39:47.250263   15635 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:47.250285   15635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 19:39:47.250291   15635 main.go:141] libmachine: (addons-815929) DBG | Closing plugin on server side
	I0918 19:39:47.250295   15635 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:47.250310   15635 main.go:141] libmachine: (addons-815929) Calling .Close
	I0918 19:39:47.250600   15635 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:47.250616   15635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 19:39:47.250626   15635 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-815929"
	I0918 19:39:47.250628   15635 main.go:141] libmachine: (addons-815929) DBG | Closing plugin on server side
	I0918 19:39:47.252725   15635 out.go:177] * Verifying csi-hostpath-driver addon...
	I0918 19:39:47.252729   15635 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0918 19:39:47.255488   15635 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0918 19:39:47.256476   15635 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0918 19:39:47.257160   15635 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0918 19:39:47.257179   15635 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0918 19:39:47.266081   15635 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0918 19:39:47.266118   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:47.352351   15635 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0918 19:39:47.352379   15635 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0918 19:39:47.382654   15635 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0918 19:39:47.382683   15635 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0918 19:39:47.400310   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:47.400779   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:47.466002   15635 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0918 19:39:47.762194   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:47.887466   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:47.888155   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:48.042818   15635 pod_ready.go:103] pod "coredns-7c65d6cfc9-p6827" in "kube-system" namespace has status "Ready":"False"
	I0918 19:39:48.150956   15635 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.525434984s)
	I0918 19:39:48.151014   15635 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:48.151031   15635 main.go:141] libmachine: (addons-815929) Calling .Close
	I0918 19:39:48.151273   15635 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:48.151328   15635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 19:39:48.151343   15635 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:48.151350   15635 main.go:141] libmachine: (addons-815929) Calling .Close
	I0918 19:39:48.151297   15635 main.go:141] libmachine: (addons-815929) DBG | Closing plugin on server side
	I0918 19:39:48.151627   15635 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:48.151645   15635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 19:39:48.262278   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:48.386162   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:48.388035   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:48.772137   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:48.928973   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:48.931748   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:49.012611   15635 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.546559646s)
	I0918 19:39:49.012680   15635 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:49.012710   15635 main.go:141] libmachine: (addons-815929) Calling .Close
	I0918 19:39:49.013006   15635 main.go:141] libmachine: (addons-815929) DBG | Closing plugin on server side
	I0918 19:39:49.013065   15635 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:49.013099   15635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 19:39:49.013113   15635 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:49.013124   15635 main.go:141] libmachine: (addons-815929) Calling .Close
	I0918 19:39:49.013450   15635 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:49.013486   15635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 19:39:49.015257   15635 addons.go:475] Verifying addon gcp-auth=true in "addons-815929"
	I0918 19:39:49.017437   15635 out.go:177] * Verifying gcp-auth addon...
	I0918 19:39:49.019355   15635 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0918 19:39:49.079848   15635 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0918 19:39:49.079876   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:39:49.263588   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:49.386599   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:49.386930   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:49.524123   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:39:49.546553   15635 pod_ready.go:98] pod "coredns-7c65d6cfc9-p6827" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-18 19:39:49 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-18 19:39:37 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-18 19:39:37 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-18 19:39:37 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-18 19:39:37 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.158 HostIPs:[{IP:192.168.39.158}] PodIP:10.244.0.3 PodIPs:[{IP:10.244.0.3}] StartTime:2024-09-18 19:39:37 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-18 19:39:42 +0000 UTC,FinishedAt:2024-09-18 19:39:48 +0000 UTC,ContainerID:cri-o://d30d881ad2efefa3975ae112f0b3f4ed34e2c3e9169e1bab215cb4cba955008e,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://d30d881ad2efefa3975ae112f0b3f4ed34e2c3e9169e1bab215cb4cba955008e Started:0xc001efb2c0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc0022a0c40} {Name:kube-api-access-cpn6n MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc0022a0c50}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0918 19:39:49.546588   15635 pod_ready.go:82] duration metric: took 6.009874416s for pod "coredns-7c65d6cfc9-p6827" in "kube-system" namespace to be "Ready" ...
	E0918 19:39:49.546603   15635 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-7c65d6cfc9-p6827" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-18 19:39:49 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-18 19:39:37 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-18 19:39:37 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-18 19:39:37 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-18 19:39:37 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.158 HostIPs:[{IP:192.168.39.158}] PodIP:10.244.0.3 PodIPs:[{IP:10.244.0.3}] StartTime:2024-09-18 19:39:37 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-18 19:39:42 +0000 UTC,FinishedAt:2024-09-18 19:39:48 +0000 UTC,ContainerID:cri-o://d30d881ad2efefa3975ae112f0b3f4ed34e2c3e9169e1bab215cb4cba955008e,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://d30d881ad2efefa3975ae112f0b3f4ed34e2c3e9169e1bab215cb4cba955008e Started:0xc001efb2c0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc0022a0c40} {Name:kube-api-access-cpn6n MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc0022a0c50}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0918 19:39:49.546621   15635 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-815929" in "kube-system" namespace to be "Ready" ...
	I0918 19:39:49.567559   15635 pod_ready.go:93] pod "etcd-addons-815929" in "kube-system" namespace has status "Ready":"True"
	I0918 19:39:49.567588   15635 pod_ready.go:82] duration metric: took 20.955221ms for pod "etcd-addons-815929" in "kube-system" namespace to be "Ready" ...
	I0918 19:39:49.567598   15635 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-815929" in "kube-system" namespace to be "Ready" ...
	I0918 19:39:49.574966   15635 pod_ready.go:93] pod "kube-apiserver-addons-815929" in "kube-system" namespace has status "Ready":"True"
	I0918 19:39:49.574994   15635 pod_ready.go:82] duration metric: took 7.38881ms for pod "kube-apiserver-addons-815929" in "kube-system" namespace to be "Ready" ...
	I0918 19:39:49.575009   15635 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-815929" in "kube-system" namespace to be "Ready" ...
	I0918 19:39:49.582171   15635 pod_ready.go:93] pod "kube-controller-manager-addons-815929" in "kube-system" namespace has status "Ready":"True"
	I0918 19:39:49.582197   15635 pod_ready.go:82] duration metric: took 7.179565ms for pod "kube-controller-manager-addons-815929" in "kube-system" namespace to be "Ready" ...
	I0918 19:39:49.582207   15635 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-pqt4n" in "kube-system" namespace to be "Ready" ...
	I0918 19:39:49.590756   15635 pod_ready.go:93] pod "kube-proxy-pqt4n" in "kube-system" namespace has status "Ready":"True"
	I0918 19:39:49.590786   15635 pod_ready.go:82] duration metric: took 8.57165ms for pod "kube-proxy-pqt4n" in "kube-system" namespace to be "Ready" ...
	I0918 19:39:49.590800   15635 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-815929" in "kube-system" namespace to be "Ready" ...
	I0918 19:39:49.761078   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:49.887586   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:49.887848   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:49.941378   15635 pod_ready.go:93] pod "kube-scheduler-addons-815929" in "kube-system" namespace has status "Ready":"True"
	I0918 19:39:49.941403   15635 pod_ready.go:82] duration metric: took 350.596076ms for pod "kube-scheduler-addons-815929" in "kube-system" namespace to be "Ready" ...
	I0918 19:39:49.941414   15635 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-rvssn" in "kube-system" namespace to be "Ready" ...
	I0918 19:39:50.023296   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:39:50.262472   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:50.386706   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:50.387374   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:50.523109   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:39:50.762340   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:50.885849   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:50.886679   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:51.023386   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:39:51.261021   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:51.386809   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:51.387671   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:51.524078   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:39:51.760280   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:51.886917   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:51.887197   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:51.949053   15635 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-rvssn" in "kube-system" namespace has status "Ready":"False"
	I0918 19:39:52.023505   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:39:52.261214   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:52.385448   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:52.387823   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:52.522732   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:39:52.977102   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:52.977482   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:52.977880   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:53.022497   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:39:53.262850   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:53.388253   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:53.389257   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:53.523172   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:39:53.766469   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:53.890155   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:53.890309   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:53.949275   15635 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-rvssn" in "kube-system" namespace has status "Ready":"False"
	I0918 19:39:54.023129   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:39:54.260967   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:54.387271   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:54.387324   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:54.522450   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:39:54.762114   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:54.886263   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:54.886718   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:55.023055   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:39:55.262254   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:55.387141   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:55.387313   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:55.522239   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:39:55.761296   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:55.886317   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:55.886679   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:56.023100   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:39:56.261495   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:56.385260   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:56.386259   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:56.447336   15635 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-rvssn" in "kube-system" namespace has status "Ready":"False"
	I0918 19:39:56.523265   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:39:56.761818   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:56.885802   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:56.887031   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:57.022996   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:39:57.261375   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:57.388082   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:57.389199   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:57.536872   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:39:57.762269   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:57.887305   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:57.889861   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:58.023455   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:39:58.262414   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:58.385419   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:58.387689   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:58.447488   15635 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-rvssn" in "kube-system" namespace has status "Ready":"False"
	I0918 19:39:58.523505   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:39:58.761358   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:58.887588   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:58.887675   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:59.023310   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:39:59.261446   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:59.387083   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:59.387736   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:59.523936   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:39:59.761153   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:59.886378   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:59.886953   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:00.023551   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:00.261578   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:00.385740   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:00.387538   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:00.523033   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:00.761124   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:00.901613   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:00.904385   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:00.949037   15635 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-rvssn" in "kube-system" namespace has status "Ready":"False"
	I0918 19:40:01.023854   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:01.344698   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:01.386813   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:01.387259   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:01.523778   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:01.760693   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:01.889999   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:01.899640   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:02.024352   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:02.261808   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:02.386899   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:02.388992   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:02.524521   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:02.762196   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:02.885282   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:02.886521   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:03.023357   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:03.261472   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:03.394612   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:03.395145   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:03.451681   15635 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-rvssn" in "kube-system" namespace has status "Ready":"False"
	I0918 19:40:03.522752   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:03.760533   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:03.885469   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:03.886354   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:04.023419   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:04.261547   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:04.386446   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:04.387820   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:04.524995   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:04.761777   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:04.887074   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:04.887525   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:05.022473   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:05.261925   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:05.386445   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:05.386949   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:05.522814   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:05.762538   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:05.884894   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:05.888202   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:05.947082   15635 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-rvssn" in "kube-system" namespace has status "Ready":"True"
	I0918 19:40:05.947114   15635 pod_ready.go:82] duration metric: took 16.005692748s for pod "nvidia-device-plugin-daemonset-rvssn" in "kube-system" namespace to be "Ready" ...
	I0918 19:40:05.947126   15635 pod_ready.go:39] duration metric: took 25.942342862s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0918 19:40:05.947145   15635 api_server.go:52] waiting for apiserver process to appear ...
	I0918 19:40:05.947207   15635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 19:40:05.964600   15635 api_server.go:72] duration metric: took 28.935412924s to wait for apiserver process to appear ...
	I0918 19:40:05.964629   15635 api_server.go:88] waiting for apiserver healthz status ...
	I0918 19:40:05.964653   15635 api_server.go:253] Checking apiserver healthz at https://192.168.39.158:8443/healthz ...
	I0918 19:40:05.971057   15635 api_server.go:279] https://192.168.39.158:8443/healthz returned 200:
	ok
	I0918 19:40:05.971991   15635 api_server.go:141] control plane version: v1.31.1
	I0918 19:40:05.972031   15635 api_server.go:131] duration metric: took 7.377749ms to wait for apiserver health ...
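	(The healthz check logged above is a plain HTTPS GET against the apiserver that treats a 200 response with body "ok" as healthy. A minimal sketch of such a probe follows; the URL is taken from the log, and skipping TLS verification is an assumption made only for brevity, since a real client would use the cluster CA from the kubeconfig.)

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
	)

	func main() {
		// Assumption: certificate verification is skipped only to keep the sketch short.
		client := &http.Client{Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		}}
		resp, err := client.Get("https://192.168.39.158:8443/healthz")
		if err != nil {
			fmt.Println("healthz check failed:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
	}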
	I0918 19:40:05.972043   15635 system_pods.go:43] waiting for kube-system pods to appear ...
	I0918 19:40:05.981465   15635 system_pods.go:59] 18 kube-system pods found
	I0918 19:40:05.981498   15635 system_pods.go:61] "coredns-7c65d6cfc9-lr452" [ce99a83b-0924-4fe4-9a52-4c3400846319] Running
	I0918 19:40:05.981508   15635 system_pods.go:61] "csi-hostpath-attacher-0" [35c12d0e-5b48-4b7d-ba59-4a4c10501739] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0918 19:40:05.981516   15635 system_pods.go:61] "csi-hostpath-resizer-0" [888fc926-7f0f-445a-ad0d-196d1e4a131e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0918 19:40:05.981528   15635 system_pods.go:61] "csi-hostpathplugin-tndql" [f9b32e85-54dc-4219-b8f2-ccd81d61ca01] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0918 19:40:05.981534   15635 system_pods.go:61] "etcd-addons-815929" [74c62370-8b66-4518-8839-5ce337d8ed18] Running
	I0918 19:40:05.981538   15635 system_pods.go:61] "kube-apiserver-addons-815929" [4b802a7c-d79a-4778-b93c-fb2eddfd3103] Running
	I0918 19:40:05.981541   15635 system_pods.go:61] "kube-controller-manager-addons-815929" [0478f6c2-35df-4091-9be5-2f739c29a169] Running
	I0918 19:40:05.981545   15635 system_pods.go:61] "kube-ingress-dns-minikube" [9660f591-df30-4595-ad30-0b79d840f779] Running
	I0918 19:40:05.981549   15635 system_pods.go:61] "kube-proxy-pqt4n" [f0634583-edcc-434a-9062-5511ff79a084] Running
	I0918 19:40:05.981552   15635 system_pods.go:61] "kube-scheduler-addons-815929" [577bc872-21cc-4a90-82e1-7552ce7eeb7c] Running
	I0918 19:40:05.981558   15635 system_pods.go:61] "metrics-server-84c5f94fbc-fvm48" [20825ca5-3044-4221-84bc-6fc04d1038fd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0918 19:40:05.981564   15635 system_pods.go:61] "nvidia-device-plugin-daemonset-rvssn" [eed41d4f-f686-4590-959b-344ece686560] Running
	I0918 19:40:05.981570   15635 system_pods.go:61] "registry-66c9cd494c-96wcm" [170420dc-8ea6-4aba-99c1-9f61d4449fff] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0918 19:40:05.981575   15635 system_pods.go:61] "registry-proxy-jwxzj" [5ee21740-39f3-406e-bb72-65a28c5b5dde] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0918 19:40:05.981584   15635 system_pods.go:61] "snapshot-controller-56fcc65765-22mlv" [5197a1d6-f767-4030-b870-5fdd325589d2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0918 19:40:05.981590   15635 system_pods.go:61] "snapshot-controller-56fcc65765-dzxnk" [28036305-c26d-4a1d-aa47-04d577b32c35] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0918 19:40:05.981596   15635 system_pods.go:61] "storage-provisioner" [a1b4202d-fa9c-4f1b-9a34-40ef0da0d6e8] Running
	I0918 19:40:05.981601   15635 system_pods.go:61] "tiller-deploy-b48cc5f79-8nbwq" [4d5df6b3-4cf1-4ce5-9cb4-2983f9aa2728] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0918 19:40:05.981609   15635 system_pods.go:74] duration metric: took 9.560439ms to wait for pod list to return data ...
	I0918 19:40:05.981619   15635 default_sa.go:34] waiting for default service account to be created ...
	I0918 19:40:05.984361   15635 default_sa.go:45] found service account: "default"
	I0918 19:40:05.984393   15635 default_sa.go:55] duration metric: took 2.768053ms for default service account to be created ...
	I0918 19:40:05.984403   15635 system_pods.go:116] waiting for k8s-apps to be running ...
	I0918 19:40:05.992866   15635 system_pods.go:86] 18 kube-system pods found
	I0918 19:40:05.992896   15635 system_pods.go:89] "coredns-7c65d6cfc9-lr452" [ce99a83b-0924-4fe4-9a52-4c3400846319] Running
	I0918 19:40:05.992905   15635 system_pods.go:89] "csi-hostpath-attacher-0" [35c12d0e-5b48-4b7d-ba59-4a4c10501739] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0918 19:40:05.992913   15635 system_pods.go:89] "csi-hostpath-resizer-0" [888fc926-7f0f-445a-ad0d-196d1e4a131e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0918 19:40:05.992919   15635 system_pods.go:89] "csi-hostpathplugin-tndql" [f9b32e85-54dc-4219-b8f2-ccd81d61ca01] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0918 19:40:05.992924   15635 system_pods.go:89] "etcd-addons-815929" [74c62370-8b66-4518-8839-5ce337d8ed18] Running
	I0918 19:40:05.992928   15635 system_pods.go:89] "kube-apiserver-addons-815929" [4b802a7c-d79a-4778-b93c-fb2eddfd3103] Running
	I0918 19:40:05.992932   15635 system_pods.go:89] "kube-controller-manager-addons-815929" [0478f6c2-35df-4091-9be5-2f739c29a169] Running
	I0918 19:40:05.992937   15635 system_pods.go:89] "kube-ingress-dns-minikube" [9660f591-df30-4595-ad30-0b79d840f779] Running
	I0918 19:40:05.992940   15635 system_pods.go:89] "kube-proxy-pqt4n" [f0634583-edcc-434a-9062-5511ff79a084] Running
	I0918 19:40:05.992944   15635 system_pods.go:89] "kube-scheduler-addons-815929" [577bc872-21cc-4a90-82e1-7552ce7eeb7c] Running
	I0918 19:40:05.992949   15635 system_pods.go:89] "metrics-server-84c5f94fbc-fvm48" [20825ca5-3044-4221-84bc-6fc04d1038fd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0918 19:40:05.992956   15635 system_pods.go:89] "nvidia-device-plugin-daemonset-rvssn" [eed41d4f-f686-4590-959b-344ece686560] Running
	I0918 19:40:05.992962   15635 system_pods.go:89] "registry-66c9cd494c-96wcm" [170420dc-8ea6-4aba-99c1-9f61d4449fff] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0918 19:40:05.992970   15635 system_pods.go:89] "registry-proxy-jwxzj" [5ee21740-39f3-406e-bb72-65a28c5b5dde] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0918 19:40:05.992975   15635 system_pods.go:89] "snapshot-controller-56fcc65765-22mlv" [5197a1d6-f767-4030-b870-5fdd325589d2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0918 19:40:05.992982   15635 system_pods.go:89] "snapshot-controller-56fcc65765-dzxnk" [28036305-c26d-4a1d-aa47-04d577b32c35] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0918 19:40:05.992988   15635 system_pods.go:89] "storage-provisioner" [a1b4202d-fa9c-4f1b-9a34-40ef0da0d6e8] Running
	I0918 19:40:05.992993   15635 system_pods.go:89] "tiller-deploy-b48cc5f79-8nbwq" [4d5df6b3-4cf1-4ce5-9cb4-2983f9aa2728] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0918 19:40:05.993002   15635 system_pods.go:126] duration metric: took 8.592753ms to wait for k8s-apps to be running ...
	I0918 19:40:05.993011   15635 system_svc.go:44] waiting for kubelet service to be running ....
	I0918 19:40:05.993062   15635 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0918 19:40:06.007851   15635 system_svc.go:56] duration metric: took 14.818536ms WaitForService to wait for kubelet
	I0918 19:40:06.007886   15635 kubeadm.go:582] duration metric: took 28.978706928s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0918 19:40:06.007906   15635 node_conditions.go:102] verifying NodePressure condition ...
	I0918 19:40:06.010681   15635 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0918 19:40:06.010706   15635 node_conditions.go:123] node cpu capacity is 2
	I0918 19:40:06.010717   15635 node_conditions.go:105] duration metric: took 2.806111ms to run NodePressure ...
	I0918 19:40:06.010733   15635 start.go:241] waiting for startup goroutines ...
	I0918 19:40:06.023097   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:06.261044   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:06.386905   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:06.387938   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:06.523598   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:06.760607   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:06.885236   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:06.887183   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:07.023353   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:07.261244   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:07.387847   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:07.388133   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:07.523004   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:07.761314   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:07.886195   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:07.887026   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:08.022790   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:08.261966   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:08.386350   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:08.387977   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:08.522334   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:08.764428   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:08.887159   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:08.887636   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:09.023425   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:09.261458   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:09.386770   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:09.386931   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:09.523989   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:09.761715   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:09.888756   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:09.888913   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:10.022737   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:10.260843   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:10.385983   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:10.388375   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:10.523284   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:10.761667   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:10.886996   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:10.887478   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:11.023066   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:11.690574   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:11.691399   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:11.691415   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:11.692178   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:11.761412   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:11.886928   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:11.888082   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:12.023473   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:12.263133   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:12.386142   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:12.386662   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:12.525219   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:12.761693   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:12.886447   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:12.888253   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:13.022946   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:13.260971   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:13.386945   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:13.387172   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:13.522915   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:13.761554   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:13.885105   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:13.887694   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:14.028072   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:14.261504   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:14.385337   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:14.387622   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:14.523157   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:14.762317   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:14.886699   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:14.887653   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:15.023213   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:15.261539   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:15.386295   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:15.387692   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:15.523371   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:15.762030   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:15.887580   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:15.888087   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:16.024741   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:16.261036   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:16.385093   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:16.387141   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:16.523454   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:16.762326   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:16.890861   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:16.891242   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:17.022953   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:17.261544   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:17.386363   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:17.386458   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:17.523434   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:17.762229   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:17.889132   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:17.889326   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:18.028210   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:18.261194   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:18.385413   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:18.388574   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:18.523150   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:18.761054   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:18.887450   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:18.887779   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:19.024134   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:19.263289   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:19.385338   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:19.388348   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:19.523385   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:19.762917   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:19.885695   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:19.887582   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:20.022753   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:20.261175   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:20.385377   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:20.387295   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:20.522634   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:20.760753   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:20.887703   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:20.887712   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:21.235070   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:21.335251   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:21.387180   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:21.387296   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:21.523173   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:21.761619   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:21.885946   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:21.888761   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:22.023654   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:22.261327   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:22.385941   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:22.387514   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:22.524276   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:22.761455   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:22.889959   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:22.890148   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:23.023369   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:23.261803   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:23.385743   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:23.386867   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:23.523409   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:23.762815   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:23.889426   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:23.889754   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:24.023031   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:24.260909   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:24.385696   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:24.387779   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:24.523715   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:24.761870   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:24.887000   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:24.887192   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:25.025469   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:25.261744   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:25.385737   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:25.387836   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:25.523787   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:25.760667   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:25.886302   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:25.886864   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:26.023745   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:26.260372   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:26.387125   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:26.387622   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:26.522749   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:26.760728   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:26.887795   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:26.887929   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:27.022490   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:27.261285   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:27.387152   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:27.387208   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:27.526131   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:27.761567   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:27.885662   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:27.887226   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:28.023350   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:28.262005   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:28.386363   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:28.386533   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:28.523122   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:28.761308   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:28.886659   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:28.886766   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:29.025016   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:29.262720   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:29.385861   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:29.387067   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:29.523415   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:29.762456   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:29.889274   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:29.889409   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:30.022858   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:30.260706   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:30.385500   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:30.388109   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:30.523569   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:30.761409   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:30.887262   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:30.887513   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:31.022836   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:31.351673   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:31.631619   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:31.633821   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:31.634496   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:31.761680   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:31.886416   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:31.887521   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:32.022820   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:32.261775   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:32.385585   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:32.387110   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:32.522760   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:32.760560   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:32.887381   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:32.887779   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:33.023205   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:33.262433   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:33.386411   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:33.388473   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:33.522967   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:33.761145   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:33.885587   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:33.886411   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:34.024126   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:34.262336   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:34.386385   15635 kapi.go:107] duration metric: took 48.00466589s to wait for kubernetes.io/minikube-addons=registry ...
	I0918 19:40:34.387967   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:34.523142   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:34.761519   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:34.886743   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:35.023677   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:35.261534   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:35.386825   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:35.523475   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:35.761775   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:35.887530   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:36.024928   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:36.261912   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:36.389258   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:36.612910   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:36.760710   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:36.886570   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:37.023075   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:37.261566   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:37.386912   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:37.523858   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:37.761369   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:37.888082   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:38.023650   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:38.262241   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:38.387213   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:38.523095   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:38.761080   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:38.887662   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:39.022879   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:39.261795   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:39.388645   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:39.523629   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:39.764147   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:39.895243   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:40.023681   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:40.263820   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:40.388383   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:40.522769   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:40.760902   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:40.887214   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:41.024863   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:41.261355   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:41.388156   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:41.523189   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:41.763743   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:41.895229   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:42.024381   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:42.263606   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:42.388165   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:42.522769   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:42.760446   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:42.888084   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:43.022431   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:43.261740   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:43.387448   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:43.523089   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:43.761302   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:43.887688   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:44.023769   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:44.261649   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:44.388929   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:44.523353   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:44.761594   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:44.887209   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:45.022295   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:45.261575   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:45.386431   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:45.526748   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:45.761136   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:45.887483   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:46.023405   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:46.261710   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:46.386504   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:46.522766   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:46.760678   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:46.888552   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:47.408300   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:47.409204   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:47.409327   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:47.526541   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:47.762023   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:47.887476   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:48.024692   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:48.262281   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:48.387819   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:48.525990   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:48.761029   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:48.887048   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:49.022685   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:49.264666   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:49.387613   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:49.523501   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:49.762305   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:49.888742   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:50.023259   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:50.264411   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:50.391053   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:50.535411   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:50.763577   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:50.887602   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:51.022865   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:51.264264   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:51.398209   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:51.523440   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:51.762761   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:51.887030   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:52.022677   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:52.263450   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:52.388431   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:52.523149   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:52.763152   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:52.902779   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:53.024293   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:53.261509   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:53.386654   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:53.523350   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:53.790983   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:53.886920   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:54.029870   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:54.261998   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:54.386998   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:54.523404   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:54.762135   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:54.889645   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:55.023574   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:55.261586   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:55.799628   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:55.800153   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:55.800272   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:55.887540   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:56.023456   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:56.262164   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:56.387474   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:56.522936   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:56.760920   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:56.887129   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:57.022637   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:57.261192   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:57.387888   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:57.523659   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:57.761302   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:57.887216   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:58.022541   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:58.261223   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:58.386957   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:58.523331   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:58.762168   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:58.886618   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:59.023205   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:59.262141   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:59.387428   15635 kapi.go:107] duration metric: took 1m13.004718276s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0918 19:40:59.524360   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:59.762283   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:41:00.024053   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:00.262681   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:41:00.522704   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:00.760702   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:41:01.023661   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:01.260993   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:41:01.523442   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:01.762425   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:41:02.023110   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:02.265384   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:41:02.527771   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:02.761127   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:41:03.022885   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:03.260335   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:41:03.522913   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:03.761077   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:41:04.022763   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:04.263630   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:41:04.523144   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:04.761725   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:41:05.022991   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:05.261573   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:41:05.523927   15635 kapi.go:107] duration metric: took 1m16.504569327s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0918 19:41:05.526416   15635 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-815929 cluster.
	I0918 19:41:05.527994   15635 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0918 19:41:05.529367   15635 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
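	For reference, one way to exercise the opt-out mentioned in the message above is to create a pod whose configuration carries the `gcp-auth-skip-secret` label key. This is an illustrative sketch only: the pod name, image, label value, and sleep command are placeholder assumptions, since the message specifies only the label key.
	    kubectl --context addons-815929 run skip-gcp-auth-demo --image=busybox --restart=Never --labels="gcp-auth-skip-secret=true" -- sleep 3600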
	I0918 19:41:05.761527   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:41:06.266297   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:41:06.761123   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:41:07.260618   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:41:07.761457   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:41:08.260850   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:41:08.761648   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:41:09.260937   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:41:09.763235   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:41:10.264930   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:41:10.762866   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:41:11.262554   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:41:11.762641   15635 kapi.go:107] duration metric: took 1m24.506164382s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0918 19:41:11.764555   15635 out.go:177] * Enabled addons: cloud-spanner, ingress-dns, inspektor-gadget, helm-tiller, storage-provisioner, nvidia-device-plugin, metrics-server, yakd, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0918 19:41:11.765613   15635 addons.go:510] duration metric: took 1m34.736385177s for enable addons: enabled=[cloud-spanner ingress-dns inspektor-gadget helm-tiller storage-provisioner nvidia-device-plugin metrics-server yakd default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0918 19:41:11.765657   15635 start.go:246] waiting for cluster config update ...
	I0918 19:41:11.765680   15635 start.go:255] writing updated cluster config ...
	I0918 19:41:11.765982   15635 ssh_runner.go:195] Run: rm -f paused
	I0918 19:41:11.816314   15635 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0918 19:41:11.818785   15635 out.go:177] * Done! kubectl is now configured to use "addons-815929" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 18 19:50:27 addons-815929 crio[660]: time="2024-09-18 19:50:27.922156385Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:864d454de1b7ac86c424d2d97b9350df392b96af9128a2a884065a4ad5047b20,Metadata:&PodSandboxMetadata{Name:registry-66c9cd494c-96wcm,Uid:170420dc-8ea6-4aba-99c1-9f61d4449fff,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726688382268297324,Labels:map[string]string{actual-registry: true,addonmanager.kubernetes.io/mode: Reconcile,io.kubernetes.container.name: POD,io.kubernetes.pod.name: registry-66c9cd494c-96wcm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 170420dc-8ea6-4aba-99c1-9f61d4449fff,kubernetes.io/minikube-addons: registry,pod-template-hash: 66c9cd494c,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-18T19:39:41.650530931Z,kubernetes.io/config.source: api,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=968877d3-1468-408e-a774-7d9ae0ab7a2e name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 18 19:50:27 addons-815929 crio[660]: time="2024-09-18 19:50:27.923152151Z" level=debug msg="Request: &PodSandboxStatusRequest{PodSandboxId:864d454de1b7ac86c424d2d97b9350df392b96af9128a2a884065a4ad5047b20,Verbose:false,}" file="otel-collector/interceptors.go:62" id=d79d4e0a-d4e5-49aa-9858-b1661656c035 name=/runtime.v1.RuntimeService/PodSandboxStatus
	Sep 18 19:50:27 addons-815929 crio[660]: time="2024-09-18 19:50:27.923597398Z" level=debug msg="Response: &PodSandboxStatusResponse{Status:&PodSandboxStatus{Id:864d454de1b7ac86c424d2d97b9350df392b96af9128a2a884065a4ad5047b20,Metadata:&PodSandboxMetadata{Name:registry-66c9cd494c-96wcm,Uid:170420dc-8ea6-4aba-99c1-9f61d4449fff,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726688382268297324,Network:&PodSandboxNetworkStatus{Ip:10.244.0.6,AdditionalIps:[]*PodIP{},},Linux:&LinuxPodSandboxStatus{Namespaces:&Namespace{Options:&NamespaceOption{Network:POD,Pid:CONTAINER,Ipc:POD,TargetId:,UsernsOptions:nil,},},},Labels:map[string]string{actual-registry: true,addonmanager.kubernetes.io/mode: Reconcile,io.kubernetes.container.name: POD,io.kubernetes.pod.name: registry-66c9cd494c-96wcm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 170420dc-8ea6-4aba-99c1-9f61d4449fff,kubernetes.io/minikube-addons: registry,pod-template-hash: 66c9cd494c,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-18T19:39:41.650530931Z,kubernetes.io/config.source: api,},RuntimeHandler:,},Info:map[string]string{},ContainersStatuses:[]*ContainerStatus{},Timestamp:0,}" file="otel-collector/interceptors.go:74" id=d79d4e0a-d4e5-49aa-9858-b1661656c035 name=/runtime.v1.RuntimeService/PodSandboxStatus
	Sep 18 19:50:27 addons-815929 crio[660]: time="2024-09-18 19:50:27.924796507Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:5ce6f551671c7ab058f935b8c72af7b6e49646bf1b442c9ba1ff4d6a7f28bd42,Verbose:false,}" file="otel-collector/interceptors.go:62" id=9717d1d2-0992-4091-a175-86ef96dabfac name=/runtime.v1.RuntimeService/ContainerStatus
	Sep 18 19:50:27 addons-815929 crio[660]: time="2024-09-18 19:50:27.926141127Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:5ce6f551671c7ab058f935b8c72af7b6e49646bf1b442c9ba1ff4d6a7f28bd42,Metadata:&ContainerMetadata{Name:registry-proxy,Attempt:0,},State:CONTAINER_EXITED,CreatedAt:1726688434015049893,StartedAt:1726688434049201348,FinishedAt:1726689026791784117,ExitCode:0,Image:&ImageSpec{Image:gcr.io/k8s-minikube/kube-registry-proxy@sha256:b3fa0b2df8737fdb85ad5918a7e2652527463e357afff83a5e5bb966bcedc367,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38c5e506fa551ba5a1812dff63585e44b6c532dd4984b96f90944730f1c6e5c2,Reason:Completed,Message:,Labels:map[string]string{io.kubernetes.container.name: registry-proxy,io.kubernetes.pod.name: registry-proxy-jwxzj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ee21740-39f3-406e-bb72-65a28c5b5dde,},Annotations:map[string]string{io.kubernetes.container.hash: c90bc829,io.kubernetes.container.ports: [{\"name\":\"registry\",\"hostPort\":5000,\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/5ee21740-39f3-406e-bb72-65a28c5b5dde/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/5ee21740-39f3-406e-bb72-65a28c5b5dde/containers/registry-proxy/5f1df027,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/5ee21740-39f3-406e-bb72-65a28c5b5dde/volumes/kubernetes.io~projected/kube-api-access-s52gv,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_registry-proxy-jwxzj_5ee21740-39f3-406e-bb72-65a28c5b5dde/registry-proxy/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:2,MemoryLimitInBytes:0,OomScoreAdj:1000,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=9717d1d2-0992-4091-a175-86ef96dabfac name=/runtime.v1.RuntimeService/ContainerStatus
	Sep 18 19:50:27 addons-815929 crio[660]: time="2024-09-18 19:50:27.926889828Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{io.kubernetes.pod.uid: 170420dc-8ea6-4aba-99c1-9f61d4449fff,},},}" file="otel-collector/interceptors.go:62" id=ec185625-e58f-4e74-b60a-629b051b9fb5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 19:50:27 addons-815929 crio[660]: time="2024-09-18 19:50:27.927202637Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:5ce6f551671c7ab058f935b8c72af7b6e49646bf1b442c9ba1ff4d6a7f28bd42,Verbose:false,}" file="otel-collector/interceptors.go:62" id=6ce197ba-8309-444d-95ff-5f02691584f7 name=/runtime.v1.RuntimeService/ContainerStatus
	Sep 18 19:50:27 addons-815929 crio[660]: time="2024-09-18 19:50:27.927443767Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:5ce6f551671c7ab058f935b8c72af7b6e49646bf1b442c9ba1ff4d6a7f28bd42,Metadata:&ContainerMetadata{Name:registry-proxy,Attempt:0,},State:CONTAINER_EXITED,CreatedAt:1726688434015049893,StartedAt:1726688434049201348,FinishedAt:1726689026791784117,ExitCode:0,Image:&ImageSpec{Image:gcr.io/k8s-minikube/kube-registry-proxy@sha256:b3fa0b2df8737fdb85ad5918a7e2652527463e357afff83a5e5bb966bcedc367,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38c5e506fa551ba5a1812dff63585e44b6c532dd4984b96f90944730f1c6e5c2,Reason:Completed,Message:,Labels:map[string]string{io.kubernetes.container.name: registry-proxy,io.kubernetes.pod.name: registry-proxy-jwxzj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ee21740-39f3-406e-bb72-65a28c5b5dde,},Annotations:map[string]string{io.kubernetes.container.hash: c90bc829,io.kubernetes.container.ports: [{\"name\":\"registry\",\"hostPort\":5000,\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/5ee21740-39f3-406e-bb72-65a28c5b5dde/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/5ee21740-39f3-406e-bb72-65a28c5b5dde/containers/registry-proxy/5f1df027,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/5ee21740-39f3-406e-bb72-65a28c5b5dde/volumes/kubernetes.io~projected/kube-api-access-s52gv,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_registry-proxy-jwxzj_5ee21740-39f3-406e-bb72-65a28c5b5dde/registry-proxy/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:2,MemoryLimitInBytes:0,OomScoreAdj:1000,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=6ce197ba-8309-444d-95ff-5f02691584f7 name=/runtime.v1.RuntimeService/ContainerStatus
	Sep 18 19:50:27 addons-815929 crio[660]: time="2024-09-18 19:50:27.927895261Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ec185625-e58f-4e74-b60a-629b051b9fb5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 19:50:27 addons-815929 crio[660]: time="2024-09-18 19:50:27.928019771Z" level=debug msg="Request: &RemoveContainerRequest{ContainerId:5ce6f551671c7ab058f935b8c72af7b6e49646bf1b442c9ba1ff4d6a7f28bd42,}" file="otel-collector/interceptors.go:62" id=ce95af45-380b-4a7c-8f4e-0089f8964b6a name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 18 19:50:27 addons-815929 crio[660]: time="2024-09-18 19:50:27.928154827Z" level=info msg="Removing container: 5ce6f551671c7ab058f935b8c72af7b6e49646bf1b442c9ba1ff4d6a7f28bd42" file="server/container_remove.go:24" id=ce95af45-380b-4a7c-8f4e-0089f8964b6a name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 18 19:50:27 addons-815929 crio[660]: time="2024-09-18 19:50:27.928561720Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:094888f2e69706006c5048d9d508e8d28a82d3f350d1c4bf18d7973c4828ebeb,PodSandboxId:864d454de1b7ac86c424d2d97b9350df392b96af9128a2a884065a4ad5047b20,Metadata:&ContainerMetadata{Name:registry,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/registry@sha256:5e8c7f954d64eb89a98a3f84b6dd1e1f4a9cf3d25e41575dd0a96d3e3363cba7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:75ef5b734af47dc41ff2fb442f287ee08c7da31dddb3759616a8f693f0f346a0,State:CONTAINER_EXITED,CreatedAt:1726688424711142468,Labels:map[string]string{io.kubernetes.container.name: registry,io.kubernetes.pod.name: registry-66c9cd494c-96wcm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 170420dc-8ea6-4aba-99c1-9f61d4449fff,},Annotations:map[string]string{io.kubernetes.container.hash: 49fa49ac,io.kubernetes.container.ports: [{\"containerP
ort\":5000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ec185625-e58f-4e74-b60a-629b051b9fb5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 19:50:27 addons-815929 crio[660]: time="2024-09-18 19:50:27.929216090Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:094888f2e69706006c5048d9d508e8d28a82d3f350d1c4bf18d7973c4828ebeb,Verbose:false,}" file="otel-collector/interceptors.go:62" id=70f18fc7-ee94-43f9-8b51-38dea950fe4e name=/runtime.v1.RuntimeService/ContainerStatus
	Sep 18 19:50:27 addons-815929 crio[660]: time="2024-09-18 19:50:27.929466416Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:094888f2e69706006c5048d9d508e8d28a82d3f350d1c4bf18d7973c4828ebeb,Metadata:&ContainerMetadata{Name:registry,Attempt:0,},State:CONTAINER_EXITED,CreatedAt:1726688424759476406,StartedAt:1726688424786889482,FinishedAt:1726689026941798929,ExitCode:2,Image:&ImageSpec{Image:docker.io/library/registry@sha256:ac0192b549007e22998eb74e8d8488dcfe70f1489520c3b144a6047ac5efbe90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:75ef5b734af47dc41ff2fb442f287ee08c7da31dddb3759616a8f693f0f346a0,Reason:Error,Message:,Labels:map[string]string{io.kubernetes.container.name: registry,io.kubernetes.pod.name: registry-66c9cd494c-96wcm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 170420dc-8ea6-4aba-99c1-9f61d4449fff,},Annotations:map[string]string{io.kubernetes.container.hash: 49fa49ac,io.kubernetes.container.ports: [{\"cont
ainerPort\":5000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/170420dc-8ea6-4aba-99c1-9f61d4449fff/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/170420dc-8ea6-4aba-99c1-9f61d4449fff/containers/registry/3305b6ad,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/170420dc-8ea6-4aba-99c1-9f61d4449fff/volumes/kubernetes.io~projected/kube-api-access-xjdbp,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidM
appings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_registry-66c9cd494c-96wcm_170420dc-8ea6-4aba-99c1-9f61d4449fff/registry/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:2,MemoryLimitInBytes:0,OomScoreAdj:1000,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=70f18fc7-ee94-43f9-8b51-38dea950fe4e name=/runtime.v1.RuntimeService/ContainerStatus
	Sep 18 19:50:27 addons-815929 crio[660]: time="2024-09-18 19:50:27.954303539Z" level=debug msg="Unmounted container 5ce6f551671c7ab058f935b8c72af7b6e49646bf1b442c9ba1ff4d6a7f28bd42" file="storage/runtime.go:495" id=ce95af45-380b-4a7c-8f4e-0089f8964b6a name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 18 19:50:27 addons-815929 crio[660]: time="2024-09-18 19:50:27.972458452Z" level=info msg="Removed container 5ce6f551671c7ab058f935b8c72af7b6e49646bf1b442c9ba1ff4d6a7f28bd42: kube-system/registry-proxy-jwxzj/registry-proxy" file="server/container_remove.go:40" id=ce95af45-380b-4a7c-8f4e-0089f8964b6a name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 18 19:50:27 addons-815929 crio[660]: time="2024-09-18 19:50:27.972686710Z" level=debug msg="Response: &RemoveContainerResponse{}" file="otel-collector/interceptors.go:74" id=ce95af45-380b-4a7c-8f4e-0089f8964b6a name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 18 19:50:27 addons-815929 crio[660]: time="2024-09-18 19:50:27.973680834Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:5ce6f551671c7ab058f935b8c72af7b6e49646bf1b442c9ba1ff4d6a7f28bd42,Verbose:false,}" file="otel-collector/interceptors.go:62" id=5b8c19cd-92de-4751-adc1-1785b1d2df22 name=/runtime.v1.RuntimeService/ContainerStatus
	Sep 18 19:50:27 addons-815929 crio[660]: time="2024-09-18 19:50:27.973755951Z" level=debug msg="Response error: rpc error: code = NotFound desc = could not find container \"5ce6f551671c7ab058f935b8c72af7b6e49646bf1b442c9ba1ff4d6a7f28bd42\": container with ID starting with 5ce6f551671c7ab058f935b8c72af7b6e49646bf1b442c9ba1ff4d6a7f28bd42 not found: ID does not exist" file="otel-collector/interceptors.go:71" id=5b8c19cd-92de-4751-adc1-1785b1d2df22 name=/runtime.v1.RuntimeService/ContainerStatus
	Sep 18 19:50:27 addons-815929 crio[660]: time="2024-09-18 19:50:27.974522820Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:094888f2e69706006c5048d9d508e8d28a82d3f350d1c4bf18d7973c4828ebeb,Verbose:false,}" file="otel-collector/interceptors.go:62" id=5d86e6ac-522d-443a-aea3-42f3c399f7f3 name=/runtime.v1.RuntimeService/ContainerStatus
	Sep 18 19:50:27 addons-815929 crio[660]: time="2024-09-18 19:50:27.974705678Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:094888f2e69706006c5048d9d508e8d28a82d3f350d1c4bf18d7973c4828ebeb,Metadata:&ContainerMetadata{Name:registry,Attempt:0,},State:CONTAINER_EXITED,CreatedAt:1726688424759476406,StartedAt:1726688424786889482,FinishedAt:1726689026941798929,ExitCode:2,Image:&ImageSpec{Image:docker.io/library/registry@sha256:ac0192b549007e22998eb74e8d8488dcfe70f1489520c3b144a6047ac5efbe90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:75ef5b734af47dc41ff2fb442f287ee08c7da31dddb3759616a8f693f0f346a0,Reason:Error,Message:,Labels:map[string]string{io.kubernetes.container.name: registry,io.kubernetes.pod.name: registry-66c9cd494c-96wcm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 170420dc-8ea6-4aba-99c1-9f61d4449fff,},Annotations:map[string]string{io.kubernetes.container.hash: 49fa49ac,io.kubernetes.container.ports: [{\"cont
ainerPort\":5000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/170420dc-8ea6-4aba-99c1-9f61d4449fff/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/170420dc-8ea6-4aba-99c1-9f61d4449fff/containers/registry/3305b6ad,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/170420dc-8ea6-4aba-99c1-9f61d4449fff/volumes/kubernetes.io~projected/kube-api-access-xjdbp,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidM
appings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_registry-66c9cd494c-96wcm_170420dc-8ea6-4aba-99c1-9f61d4449fff/registry/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:2,MemoryLimitInBytes:0,OomScoreAdj:1000,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=5d86e6ac-522d-443a-aea3-42f3c399f7f3 name=/runtime.v1.RuntimeService/ContainerStatus
	Sep 18 19:50:27 addons-815929 crio[660]: time="2024-09-18 19:50:27.978735808Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:094888f2e69706006c5048d9d508e8d28a82d3f350d1c4bf18d7973c4828ebeb,Verbose:false,}" file="otel-collector/interceptors.go:62" id=efed0f0a-f1f0-4393-8f5f-1092ca932212 name=/runtime.v1.RuntimeService/ContainerStatus
	Sep 18 19:50:27 addons-815929 crio[660]: time="2024-09-18 19:50:27.980473559Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:094888f2e69706006c5048d9d508e8d28a82d3f350d1c4bf18d7973c4828ebeb,Metadata:&ContainerMetadata{Name:registry,Attempt:0,},State:CONTAINER_EXITED,CreatedAt:1726688424759476406,StartedAt:1726688424786889482,FinishedAt:1726689026941798929,ExitCode:2,Image:&ImageSpec{Image:docker.io/library/registry@sha256:ac0192b549007e22998eb74e8d8488dcfe70f1489520c3b144a6047ac5efbe90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:75ef5b734af47dc41ff2fb442f287ee08c7da31dddb3759616a8f693f0f346a0,Reason:Error,Message:,Labels:map[string]string{io.kubernetes.container.name: registry,io.kubernetes.pod.name: registry-66c9cd494c-96wcm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 170420dc-8ea6-4aba-99c1-9f61d4449fff,},Annotations:map[string]string{io.kubernetes.container.hash: 49fa49ac,io.kubernetes.container.ports: [{\"cont
ainerPort\":5000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/170420dc-8ea6-4aba-99c1-9f61d4449fff/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/170420dc-8ea6-4aba-99c1-9f61d4449fff/containers/registry/3305b6ad,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/170420dc-8ea6-4aba-99c1-9f61d4449fff/volumes/kubernetes.io~projected/kube-api-access-xjdbp,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidM
appings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_registry-66c9cd494c-96wcm_170420dc-8ea6-4aba-99c1-9f61d4449fff/registry/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:2,MemoryLimitInBytes:0,OomScoreAdj:1000,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=efed0f0a-f1f0-4393-8f5f-1092ca932212 name=/runtime.v1.RuntimeService/ContainerStatus
	Sep 18 19:50:27 addons-815929 crio[660]: time="2024-09-18 19:50:27.982028491Z" level=debug msg="Request: &RemoveContainerRequest{ContainerId:094888f2e69706006c5048d9d508e8d28a82d3f350d1c4bf18d7973c4828ebeb,}" file="otel-collector/interceptors.go:62" id=c23a59c6-4320-4f0d-9a38-f86887b88b90 name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 18 19:50:27 addons-815929 crio[660]: time="2024-09-18 19:50:27.982075118Z" level=info msg="Removing container: 094888f2e69706006c5048d9d508e8d28a82d3f350d1c4bf18d7973c4828ebeb" file="server/container_remove.go:24" id=c23a59c6-4320-4f0d-9a38-f86887b88b90 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ee08a5ec3a513       docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56                              1 second ago        Running             nginx                     0                   1c669dd18bc97       nginx
	0b7f6341e501f       ghcr.io/headlamp-k8s/headlamp@sha256:65a75b550f62ee651071b602f3d2c1daf0362638f620743c61b25eb4a1759f0a                        7 seconds ago       Running             headlamp                  0                   01e76804f1f15       headlamp-7b5c95b59d-6t8xs
	172ef2c9c611d       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b                 9 minutes ago       Running             gcp-auth                  0                   915e30c1ffac7       gcp-auth-89d5ffd79-fm986
	62659a92289f5       registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6             9 minutes ago       Running             controller                0                   592e6ac910504       ingress-nginx-controller-bc57996ff-8sxjf
	ecec83906a409       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012   9 minutes ago       Exited              patch                     0                   4a8bc60acf00b       ingress-nginx-admission-patch-xp8xg
	d4c42a127325c       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012   9 minutes ago       Exited              create                    0                   39b348ff86c37       ingress-nginx-admission-create-r4nz6
	a5437d1207356       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef             10 minutes ago      Running             local-path-provisioner    0                   a99ddd38ed103       local-path-provisioner-86d989889c-vr6hr
	6109c3afb8acc       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a        10 minutes ago      Running             metrics-server            0                   13e8766f7460e       metrics-server-84c5f94fbc-fvm48
	bd2242b8b099d       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab             10 minutes ago      Running             minikube-ingress-dns      0                   9e6cb9c26267e       kube-ingress-dns-minikube
	3759671f1017e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             10 minutes ago      Running             storage-provisioner       0                   0e451edbd642f       storage-provisioner
	fe26b1e2b409b       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                             10 minutes ago      Running             coredns                   0                   ddc00d8b37d3e       coredns-7c65d6cfc9-lr452
	c25ce10b42b68       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                             10 minutes ago      Running             kube-proxy                0                   4edb1f646199c       kube-proxy-pqt4n
	af153f3716e56       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                             11 minutes ago      Running             etcd                      0                   a30d6c4574148       etcd-addons-815929
	dcda62e7939de       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                             11 minutes ago      Running             kube-scheduler            0                   b7081c4721d58       kube-scheduler-addons-815929
	f287481be73d0       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                             11 minutes ago      Running             kube-controller-manager   0                   ad2848e491363       kube-controller-manager-addons-815929
	bd304f4e9c520       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                                             11 minutes ago      Running             kube-apiserver            0                   da55a8add5325       kube-apiserver-addons-815929
	
	
	==> coredns [fe26b1e2b409b9001b9416d86ce3ddf38df363c27c9d60c0de10b74f8347ee69] <==
	[INFO] 127.0.0.1:33911 - 37399 "HINFO IN 5747327246118162623.8020402030463234675. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.016819419s
	[INFO] 10.244.0.7:59262 - 43432 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000336053s
	[INFO] 10.244.0.7:59262 - 17322 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000150287s
	[INFO] 10.244.0.7:41687 - 18673 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000099786s
	[INFO] 10.244.0.7:41687 - 9207 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000065823s
	[INFO] 10.244.0.7:33094 - 24891 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000094342s
	[INFO] 10.244.0.7:33094 - 26173 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000059261s
	[INFO] 10.244.0.7:56632 - 33786 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000087163s
	[INFO] 10.244.0.7:56632 - 4856 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000058072s
	[INFO] 10.244.0.7:36451 - 41922 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000084154s
	[INFO] 10.244.0.7:36451 - 33727 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000092459s
	[INFO] 10.244.0.7:39340 - 30237 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000083666s
	[INFO] 10.244.0.7:39340 - 56611 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000065217s
	[INFO] 10.244.0.7:60263 - 43577 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000042731s
	[INFO] 10.244.0.7:60263 - 42043 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000060323s
	[INFO] 10.244.0.7:49317 - 26894 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000071504s
	[INFO] 10.244.0.7:49317 - 41231 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000053913s
	[INFO] 10.244.0.22:56096 - 25617 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000559428s
	[INFO] 10.244.0.22:46332 - 60333 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00009869s
	[INFO] 10.244.0.22:56500 - 14226 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000212602s
	[INFO] 10.244.0.22:49148 - 10468 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00009573s
	[INFO] 10.244.0.22:40941 - 26523 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000113485s
	[INFO] 10.244.0.22:37539 - 18925 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000348096s
	[INFO] 10.244.0.22:41445 - 2227 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.002727628s
	[INFO] 10.244.0.22:57259 - 2571 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.00255705s
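The run of NXDOMAIN answers above is the cluster DNS search-path expansion at work, not a registry lookup failure: with the usual pod resolv.conf (ndots:5), a name such as registry.kube-system.svc.cluster.local is first tried against each search suffix, and only the final, unsuffixed query returns NOERROR. A minimal Go sketch of that ordering, assuming the default search list written into a kube-system pod's /etc/resolv.conf (the search list itself is an assumption, not taken from this report):

// candidates.go — illustrative only, not part of the test suite.
package main

import (
	"fmt"
	"strings"
)

// searchCandidates mirrors the resolver's ordering: if the name has fewer than
// ndots dots, each search suffix is appended before the name is tried as-is.
func searchCandidates(name string, search []string, ndots int) []string {
	var out []string
	if strings.Count(name, ".") < ndots {
		for _, s := range search {
			out = append(out, name+"."+s)
		}
	}
	return append(out, name)
}

func main() {
	// Assumed search list for a pod in the kube-system namespace.
	search := []string{"kube-system.svc.cluster.local", "svc.cluster.local", "cluster.local"}
	for _, q := range searchCandidates("registry.kube-system.svc.cluster.local", search, 5) {
		fmt.Println(q) // the first three names produce NXDOMAIN above; the last one NOERROR
	}
}

Running the sketch prints the same four query names, in the same order, as the CoreDNS entries for client 10.244.0.7 above.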
	
	
	==> describe nodes <==
	Name:               addons-815929
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-815929
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=85073601a832bd4bbda5d11fa91feafff6ec6b91
	                    minikube.k8s.io/name=addons-815929
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_18T19_39_32_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-815929
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 18 Sep 2024 19:39:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-815929
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 18 Sep 2024 19:50:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 18 Sep 2024 19:50:04 +0000   Wed, 18 Sep 2024 19:39:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 18 Sep 2024 19:50:04 +0000   Wed, 18 Sep 2024 19:39:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 18 Sep 2024 19:50:04 +0000   Wed, 18 Sep 2024 19:39:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 18 Sep 2024 19:50:04 +0000   Wed, 18 Sep 2024 19:39:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.158
	  Hostname:    addons-815929
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 7e65d1c428634e33ae59c564f000aca1
	  System UUID:                7e65d1c4-2863-4e33-ae59-c564f000aca1
	  Boot ID:                    eb3346ec-958a-43c9-b91c-e6223f603868
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m16s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         6s
	  gcp-auth                    gcp-auth-89d5ffd79-fm986                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  headlamp                    headlamp-7b5c95b59d-6t8xs                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         14s
	  ingress-nginx               ingress-nginx-controller-bc57996ff-8sxjf    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         10m
	  kube-system                 coredns-7c65d6cfc9-lr452                    100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     10m
	  kube-system                 etcd-addons-815929                          100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         10m
	  kube-system                 kube-apiserver-addons-815929                250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-addons-815929       200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-pqt4n                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-addons-815929                100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 metrics-server-84c5f94fbc-fvm48             100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         10m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  local-path-storage          local-path-provisioner-86d989889c-vr6hr     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             460Mi (12%)  170Mi (4%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 10m   kube-proxy       
	  Normal  Starting                 10m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  10m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m   kubelet          Node addons-815929 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m   kubelet          Node addons-815929 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m   kubelet          Node addons-815929 status is now: NodeHasSufficientPID
	  Normal  NodeReady                10m   kubelet          Node addons-815929 status is now: NodeReady
	  Normal  RegisteredNode           10m   node-controller  Node addons-815929 event: Registered Node addons-815929 in Controller
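For reference, the 47% CPU figure in the allocated-resources table above is simply the sum of the listed requests over the node's two CPUs: 100m + 100m + 100m + 250m + 200m + 100m + 100m = 950m, and 950m / 2000m ≈ 47%. Likewise for memory: 90Mi + 70Mi + 100Mi + 200Mi = 460Mi of roughly 3821Mi allocatable ≈ 12%.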
	
	
	==> dmesg <==
	[  +0.124134] kauditd_printk_skb: 18 callbacks suppressed
	[  +5.080646] kauditd_printk_skb: 116 callbacks suppressed
	[  +5.343608] kauditd_printk_skb: 127 callbacks suppressed
	[  +5.904152] kauditd_printk_skb: 83 callbacks suppressed
	[Sep18 19:40] kauditd_printk_skb: 9 callbacks suppressed
	[  +5.880845] kauditd_printk_skb: 34 callbacks suppressed
	[ +18.003706] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.043206] kauditd_printk_skb: 46 callbacks suppressed
	[  +8.731855] kauditd_printk_skb: 72 callbacks suppressed
	[Sep18 19:41] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.081169] kauditd_printk_skb: 44 callbacks suppressed
	[ +12.641013] kauditd_printk_skb: 12 callbacks suppressed
	[Sep18 19:42] kauditd_printk_skb: 28 callbacks suppressed
	[Sep18 19:43] kauditd_printk_skb: 28 callbacks suppressed
	[Sep18 19:46] kauditd_printk_skb: 28 callbacks suppressed
	[Sep18 19:49] kauditd_printk_skb: 28 callbacks suppressed
	[  +5.945047] kauditd_printk_skb: 15 callbacks suppressed
	[  +5.644536] kauditd_printk_skb: 25 callbacks suppressed
	[  +6.473959] kauditd_printk_skb: 17 callbacks suppressed
	[  +6.527768] kauditd_printk_skb: 14 callbacks suppressed
	[ +12.093612] kauditd_printk_skb: 3 callbacks suppressed
	[Sep18 19:50] kauditd_printk_skb: 24 callbacks suppressed
	[  +5.519549] kauditd_printk_skb: 10 callbacks suppressed
	[  +7.416140] kauditd_printk_skb: 55 callbacks suppressed
	[  +5.600964] kauditd_printk_skb: 31 callbacks suppressed
	
	
	==> etcd [af153f3716e56e0aba4d70e3ae86ff7d46933ad9b7f0dcd1f1ab5476c014ec6c] <==
	{"level":"info","ts":"2024-09-18T19:40:55.783491Z","caller":"traceutil/trace.go:171","msg":"trace[616005258] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1095; }","duration":"411.443513ms","start":"2024-09-18T19:40:55.372039Z","end":"2024-09-18T19:40:55.783482Z","steps":["trace[616005258] 'agreement among raft nodes before linearized reading'  (duration: 411.261686ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-18T19:40:55.783514Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-18T19:40:55.372005Z","time spent":"411.502511ms","remote":"127.0.0.1:37770","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2024-09-18T19:40:55.783555Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-18T19:40:55.320353Z","time spent":"463.071957ms","remote":"127.0.0.1:37658","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":764,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/events/gadget/gadget-56gjj.17f66dffc2f7c48d\" mod_revision:1086 > success:<request_put:<key:\"/registry/events/gadget/gadget-56gjj.17f66dffc2f7c48d\" value_size:693 lease:8396277637547487747 >> failure:<request_range:<key:\"/registry/events/gadget/gadget-56gjj.17f66dffc2f7c48d\" > >"}
	{"level":"info","ts":"2024-09-18T19:40:55.783273Z","caller":"traceutil/trace.go:171","msg":"trace[1329412000] linearizableReadLoop","detail":"{readStateIndex:1127; appliedIndex:1126; }","duration":"411.153549ms","start":"2024-09-18T19:40:55.372056Z","end":"2024-09-18T19:40:55.783210Z","steps":["trace[1329412000] 'read index received'  (duration: 410.953333ms)","trace[1329412000] 'applied index is now lower than readState.Index'  (duration: 199.615µs)"],"step_count":2}
	{"level":"warn","ts":"2024-09-18T19:40:55.783842Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"274.750813ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-18T19:40:55.783883Z","caller":"traceutil/trace.go:171","msg":"trace[1696276562] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1095; }","duration":"274.791129ms","start":"2024-09-18T19:40:55.509082Z","end":"2024-09-18T19:40:55.783873Z","steps":["trace[1696276562] 'agreement among raft nodes before linearized reading'  (duration: 274.733422ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-18T19:40:58.140649Z","caller":"traceutil/trace.go:171","msg":"trace[1269958253] transaction","detail":"{read_only:false; response_revision:1100; number_of_response:1; }","duration":"116.461138ms","start":"2024-09-18T19:40:58.024133Z","end":"2024-09-18T19:40:58.140595Z","steps":["trace[1269958253] 'process raft request'  (duration: 116.084244ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-18T19:49:27.720379Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1528}
	{"level":"info","ts":"2024-09-18T19:49:27.755828Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1528,"took":"34.749964ms","hash":189233142,"current-db-size-bytes":6471680,"current-db-size":"6.5 MB","current-db-size-in-use-bytes":3465216,"current-db-size-in-use":"3.5 MB"}
	{"level":"info","ts":"2024-09-18T19:49:27.755900Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":189233142,"revision":1528,"compact-revision":-1}
	{"level":"warn","ts":"2024-09-18T19:49:39.140178Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"372.134407ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1114"}
	{"level":"info","ts":"2024-09-18T19:49:39.140269Z","caller":"traceutil/trace.go:171","msg":"trace[44095741] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:2065; }","duration":"372.258566ms","start":"2024-09-18T19:49:38.767988Z","end":"2024-09-18T19:49:39.140247Z","steps":["trace[44095741] 'range keys from in-memory index tree'  (duration: 371.974715ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-18T19:49:39.140351Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-18T19:49:38.767903Z","time spent":"372.433662ms","remote":"127.0.0.1:37750","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":1,"response size":1137,"request content":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" "}
	{"level":"warn","ts":"2024-09-18T19:49:39.140594Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"366.852656ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/yakd-dashboard\" ","response":"range_response_count:1 size:2312"}
	{"level":"info","ts":"2024-09-18T19:49:39.140666Z","caller":"traceutil/trace.go:171","msg":"trace[955157213] range","detail":"{range_begin:/registry/namespaces/yakd-dashboard; range_end:; response_count:1; response_revision:2065; }","duration":"366.925812ms","start":"2024-09-18T19:49:38.773733Z","end":"2024-09-18T19:49:39.140659Z","steps":["trace[955157213] 'range keys from in-memory index tree'  (duration: 366.803518ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-18T19:49:39.140687Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-18T19:49:38.773695Z","time spent":"366.986639ms","remote":"127.0.0.1:37688","response type":"/etcdserverpb.KV/Range","request count":0,"request size":37,"response count":1,"response size":2335,"request content":"key:\"/registry/namespaces/yakd-dashboard\" "}
	{"level":"info","ts":"2024-09-18T19:49:39.140890Z","caller":"traceutil/trace.go:171","msg":"trace[1592645195] linearizableReadLoop","detail":"{readStateIndex:2214; appliedIndex:2213; }","duration":"186.300087ms","start":"2024-09-18T19:49:38.954572Z","end":"2024-09-18T19:49:39.140872Z","steps":["trace[1592645195] 'read index received'  (duration: 184.999995ms)","trace[1592645195] 'applied index is now lower than readState.Index'  (duration: 1.299584ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-18T19:49:39.141064Z","caller":"traceutil/trace.go:171","msg":"trace[1097499478] transaction","detail":"{read_only:false; response_revision:2066; number_of_response:1; }","duration":"254.38821ms","start":"2024-09-18T19:49:38.886663Z","end":"2024-09-18T19:49:39.141051Z","steps":["trace[1097499478] 'process raft request'  (duration: 252.880343ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-18T19:49:39.141181Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"186.598804ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-18T19:49:39.141221Z","caller":"traceutil/trace.go:171","msg":"trace[2143615549] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:2066; }","duration":"186.63848ms","start":"2024-09-18T19:49:38.954567Z","end":"2024-09-18T19:49:39.141206Z","steps":["trace[2143615549] 'agreement among raft nodes before linearized reading'  (duration: 186.586728ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-18T19:49:39.141319Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"163.150361ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-18T19:49:39.141350Z","caller":"traceutil/trace.go:171","msg":"trace[8159077] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:2066; }","duration":"163.180095ms","start":"2024-09-18T19:49:38.978162Z","end":"2024-09-18T19:49:39.141343Z","steps":["trace[8159077] 'agreement among raft nodes before linearized reading'  (duration: 163.144483ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-18T19:50:18.732088Z","caller":"traceutil/trace.go:171","msg":"trace[1049508816] transaction","detail":"{read_only:false; response_revision:2373; number_of_response:1; }","duration":"138.120687ms","start":"2024-09-18T19:50:18.593955Z","end":"2024-09-18T19:50:18.732075Z","steps":["trace[1049508816] 'process raft request'  (duration: 136.844604ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-18T19:50:26.593679Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"295.207794ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-18T19:50:26.593767Z","caller":"traceutil/trace.go:171","msg":"trace[1193075531] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:2438; }","duration":"295.306191ms","start":"2024-09-18T19:50:26.298443Z","end":"2024-09-18T19:50:26.593750Z","steps":["trace[1193075531] 'range keys from in-memory index tree'  (duration: 295.158117ms)"],"step_count":1}
	
	
	==> gcp-auth [172ef2c9c611dd5748901516fac554cdb8f031212b96ee13467bda756557a347] <==
	2024/09/18 19:41:12 Ready to write response ...
	2024/09/18 19:41:12 Ready to marshal response ...
	2024/09/18 19:41:12 Ready to write response ...
	2024/09/18 19:49:14 Ready to marshal response ...
	2024/09/18 19:49:14 Ready to write response ...
	2024/09/18 19:49:15 Ready to marshal response ...
	2024/09/18 19:49:15 Ready to write response ...
	2024/09/18 19:49:25 Ready to marshal response ...
	2024/09/18 19:49:25 Ready to write response ...
	2024/09/18 19:49:27 Ready to marshal response ...
	2024/09/18 19:49:27 Ready to write response ...
	2024/09/18 19:49:33 Ready to marshal response ...
	2024/09/18 19:49:33 Ready to write response ...
	2024/09/18 19:50:01 Ready to marshal response ...
	2024/09/18 19:50:01 Ready to write response ...
	2024/09/18 19:50:04 Ready to marshal response ...
	2024/09/18 19:50:04 Ready to write response ...
	2024/09/18 19:50:14 Ready to marshal response ...
	2024/09/18 19:50:14 Ready to write response ...
	2024/09/18 19:50:14 Ready to marshal response ...
	2024/09/18 19:50:14 Ready to write response ...
	2024/09/18 19:50:14 Ready to marshal response ...
	2024/09/18 19:50:14 Ready to write response ...
	2024/09/18 19:50:22 Ready to marshal response ...
	2024/09/18 19:50:22 Ready to write response ...
	
	
	==> kernel <==
	 19:50:28 up 11 min,  0 users,  load average: 1.96, 0.99, 0.60
	Linux addons-815929 5.10.207 #1 SMP Mon Sep 16 15:00:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [bd304f4e9c5207664c3f5ae1d500d9898ffb98e6f5fbe2945d1a71f7d5f78e6c] <==
	E0918 19:41:22.794914       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0918 19:41:22.794814       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.103.233.223:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.103.233.223:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.103.233.223:443: connect: connection refused" logger="UnhandledError"
	E0918 19:41:22.796712       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.103.233.223:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.103.233.223:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.103.233.223:443: connect: connection refused" logger="UnhandledError"
	E0918 19:41:22.802171       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.103.233.223:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.103.233.223:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.103.233.223:443: connect: connection refused" logger="UnhandledError"
	I0918 19:41:22.877204       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0918 19:49:46.122311       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0918 19:49:51.351654       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0918 19:49:52.486009       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0918 19:50:14.711287       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.96.178.208"}
	I0918 19:50:21.473006       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0918 19:50:21.475672       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0918 19:50:21.499495       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0918 19:50:21.499582       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0918 19:50:21.528355       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0918 19:50:21.528504       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0918 19:50:21.650040       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0918 19:50:21.650140       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0918 19:50:22.108280       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0918 19:50:22.290418       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.97.211.60"}
	W0918 19:50:22.650820       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0918 19:50:22.650941       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0918 19:50:22.664307       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	
	
	==> kube-controller-manager [f287481be73d00137673fed540f11490c4a3a51a98e37282dab41b64635fe4f1] <==
	I0918 19:50:14.840127       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7b5c95b59d" duration="58.456µs"
	I0918 19:50:14.844904       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7b5c95b59d" duration="120.95µs"
	I0918 19:50:14.869988       1 stateful_set.go:466] "StatefulSet has been deleted" logger="statefulset-controller" key="kube-system/csi-hostpath-attacher"
	I0918 19:50:15.014176       1 stateful_set.go:466] "StatefulSet has been deleted" logger="statefulset-controller" key="kube-system/csi-hostpath-resizer"
	I0918 19:50:15.578320       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-815929"
	I0918 19:50:20.846437       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7b5c95b59d" duration="205.697µs"
	I0918 19:50:20.900372       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7b5c95b59d" duration="21.081388ms"
	I0918 19:50:20.901029       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7b5c95b59d" duration="92.621µs"
	I0918 19:50:21.681377       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/snapshot-controller-56fcc65765" duration="4.379µs"
	E0918 19:50:22.653783       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	E0918 19:50:22.653818       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	E0918 19:50:22.666134       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0918 19:50:23.765338       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0918 19:50:23.765397       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0918 19:50:23.798661       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0918 19:50:23.798746       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0918 19:50:24.131968       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0918 19:50:24.132006       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0918 19:50:26.646338       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="4.694µs"
	W0918 19:50:26.673424       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0918 19:50:26.673543       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0918 19:50:26.946066       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0918 19:50:26.946115       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0918 19:50:27.116832       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0918 19:50:27.116870       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [c25ce10b42b68a6ed5d508643a26856bbe78ab45c824d9bb52b62536a54e94f6] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0918 19:39:39.772742       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0918 19:39:39.855112       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.158"]
	E0918 19:39:39.855197       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0918 19:39:39.943796       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0918 19:39:39.943838       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0918 19:39:39.943864       1 server_linux.go:169] "Using iptables Proxier"
	I0918 19:39:39.953935       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0918 19:39:39.954227       1 server.go:483] "Version info" version="v1.31.1"
	I0918 19:39:39.954239       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0918 19:39:39.958453       1 config.go:199] "Starting service config controller"
	I0918 19:39:39.958495       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0918 19:39:39.958560       1 config.go:105] "Starting endpoint slice config controller"
	I0918 19:39:39.958577       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0918 19:39:39.965954       1 config.go:328] "Starting node config controller"
	I0918 19:39:39.965978       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0918 19:39:40.059312       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0918 19:39:40.059385       1 shared_informer.go:320] Caches are synced for service config
	I0918 19:39:40.067090       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [dcda62e7939defac3570394097f84c23877573964da78205f5348d3f4c1f746d] <==
	W0918 19:39:30.259773       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0918 19:39:30.259828       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0918 19:39:30.260863       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0918 19:39:30.260937       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0918 19:39:30.316355       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0918 19:39:30.316410       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0918 19:39:30.325700       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0918 19:39:30.325748       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0918 19:39:30.384152       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0918 19:39:30.384201       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0918 19:39:30.388938       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0918 19:39:30.388996       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0918 19:39:30.471673       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0918 19:39:30.471719       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0918 19:39:30.484033       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0918 19:39:30.484082       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0918 19:39:30.491339       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0918 19:39:30.491383       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0918 19:39:30.519278       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0918 19:39:30.519335       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0918 19:39:30.634983       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0918 19:39:30.635043       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0918 19:39:30.839874       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0918 19:39:30.840702       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0918 19:39:32.951022       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 18 19:50:26 addons-815929 kubelet[1202]: I0918 19:50:26.047112    1202 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-66c9cd494c-96wcm" secret="" err="secret \"gcp-auth\" not found"
	Sep 18 19:50:26 addons-815929 kubelet[1202]: I0918 19:50:26.674179    1202 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h8kqj\" (UniqueName: \"kubernetes.io/projected/0ea2b254-30de-44b6-92b4-391e81e4be7e-kube-api-access-h8kqj\") pod \"0ea2b254-30de-44b6-92b4-391e81e4be7e\" (UID: \"0ea2b254-30de-44b6-92b4-391e81e4be7e\") "
	Sep 18 19:50:26 addons-815929 kubelet[1202]: I0918 19:50:26.674238    1202 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/0ea2b254-30de-44b6-92b4-391e81e4be7e-gcp-creds\") pod \"0ea2b254-30de-44b6-92b4-391e81e4be7e\" (UID: \"0ea2b254-30de-44b6-92b4-391e81e4be7e\") "
	Sep 18 19:50:26 addons-815929 kubelet[1202]: I0918 19:50:26.674326    1202 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0ea2b254-30de-44b6-92b4-391e81e4be7e-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "0ea2b254-30de-44b6-92b4-391e81e4be7e" (UID: "0ea2b254-30de-44b6-92b4-391e81e4be7e"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 18 19:50:26 addons-815929 kubelet[1202]: I0918 19:50:26.680425    1202 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ea2b254-30de-44b6-92b4-391e81e4be7e-kube-api-access-h8kqj" (OuterVolumeSpecName: "kube-api-access-h8kqj") pod "0ea2b254-30de-44b6-92b4-391e81e4be7e" (UID: "0ea2b254-30de-44b6-92b4-391e81e4be7e"). InnerVolumeSpecName "kube-api-access-h8kqj". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 18 19:50:26 addons-815929 kubelet[1202]: I0918 19:50:26.775180    1202 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/0ea2b254-30de-44b6-92b4-391e81e4be7e-gcp-creds\") on node \"addons-815929\" DevicePath \"\""
	Sep 18 19:50:26 addons-815929 kubelet[1202]: I0918 19:50:26.775210    1202 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-h8kqj\" (UniqueName: \"kubernetes.io/projected/0ea2b254-30de-44b6-92b4-391e81e4be7e-kube-api-access-h8kqj\") on node \"addons-815929\" DevicePath \"\""
	Sep 18 19:50:27 addons-815929 kubelet[1202]: I0918 19:50:27.000175    1202 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx" podStartSLOduration=1.002043231 podStartE2EDuration="5.00015881s" podCreationTimestamp="2024-09-18 19:50:22 +0000 UTC" firstStartedPulling="2024-09-18 19:50:22.718143847 +0000 UTC m=+650.790447460" lastFinishedPulling="2024-09-18 19:50:26.716259414 +0000 UTC m=+654.788563039" observedRunningTime="2024-09-18 19:50:26.99977392 +0000 UTC m=+655.072077546" watchObservedRunningTime="2024-09-18 19:50:27.00015881 +0000 UTC m=+655.072462442"
	Sep 18 19:50:27 addons-815929 kubelet[1202]: I0918 19:50:27.179180    1202 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s52gv\" (UniqueName: \"kubernetes.io/projected/5ee21740-39f3-406e-bb72-65a28c5b5dde-kube-api-access-s52gv\") pod \"5ee21740-39f3-406e-bb72-65a28c5b5dde\" (UID: \"5ee21740-39f3-406e-bb72-65a28c5b5dde\") "
	Sep 18 19:50:27 addons-815929 kubelet[1202]: I0918 19:50:27.194498    1202 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ee21740-39f3-406e-bb72-65a28c5b5dde-kube-api-access-s52gv" (OuterVolumeSpecName: "kube-api-access-s52gv") pod "5ee21740-39f3-406e-bb72-65a28c5b5dde" (UID: "5ee21740-39f3-406e-bb72-65a28c5b5dde"). InnerVolumeSpecName "kube-api-access-s52gv". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 18 19:50:27 addons-815929 kubelet[1202]: I0918 19:50:27.281173    1202 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-s52gv\" (UniqueName: \"kubernetes.io/projected/5ee21740-39f3-406e-bb72-65a28c5b5dde-kube-api-access-s52gv\") on node \"addons-815929\" DevicePath \"\""
	Sep 18 19:50:27 addons-815929 kubelet[1202]: I0918 19:50:27.382324    1202 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xjdbp\" (UniqueName: \"kubernetes.io/projected/170420dc-8ea6-4aba-99c1-9f61d4449fff-kube-api-access-xjdbp\") pod \"170420dc-8ea6-4aba-99c1-9f61d4449fff\" (UID: \"170420dc-8ea6-4aba-99c1-9f61d4449fff\") "
	Sep 18 19:50:27 addons-815929 kubelet[1202]: I0918 19:50:27.384300    1202 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/170420dc-8ea6-4aba-99c1-9f61d4449fff-kube-api-access-xjdbp" (OuterVolumeSpecName: "kube-api-access-xjdbp") pod "170420dc-8ea6-4aba-99c1-9f61d4449fff" (UID: "170420dc-8ea6-4aba-99c1-9f61d4449fff"). InnerVolumeSpecName "kube-api-access-xjdbp". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 18 19:50:27 addons-815929 kubelet[1202]: I0918 19:50:27.483002    1202 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-xjdbp\" (UniqueName: \"kubernetes.io/projected/170420dc-8ea6-4aba-99c1-9f61d4449fff-kube-api-access-xjdbp\") on node \"addons-815929\" DevicePath \"\""
	Sep 18 19:50:27 addons-815929 kubelet[1202]: I0918 19:50:27.920682    1202 scope.go:117] "RemoveContainer" containerID="5ce6f551671c7ab058f935b8c72af7b6e49646bf1b442c9ba1ff4d6a7f28bd42"
	Sep 18 19:50:27 addons-815929 kubelet[1202]: I0918 19:50:27.973293    1202 scope.go:117] "RemoveContainer" containerID="5ce6f551671c7ab058f935b8c72af7b6e49646bf1b442c9ba1ff4d6a7f28bd42"
	Sep 18 19:50:27 addons-815929 kubelet[1202]: E0918 19:50:27.974068    1202 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ce6f551671c7ab058f935b8c72af7b6e49646bf1b442c9ba1ff4d6a7f28bd42\": container with ID starting with 5ce6f551671c7ab058f935b8c72af7b6e49646bf1b442c9ba1ff4d6a7f28bd42 not found: ID does not exist" containerID="5ce6f551671c7ab058f935b8c72af7b6e49646bf1b442c9ba1ff4d6a7f28bd42"
	Sep 18 19:50:27 addons-815929 kubelet[1202]: I0918 19:50:27.974161    1202 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ce6f551671c7ab058f935b8c72af7b6e49646bf1b442c9ba1ff4d6a7f28bd42"} err="failed to get container status \"5ce6f551671c7ab058f935b8c72af7b6e49646bf1b442c9ba1ff4d6a7f28bd42\": rpc error: code = NotFound desc = could not find container \"5ce6f551671c7ab058f935b8c72af7b6e49646bf1b442c9ba1ff4d6a7f28bd42\": container with ID starting with 5ce6f551671c7ab058f935b8c72af7b6e49646bf1b442c9ba1ff4d6a7f28bd42 not found: ID does not exist"
	Sep 18 19:50:27 addons-815929 kubelet[1202]: I0918 19:50:27.974199    1202 scope.go:117] "RemoveContainer" containerID="094888f2e69706006c5048d9d508e8d28a82d3f350d1c4bf18d7973c4828ebeb"
	Sep 18 19:50:28 addons-815929 kubelet[1202]: I0918 19:50:28.004855    1202 scope.go:117] "RemoveContainer" containerID="094888f2e69706006c5048d9d508e8d28a82d3f350d1c4bf18d7973c4828ebeb"
	Sep 18 19:50:28 addons-815929 kubelet[1202]: E0918 19:50:28.006801    1202 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"094888f2e69706006c5048d9d508e8d28a82d3f350d1c4bf18d7973c4828ebeb\": container with ID starting with 094888f2e69706006c5048d9d508e8d28a82d3f350d1c4bf18d7973c4828ebeb not found: ID does not exist" containerID="094888f2e69706006c5048d9d508e8d28a82d3f350d1c4bf18d7973c4828ebeb"
	Sep 18 19:50:28 addons-815929 kubelet[1202]: I0918 19:50:28.006851    1202 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"094888f2e69706006c5048d9d508e8d28a82d3f350d1c4bf18d7973c4828ebeb"} err="failed to get container status \"094888f2e69706006c5048d9d508e8d28a82d3f350d1c4bf18d7973c4828ebeb\": rpc error: code = NotFound desc = could not find container \"094888f2e69706006c5048d9d508e8d28a82d3f350d1c4bf18d7973c4828ebeb\": container with ID starting with 094888f2e69706006c5048d9d508e8d28a82d3f350d1c4bf18d7973c4828ebeb not found: ID does not exist"
	Sep 18 19:50:28 addons-815929 kubelet[1202]: I0918 19:50:28.050976    1202 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ea2b254-30de-44b6-92b4-391e81e4be7e" path="/var/lib/kubelet/pods/0ea2b254-30de-44b6-92b4-391e81e4be7e/volumes"
	Sep 18 19:50:28 addons-815929 kubelet[1202]: I0918 19:50:28.051237    1202 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="170420dc-8ea6-4aba-99c1-9f61d4449fff" path="/var/lib/kubelet/pods/170420dc-8ea6-4aba-99c1-9f61d4449fff/volumes"
	Sep 18 19:50:28 addons-815929 kubelet[1202]: I0918 19:50:28.051597    1202 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ee21740-39f3-406e-bb72-65a28c5b5dde" path="/var/lib/kubelet/pods/5ee21740-39f3-406e-bb72-65a28c5b5dde/volumes"
	
	
	==> storage-provisioner [3759671f1017e4877f68e8f02b9e88508a0b5b788503476fa6663f5b152d0fa6] <==
	I0918 19:39:45.052140       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0918 19:39:45.070541       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0918 19:39:45.070599       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0918 19:39:45.124575       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0918 19:39:45.124795       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-815929_cdd2de0e-7456-4024-9550-98c8060a35ab!
	I0918 19:39:45.133742       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ab4840eb-b79e-468b-af43-50c550ad69c5", APIVersion:"v1", ResourceVersion:"660", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-815929_cdd2de0e-7456-4024-9550-98c8060a35ab became leader
	I0918 19:39:45.237552       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-815929_cdd2de0e-7456-4024-9550-98c8060a35ab!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-815929 -n addons-815929
helpers_test.go:261: (dbg) Run:  kubectl --context addons-815929 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox ingress-nginx-admission-create-r4nz6 ingress-nginx-admission-patch-xp8xg
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-815929 describe pod busybox ingress-nginx-admission-create-r4nz6 ingress-nginx-admission-patch-xp8xg
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-815929 describe pod busybox ingress-nginx-admission-create-r4nz6 ingress-nginx-admission-patch-xp8xg: exit status 1 (75.659341ms)

-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-815929/192.168.39.158
	Start Time:       Wed, 18 Sep 2024 19:41:12 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.23
	IPs:
	  IP:  10.244.0.23
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kvbgq (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-kvbgq:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  9m17s                  default-scheduler  Successfully assigned default/busybox to addons-815929
	  Normal   Pulling    7m41s (x4 over 9m17s)  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m41s (x4 over 9m17s)  kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
	  Warning  Failed     7m41s (x4 over 9m17s)  kubelet            Error: ErrImagePull
	  Warning  Failed     7m30s (x6 over 9m16s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m9s (x21 over 9m16s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-r4nz6" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-xp8xg" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-815929 describe pod busybox ingress-nginx-admission-create-r4nz6 ingress-nginx-admission-patch-xp8xg: exit status 1
--- FAIL: TestAddons/parallel/Registry (74.44s)
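The post-mortem above also lists the lingering busybox pod stuck in ImagePullBackOff because pulls of gcr.io/k8s-minikube/busybox:1.28.4-glibc were rejected with "unable to retrieve auth token: invalid username/password". As a hypothetical follow-up (not part of the recorded run, and assuming the addons-815929 profile is still running), the same pull could be retried directly on the node with crictl to check whether the auth failure reproduces outside kubelet's back-off loop:

	# hypothetical spot-check: pull the same image on the node, bypassing kubelet
	out/minikube-linux-amd64 -p addons-815929 ssh -- sudo crictl pull gcr.io/k8s-minikube/busybox:1.28.4-glibc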

TestAddons/parallel/Ingress (151.88s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress


=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-815929 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-815929 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-815929 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [c5107435-d0c7-4308-88a9-d0fc42111e5e] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [c5107435-d0c7-4308-88a9-d0fc42111e5e] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.003746462s
I0918 19:50:33.342034   14878 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-815929 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-815929 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m8.908911767s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-815929 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-815929 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.158
addons_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p addons-815929 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-amd64 -p addons-815929 addons disable ingress-dns --alsologtostderr -v=1: (1.02975137s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-amd64 -p addons-815929 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-amd64 -p addons-815929 addons disable ingress --alsologtostderr -v=1: (7.713227461s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-815929 -n addons-815929
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-815929 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-815929 logs -n 25: (1.294961359s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-226542                                                                     | download-only-226542 | jenkins | v1.34.0 | 18 Sep 24 19:38 UTC | 18 Sep 24 19:38 UTC |
	| delete  | -p download-only-228031                                                                     | download-only-228031 | jenkins | v1.34.0 | 18 Sep 24 19:38 UTC | 18 Sep 24 19:38 UTC |
	| delete  | -p download-only-226542                                                                     | download-only-226542 | jenkins | v1.34.0 | 18 Sep 24 19:38 UTC | 18 Sep 24 19:38 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-930383 | jenkins | v1.34.0 | 18 Sep 24 19:38 UTC |                     |
	|         | binary-mirror-930383                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:32853                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-930383                                                                     | binary-mirror-930383 | jenkins | v1.34.0 | 18 Sep 24 19:38 UTC | 18 Sep 24 19:38 UTC |
	| addons  | enable dashboard -p                                                                         | addons-815929        | jenkins | v1.34.0 | 18 Sep 24 19:38 UTC |                     |
	|         | addons-815929                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-815929        | jenkins | v1.34.0 | 18 Sep 24 19:38 UTC |                     |
	|         | addons-815929                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-815929 --wait=true                                                                | addons-815929        | jenkins | v1.34.0 | 18 Sep 24 19:38 UTC | 18 Sep 24 19:41 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-815929 ssh cat                                                                       | addons-815929        | jenkins | v1.34.0 | 18 Sep 24 19:49 UTC | 18 Sep 24 19:49 UTC |
	|         | /opt/local-path-provisioner/pvc-640ef54b-981f-4e43-8493-c1fa2c048453_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-815929 addons disable                                                                | addons-815929        | jenkins | v1.34.0 | 18 Sep 24 19:49 UTC | 18 Sep 24 19:49 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-815929 addons disable                                                                | addons-815929        | jenkins | v1.34.0 | 18 Sep 24 19:49 UTC | 18 Sep 24 19:49 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-815929        | jenkins | v1.34.0 | 18 Sep 24 19:49 UTC | 18 Sep 24 19:49 UTC |
	|         | -p addons-815929                                                                            |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-815929        | jenkins | v1.34.0 | 18 Sep 24 19:49 UTC | 18 Sep 24 19:49 UTC |
	|         | addons-815929                                                                               |                      |         |         |                     |                     |
	| addons  | addons-815929 addons disable                                                                | addons-815929        | jenkins | v1.34.0 | 18 Sep 24 19:50 UTC | 18 Sep 24 19:50 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-815929        | jenkins | v1.34.0 | 18 Sep 24 19:50 UTC | 18 Sep 24 19:50 UTC |
	|         | addons-815929                                                                               |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-815929        | jenkins | v1.34.0 | 18 Sep 24 19:50 UTC | 18 Sep 24 19:50 UTC |
	|         | -p addons-815929                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-815929 addons                                                                        | addons-815929        | jenkins | v1.34.0 | 18 Sep 24 19:50 UTC | 18 Sep 24 19:50 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-815929 addons                                                                        | addons-815929        | jenkins | v1.34.0 | 18 Sep 24 19:50 UTC | 18 Sep 24 19:50 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-815929 ip                                                                            | addons-815929        | jenkins | v1.34.0 | 18 Sep 24 19:50 UTC | 18 Sep 24 19:50 UTC |
	| addons  | addons-815929 addons disable                                                                | addons-815929        | jenkins | v1.34.0 | 18 Sep 24 19:50 UTC | 18 Sep 24 19:50 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-815929 addons disable                                                                | addons-815929        | jenkins | v1.34.0 | 18 Sep 24 19:50 UTC | 18 Sep 24 19:50 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-815929 ssh curl -s                                                                   | addons-815929        | jenkins | v1.34.0 | 18 Sep 24 19:50 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| ip      | addons-815929 ip                                                                            | addons-815929        | jenkins | v1.34.0 | 18 Sep 24 19:52 UTC | 18 Sep 24 19:52 UTC |
	| addons  | addons-815929 addons disable                                                                | addons-815929        | jenkins | v1.34.0 | 18 Sep 24 19:52 UTC | 18 Sep 24 19:52 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-815929 addons disable                                                                | addons-815929        | jenkins | v1.34.0 | 18 Sep 24 19:52 UTC | 18 Sep 24 19:52 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/18 19:38:53
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0918 19:38:53.118706   15635 out.go:345] Setting OutFile to fd 1 ...
	I0918 19:38:53.118965   15635 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 19:38:53.118975   15635 out.go:358] Setting ErrFile to fd 2...
	I0918 19:38:53.118980   15635 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 19:38:53.119217   15635 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19667-7671/.minikube/bin
	I0918 19:38:53.119878   15635 out.go:352] Setting JSON to false
	I0918 19:38:53.120737   15635 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1277,"bootTime":1726687056,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0918 19:38:53.120834   15635 start.go:139] virtualization: kvm guest
	I0918 19:38:53.123148   15635 out.go:177] * [addons-815929] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0918 19:38:53.124482   15635 notify.go:220] Checking for updates...
	I0918 19:38:53.124492   15635 out.go:177]   - MINIKUBE_LOCATION=19667
	I0918 19:38:53.125673   15635 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 19:38:53.126877   15635 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19667-7671/kubeconfig
	I0918 19:38:53.127987   15635 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19667-7671/.minikube
	I0918 19:38:53.129021   15635 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0918 19:38:53.130051   15635 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0918 19:38:53.131293   15635 driver.go:394] Setting default libvirt URI to qemu:///system
	I0918 19:38:53.163239   15635 out.go:177] * Using the kvm2 driver based on user configuration
	I0918 19:38:53.164302   15635 start.go:297] selected driver: kvm2
	I0918 19:38:53.164318   15635 start.go:901] validating driver "kvm2" against <nil>
	I0918 19:38:53.164342   15635 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 19:38:53.165066   15635 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 19:38:53.165151   15635 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19667-7671/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0918 19:38:53.179993   15635 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0918 19:38:53.180067   15635 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0918 19:38:53.180362   15635 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0918 19:38:53.180395   15635 cni.go:84] Creating CNI manager for ""
	I0918 19:38:53.180443   15635 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0918 19:38:53.180452   15635 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0918 19:38:53.180510   15635 start.go:340] cluster config:
	{Name:addons-815929 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-815929 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAg
entPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 19:38:53.180624   15635 iso.go:125] acquiring lock: {Name:mkbcf4d84f3091567a39802cd5e773cb10b8c545 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 19:38:53.182868   15635 out.go:177] * Starting "addons-815929" primary control-plane node in "addons-815929" cluster
	I0918 19:38:53.183982   15635 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0918 19:38:53.184039   15635 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0918 19:38:53.184052   15635 cache.go:56] Caching tarball of preloaded images
	I0918 19:38:53.184131   15635 preload.go:172] Found /home/jenkins/minikube-integration/19667-7671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0918 19:38:53.184144   15635 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0918 19:38:53.184489   15635 profile.go:143] Saving config to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/addons-815929/config.json ...
	I0918 19:38:53.184512   15635 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/addons-815929/config.json: {Name:mk126f196443338ecc21176132e0fd9e3cc4ae5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 19:38:53.184666   15635 start.go:360] acquireMachinesLock for addons-815929: {Name:mk1b0d68a0b7d9d4bc8204d5d81cb9eb77222526 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 19:38:53.184723   15635 start.go:364] duration metric: took 41.331µs to acquireMachinesLock for "addons-815929"
	I0918 19:38:53.184743   15635 start.go:93] Provisioning new machine with config: &{Name:addons-815929 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:addons-815929 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0918 19:38:53.184805   15635 start.go:125] createHost starting for "" (driver="kvm2")
	I0918 19:38:53.186310   15635 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0918 19:38:53.186442   15635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 19:38:53.186488   15635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:38:53.200841   15635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32951
	I0918 19:38:53.201300   15635 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:38:53.201895   15635 main.go:141] libmachine: Using API Version  1
	I0918 19:38:53.201914   15635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:38:53.202258   15635 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:38:53.202436   15635 main.go:141] libmachine: (addons-815929) Calling .GetMachineName
	I0918 19:38:53.202591   15635 main.go:141] libmachine: (addons-815929) Calling .DriverName
	I0918 19:38:53.202765   15635 start.go:159] libmachine.API.Create for "addons-815929" (driver="kvm2")
	I0918 19:38:53.202793   15635 client.go:168] LocalClient.Create starting
	I0918 19:38:53.202832   15635 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem
	I0918 19:38:53.498664   15635 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem
	I0918 19:38:53.663477   15635 main.go:141] libmachine: Running pre-create checks...
	I0918 19:38:53.663499   15635 main.go:141] libmachine: (addons-815929) Calling .PreCreateCheck
	I0918 19:38:53.663965   15635 main.go:141] libmachine: (addons-815929) Calling .GetConfigRaw
	I0918 19:38:53.664477   15635 main.go:141] libmachine: Creating machine...
	I0918 19:38:53.664493   15635 main.go:141] libmachine: (addons-815929) Calling .Create
	I0918 19:38:53.664656   15635 main.go:141] libmachine: (addons-815929) Creating KVM machine...
	I0918 19:38:53.665882   15635 main.go:141] libmachine: (addons-815929) DBG | found existing default KVM network
	I0918 19:38:53.666727   15635 main.go:141] libmachine: (addons-815929) DBG | I0918 19:38:53.666575   15656 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002111f0}
	I0918 19:38:53.666778   15635 main.go:141] libmachine: (addons-815929) DBG | created network xml: 
	I0918 19:38:53.666798   15635 main.go:141] libmachine: (addons-815929) DBG | <network>
	I0918 19:38:53.666808   15635 main.go:141] libmachine: (addons-815929) DBG |   <name>mk-addons-815929</name>
	I0918 19:38:53.666813   15635 main.go:141] libmachine: (addons-815929) DBG |   <dns enable='no'/>
	I0918 19:38:53.666818   15635 main.go:141] libmachine: (addons-815929) DBG |   
	I0918 19:38:53.666825   15635 main.go:141] libmachine: (addons-815929) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0918 19:38:53.666831   15635 main.go:141] libmachine: (addons-815929) DBG |     <dhcp>
	I0918 19:38:53.666838   15635 main.go:141] libmachine: (addons-815929) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0918 19:38:53.666843   15635 main.go:141] libmachine: (addons-815929) DBG |     </dhcp>
	I0918 19:38:53.666848   15635 main.go:141] libmachine: (addons-815929) DBG |   </ip>
	I0918 19:38:53.666855   15635 main.go:141] libmachine: (addons-815929) DBG |   
	I0918 19:38:53.666859   15635 main.go:141] libmachine: (addons-815929) DBG | </network>
	I0918 19:38:53.666868   15635 main.go:141] libmachine: (addons-815929) DBG | 
	I0918 19:38:53.672175   15635 main.go:141] libmachine: (addons-815929) DBG | trying to create private KVM network mk-addons-815929 192.168.39.0/24...
	I0918 19:38:53.742842   15635 main.go:141] libmachine: (addons-815929) Setting up store path in /home/jenkins/minikube-integration/19667-7671/.minikube/machines/addons-815929 ...
	I0918 19:38:53.742874   15635 main.go:141] libmachine: (addons-815929) DBG | private KVM network mk-addons-815929 192.168.39.0/24 created
	I0918 19:38:53.742891   15635 main.go:141] libmachine: (addons-815929) Building disk image from file:///home/jenkins/minikube-integration/19667-7671/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso
	I0918 19:38:53.742925   15635 main.go:141] libmachine: (addons-815929) DBG | I0918 19:38:53.742793   15656 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19667-7671/.minikube
	I0918 19:38:53.742951   15635 main.go:141] libmachine: (addons-815929) Downloading /home/jenkins/minikube-integration/19667-7671/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19667-7671/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso...
	I0918 19:38:54.002785   15635 main.go:141] libmachine: (addons-815929) DBG | I0918 19:38:54.002609   15656 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/addons-815929/id_rsa...
	I0918 19:38:54.238348   15635 main.go:141] libmachine: (addons-815929) DBG | I0918 19:38:54.238178   15656 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/addons-815929/addons-815929.rawdisk...
	I0918 19:38:54.238378   15635 main.go:141] libmachine: (addons-815929) DBG | Writing magic tar header
	I0918 19:38:54.238388   15635 main.go:141] libmachine: (addons-815929) DBG | Writing SSH key tar header
	I0918 19:38:54.238395   15635 main.go:141] libmachine: (addons-815929) DBG | I0918 19:38:54.238295   15656 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19667-7671/.minikube/machines/addons-815929 ...
	I0918 19:38:54.238406   15635 main.go:141] libmachine: (addons-815929) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/addons-815929
	I0918 19:38:54.238460   15635 main.go:141] libmachine: (addons-815929) Setting executable bit set on /home/jenkins/minikube-integration/19667-7671/.minikube/machines/addons-815929 (perms=drwx------)
	I0918 19:38:54.238483   15635 main.go:141] libmachine: (addons-815929) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19667-7671/.minikube/machines
	I0918 19:38:54.238491   15635 main.go:141] libmachine: (addons-815929) Setting executable bit set on /home/jenkins/minikube-integration/19667-7671/.minikube/machines (perms=drwxr-xr-x)
	I0918 19:38:54.238513   15635 main.go:141] libmachine: (addons-815929) Setting executable bit set on /home/jenkins/minikube-integration/19667-7671/.minikube (perms=drwxr-xr-x)
	I0918 19:38:54.238523   15635 main.go:141] libmachine: (addons-815929) Setting executable bit set on /home/jenkins/minikube-integration/19667-7671 (perms=drwxrwxr-x)
	I0918 19:38:54.238534   15635 main.go:141] libmachine: (addons-815929) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19667-7671/.minikube
	I0918 19:38:54.238548   15635 main.go:141] libmachine: (addons-815929) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19667-7671
	I0918 19:38:54.238559   15635 main.go:141] libmachine: (addons-815929) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0918 19:38:54.238565   15635 main.go:141] libmachine: (addons-815929) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0918 19:38:54.238571   15635 main.go:141] libmachine: (addons-815929) DBG | Checking permissions on dir: /home/jenkins
	I0918 19:38:54.238576   15635 main.go:141] libmachine: (addons-815929) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0918 19:38:54.238581   15635 main.go:141] libmachine: (addons-815929) DBG | Checking permissions on dir: /home
	I0918 19:38:54.238588   15635 main.go:141] libmachine: (addons-815929) DBG | Skipping /home - not owner
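	Note: the "Setting executable bit" / "Checking permissions" lines above show the driver walking from the machine directory up toward /, making each directory it owns traversable before the domain is created. A minimal, hypothetical Go sketch of that pattern (not minikube's actual code; it simply adds the owner-execute bit and skips directories it cannot chmod):

```go
package main

import (
	"errors"
	"fmt"
	"os"
	"path/filepath"
)

// ensureTraversable walks from dir up toward the filesystem root, adding the
// owner-execute bit so the VM process can reach the machine store. Directories
// we cannot chmod are skipped, mirroring "Skipping /home - not owner" above.
func ensureTraversable(dir string) error {
	for {
		info, err := os.Stat(dir)
		if err != nil {
			return err
		}
		err = os.Chmod(dir, info.Mode().Perm()|0o100)
		switch {
		case err == nil:
			fmt.Printf("set executable bit on %s\n", dir)
		case errors.Is(err, os.ErrPermission):
			fmt.Printf("skipping %s - not owner\n", dir)
		default:
			return err
		}
		parent := filepath.Dir(dir)
		if parent == dir { // reached "/"
			return nil
		}
		dir = parent
	}
}

func main() {
	machineDir := "/home/jenkins/minikube-integration/19667-7671/.minikube/machines/addons-815929"
	if err := ensureTraversable(machineDir); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```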
	I0918 19:38:54.238597   15635 main.go:141] libmachine: (addons-815929) Creating domain...
	I0918 19:38:54.239507   15635 main.go:141] libmachine: (addons-815929) define libvirt domain using xml: 
	I0918 19:38:54.239529   15635 main.go:141] libmachine: (addons-815929) <domain type='kvm'>
	I0918 19:38:54.239536   15635 main.go:141] libmachine: (addons-815929)   <name>addons-815929</name>
	I0918 19:38:54.239543   15635 main.go:141] libmachine: (addons-815929)   <memory unit='MiB'>4000</memory>
	I0918 19:38:54.239549   15635 main.go:141] libmachine: (addons-815929)   <vcpu>2</vcpu>
	I0918 19:38:54.239553   15635 main.go:141] libmachine: (addons-815929)   <features>
	I0918 19:38:54.239557   15635 main.go:141] libmachine: (addons-815929)     <acpi/>
	I0918 19:38:54.239561   15635 main.go:141] libmachine: (addons-815929)     <apic/>
	I0918 19:38:54.239566   15635 main.go:141] libmachine: (addons-815929)     <pae/>
	I0918 19:38:54.239569   15635 main.go:141] libmachine: (addons-815929)     
	I0918 19:38:54.239574   15635 main.go:141] libmachine: (addons-815929)   </features>
	I0918 19:38:54.239581   15635 main.go:141] libmachine: (addons-815929)   <cpu mode='host-passthrough'>
	I0918 19:38:54.239588   15635 main.go:141] libmachine: (addons-815929)   
	I0918 19:38:54.239596   15635 main.go:141] libmachine: (addons-815929)   </cpu>
	I0918 19:38:54.239608   15635 main.go:141] libmachine: (addons-815929)   <os>
	I0918 19:38:54.239618   15635 main.go:141] libmachine: (addons-815929)     <type>hvm</type>
	I0918 19:38:54.239629   15635 main.go:141] libmachine: (addons-815929)     <boot dev='cdrom'/>
	I0918 19:38:54.239633   15635 main.go:141] libmachine: (addons-815929)     <boot dev='hd'/>
	I0918 19:38:54.239640   15635 main.go:141] libmachine: (addons-815929)     <bootmenu enable='no'/>
	I0918 19:38:54.239643   15635 main.go:141] libmachine: (addons-815929)   </os>
	I0918 19:38:54.239648   15635 main.go:141] libmachine: (addons-815929)   <devices>
	I0918 19:38:54.239652   15635 main.go:141] libmachine: (addons-815929)     <disk type='file' device='cdrom'>
	I0918 19:38:54.239672   15635 main.go:141] libmachine: (addons-815929)       <source file='/home/jenkins/minikube-integration/19667-7671/.minikube/machines/addons-815929/boot2docker.iso'/>
	I0918 19:38:54.239681   15635 main.go:141] libmachine: (addons-815929)       <target dev='hdc' bus='scsi'/>
	I0918 19:38:54.239689   15635 main.go:141] libmachine: (addons-815929)       <readonly/>
	I0918 19:38:54.239699   15635 main.go:141] libmachine: (addons-815929)     </disk>
	I0918 19:38:54.239708   15635 main.go:141] libmachine: (addons-815929)     <disk type='file' device='disk'>
	I0918 19:38:54.239717   15635 main.go:141] libmachine: (addons-815929)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0918 19:38:54.239726   15635 main.go:141] libmachine: (addons-815929)       <source file='/home/jenkins/minikube-integration/19667-7671/.minikube/machines/addons-815929/addons-815929.rawdisk'/>
	I0918 19:38:54.239739   15635 main.go:141] libmachine: (addons-815929)       <target dev='hda' bus='virtio'/>
	I0918 19:38:54.239762   15635 main.go:141] libmachine: (addons-815929)     </disk>
	I0918 19:38:54.239780   15635 main.go:141] libmachine: (addons-815929)     <interface type='network'>
	I0918 19:38:54.239787   15635 main.go:141] libmachine: (addons-815929)       <source network='mk-addons-815929'/>
	I0918 19:38:54.239799   15635 main.go:141] libmachine: (addons-815929)       <model type='virtio'/>
	I0918 19:38:54.239804   15635 main.go:141] libmachine: (addons-815929)     </interface>
	I0918 19:38:54.239809   15635 main.go:141] libmachine: (addons-815929)     <interface type='network'>
	I0918 19:38:54.239815   15635 main.go:141] libmachine: (addons-815929)       <source network='default'/>
	I0918 19:38:54.239819   15635 main.go:141] libmachine: (addons-815929)       <model type='virtio'/>
	I0918 19:38:54.239824   15635 main.go:141] libmachine: (addons-815929)     </interface>
	I0918 19:38:54.239832   15635 main.go:141] libmachine: (addons-815929)     <serial type='pty'>
	I0918 19:38:54.239837   15635 main.go:141] libmachine: (addons-815929)       <target port='0'/>
	I0918 19:38:54.239844   15635 main.go:141] libmachine: (addons-815929)     </serial>
	I0918 19:38:54.239849   15635 main.go:141] libmachine: (addons-815929)     <console type='pty'>
	I0918 19:38:54.239868   15635 main.go:141] libmachine: (addons-815929)       <target type='serial' port='0'/>
	I0918 19:38:54.239879   15635 main.go:141] libmachine: (addons-815929)     </console>
	I0918 19:38:54.239883   15635 main.go:141] libmachine: (addons-815929)     <rng model='virtio'>
	I0918 19:38:54.239889   15635 main.go:141] libmachine: (addons-815929)       <backend model='random'>/dev/random</backend>
	I0918 19:38:54.239893   15635 main.go:141] libmachine: (addons-815929)     </rng>
	I0918 19:38:54.239897   15635 main.go:141] libmachine: (addons-815929)     
	I0918 19:38:54.239901   15635 main.go:141] libmachine: (addons-815929)     
	I0918 19:38:54.239913   15635 main.go:141] libmachine: (addons-815929)   </devices>
	I0918 19:38:54.239925   15635 main.go:141] libmachine: (addons-815929) </domain>
	I0918 19:38:54.239934   15635 main.go:141] libmachine: (addons-815929) 
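	Note: the XML above is the domain definition the kvm2 driver hands to libvirt before "Ensuring networks are active" and "Creating domain". A minimal sketch of defining and starting such a domain through the Go libvirt bindings, assuming the libvirt.org/go/libvirt package and a domain XML file on disk (the real driver builds the XML in memory and does more network/lease handling around this):

```go
package main

import (
	"fmt"
	"os"

	libvirt "libvirt.org/go/libvirt"
)

func main() {
	// Connect to the local system libvirt daemon, as the kvm2 driver does.
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer conn.Close()

	// Domain XML like the one logged above (hypothetical file name).
	xml, err := os.ReadFile("addons-815929.xml")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}

	// Define the persistent domain, then create (start) it.
	dom, err := conn.DomainDefineXML(string(xml))
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("domain defined and started")
}
```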
	I0918 19:38:54.245827   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:cb:c3:cb in network default
	I0918 19:38:54.246274   15635 main.go:141] libmachine: (addons-815929) Ensuring networks are active...
	I0918 19:38:54.246289   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:38:54.246951   15635 main.go:141] libmachine: (addons-815929) Ensuring network default is active
	I0918 19:38:54.247192   15635 main.go:141] libmachine: (addons-815929) Ensuring network mk-addons-815929 is active
	I0918 19:38:54.247672   15635 main.go:141] libmachine: (addons-815929) Getting domain xml...
	I0918 19:38:54.248278   15635 main.go:141] libmachine: (addons-815929) Creating domain...
	I0918 19:38:55.697959   15635 main.go:141] libmachine: (addons-815929) Waiting to get IP...
	I0918 19:38:55.698757   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:38:55.699235   15635 main.go:141] libmachine: (addons-815929) DBG | unable to find current IP address of domain addons-815929 in network mk-addons-815929
	I0918 19:38:55.699284   15635 main.go:141] libmachine: (addons-815929) DBG | I0918 19:38:55.699220   15656 retry.go:31] will retry after 240.136101ms: waiting for machine to come up
	I0918 19:38:55.940564   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:38:55.941063   15635 main.go:141] libmachine: (addons-815929) DBG | unable to find current IP address of domain addons-815929 in network mk-addons-815929
	I0918 19:38:55.941095   15635 main.go:141] libmachine: (addons-815929) DBG | I0918 19:38:55.941001   15656 retry.go:31] will retry after 357.629453ms: waiting for machine to come up
	I0918 19:38:56.300779   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:38:56.301261   15635 main.go:141] libmachine: (addons-815929) DBG | unable to find current IP address of domain addons-815929 in network mk-addons-815929
	I0918 19:38:56.301288   15635 main.go:141] libmachine: (addons-815929) DBG | I0918 19:38:56.301210   15656 retry.go:31] will retry after 307.786585ms: waiting for machine to come up
	I0918 19:38:56.610678   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:38:56.611160   15635 main.go:141] libmachine: (addons-815929) DBG | unable to find current IP address of domain addons-815929 in network mk-addons-815929
	I0918 19:38:56.611191   15635 main.go:141] libmachine: (addons-815929) DBG | I0918 19:38:56.611111   15656 retry.go:31] will retry after 517.569687ms: waiting for machine to come up
	I0918 19:38:57.129855   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:38:57.130252   15635 main.go:141] libmachine: (addons-815929) DBG | unable to find current IP address of domain addons-815929 in network mk-addons-815929
	I0918 19:38:57.130293   15635 main.go:141] libmachine: (addons-815929) DBG | I0918 19:38:57.130200   15656 retry.go:31] will retry after 494.799445ms: waiting for machine to come up
	I0918 19:38:57.626875   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:38:57.627350   15635 main.go:141] libmachine: (addons-815929) DBG | unable to find current IP address of domain addons-815929 in network mk-addons-815929
	I0918 19:38:57.627378   15635 main.go:141] libmachine: (addons-815929) DBG | I0918 19:38:57.627307   15656 retry.go:31] will retry after 626.236714ms: waiting for machine to come up
	I0918 19:38:58.255770   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:38:58.256298   15635 main.go:141] libmachine: (addons-815929) DBG | unable to find current IP address of domain addons-815929 in network mk-addons-815929
	I0918 19:38:58.256317   15635 main.go:141] libmachine: (addons-815929) DBG | I0918 19:38:58.256214   15656 retry.go:31] will retry after 826.525241ms: waiting for machine to come up
	I0918 19:38:59.083830   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:38:59.084379   15635 main.go:141] libmachine: (addons-815929) DBG | unable to find current IP address of domain addons-815929 in network mk-addons-815929
	I0918 19:38:59.084413   15635 main.go:141] libmachine: (addons-815929) DBG | I0918 19:38:59.084316   15656 retry.go:31] will retry after 1.302088375s: waiting for machine to come up
	I0918 19:39:00.388874   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:00.389329   15635 main.go:141] libmachine: (addons-815929) DBG | unable to find current IP address of domain addons-815929 in network mk-addons-815929
	I0918 19:39:00.389357   15635 main.go:141] libmachine: (addons-815929) DBG | I0918 19:39:00.389259   15656 retry.go:31] will retry after 1.82403913s: waiting for machine to come up
	I0918 19:39:02.216192   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:02.216654   15635 main.go:141] libmachine: (addons-815929) DBG | unable to find current IP address of domain addons-815929 in network mk-addons-815929
	I0918 19:39:02.216681   15635 main.go:141] libmachine: (addons-815929) DBG | I0918 19:39:02.216609   15656 retry.go:31] will retry after 2.008231355s: waiting for machine to come up
	I0918 19:39:04.226837   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:04.227248   15635 main.go:141] libmachine: (addons-815929) DBG | unable to find current IP address of domain addons-815929 in network mk-addons-815929
	I0918 19:39:04.227278   15635 main.go:141] libmachine: (addons-815929) DBG | I0918 19:39:04.227201   15656 retry.go:31] will retry after 2.836403576s: waiting for machine to come up
	I0918 19:39:07.065332   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:07.065713   15635 main.go:141] libmachine: (addons-815929) DBG | unable to find current IP address of domain addons-815929 in network mk-addons-815929
	I0918 19:39:07.065748   15635 main.go:141] libmachine: (addons-815929) DBG | I0918 19:39:07.065691   15656 retry.go:31] will retry after 3.279472186s: waiting for machine to come up
	I0918 19:39:10.348133   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:10.348607   15635 main.go:141] libmachine: (addons-815929) DBG | unable to find current IP address of domain addons-815929 in network mk-addons-815929
	I0918 19:39:10.348632   15635 main.go:141] libmachine: (addons-815929) DBG | I0918 19:39:10.348560   15656 retry.go:31] will retry after 3.871116508s: waiting for machine to come up
	I0918 19:39:14.220928   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:14.221295   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has current primary IP address 192.168.39.158 and MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:14.221321   15635 main.go:141] libmachine: (addons-815929) Found IP for machine: 192.168.39.158
	I0918 19:39:14.221331   15635 main.go:141] libmachine: (addons-815929) Reserving static IP address...
	I0918 19:39:14.221782   15635 main.go:141] libmachine: (addons-815929) DBG | unable to find host DHCP lease matching {name: "addons-815929", mac: "52:54:00:11:b1:d6", ip: "192.168.39.158"} in network mk-addons-815929
	I0918 19:39:14.297555   15635 main.go:141] libmachine: (addons-815929) Reserved static IP address: 192.168.39.158
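	Note: between "Waiting to get IP" and "Found IP for machine" the driver polls the network's DHCP leases with growing, jittered delays (the retry.go:31 lines above). A minimal sketch of that poll loop; lookupIP here is a placeholder for the real lease query, and the delays are illustrative:

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoLease = errors.New("no DHCP lease yet")

// lookupIP is a stand-in for asking libvirt for the lease matching the
// domain's MAC address on the mk-addons-815929 network.
func lookupIP(mac string) (string, error) {
	return "", errNoLease // placeholder: replace with a real lease lookup
}

// waitForIP retries lookupIP with a growing, jittered delay until the machine
// obtains an address or the deadline passes, like the
// "will retry after ...: waiting for machine to come up" messages above.
func waitForIP(mac string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(mac); err == nil {
			return ip, nil
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay))) // add jitter
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		if delay < 4*time.Second {
			delay *= 2
		}
	}
	return "", fmt.Errorf("timed out waiting for an IP for %s", mac)
}

func main() {
	if ip, err := waitForIP("52:54:00:11:b1:d6", 30*time.Second); err == nil {
		fmt.Println("found IP:", ip)
	}
}
```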
	I0918 19:39:14.297592   15635 main.go:141] libmachine: (addons-815929) DBG | Getting to WaitForSSH function...
	I0918 19:39:14.297601   15635 main.go:141] libmachine: (addons-815929) Waiting for SSH to be available...
	I0918 19:39:14.300410   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:14.300839   15635 main.go:141] libmachine: (addons-815929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b1:d6", ip: ""} in network mk-addons-815929: {Iface:virbr1 ExpiryTime:2024-09-18 20:39:08 +0000 UTC Type:0 Mac:52:54:00:11:b1:d6 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:minikube Clientid:01:52:54:00:11:b1:d6}
	I0918 19:39:14.300870   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined IP address 192.168.39.158 and MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:14.301080   15635 main.go:141] libmachine: (addons-815929) DBG | Using SSH client type: external
	I0918 19:39:14.301103   15635 main.go:141] libmachine: (addons-815929) DBG | Using SSH private key: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/addons-815929/id_rsa (-rw-------)
	I0918 19:39:14.301133   15635 main.go:141] libmachine: (addons-815929) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.158 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19667-7671/.minikube/machines/addons-815929/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0918 19:39:14.301145   15635 main.go:141] libmachine: (addons-815929) DBG | About to run SSH command:
	I0918 19:39:14.301158   15635 main.go:141] libmachine: (addons-815929) DBG | exit 0
	I0918 19:39:14.432076   15635 main.go:141] libmachine: (addons-815929) DBG | SSH cmd err, output: <nil>: 
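	Note: "Waiting for SSH to be available" amounts to running `exit 0` over SSH with the generated machine key until it succeeds; the log above shows the driver shelling out to the external ssh binary for this. A sketch of the same probe using golang.org/x/crypto/ssh, with the address, user, and key path taken from the log:

```go
package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// sshReady dials the machine and runs "exit 0"; success means sshd is up.
func sshReady(addr, user, keyPath string) error {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
		Timeout:         10 * time.Second,
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()
	return sess.Run("exit 0")
}

func main() {
	key := "/home/jenkins/minikube-integration/19667-7671/.minikube/machines/addons-815929/id_rsa"
	for {
		if err := sshReady("192.168.39.158:22", "docker", key); err == nil {
			fmt.Println("SSH is available")
			return
		}
		time.Sleep(time.Second)
	}
}
```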
	I0918 19:39:14.432351   15635 main.go:141] libmachine: (addons-815929) KVM machine creation complete!
	I0918 19:39:14.432733   15635 main.go:141] libmachine: (addons-815929) Calling .GetConfigRaw
	I0918 19:39:14.433533   15635 main.go:141] libmachine: (addons-815929) Calling .DriverName
	I0918 19:39:14.433729   15635 main.go:141] libmachine: (addons-815929) Calling .DriverName
	I0918 19:39:14.433919   15635 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0918 19:39:14.433937   15635 main.go:141] libmachine: (addons-815929) Calling .GetState
	I0918 19:39:14.435144   15635 main.go:141] libmachine: Detecting operating system of created instance...
	I0918 19:39:14.435157   15635 main.go:141] libmachine: Waiting for SSH to be available...
	I0918 19:39:14.435162   15635 main.go:141] libmachine: Getting to WaitForSSH function...
	I0918 19:39:14.435167   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHHostname
	I0918 19:39:14.437837   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:14.438147   15635 main.go:141] libmachine: (addons-815929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b1:d6", ip: ""} in network mk-addons-815929: {Iface:virbr1 ExpiryTime:2024-09-18 20:39:08 +0000 UTC Type:0 Mac:52:54:00:11:b1:d6 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-815929 Clientid:01:52:54:00:11:b1:d6}
	I0918 19:39:14.438173   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined IP address 192.168.39.158 and MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:14.438353   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHPort
	I0918 19:39:14.438525   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHKeyPath
	I0918 19:39:14.438702   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHKeyPath
	I0918 19:39:14.438842   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHUsername
	I0918 19:39:14.439003   15635 main.go:141] libmachine: Using SSH client type: native
	I0918 19:39:14.439223   15635 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.158 22 <nil> <nil>}
	I0918 19:39:14.439238   15635 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0918 19:39:14.543283   15635 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0918 19:39:14.543308   15635 main.go:141] libmachine: Detecting the provisioner...
	I0918 19:39:14.543317   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHHostname
	I0918 19:39:14.545882   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:14.546221   15635 main.go:141] libmachine: (addons-815929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b1:d6", ip: ""} in network mk-addons-815929: {Iface:virbr1 ExpiryTime:2024-09-18 20:39:08 +0000 UTC Type:0 Mac:52:54:00:11:b1:d6 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-815929 Clientid:01:52:54:00:11:b1:d6}
	I0918 19:39:14.546253   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined IP address 192.168.39.158 and MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:14.546395   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHPort
	I0918 19:39:14.546623   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHKeyPath
	I0918 19:39:14.546775   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHKeyPath
	I0918 19:39:14.546892   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHUsername
	I0918 19:39:14.547035   15635 main.go:141] libmachine: Using SSH client type: native
	I0918 19:39:14.547232   15635 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.158 22 <nil> <nil>}
	I0918 19:39:14.547245   15635 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0918 19:39:14.652809   15635 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0918 19:39:14.652895   15635 main.go:141] libmachine: found compatible host: buildroot
	I0918 19:39:14.652905   15635 main.go:141] libmachine: Provisioning with buildroot...
	I0918 19:39:14.652912   15635 main.go:141] libmachine: (addons-815929) Calling .GetMachineName
	I0918 19:39:14.653238   15635 buildroot.go:166] provisioning hostname "addons-815929"
	I0918 19:39:14.653269   15635 main.go:141] libmachine: (addons-815929) Calling .GetMachineName
	I0918 19:39:14.653524   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHHostname
	I0918 19:39:14.656525   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:14.656903   15635 main.go:141] libmachine: (addons-815929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b1:d6", ip: ""} in network mk-addons-815929: {Iface:virbr1 ExpiryTime:2024-09-18 20:39:08 +0000 UTC Type:0 Mac:52:54:00:11:b1:d6 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-815929 Clientid:01:52:54:00:11:b1:d6}
	I0918 19:39:14.656925   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined IP address 192.168.39.158 and MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:14.657113   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHPort
	I0918 19:39:14.657313   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHKeyPath
	I0918 19:39:14.657465   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHKeyPath
	I0918 19:39:14.657637   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHUsername
	I0918 19:39:14.657763   15635 main.go:141] libmachine: Using SSH client type: native
	I0918 19:39:14.657923   15635 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.158 22 <nil> <nil>}
	I0918 19:39:14.657933   15635 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-815929 && echo "addons-815929" | sudo tee /etc/hostname
	I0918 19:39:14.778145   15635 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-815929
	
	I0918 19:39:14.778168   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHHostname
	I0918 19:39:14.782280   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:14.782681   15635 main.go:141] libmachine: (addons-815929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b1:d6", ip: ""} in network mk-addons-815929: {Iface:virbr1 ExpiryTime:2024-09-18 20:39:08 +0000 UTC Type:0 Mac:52:54:00:11:b1:d6 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-815929 Clientid:01:52:54:00:11:b1:d6}
	I0918 19:39:14.782707   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined IP address 192.168.39.158 and MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:14.782911   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHPort
	I0918 19:39:14.783128   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHKeyPath
	I0918 19:39:14.783294   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHKeyPath
	I0918 19:39:14.783416   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHUsername
	I0918 19:39:14.783559   15635 main.go:141] libmachine: Using SSH client type: native
	I0918 19:39:14.783758   15635 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.158 22 <nil> <nil>}
	I0918 19:39:14.783782   15635 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-815929' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-815929/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-815929' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0918 19:39:14.896628   15635 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0918 19:39:14.896658   15635 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19667-7671/.minikube CaCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19667-7671/.minikube}
	I0918 19:39:14.896682   15635 buildroot.go:174] setting up certificates
	I0918 19:39:14.896700   15635 provision.go:84] configureAuth start
	I0918 19:39:14.896715   15635 main.go:141] libmachine: (addons-815929) Calling .GetMachineName
	I0918 19:39:14.896993   15635 main.go:141] libmachine: (addons-815929) Calling .GetIP
	I0918 19:39:14.899455   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:14.899815   15635 main.go:141] libmachine: (addons-815929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b1:d6", ip: ""} in network mk-addons-815929: {Iface:virbr1 ExpiryTime:2024-09-18 20:39:08 +0000 UTC Type:0 Mac:52:54:00:11:b1:d6 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-815929 Clientid:01:52:54:00:11:b1:d6}
	I0918 19:39:14.899848   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined IP address 192.168.39.158 and MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:14.900060   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHHostname
	I0918 19:39:14.902022   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:14.902265   15635 main.go:141] libmachine: (addons-815929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b1:d6", ip: ""} in network mk-addons-815929: {Iface:virbr1 ExpiryTime:2024-09-18 20:39:08 +0000 UTC Type:0 Mac:52:54:00:11:b1:d6 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-815929 Clientid:01:52:54:00:11:b1:d6}
	I0918 19:39:14.902293   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined IP address 192.168.39.158 and MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:14.902392   15635 provision.go:143] copyHostCerts
	I0918 19:39:14.902479   15635 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem (1082 bytes)
	I0918 19:39:14.902600   15635 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem (1123 bytes)
	I0918 19:39:14.902671   15635 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem (1679 bytes)
	I0918 19:39:14.902724   15635 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem org=jenkins.addons-815929 san=[127.0.0.1 192.168.39.158 addons-815929 localhost minikube]
	I0918 19:39:15.027079   15635 provision.go:177] copyRemoteCerts
	I0918 19:39:15.027139   15635 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0918 19:39:15.027161   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHHostname
	I0918 19:39:15.029651   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:15.029950   15635 main.go:141] libmachine: (addons-815929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b1:d6", ip: ""} in network mk-addons-815929: {Iface:virbr1 ExpiryTime:2024-09-18 20:39:08 +0000 UTC Type:0 Mac:52:54:00:11:b1:d6 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-815929 Clientid:01:52:54:00:11:b1:d6}
	I0918 19:39:15.029974   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined IP address 192.168.39.158 and MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:15.030191   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHPort
	I0918 19:39:15.030381   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHKeyPath
	I0918 19:39:15.030555   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHUsername
	I0918 19:39:15.030715   15635 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/addons-815929/id_rsa Username:docker}
	I0918 19:39:15.113743   15635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0918 19:39:15.137366   15635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0918 19:39:15.160840   15635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0918 19:39:15.184268   15635 provision.go:87] duration metric: took 287.554696ms to configureAuth
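	Note: configureAuth above generates a server certificate signed by the minikube CA with the SANs listed in the log (127.0.0.1, 192.168.39.158, addons-815929, localhost, minikube) and copies it to /etc/docker on the guest. A compact, hypothetical sketch of that signing step with crypto/x509; the real provisioner loads the CA material from the cert paths shown above and handles more options:

```go
package provision

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

// signServerCert issues a server certificate for the machine, signed by the
// given CA, with the IP and DNS SANs the log shows, and writes the PEM files.
func signServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) error {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.addons-815929"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.158")},
		DNSNames:     []string{"addons-815929", "localhost", "minikube"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		return err
	}
	certPEM := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
	if err := os.WriteFile("server.pem", certPEM, 0o644); err != nil {
		return err
	}
	keyPEM := pem.EncodeToMemory(&pem.Block{
		Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
	return os.WriteFile("server-key.pem", keyPEM, 0o600)
}
```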
	I0918 19:39:15.184296   15635 buildroot.go:189] setting minikube options for container-runtime
	I0918 19:39:15.184488   15635 config.go:182] Loaded profile config "addons-815929": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0918 19:39:15.184570   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHHostname
	I0918 19:39:15.187055   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:15.187394   15635 main.go:141] libmachine: (addons-815929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b1:d6", ip: ""} in network mk-addons-815929: {Iface:virbr1 ExpiryTime:2024-09-18 20:39:08 +0000 UTC Type:0 Mac:52:54:00:11:b1:d6 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-815929 Clientid:01:52:54:00:11:b1:d6}
	I0918 19:39:15.187422   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined IP address 192.168.39.158 and MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:15.187614   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHPort
	I0918 19:39:15.187812   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHKeyPath
	I0918 19:39:15.187967   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHKeyPath
	I0918 19:39:15.188117   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHUsername
	I0918 19:39:15.188300   15635 main.go:141] libmachine: Using SSH client type: native
	I0918 19:39:15.188467   15635 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.158 22 <nil> <nil>}
	I0918 19:39:15.188480   15635 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0918 19:39:15.422203   15635 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0918 19:39:15.422228   15635 main.go:141] libmachine: Checking connection to Docker...
	I0918 19:39:15.422236   15635 main.go:141] libmachine: (addons-815929) Calling .GetURL
	I0918 19:39:15.423388   15635 main.go:141] libmachine: (addons-815929) DBG | Using libvirt version 6000000
	I0918 19:39:15.425708   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:15.426166   15635 main.go:141] libmachine: (addons-815929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b1:d6", ip: ""} in network mk-addons-815929: {Iface:virbr1 ExpiryTime:2024-09-18 20:39:08 +0000 UTC Type:0 Mac:52:54:00:11:b1:d6 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-815929 Clientid:01:52:54:00:11:b1:d6}
	I0918 19:39:15.426200   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined IP address 192.168.39.158 and MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:15.426400   15635 main.go:141] libmachine: Docker is up and running!
	I0918 19:39:15.426415   15635 main.go:141] libmachine: Reticulating splines...
	I0918 19:39:15.426421   15635 client.go:171] duration metric: took 22.223621675s to LocalClient.Create
	I0918 19:39:15.426449   15635 start.go:167] duration metric: took 22.22368243s to libmachine.API.Create "addons-815929"
	I0918 19:39:15.426462   15635 start.go:293] postStartSetup for "addons-815929" (driver="kvm2")
	I0918 19:39:15.426475   15635 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0918 19:39:15.426497   15635 main.go:141] libmachine: (addons-815929) Calling .DriverName
	I0918 19:39:15.426717   15635 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0918 19:39:15.426747   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHHostname
	I0918 19:39:15.429165   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:15.429467   15635 main.go:141] libmachine: (addons-815929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b1:d6", ip: ""} in network mk-addons-815929: {Iface:virbr1 ExpiryTime:2024-09-18 20:39:08 +0000 UTC Type:0 Mac:52:54:00:11:b1:d6 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-815929 Clientid:01:52:54:00:11:b1:d6}
	I0918 19:39:15.429493   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined IP address 192.168.39.158 and MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:15.429654   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHPort
	I0918 19:39:15.429831   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHKeyPath
	I0918 19:39:15.429969   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHUsername
	I0918 19:39:15.430118   15635 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/addons-815929/id_rsa Username:docker}
	I0918 19:39:15.514784   15635 ssh_runner.go:195] Run: cat /etc/os-release
	I0918 19:39:15.519847   15635 info.go:137] Remote host: Buildroot 2023.02.9
	I0918 19:39:15.519878   15635 filesync.go:126] Scanning /home/jenkins/minikube-integration/19667-7671/.minikube/addons for local assets ...
	I0918 19:39:15.519966   15635 filesync.go:126] Scanning /home/jenkins/minikube-integration/19667-7671/.minikube/files for local assets ...
	I0918 19:39:15.519998   15635 start.go:296] duration metric: took 93.528833ms for postStartSetup
	I0918 19:39:15.520064   15635 main.go:141] libmachine: (addons-815929) Calling .GetConfigRaw
	I0918 19:39:15.520653   15635 main.go:141] libmachine: (addons-815929) Calling .GetIP
	I0918 19:39:15.523455   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:15.523846   15635 main.go:141] libmachine: (addons-815929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b1:d6", ip: ""} in network mk-addons-815929: {Iface:virbr1 ExpiryTime:2024-09-18 20:39:08 +0000 UTC Type:0 Mac:52:54:00:11:b1:d6 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-815929 Clientid:01:52:54:00:11:b1:d6}
	I0918 19:39:15.523874   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined IP address 192.168.39.158 and MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:15.524124   15635 profile.go:143] Saving config to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/addons-815929/config.json ...
	I0918 19:39:15.524332   15635 start.go:128] duration metric: took 22.339516337s to createHost
	I0918 19:39:15.524360   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHHostname
	I0918 19:39:15.526732   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:15.527041   15635 main.go:141] libmachine: (addons-815929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b1:d6", ip: ""} in network mk-addons-815929: {Iface:virbr1 ExpiryTime:2024-09-18 20:39:08 +0000 UTC Type:0 Mac:52:54:00:11:b1:d6 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-815929 Clientid:01:52:54:00:11:b1:d6}
	I0918 19:39:15.527070   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined IP address 192.168.39.158 and MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:15.527313   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHPort
	I0918 19:39:15.527542   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHKeyPath
	I0918 19:39:15.527709   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHKeyPath
	I0918 19:39:15.527867   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHUsername
	I0918 19:39:15.528155   15635 main.go:141] libmachine: Using SSH client type: native
	I0918 19:39:15.528375   15635 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.158 22 <nil> <nil>}
	I0918 19:39:15.528388   15635 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0918 19:39:15.632644   15635 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726688355.604291671
	
	I0918 19:39:15.632664   15635 fix.go:216] guest clock: 1726688355.604291671
	I0918 19:39:15.632671   15635 fix.go:229] Guest: 2024-09-18 19:39:15.604291671 +0000 UTC Remote: 2024-09-18 19:39:15.524343859 +0000 UTC m=+22.440132340 (delta=79.947812ms)
	I0918 19:39:15.632711   15635 fix.go:200] guest clock delta is within tolerance: 79.947812ms
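	Note: the fix.go lines above compare the guest's `date +%s.%N` output against the host clock and only resync when the delta exceeds a tolerance. A tiny sketch of that check; the tolerance value here is illustrative, not minikube's actual threshold:

```go
package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// clockDelta parses the guest's "date +%s.%N" output and returns how far the
// guest clock is from the local (host) clock.
func clockDelta(guestOut string) (time.Duration, error) {
	secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return time.Since(guest), nil
}

func main() {
	d, err := clockDelta("1726688355.604291671")
	if err != nil {
		panic(err)
	}
	const tolerance = 2 * time.Second // illustrative tolerance
	if math.Abs(float64(d)) <= float64(tolerance) {
		fmt.Printf("guest clock delta %v is within tolerance\n", d)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", d)
	}
}
```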
	I0918 19:39:15.632716   15635 start.go:83] releasing machines lock for "addons-815929", held for 22.447981743s
	I0918 19:39:15.632734   15635 main.go:141] libmachine: (addons-815929) Calling .DriverName
	I0918 19:39:15.632989   15635 main.go:141] libmachine: (addons-815929) Calling .GetIP
	I0918 19:39:15.635689   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:15.636073   15635 main.go:141] libmachine: (addons-815929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b1:d6", ip: ""} in network mk-addons-815929: {Iface:virbr1 ExpiryTime:2024-09-18 20:39:08 +0000 UTC Type:0 Mac:52:54:00:11:b1:d6 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-815929 Clientid:01:52:54:00:11:b1:d6}
	I0918 19:39:15.636100   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined IP address 192.168.39.158 and MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:15.636232   15635 main.go:141] libmachine: (addons-815929) Calling .DriverName
	I0918 19:39:15.636698   15635 main.go:141] libmachine: (addons-815929) Calling .DriverName
	I0918 19:39:15.636877   15635 main.go:141] libmachine: (addons-815929) Calling .DriverName
	I0918 19:39:15.636982   15635 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0918 19:39:15.637025   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHHostname
	I0918 19:39:15.637083   15635 ssh_runner.go:195] Run: cat /version.json
	I0918 19:39:15.637103   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHHostname
	I0918 19:39:15.639906   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:15.640052   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:15.640306   15635 main.go:141] libmachine: (addons-815929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b1:d6", ip: ""} in network mk-addons-815929: {Iface:virbr1 ExpiryTime:2024-09-18 20:39:08 +0000 UTC Type:0 Mac:52:54:00:11:b1:d6 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-815929 Clientid:01:52:54:00:11:b1:d6}
	I0918 19:39:15.640333   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined IP address 192.168.39.158 and MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:15.640430   15635 main.go:141] libmachine: (addons-815929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b1:d6", ip: ""} in network mk-addons-815929: {Iface:virbr1 ExpiryTime:2024-09-18 20:39:08 +0000 UTC Type:0 Mac:52:54:00:11:b1:d6 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-815929 Clientid:01:52:54:00:11:b1:d6}
	I0918 19:39:15.640449   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHPort
	I0918 19:39:15.640456   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined IP address 192.168.39.158 and MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:15.640658   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHPort
	I0918 19:39:15.640662   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHKeyPath
	I0918 19:39:15.640846   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHUsername
	I0918 19:39:15.640865   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHKeyPath
	I0918 19:39:15.640960   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHUsername
	I0918 19:39:15.640964   15635 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/addons-815929/id_rsa Username:docker}
	I0918 19:39:15.641064   15635 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/addons-815929/id_rsa Username:docker}
	I0918 19:39:15.724678   15635 ssh_runner.go:195] Run: systemctl --version
	I0918 19:39:15.769924   15635 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0918 19:39:15.924625   15635 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0918 19:39:15.930995   15635 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0918 19:39:15.931078   15635 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0918 19:39:15.946257   15635 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0918 19:39:15.946282   15635 start.go:495] detecting cgroup driver to use...
	I0918 19:39:15.946349   15635 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0918 19:39:15.962493   15635 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0918 19:39:15.976970   15635 docker.go:217] disabling cri-docker service (if available) ...
	I0918 19:39:15.977037   15635 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0918 19:39:15.990730   15635 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0918 19:39:16.004287   15635 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0918 19:39:16.120456   15635 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0918 19:39:16.273269   15635 docker.go:233] disabling docker service ...
	I0918 19:39:16.273355   15635 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0918 19:39:16.287263   15635 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0918 19:39:16.300054   15635 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0918 19:39:16.431534   15635 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0918 19:39:16.542730   15635 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0918 19:39:16.556593   15635 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0918 19:39:16.574110   15635 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0918 19:39:16.574168   15635 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 19:39:16.584364   15635 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0918 19:39:16.584433   15635 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 19:39:16.595648   15635 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 19:39:16.605606   15635 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 19:39:16.615817   15635 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0918 19:39:16.625545   15635 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 19:39:16.635288   15635 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 19:39:16.651799   15635 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
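	Note: the sequence of sed runs above rewrites /etc/crio/crio.conf.d/02-crio.conf to pin the pause image, force the cgroupfs cgroup manager, and register the unprivileged-port sysctl. A hedged sketch of the same edits done in Go with regexp, operating on file contents directly rather than over SSH (an approximation of the logged commands, not minikube's code):

```go
package main

import (
	"fmt"
	"regexp"
)

// patchCrioConf applies substitutions equivalent to the logged sed commands:
// pin pause_image, set cgroup_manager to cgroupfs with conmon_cgroup = "pod",
// and add the net.ipv4.ip_unprivileged_port_start=0 sysctl if missing.
func patchCrioConf(conf string) string {
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")
	if !regexp.MustCompile(`(?m)^ *default_sysctls`).MatchString(conf) {
		conf += "\ndefault_sysctls = [\n  \"net.ipv4.ip_unprivileged_port_start=0\",\n]\n"
	}
	return conf
}

func main() {
	in := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"systemd\"\n"
	fmt.Print(patchCrioConf(in))
}
```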
	I0918 19:39:16.662018   15635 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0918 19:39:16.671973   15635 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0918 19:39:16.672038   15635 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0918 19:39:16.684348   15635 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0918 19:39:16.694527   15635 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 19:39:16.806557   15635 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0918 19:39:16.893853   15635 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0918 19:39:16.893979   15635 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0918 19:39:16.898741   15635 start.go:563] Will wait 60s for crictl version
	I0918 19:39:16.898823   15635 ssh_runner.go:195] Run: which crictl
	I0918 19:39:16.903203   15635 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0918 19:39:16.954060   15635 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0918 19:39:16.954193   15635 ssh_runner.go:195] Run: crio --version
	I0918 19:39:16.982884   15635 ssh_runner.go:195] Run: crio --version
	I0918 19:39:17.014729   15635 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0918 19:39:17.016149   15635 main.go:141] libmachine: (addons-815929) Calling .GetIP
	I0918 19:39:17.018519   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:17.018848   15635 main.go:141] libmachine: (addons-815929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b1:d6", ip: ""} in network mk-addons-815929: {Iface:virbr1 ExpiryTime:2024-09-18 20:39:08 +0000 UTC Type:0 Mac:52:54:00:11:b1:d6 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-815929 Clientid:01:52:54:00:11:b1:d6}
	I0918 19:39:17.018881   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined IP address 192.168.39.158 and MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:17.019079   15635 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0918 19:39:17.022910   15635 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0918 19:39:17.034489   15635 kubeadm.go:883] updating cluster {Name:addons-815929 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
1 ClusterName:addons-815929 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.158 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0918 19:39:17.034619   15635 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0918 19:39:17.034683   15635 ssh_runner.go:195] Run: sudo crictl images --output json
	I0918 19:39:17.066943   15635 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0918 19:39:17.067023   15635 ssh_runner.go:195] Run: which lz4
	I0918 19:39:17.071020   15635 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0918 19:39:17.075441   15635 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0918 19:39:17.075480   15635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0918 19:39:18.279753   15635 crio.go:462] duration metric: took 1.208762257s to copy over tarball
	I0918 19:39:18.279822   15635 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0918 19:39:20.398594   15635 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.118749248s)
	I0918 19:39:20.398620   15635 crio.go:469] duration metric: took 2.11883848s to extract the tarball
	I0918 19:39:20.398627   15635 ssh_runner.go:146] rm: /preloaded.tar.lz4
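	Note: the preload step above copies the preloaded-images tarball to the guest and unpacks it with `tar --xattrs -I lz4 -C /var -xf /preloaded.tar.lz4`. A sketch of an equivalent extraction in Go, assuming the github.com/pierrec/lz4/v4 decompressor; minikube itself just shells out to tar as the log shows, and this sketch skips xattrs, symlinks, and ownership:

```go
package main

import (
	"archive/tar"
	"fmt"
	"io"
	"os"
	"path/filepath"

	"github.com/pierrec/lz4/v4"
)

// extractLZ4Tar streams a .tar.lz4 archive into destDir (regular files and
// directories only), roughly what "tar -I lz4 -C /var -xf ..." does.
func extractLZ4Tar(archive, destDir string) error {
	f, err := os.Open(archive)
	if err != nil {
		return err
	}
	defer f.Close()
	tr := tar.NewReader(lz4.NewReader(f))
	for {
		hdr, err := tr.Next()
		if err == io.EOF {
			return nil
		}
		if err != nil {
			return err
		}
		target := filepath.Join(destDir, hdr.Name)
		switch hdr.Typeflag {
		case tar.TypeDir:
			if err := os.MkdirAll(target, os.FileMode(hdr.Mode)); err != nil {
				return err
			}
		case tar.TypeReg:
			if err := os.MkdirAll(filepath.Dir(target), 0o755); err != nil {
				return err
			}
			out, err := os.OpenFile(target, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, os.FileMode(hdr.Mode))
			if err != nil {
				return err
			}
			if _, err := io.Copy(out, tr); err != nil {
				out.Close()
				return err
			}
			out.Close()
		}
	}
}

func main() {
	if err := extractLZ4Tar("/preloaded.tar.lz4", "/var"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```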
	I0918 19:39:20.434881   15635 ssh_runner.go:195] Run: sudo crictl images --output json
	I0918 19:39:20.475778   15635 crio.go:514] all images are preloaded for cri-o runtime.
	I0918 19:39:20.475806   15635 cache_images.go:84] Images are preloaded, skipping loading
	I0918 19:39:20.475816   15635 kubeadm.go:934] updating node { 192.168.39.158 8443 v1.31.1 crio true true} ...
	I0918 19:39:20.475923   15635 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-815929 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.158
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-815929 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0918 19:39:20.475986   15635 ssh_runner.go:195] Run: crio config
	I0918 19:39:20.519952   15635 cni.go:84] Creating CNI manager for ""
	I0918 19:39:20.519977   15635 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0918 19:39:20.519986   15635 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0918 19:39:20.520005   15635 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.158 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-815929 NodeName:addons-815929 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.158"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.158 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0918 19:39:20.520160   15635 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.158
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-815929"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.158
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.158"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
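
The generated kubeadm.yaml above is a single multi-document YAML bundling InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration. A minimal Go sketch for walking those documents and printing each one's apiVersion and kind (illustrative only, not minikube's own code; it assumes gopkg.in/yaml.v3 is available):

	package main

	import (
		"fmt"
		"io"
		"os"

		"gopkg.in/yaml.v3"
	)

	// typeMeta holds the two fields every document in the bundle carries.
	type typeMeta struct {
		APIVersion string `yaml:"apiVersion"`
		Kind       string `yaml:"kind"`
	}

	func main() {
		f, err := os.Open("/var/tmp/minikube/kubeadm.yaml") // path taken from the log above
		if err != nil {
			panic(err)
		}
		defer f.Close()

		dec := yaml.NewDecoder(f) // a yaml.Decoder iterates over the "---"-separated documents
		for {
			var tm typeMeta
			if err := dec.Decode(&tm); err == io.EOF {
				break
			} else if err != nil {
				panic(err)
			}
			fmt.Printf("%s %s\n", tm.APIVersion, tm.Kind)
		}
	}
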
	
	I0918 19:39:20.520220   15635 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0918 19:39:20.530115   15635 binaries.go:44] Found k8s binaries, skipping transfer
	I0918 19:39:20.530193   15635 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0918 19:39:20.539110   15635 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0918 19:39:20.554855   15635 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0918 19:39:20.570703   15635 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0918 19:39:20.586047   15635 ssh_runner.go:195] Run: grep 192.168.39.158	control-plane.minikube.internal$ /etc/hosts
	I0918 19:39:20.589512   15635 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.158	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
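
The pair of commands above makes the control-plane.minikube.internal mapping in /etc/hosts idempotent: any stale line for that hostname is dropped before the current IP is appended. A rough local Go equivalent of that shell pipeline (a sketch only; the real flow goes through a temp file and sudo cp as shown in the log):

	package main

	import (
		"os"
		"strings"
	)

	// ensureHostsEntry removes any existing line that maps hostname and appends "ip<TAB>hostname",
	// mirroring the grep -v / echo pipeline in the log line above.
	func ensureHostsEntry(path, ip, hostname string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if strings.HasSuffix(line, "\t"+hostname) {
				continue // stale entry, e.g. left over from a previous cluster IP
			}
			kept = append(kept, line)
		}
		kept = append(kept, ip+"\t"+hostname)
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}

	func main() {
		if err := ensureHostsEntry("/etc/hosts", "192.168.39.158", "control-plane.minikube.internal"); err != nil {
			panic(err)
		}
	}
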
	I0918 19:39:20.600947   15635 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 19:39:20.714800   15635 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0918 19:39:20.731863   15635 certs.go:68] Setting up /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/addons-815929 for IP: 192.168.39.158
	I0918 19:39:20.731895   15635 certs.go:194] generating shared ca certs ...
	I0918 19:39:20.731916   15635 certs.go:226] acquiring lock for ca certs: {Name:mkf54e0e71ed884c4372f9d3d4cb308ea4600185 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 19:39:20.732126   15635 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.key
	I0918 19:39:20.903635   15635 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt ...
	I0918 19:39:20.903669   15635 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt: {Name:mk5ab9af521edad191e1df188ac5d1ec102df64d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 19:39:20.903847   15635 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19667-7671/.minikube/ca.key ...
	I0918 19:39:20.903857   15635 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/.minikube/ca.key: {Name:mk39487a69c8f19d5c09499199945d3411122eb5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 19:39:20.903924   15635 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.key
	I0918 19:39:21.222001   15635 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.crt ...
	I0918 19:39:21.222033   15635 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.crt: {Name:mk216a92c8e5c2cc109551a33de4057317853d8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 19:39:21.222192   15635 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.key ...
	I0918 19:39:21.222203   15635 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.key: {Name:mk5acd984a1bdd683ae18bb5abd36964f6b7c3c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 19:39:21.222274   15635 certs.go:256] generating profile certs ...
	I0918 19:39:21.222328   15635 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/addons-815929/client.key
	I0918 19:39:21.222353   15635 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/addons-815929/client.crt with IP's: []
	I0918 19:39:21.427586   15635 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/addons-815929/client.crt ...
	I0918 19:39:21.427617   15635 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/addons-815929/client.crt: {Name:mka7942c1a0a773e2c8b5c86112e9c1ca7fd5d08 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 19:39:21.427767   15635 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/addons-815929/client.key ...
	I0918 19:39:21.427782   15635 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/addons-815929/client.key: {Name:mk0bb80ad3a72e414322fa8381dc0c9ca95a04d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 19:39:21.427845   15635 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/addons-815929/apiserver.key.1207c200
	I0918 19:39:21.427862   15635 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/addons-815929/apiserver.crt.1207c200 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.158]
	I0918 19:39:21.547680   15635 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/addons-815929/apiserver.crt.1207c200 ...
	I0918 19:39:21.547712   15635 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/addons-815929/apiserver.crt.1207c200: {Name:mk8a17d4138be2d4aed650c4aadb0e9b8271625f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 19:39:21.547864   15635 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/addons-815929/apiserver.key.1207c200 ...
	I0918 19:39:21.547877   15635 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/addons-815929/apiserver.key.1207c200: {Name:mkca16a53905ed18fa3435c13c0144e57c60188b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 19:39:21.547942   15635 certs.go:381] copying /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/addons-815929/apiserver.crt.1207c200 -> /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/addons-815929/apiserver.crt
	I0918 19:39:21.548029   15635 certs.go:385] copying /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/addons-815929/apiserver.key.1207c200 -> /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/addons-815929/apiserver.key
	I0918 19:39:21.548077   15635 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/addons-815929/proxy-client.key
	I0918 19:39:21.548094   15635 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/addons-815929/proxy-client.crt with IP's: []
	I0918 19:39:21.746355   15635 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/addons-815929/proxy-client.crt ...
	I0918 19:39:21.746391   15635 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/addons-815929/proxy-client.crt: {Name:mk72f125b96fe55f295e7ce9376879b898e47f36 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 19:39:21.746557   15635 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/addons-815929/proxy-client.key ...
	I0918 19:39:21.746567   15635 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/addons-815929/proxy-client.key: {Name:mk6d5f5778449275cb7d437edd936b0c1235f081 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 19:39:21.746748   15635 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem (1675 bytes)
	I0918 19:39:21.746783   15635 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem (1082 bytes)
	I0918 19:39:21.746808   15635 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem (1123 bytes)
	I0918 19:39:21.746830   15635 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem (1679 bytes)
	I0918 19:39:21.747359   15635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0918 19:39:21.774678   15635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0918 19:39:21.798559   15635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0918 19:39:21.824550   15635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0918 19:39:21.856972   15635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/addons-815929/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0918 19:39:21.881486   15635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/addons-815929/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0918 19:39:21.905485   15635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/addons-815929/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0918 19:39:21.929966   15635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/addons-815929/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0918 19:39:21.954634   15635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0918 19:39:21.979726   15635 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0918 19:39:21.996220   15635 ssh_runner.go:195] Run: openssl version
	I0918 19:39:22.002125   15635 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0918 19:39:22.012616   15635 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0918 19:39:22.016717   15635 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 18 19:39 /usr/share/ca-certificates/minikubeCA.pem
	I0918 19:39:22.016780   15635 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0918 19:39:22.022337   15635 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
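
Trusting the minikube CA on the node comes down to the commands just logged: place the PEM under the CA certificates directory, compute its OpenSSL subject hash, and symlink /etc/ssl/certs/<hash>.0 to it. A hedged local sketch of the last two steps using os/exec (the test actually runs them remotely through ssh_runner):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		const pem = "/etc/ssl/certs/minikubeCA.pem"

		// `openssl x509 -hash -noout` prints the subject hash, b5213941 in the log above.
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
		if err != nil {
			panic(err)
		}
		hash := strings.TrimSpace(string(out))

		link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
		if _, err := os.Lstat(link); err == nil {
			return // the <hash>.0 symlink already exists, nothing to do
		}
		if err := os.Symlink(pem, link); err != nil {
			panic(err)
		}
	}
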
	I0918 19:39:22.032855   15635 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0918 19:39:22.039081   15635 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0918 19:39:22.039137   15635 kubeadm.go:392] StartCluster: {Name:addons-815929 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-815929 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.158 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 19:39:22.039203   15635 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0918 19:39:22.039252   15635 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0918 19:39:22.077128   15635 cri.go:89] found id: ""
	I0918 19:39:22.077203   15635 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0918 19:39:22.087133   15635 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0918 19:39:22.096945   15635 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0918 19:39:22.106483   15635 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0918 19:39:22.106519   15635 kubeadm.go:157] found existing configuration files:
	
	I0918 19:39:22.106562   15635 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0918 19:39:22.115601   15635 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0918 19:39:22.115658   15635 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0918 19:39:22.125000   15635 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0918 19:39:22.134204   15635 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0918 19:39:22.134259   15635 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0918 19:39:22.143745   15635 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0918 19:39:22.152804   15635 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0918 19:39:22.152866   15635 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0918 19:39:22.162802   15635 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0918 19:39:22.173020   15635 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0918 19:39:22.173087   15635 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0918 19:39:22.184200   15635 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0918 19:39:22.239157   15635 kubeadm.go:310] W0918 19:39:22.219472     812 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0918 19:39:22.239864   15635 kubeadm.go:310] W0918 19:39:22.220484     812 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0918 19:39:22.375715   15635 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0918 19:39:32.745678   15635 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0918 19:39:32.745741   15635 kubeadm.go:310] [preflight] Running pre-flight checks
	I0918 19:39:32.745827   15635 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0918 19:39:32.745932   15635 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0918 19:39:32.746038   15635 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0918 19:39:32.746135   15635 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0918 19:39:32.747995   15635 out.go:235]   - Generating certificates and keys ...
	I0918 19:39:32.748120   15635 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0918 19:39:32.748185   15635 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0918 19:39:32.748309   15635 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0918 19:39:32.748397   15635 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0918 19:39:32.748486   15635 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0918 19:39:32.748581   15635 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0918 19:39:32.748667   15635 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0918 19:39:32.748784   15635 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-815929 localhost] and IPs [192.168.39.158 127.0.0.1 ::1]
	I0918 19:39:32.748865   15635 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0918 19:39:32.748977   15635 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-815929 localhost] and IPs [192.168.39.158 127.0.0.1 ::1]
	I0918 19:39:32.749034   15635 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0918 19:39:32.749100   15635 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0918 19:39:32.749149   15635 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0918 19:39:32.749202   15635 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0918 19:39:32.749248   15635 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0918 19:39:32.749300   15635 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0918 19:39:32.749346   15635 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0918 19:39:32.749404   15635 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0918 19:39:32.749451   15635 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0918 19:39:32.749533   15635 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0918 19:39:32.749608   15635 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0918 19:39:32.751199   15635 out.go:235]   - Booting up control plane ...
	I0918 19:39:32.751299   15635 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0918 19:39:32.751390   15635 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0918 19:39:32.751462   15635 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0918 19:39:32.751561   15635 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0918 19:39:32.751639   15635 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0918 19:39:32.751678   15635 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0918 19:39:32.751805   15635 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0918 19:39:32.751940   15635 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0918 19:39:32.751993   15635 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.248865ms
	I0918 19:39:32.752083   15635 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0918 19:39:32.752136   15635 kubeadm.go:310] [api-check] The API server is healthy after 5.5020976s
	I0918 19:39:32.752230   15635 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0918 19:39:32.752341   15635 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0918 19:39:32.752393   15635 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0918 19:39:32.752553   15635 kubeadm.go:310] [mark-control-plane] Marking the node addons-815929 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0918 19:39:32.752613   15635 kubeadm.go:310] [bootstrap-token] Using token: 67qfck.xhy2rt9vuaaqal6w
	I0918 19:39:32.755162   15635 out.go:235]   - Configuring RBAC rules ...
	I0918 19:39:32.755272   15635 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0918 19:39:32.755391   15635 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0918 19:39:32.755583   15635 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0918 19:39:32.755697   15635 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0918 19:39:32.755824   15635 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0918 19:39:32.755931   15635 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0918 19:39:32.756094   15635 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0918 19:39:32.756170   15635 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0918 19:39:32.756238   15635 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0918 19:39:32.756250   15635 kubeadm.go:310] 
	I0918 19:39:32.756306   15635 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0918 19:39:32.756314   15635 kubeadm.go:310] 
	I0918 19:39:32.756394   15635 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0918 19:39:32.756403   15635 kubeadm.go:310] 
	I0918 19:39:32.756429   15635 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0918 19:39:32.756479   15635 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0918 19:39:32.756523   15635 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0918 19:39:32.756530   15635 kubeadm.go:310] 
	I0918 19:39:32.756585   15635 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0918 19:39:32.756595   15635 kubeadm.go:310] 
	I0918 19:39:32.756638   15635 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0918 19:39:32.756643   15635 kubeadm.go:310] 
	I0918 19:39:32.756686   15635 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0918 19:39:32.756750   15635 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0918 19:39:32.756808   15635 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0918 19:39:32.756814   15635 kubeadm.go:310] 
	I0918 19:39:32.756887   15635 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0918 19:39:32.756954   15635 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0918 19:39:32.756960   15635 kubeadm.go:310] 
	I0918 19:39:32.757031   15635 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 67qfck.xhy2rt9vuaaqal6w \
	I0918 19:39:32.757120   15635 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:8ad46bf288e8cc2226f5a929a768e6c95cddecc97ffc4016be27ef0987b1e36f \
	I0918 19:39:32.757151   15635 kubeadm.go:310] 	--control-plane 
	I0918 19:39:32.757157   15635 kubeadm.go:310] 
	I0918 19:39:32.757248   15635 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0918 19:39:32.757257   15635 kubeadm.go:310] 
	I0918 19:39:32.757354   15635 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 67qfck.xhy2rt9vuaaqal6w \
	I0918 19:39:32.757490   15635 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:8ad46bf288e8cc2226f5a929a768e6c95cddecc97ffc4016be27ef0987b1e36f 
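
The --discovery-token-ca-cert-hash printed in the join commands above is not arbitrary: it is the SHA-256 of the DER-encoded Subject Public Key Info of the cluster CA certificate. It can be recomputed from /var/lib/minikube/certs/ca.crt with a short Go program (a sketch using only the standard library):

	package main

	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		// Recompute kubeadm's --discovery-token-ca-cert-hash from the cluster CA certificate.
		data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block found in ca.crt")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// The hash is taken over the DER-encoded SubjectPublicKeyInfo, not the whole certificate.
		fmt.Printf("sha256:%x\n", sha256.Sum256(cert.RawSubjectPublicKeyInfo))
	}
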
	I0918 19:39:32.757501   15635 cni.go:84] Creating CNI manager for ""
	I0918 19:39:32.757507   15635 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0918 19:39:32.760281   15635 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0918 19:39:32.761848   15635 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0918 19:39:32.772978   15635 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
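
The 496-byte /etc/cni/net.d/1-k8s.conflist written above is not echoed into the log, so its exact contents are unknown here; a minimal bridge + host-local configuration for the 10.244.0.0/16 pod CIDR used by this cluster would look roughly like the stand-in below (illustrative Go that writes such a file, not minikube's template):

	package main

	import "os"

	// conflist is an illustrative bridge CNI config, not the exact file minikube installs.
	const conflist = `{
	  "cniVersion": "0.4.0",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isGateway": true,
	      "ipMasq": true,
	      "ipam": {
	        "type": "host-local",
	        "subnet": "10.244.0.0/16"
	      }
	    },
	    {
	      "type": "portmap",
	      "capabilities": { "portMappings": true }
	    }
	  ]
	}
	`

	func main() {
		if err := os.MkdirAll("/etc/cni/net.d", 0755); err != nil {
			panic(err)
		}
		if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0644); err != nil {
			panic(err)
		}
	}
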
	I0918 19:39:32.796231   15635 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0918 19:39:32.796347   15635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 19:39:32.796347   15635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-815929 minikube.k8s.io/updated_at=2024_09_18T19_39_32_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=85073601a832bd4bbda5d11fa91feafff6ec6b91 minikube.k8s.io/name=addons-815929 minikube.k8s.io/primary=true
	I0918 19:39:32.810093   15635 ops.go:34] apiserver oom_adj: -16
	I0918 19:39:32.947600   15635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 19:39:33.448372   15635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 19:39:33.947877   15635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 19:39:34.447886   15635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 19:39:34.948598   15635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 19:39:35.448280   15635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 19:39:35.947854   15635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 19:39:36.447710   15635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 19:39:36.948512   15635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 19:39:37.028366   15635 kubeadm.go:1113] duration metric: took 4.232084306s to wait for elevateKubeSystemPrivileges
	I0918 19:39:37.028407   15635 kubeadm.go:394] duration metric: took 14.989273723s to StartCluster
	I0918 19:39:37.028429   15635 settings.go:142] acquiring lock: {Name:mk6ae95a69dcbe00c28846409e9f46945a88de2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 19:39:37.028570   15635 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19667-7671/kubeconfig
	I0918 19:39:37.028921   15635 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/kubeconfig: {Name:mk3a3eecadb0164dfdd5d3c4392b5b473a2a9bb0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 19:39:37.029140   15635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0918 19:39:37.029150   15635 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.158 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0918 19:39:37.029221   15635 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0918 19:39:37.029346   15635 config.go:182] Loaded profile config "addons-815929": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0918 19:39:37.029362   15635 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-815929"
	I0918 19:39:37.029377   15635 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-815929"
	I0918 19:39:37.029386   15635 addons.go:69] Setting helm-tiller=true in profile "addons-815929"
	I0918 19:39:37.029349   15635 addons.go:69] Setting yakd=true in profile "addons-815929"
	I0918 19:39:37.029407   15635 addons.go:234] Setting addon helm-tiller=true in "addons-815929"
	I0918 19:39:37.029413   15635 addons.go:234] Setting addon yakd=true in "addons-815929"
	I0918 19:39:37.029425   15635 addons.go:69] Setting volcano=true in profile "addons-815929"
	I0918 19:39:37.029450   15635 addons.go:69] Setting default-storageclass=true in profile "addons-815929"
	I0918 19:39:37.029476   15635 addons.go:69] Setting ingress-dns=true in profile "addons-815929"
	I0918 19:39:37.029490   15635 host.go:66] Checking if "addons-815929" exists ...
	I0918 19:39:37.029496   15635 addons.go:234] Setting addon ingress-dns=true in "addons-815929"
	I0918 19:39:37.029460   15635 addons.go:69] Setting ingress=true in profile "addons-815929"
	I0918 19:39:37.029523   15635 host.go:66] Checking if "addons-815929" exists ...
	I0918 19:39:37.029373   15635 addons.go:69] Setting inspektor-gadget=true in profile "addons-815929"
	I0918 19:39:37.029658   15635 addons.go:234] Setting addon inspektor-gadget=true in "addons-815929"
	I0918 19:39:37.029673   15635 host.go:66] Checking if "addons-815929" exists ...
	I0918 19:39:37.029524   15635 addons.go:234] Setting addon ingress=true in "addons-815929"
	I0918 19:39:37.029797   15635 host.go:66] Checking if "addons-815929" exists ...
	I0918 19:39:37.029443   15635 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-815929"
	I0918 19:39:37.029906   15635 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-815929"
	I0918 19:39:37.029440   15635 host.go:66] Checking if "addons-815929" exists ...
	I0918 19:39:37.029984   15635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 19:39:37.030010   15635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:37.030050   15635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 19:39:37.030053   15635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 19:39:37.030084   15635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:37.030095   15635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:37.029415   15635 host.go:66] Checking if "addons-815929" exists ...
	I0918 19:39:37.030352   15635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 19:39:37.030383   15635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 19:39:37.030388   15635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:37.030405   15635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:37.029357   15635 addons.go:69] Setting metrics-server=true in profile "addons-815929"
	I0918 19:39:37.030536   15635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 19:39:37.030543   15635 addons.go:234] Setting addon metrics-server=true in "addons-815929"
	I0918 19:39:37.029434   15635 addons.go:69] Setting gcp-auth=true in profile "addons-815929"
	I0918 19:39:37.030567   15635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:37.030570   15635 mustload.go:65] Loading cluster: addons-815929
	I0918 19:39:37.029447   15635 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-815929"
	I0918 19:39:37.030611   15635 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-815929"
	I0918 19:39:37.029452   15635 addons.go:69] Setting volumesnapshots=true in profile "addons-815929"
	I0918 19:39:37.030625   15635 addons.go:234] Setting addon volumesnapshots=true in "addons-815929"
	I0918 19:39:37.029456   15635 addons.go:234] Setting addon volcano=true in "addons-815929"
	I0918 19:39:37.029457   15635 addons.go:69] Setting registry=true in profile "addons-815929"
	I0918 19:39:37.030642   15635 addons.go:234] Setting addon registry=true in "addons-815929"
	I0918 19:39:37.030669   15635 host.go:66] Checking if "addons-815929" exists ...
	I0918 19:39:37.029458   15635 addons.go:69] Setting cloud-spanner=true in profile "addons-815929"
	I0918 19:39:37.030800   15635 config.go:182] Loaded profile config "addons-815929": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0918 19:39:37.030815   15635 addons.go:234] Setting addon cloud-spanner=true in "addons-815929"
	I0918 19:39:37.030841   15635 host.go:66] Checking if "addons-815929" exists ...
	I0918 19:39:37.031041   15635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 19:39:37.031067   15635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:37.031110   15635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 19:39:37.031114   15635 host.go:66] Checking if "addons-815929" exists ...
	I0918 19:39:37.031133   15635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:37.031187   15635 host.go:66] Checking if "addons-815929" exists ...
	I0918 19:39:37.031267   15635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 19:39:37.031290   15635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:37.029462   15635 addons.go:69] Setting storage-provisioner=true in profile "addons-815929"
	I0918 19:39:37.031351   15635 addons.go:234] Setting addon storage-provisioner=true in "addons-815929"
	I0918 19:39:37.031456   15635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 19:39:37.031479   15635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:37.031509   15635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 19:39:37.029483   15635 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-815929"
	I0918 19:39:37.031530   15635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:37.031597   15635 host.go:66] Checking if "addons-815929" exists ...
	I0918 19:39:37.031865   15635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 19:39:37.031880   15635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:37.031919   15635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 19:39:37.031942   15635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:37.032180   15635 host.go:66] Checking if "addons-815929" exists ...
	I0918 19:39:37.032334   15635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 19:39:37.032367   15635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:37.032458   15635 host.go:66] Checking if "addons-815929" exists ...
	I0918 19:39:37.040881   15635 out.go:177] * Verifying Kubernetes components...
	I0918 19:39:37.042576   15635 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 19:39:37.051516   15635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43999
	I0918 19:39:37.052168   15635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33723
	I0918 19:39:37.052235   15635 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:37.052173   15635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35173
	I0918 19:39:37.052393   15635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39509
	I0918 19:39:37.052668   15635 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:37.052961   15635 main.go:141] libmachine: Using API Version  1
	I0918 19:39:37.052978   15635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:37.053395   15635 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:37.053567   15635 main.go:141] libmachine: Using API Version  1
	I0918 19:39:37.053580   15635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:37.053833   15635 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:37.053907   15635 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:37.054034   15635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 19:39:37.054084   15635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:37.054251   15635 main.go:141] libmachine: Using API Version  1
	I0918 19:39:37.054272   15635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:37.054491   15635 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:37.054656   15635 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:37.055051   15635 main.go:141] libmachine: Using API Version  1
	I0918 19:39:37.055180   15635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:37.055565   15635 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:37.062646   15635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34865
	I0918 19:39:37.064595   15635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 19:39:37.064636   15635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:37.064659   15635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 19:39:37.064700   15635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:37.064716   15635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 19:39:37.064740   15635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:37.064784   15635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 19:39:37.064821   15635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:37.065051   15635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43139
	I0918 19:39:37.065527   15635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 19:39:37.065555   15635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:37.066116   15635 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:37.066219   15635 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:37.066752   15635 main.go:141] libmachine: Using API Version  1
	I0918 19:39:37.066769   15635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:37.067162   15635 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:37.067703   15635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 19:39:37.067726   15635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:37.069000   15635 main.go:141] libmachine: Using API Version  1
	I0918 19:39:37.069018   15635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:37.069473   15635 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:37.070080   15635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 19:39:37.070105   15635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:37.098916   15635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37831
	I0918 19:39:37.099493   15635 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:37.100084   15635 main.go:141] libmachine: Using API Version  1
	I0918 19:39:37.100108   15635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:37.100477   15635 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:37.100643   15635 main.go:141] libmachine: (addons-815929) Calling .GetState
	I0918 19:39:37.103211   15635 host.go:66] Checking if "addons-815929" exists ...
	I0918 19:39:37.103677   15635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 19:39:37.103724   15635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:37.106175   15635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42007
	I0918 19:39:37.106455   15635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38193
	I0918 19:39:37.106629   15635 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:37.106732   15635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44527
	I0918 19:39:37.107318   15635 main.go:141] libmachine: Using API Version  1
	I0918 19:39:37.107333   15635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:37.107356   15635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45109
	I0918 19:39:37.107737   15635 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:37.107821   15635 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:37.107875   15635 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:37.108413   15635 main.go:141] libmachine: Using API Version  1
	I0918 19:39:37.108435   15635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:37.108877   15635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 19:39:37.108909   15635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:37.109176   15635 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:37.109264   15635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38691
	I0918 19:39:37.109861   15635 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:37.109995   15635 main.go:141] libmachine: Using API Version  1
	I0918 19:39:37.110005   15635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:37.110065   15635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40721
	I0918 19:39:37.110320   15635 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:37.110484   15635 main.go:141] libmachine: (addons-815929) Calling .GetState
	I0918 19:39:37.110838   15635 main.go:141] libmachine: Using API Version  1
	I0918 19:39:37.110854   15635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:37.111189   15635 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:37.111701   15635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 19:39:37.111733   15635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:37.112042   15635 main.go:141] libmachine: Using API Version  1
	I0918 19:39:37.112058   15635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:37.112122   15635 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:37.112177   15635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36143
	I0918 19:39:37.112340   15635 main.go:141] libmachine: (addons-815929) Calling .DriverName
	I0918 19:39:37.112872   15635 main.go:141] libmachine: Using API Version  1
	I0918 19:39:37.112893   15635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:37.112958   15635 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:37.112994   15635 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:37.113426   15635 main.go:141] libmachine: Using API Version  1
	I0918 19:39:37.113442   15635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:37.113536   15635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 19:39:37.113555   15635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:37.113791   15635 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:37.113948   15635 main.go:141] libmachine: (addons-815929) Calling .GetState
	I0918 19:39:37.114523   15635 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:37.114567   15635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40673
	I0918 19:39:37.114766   15635 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0918 19:39:37.114880   15635 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:37.115093   15635 main.go:141] libmachine: (addons-815929) Calling .GetState
	I0918 19:39:37.115461   15635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 19:39:37.115486   15635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:37.115987   15635 main.go:141] libmachine: (addons-815929) Calling .DriverName
	I0918 19:39:37.116102   15635 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0918 19:39:37.116125   15635 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0918 19:39:37.116144   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHHostname
	I0918 19:39:37.116423   15635 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:37.116861   15635 main.go:141] libmachine: Using API Version  1
	I0918 19:39:37.116878   15635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:37.117587   15635 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:37.117675   15635 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0918 19:39:37.117765   15635 main.go:141] libmachine: (addons-815929) Calling .GetState
	I0918 19:39:37.118810   15635 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0918 19:39:37.118832   15635 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0918 19:39:37.118853   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHHostname
	I0918 19:39:37.119472   15635 main.go:141] libmachine: (addons-815929) Calling .DriverName
	I0918 19:39:37.120244   15635 main.go:141] libmachine: (addons-815929) Calling .DriverName
	I0918 19:39:37.122036   15635 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0918 19:39:37.122153   15635 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0918 19:39:37.122370   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:37.123115   15635 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0918 19:39:37.123133   15635 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0918 19:39:37.123160   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHHostname
	I0918 19:39:37.123192   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:37.123838   15635 main.go:141] libmachine: (addons-815929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b1:d6", ip: ""} in network mk-addons-815929: {Iface:virbr1 ExpiryTime:2024-09-18 20:39:08 +0000 UTC Type:0 Mac:52:54:00:11:b1:d6 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-815929 Clientid:01:52:54:00:11:b1:d6}
	I0918 19:39:37.123859   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined IP address 192.168.39.158 and MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:37.123881   15635 main.go:141] libmachine: (addons-815929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b1:d6", ip: ""} in network mk-addons-815929: {Iface:virbr1 ExpiryTime:2024-09-18 20:39:08 +0000 UTC Type:0 Mac:52:54:00:11:b1:d6 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-815929 Clientid:01:52:54:00:11:b1:d6}
	I0918 19:39:37.123894   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined IP address 192.168.39.158 and MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:37.124062   15635 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0918 19:39:37.124077   15635 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0918 19:39:37.124093   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHHostname
	I0918 19:39:37.124109   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHPort
	I0918 19:39:37.124224   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHPort
	I0918 19:39:37.124275   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHKeyPath
	I0918 19:39:37.124424   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHUsername
	I0918 19:39:37.124477   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHKeyPath
	I0918 19:39:37.124532   15635 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/addons-815929/id_rsa Username:docker}
	I0918 19:39:37.124835   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHUsername
	I0918 19:39:37.125242   15635 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/addons-815929/id_rsa Username:docker}
	I0918 19:39:37.128252   15635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46023
	I0918 19:39:37.128414   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:37.128663   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:37.128712   15635 main.go:141] libmachine: (addons-815929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b1:d6", ip: ""} in network mk-addons-815929: {Iface:virbr1 ExpiryTime:2024-09-18 20:39:08 +0000 UTC Type:0 Mac:52:54:00:11:b1:d6 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-815929 Clientid:01:52:54:00:11:b1:d6}
	I0918 19:39:37.128728   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined IP address 192.168.39.158 and MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:37.129043   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHPort
	I0918 19:39:37.129183   15635 main.go:141] libmachine: (addons-815929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b1:d6", ip: ""} in network mk-addons-815929: {Iface:virbr1 ExpiryTime:2024-09-18 20:39:08 +0000 UTC Type:0 Mac:52:54:00:11:b1:d6 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-815929 Clientid:01:52:54:00:11:b1:d6}
	I0918 19:39:37.129197   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined IP address 192.168.39.158 and MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:37.129373   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHKeyPath
	I0918 19:39:37.129430   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHPort
	I0918 19:39:37.129662   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHUsername
	I0918 19:39:37.129717   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHKeyPath
	I0918 19:39:37.130003   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHUsername
	I0918 19:39:37.130044   15635 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/addons-815929/id_rsa Username:docker}
	I0918 19:39:37.130291   15635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33081
	I0918 19:39:37.130581   15635 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:37.130635   15635 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/addons-815929/id_rsa Username:docker}
	I0918 19:39:37.131050   15635 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:37.131646   15635 main.go:141] libmachine: Using API Version  1
	I0918 19:39:37.131664   15635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:37.132051   15635 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:37.132594   15635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 19:39:37.132634   15635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:37.133414   15635 main.go:141] libmachine: Using API Version  1
	I0918 19:39:37.133432   15635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:37.133778   15635 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:37.134298   15635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 19:39:37.134332   15635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:37.134555   15635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42457
	I0918 19:39:37.140829   15635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39169
	I0918 19:39:37.140852   15635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38671
	I0918 19:39:37.141363   15635 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:37.141476   15635 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:37.142020   15635 main.go:141] libmachine: Using API Version  1
	I0918 19:39:37.142041   15635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:37.142402   15635 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:37.143109   15635 main.go:141] libmachine: (addons-815929) Calling .GetState
	I0918 19:39:37.143714   15635 main.go:141] libmachine: Using API Version  1
	I0918 19:39:37.143732   15635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:37.144237   15635 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:37.144935   15635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 19:39:37.144977   15635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:37.147981   15635 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-815929"
	I0918 19:39:37.148036   15635 host.go:66] Checking if "addons-815929" exists ...
	I0918 19:39:37.148428   15635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 19:39:37.148465   15635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:37.150809   15635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44217
	I0918 19:39:37.151218   15635 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:37.152360   15635 main.go:141] libmachine: Using API Version  1
	I0918 19:39:37.152379   15635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:37.152751   15635 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:37.152876   15635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36965
	I0918 19:39:37.153170   15635 main.go:141] libmachine: (addons-815929) Calling .GetState
	I0918 19:39:37.153972   15635 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:37.154591   15635 main.go:141] libmachine: Using API Version  1
	I0918 19:39:37.154608   15635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:37.155107   15635 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:37.155379   15635 main.go:141] libmachine: (addons-815929) Calling .GetState
	I0918 19:39:37.155626   15635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45955
	I0918 19:39:37.155835   15635 main.go:141] libmachine: (addons-815929) Calling .DriverName
	I0918 19:39:37.156440   15635 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:37.156559   15635 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:37.157000   15635 main.go:141] libmachine: Using API Version  1
	I0918 19:39:37.157022   15635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:37.157078   15635 main.go:141] libmachine: (addons-815929) Calling .DriverName
	I0918 19:39:37.157468   15635 main.go:141] libmachine: Using API Version  1
	I0918 19:39:37.157923   15635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:37.157782   15635 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:37.158172   15635 main.go:141] libmachine: (addons-815929) Calling .GetState
	I0918 19:39:37.158484   15635 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0918 19:39:37.158806   15635 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:37.159195   15635 main.go:141] libmachine: (addons-815929) Calling .GetState
	I0918 19:39:37.159405   15635 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0918 19:39:37.159842   15635 main.go:141] libmachine: (addons-815929) Calling .DriverName
	I0918 19:39:37.161182   15635 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 19:39:37.161249   15635 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0918 19:39:37.161287   15635 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0918 19:39:37.161304   15635 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0918 19:39:37.161324   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHHostname
	I0918 19:39:37.162704   15635 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0918 19:39:37.162728   15635 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0918 19:39:37.162748   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHHostname
	I0918 19:39:37.163160   15635 main.go:141] libmachine: (addons-815929) Calling .DriverName
	I0918 19:39:37.164031   15635 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0918 19:39:37.164902   15635 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0918 19:39:37.165184   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:37.165515   15635 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0918 19:39:37.165545   15635 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0918 19:39:37.165565   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHHostname
	I0918 19:39:37.166590   15635 main.go:141] libmachine: (addons-815929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b1:d6", ip: ""} in network mk-addons-815929: {Iface:virbr1 ExpiryTime:2024-09-18 20:39:08 +0000 UTC Type:0 Mac:52:54:00:11:b1:d6 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-815929 Clientid:01:52:54:00:11:b1:d6}
	I0918 19:39:37.166613   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:37.166620   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined IP address 192.168.39.158 and MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:37.166869   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHPort
	I0918 19:39:37.166933   15635 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0918 19:39:37.167076   15635 main.go:141] libmachine: (addons-815929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b1:d6", ip: ""} in network mk-addons-815929: {Iface:virbr1 ExpiryTime:2024-09-18 20:39:08 +0000 UTC Type:0 Mac:52:54:00:11:b1:d6 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-815929 Clientid:01:52:54:00:11:b1:d6}
	I0918 19:39:37.167093   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined IP address 192.168.39.158 and MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:37.167133   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHKeyPath
	I0918 19:39:37.167258   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHPort
	I0918 19:39:37.167299   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHUsername
	I0918 19:39:37.167409   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHKeyPath
	I0918 19:39:37.167455   15635 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/addons-815929/id_rsa Username:docker}
	I0918 19:39:37.167541   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHUsername
	I0918 19:39:37.167654   15635 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/addons-815929/id_rsa Username:docker}
	I0918 19:39:37.169351   15635 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0918 19:39:37.169830   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:37.169871   15635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37677
	I0918 19:39:37.170291   15635 main.go:141] libmachine: (addons-815929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b1:d6", ip: ""} in network mk-addons-815929: {Iface:virbr1 ExpiryTime:2024-09-18 20:39:08 +0000 UTC Type:0 Mac:52:54:00:11:b1:d6 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-815929 Clientid:01:52:54:00:11:b1:d6}
	I0918 19:39:37.170343   15635 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:37.170442   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined IP address 192.168.39.158 and MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:37.170594   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHPort
	I0918 19:39:37.170684   15635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43447
	I0918 19:39:37.170837   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHKeyPath
	I0918 19:39:37.170943   15635 main.go:141] libmachine: Using API Version  1
	I0918 19:39:37.170956   15635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:37.171006   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHUsername
	I0918 19:39:37.171021   15635 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:37.171174   15635 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/addons-815929/id_rsa Username:docker}
	I0918 19:39:37.171426   15635 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:37.172178   15635 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0918 19:39:37.172541   15635 main.go:141] libmachine: Using API Version  1
	I0918 19:39:37.172561   15635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:37.173066   15635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46769
	I0918 19:39:37.173090   15635 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:37.173137   15635 main.go:141] libmachine: (addons-815929) Calling .GetState
	I0918 19:39:37.173352   15635 main.go:141] libmachine: (addons-815929) Calling .GetState
	I0918 19:39:37.174570   15635 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0918 19:39:37.175288   15635 main.go:141] libmachine: (addons-815929) Calling .DriverName
	I0918 19:39:37.175665   15635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37609
	I0918 19:39:37.175894   15635 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:37.175993   15635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45511
	I0918 19:39:37.176139   15635 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:37.176458   15635 main.go:141] libmachine: Using API Version  1
	I0918 19:39:37.176473   15635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:37.176509   15635 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:37.176536   15635 main.go:141] libmachine: (addons-815929) Calling .DriverName
	I0918 19:39:37.176688   15635 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:37.176717   15635 main.go:141] libmachine: (addons-815929) Calling .Close
	I0918 19:39:37.176818   15635 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0918 19:39:37.176941   15635 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0918 19:39:37.178051   15635 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0918 19:39:37.178163   15635 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:37.178175   15635 main.go:141] libmachine: (addons-815929) DBG | Closing plugin on server side
	I0918 19:39:37.178206   15635 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:37.178214   15635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 19:39:37.178235   15635 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:37.178250   15635 main.go:141] libmachine: (addons-815929) Calling .Close
	I0918 19:39:37.178254   15635 main.go:141] libmachine: Using API Version  1
	I0918 19:39:37.178294   15635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:37.178261   15635 main.go:141] libmachine: Using API Version  1
	I0918 19:39:37.178333   15635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:37.178542   15635 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0918 19:39:37.178556   15635 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0918 19:39:37.178574   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHHostname
	I0918 19:39:37.178597   15635 main.go:141] libmachine: (addons-815929) Calling .GetState
	I0918 19:39:37.178616   15635 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:37.179193   15635 main.go:141] libmachine: (addons-815929) DBG | Closing plugin on server side
	I0918 19:39:37.179197   15635 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:37.179230   15635 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:37.179243   15635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 19:39:37.179280   15635 main.go:141] libmachine: (addons-815929) Calling .DriverName
	W0918 19:39:37.179328   15635 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0918 19:39:37.179639   15635 main.go:141] libmachine: (addons-815929) Calling .GetState
	I0918 19:39:37.181366   15635 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0918 19:39:37.181535   15635 addons.go:234] Setting addon default-storageclass=true in "addons-815929"
	I0918 19:39:37.181576   15635 host.go:66] Checking if "addons-815929" exists ...
	I0918 19:39:37.181669   15635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37473
	I0918 19:39:37.181924   15635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 19:39:37.181945   15635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:37.181961   15635 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:37.182145   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:37.182275   15635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33577
	I0918 19:39:37.182398   15635 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0918 19:39:37.182418   15635 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0918 19:39:37.182441   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHHostname
	I0918 19:39:37.182531   15635 main.go:141] libmachine: Using API Version  1
	I0918 19:39:37.182548   15635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:37.182977   15635 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:37.183061   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHPort
	I0918 19:39:37.183086   15635 main.go:141] libmachine: (addons-815929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b1:d6", ip: ""} in network mk-addons-815929: {Iface:virbr1 ExpiryTime:2024-09-18 20:39:08 +0000 UTC Type:0 Mac:52:54:00:11:b1:d6 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-815929 Clientid:01:52:54:00:11:b1:d6}
	I0918 19:39:37.183117   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined IP address 192.168.39.158 and MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:37.183231   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHKeyPath
	I0918 19:39:37.183392   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHUsername
	I0918 19:39:37.183461   15635 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:37.183556   15635 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/addons-815929/id_rsa Username:docker}
	I0918 19:39:37.184190   15635 main.go:141] libmachine: Using API Version  1
	I0918 19:39:37.184195   15635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 19:39:37.184205   15635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:37.184232   15635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:37.184619   15635 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:37.184791   15635 main.go:141] libmachine: (addons-815929) Calling .GetState
	I0918 19:39:37.185344   15635 main.go:141] libmachine: (addons-815929) Calling .DriverName
	I0918 19:39:37.186225   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:37.186672   15635 main.go:141] libmachine: (addons-815929) Calling .DriverName
	I0918 19:39:37.186971   15635 main.go:141] libmachine: (addons-815929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b1:d6", ip: ""} in network mk-addons-815929: {Iface:virbr1 ExpiryTime:2024-09-18 20:39:08 +0000 UTC Type:0 Mac:52:54:00:11:b1:d6 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-815929 Clientid:01:52:54:00:11:b1:d6}
	I0918 19:39:37.186997   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined IP address 192.168.39.158 and MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:37.187115   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHPort
	I0918 19:39:37.187255   15635 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0918 19:39:37.187310   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHKeyPath
	I0918 19:39:37.187453   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHUsername
	I0918 19:39:37.187632   15635 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/addons-815929/id_rsa Username:docker}
	I0918 19:39:37.189574   15635 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0918 19:39:37.189599   15635 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0918 19:39:37.189633   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHHostname
	I0918 19:39:37.189708   15635 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0918 19:39:37.191068   15635 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0918 19:39:37.191091   15635 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0918 19:39:37.191118   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHHostname
	I0918 19:39:37.193163   15635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37369
	I0918 19:39:37.193512   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:37.193809   15635 main.go:141] libmachine: (addons-815929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b1:d6", ip: ""} in network mk-addons-815929: {Iface:virbr1 ExpiryTime:2024-09-18 20:39:08 +0000 UTC Type:0 Mac:52:54:00:11:b1:d6 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-815929 Clientid:01:52:54:00:11:b1:d6}
	I0918 19:39:37.193886   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHPort
	I0918 19:39:37.194018   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined IP address 192.168.39.158 and MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:37.194053   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHKeyPath
	I0918 19:39:37.194201   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHUsername
	I0918 19:39:37.194373   15635 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/addons-815929/id_rsa Username:docker}
	I0918 19:39:37.195021   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:37.195342   15635 main.go:141] libmachine: (addons-815929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b1:d6", ip: ""} in network mk-addons-815929: {Iface:virbr1 ExpiryTime:2024-09-18 20:39:08 +0000 UTC Type:0 Mac:52:54:00:11:b1:d6 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-815929 Clientid:01:52:54:00:11:b1:d6}
	I0918 19:39:37.195382   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined IP address 192.168.39.158 and MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:37.195574   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHPort
	I0918 19:39:37.195743   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHKeyPath
	I0918 19:39:37.195909   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHUsername
	I0918 19:39:37.195982   15635 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:37.196141   15635 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/addons-815929/id_rsa Username:docker}
	I0918 19:39:37.196584   15635 main.go:141] libmachine: Using API Version  1
	I0918 19:39:37.196605   15635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:37.197020   15635 main.go:141] libmachine: () Calling .GetMachineName
	W0918 19:39:37.197204   15635 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:46722->192.168.39.158:22: read: connection reset by peer
	I0918 19:39:37.197234   15635 retry.go:31] will retry after 174.790635ms: ssh: handshake failed: read tcp 192.168.39.1:46722->192.168.39.158:22: read: connection reset by peer
	I0918 19:39:37.197279   15635 main.go:141] libmachine: (addons-815929) Calling .GetState
	I0918 19:39:37.198708   15635 main.go:141] libmachine: (addons-815929) Calling .DriverName
	I0918 19:39:37.200881   15635 out.go:177]   - Using image docker.io/registry:2.8.3
	I0918 19:39:37.202072   15635 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0918 19:39:37.203833   15635 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0918 19:39:37.203851   15635 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0918 19:39:37.203875   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHHostname
	I0918 19:39:37.205608   15635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35855
	I0918 19:39:37.206094   15635 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:37.206615   15635 main.go:141] libmachine: Using API Version  1
	I0918 19:39:37.206633   15635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:37.206776   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:37.206913   15635 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:37.207083   15635 main.go:141] libmachine: (addons-815929) Calling .GetState
	I0918 19:39:37.207141   15635 main.go:141] libmachine: (addons-815929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b1:d6", ip: ""} in network mk-addons-815929: {Iface:virbr1 ExpiryTime:2024-09-18 20:39:08 +0000 UTC Type:0 Mac:52:54:00:11:b1:d6 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-815929 Clientid:01:52:54:00:11:b1:d6}
	I0918 19:39:37.207157   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined IP address 192.168.39.158 and MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:37.207364   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHPort
	I0918 19:39:37.207561   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHKeyPath
	I0918 19:39:37.207717   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHUsername
	I0918 19:39:37.207864   15635 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/addons-815929/id_rsa Username:docker}
	I0918 19:39:37.208374   15635 main.go:141] libmachine: (addons-815929) Calling .DriverName
	I0918 19:39:37.209305   15635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33097
	I0918 19:39:37.209766   15635 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:37.210145   15635 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0918 19:39:37.210290   15635 main.go:141] libmachine: Using API Version  1
	I0918 19:39:37.210312   15635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:37.210763   15635 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:37.211277   15635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 19:39:37.211316   15635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:37.212277   15635 out.go:177]   - Using image docker.io/busybox:stable
	I0918 19:39:37.213533   15635 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0918 19:39:37.213560   15635 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0918 19:39:37.213578   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHHostname
	I0918 19:39:37.216384   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:37.216720   15635 main.go:141] libmachine: (addons-815929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b1:d6", ip: ""} in network mk-addons-815929: {Iface:virbr1 ExpiryTime:2024-09-18 20:39:08 +0000 UTC Type:0 Mac:52:54:00:11:b1:d6 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-815929 Clientid:01:52:54:00:11:b1:d6}
	I0918 19:39:37.216738   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined IP address 192.168.39.158 and MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:37.216779   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHPort
	I0918 19:39:37.216952   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHKeyPath
	I0918 19:39:37.217072   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHUsername
	I0918 19:39:37.217179   15635 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/addons-815929/id_rsa Username:docker}
	I0918 19:39:37.228878   15635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40725
	I0918 19:39:37.229369   15635 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:37.230046   15635 main.go:141] libmachine: Using API Version  1
	I0918 19:39:37.230074   15635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:37.230431   15635 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:37.230684   15635 main.go:141] libmachine: (addons-815929) Calling .GetState
	I0918 19:39:37.232228   15635 main.go:141] libmachine: (addons-815929) Calling .DriverName
	I0918 19:39:37.232509   15635 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0918 19:39:37.232528   15635 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0918 19:39:37.232547   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHHostname
	I0918 19:39:37.235855   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:37.236365   15635 main.go:141] libmachine: (addons-815929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b1:d6", ip: ""} in network mk-addons-815929: {Iface:virbr1 ExpiryTime:2024-09-18 20:39:08 +0000 UTC Type:0 Mac:52:54:00:11:b1:d6 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-815929 Clientid:01:52:54:00:11:b1:d6}
	I0918 19:39:37.236401   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined IP address 192.168.39.158 and MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:37.236588   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHPort
	I0918 19:39:37.236786   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHKeyPath
	I0918 19:39:37.236960   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHUsername
	I0918 19:39:37.237110   15635 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/addons-815929/id_rsa Username:docker}
	W0918 19:39:37.240137   15635 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:46756->192.168.39.158:22: read: connection reset by peer
	I0918 19:39:37.240169   15635 retry.go:31] will retry after 192.441386ms: ssh: handshake failed: read tcp 192.168.39.1:46756->192.168.39.158:22: read: connection reset by peer
	I0918 19:39:37.520783   15635 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0918 19:39:37.520783   15635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0918 19:39:37.529703   15635 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0918 19:39:37.533235   15635 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0918 19:39:37.578015   15635 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0918 19:39:37.578038   15635 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0918 19:39:37.582283   15635 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0918 19:39:37.582310   15635 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0918 19:39:37.733970   15635 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0918 19:39:37.770032   15635 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0918 19:39:37.770057   15635 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0918 19:39:37.814514   15635 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0918 19:39:37.814546   15635 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0918 19:39:37.816619   15635 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0918 19:39:37.816636   15635 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0918 19:39:37.817489   15635 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0918 19:39:37.817508   15635 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0918 19:39:37.828765   15635 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0918 19:39:37.831161   15635 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0918 19:39:37.841293   15635 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0918 19:39:37.841341   15635 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0918 19:39:37.866270   15635 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0918 19:39:37.866300   15635 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0918 19:39:37.866300   15635 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0918 19:39:37.866320   15635 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0918 19:39:37.873023   15635 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0918 19:39:37.957968   15635 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0918 19:39:37.960217   15635 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0918 19:39:37.960242   15635 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0918 19:39:37.978264   15635 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0918 19:39:37.978296   15635 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0918 19:39:37.993929   15635 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0918 19:39:37.993959   15635 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0918 19:39:37.994429   15635 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0918 19:39:37.994444   15635 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0918 19:39:38.017387   15635 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0918 19:39:38.017418   15635 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0918 19:39:38.088277   15635 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0918 19:39:38.088303   15635 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0918 19:39:38.131818   15635 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0918 19:39:38.131848   15635 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0918 19:39:38.203126   15635 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0918 19:39:38.203154   15635 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0918 19:39:38.226489   15635 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0918 19:39:38.226526   15635 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0918 19:39:38.250324   15635 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0918 19:39:38.273276   15635 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0918 19:39:38.283323   15635 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0918 19:39:38.332008   15635 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0918 19:39:38.332058   15635 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0918 19:39:38.385633   15635 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0918 19:39:38.385664   15635 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0918 19:39:38.469197   15635 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0918 19:39:38.469230   15635 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0918 19:39:38.472759   15635 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0918 19:39:38.472785   15635 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0918 19:39:38.628857   15635 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0918 19:39:38.628886   15635 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0918 19:39:38.637712   15635 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0918 19:39:38.637741   15635 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0918 19:39:38.656333   15635 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0918 19:39:38.656366   15635 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0918 19:39:38.714144   15635 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0918 19:39:38.714168   15635 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0918 19:39:38.932471   15635 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0918 19:39:38.932511   15635 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0918 19:39:38.964592   15635 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0918 19:39:38.971990   15635 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0918 19:39:39.017042   15635 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0918 19:39:39.017073   15635 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0918 19:39:39.160724   15635 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0918 19:39:39.160756   15635 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0918 19:39:39.194791   15635 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0918 19:39:39.194821   15635 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0918 19:39:39.392439   15635 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0918 19:39:39.392461   15635 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0918 19:39:39.435551   15635 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0918 19:39:39.558272   15635 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0918 19:39:39.558296   15635 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0918 19:39:39.836142   15635 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0918 19:39:39.836167   15635 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0918 19:39:39.990546   15635 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.469638539s)
	I0918 19:39:39.990571   15635 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.469741333s)
	I0918 19:39:39.990600   15635 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0918 19:39:39.990604   15635 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.460853163s)
	I0918 19:39:39.990694   15635 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:39.990714   15635 main.go:141] libmachine: (addons-815929) Calling .Close
	I0918 19:39:39.990994   15635 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:39.991007   15635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 19:39:39.991015   15635 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:39.991022   15635 main.go:141] libmachine: (addons-815929) Calling .Close
	I0918 19:39:39.991348   15635 main.go:141] libmachine: (addons-815929) DBG | Closing plugin on server side
	I0918 19:39:39.991365   15635 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:39.991372   15635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 19:39:39.991593   15635 node_ready.go:35] waiting up to 6m0s for node "addons-815929" to be "Ready" ...
	I0918 19:39:40.004733   15635 node_ready.go:49] node "addons-815929" has status "Ready":"True"
	I0918 19:39:40.004757   15635 node_ready.go:38] duration metric: took 13.145596ms for node "addons-815929" to be "Ready" ...
	I0918 19:39:40.004768   15635 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0918 19:39:40.018964   15635 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-lr452" in "kube-system" namespace to be "Ready" ...
	I0918 19:39:40.314801   15635 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0918 19:39:40.509787   15635 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-815929" context rescaled to 1 replicas
	I0918 19:39:41.035157   15635 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.501885691s)
	I0918 19:39:41.035216   15635 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:41.035231   15635 main.go:141] libmachine: (addons-815929) Calling .Close
	I0918 19:39:41.035566   15635 main.go:141] libmachine: (addons-815929) DBG | Closing plugin on server side
	I0918 19:39:41.035605   15635 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:41.035619   15635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 19:39:41.035631   15635 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:41.035643   15635 main.go:141] libmachine: (addons-815929) Calling .Close
	I0918 19:39:41.035883   15635 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:41.035902   15635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 19:39:41.035907   15635 main.go:141] libmachine: (addons-815929) DBG | Closing plugin on server side
	I0918 19:39:42.108696   15635 pod_ready.go:103] pod "coredns-7c65d6cfc9-lr452" in "kube-system" namespace has status "Ready":"False"
	I0918 19:39:43.536656   15635 pod_ready.go:93] pod "coredns-7c65d6cfc9-lr452" in "kube-system" namespace has status "Ready":"True"
	I0918 19:39:43.536690   15635 pod_ready.go:82] duration metric: took 3.517697272s for pod "coredns-7c65d6cfc9-lr452" in "kube-system" namespace to be "Ready" ...
	I0918 19:39:43.536705   15635 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-p6827" in "kube-system" namespace to be "Ready" ...
	I0918 19:39:44.249408   15635 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0918 19:39:44.249450   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHHostname
	I0918 19:39:44.252925   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:44.253362   15635 main.go:141] libmachine: (addons-815929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b1:d6", ip: ""} in network mk-addons-815929: {Iface:virbr1 ExpiryTime:2024-09-18 20:39:08 +0000 UTC Type:0 Mac:52:54:00:11:b1:d6 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-815929 Clientid:01:52:54:00:11:b1:d6}
	I0918 19:39:44.253399   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined IP address 192.168.39.158 and MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:44.253700   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHPort
	I0918 19:39:44.253927   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHKeyPath
	I0918 19:39:44.254121   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHUsername
	I0918 19:39:44.254291   15635 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/addons-815929/id_rsa Username:docker}
	I0918 19:39:44.688107   15635 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0918 19:39:44.805145   15635 addons.go:234] Setting addon gcp-auth=true in "addons-815929"
	I0918 19:39:44.805206   15635 host.go:66] Checking if "addons-815929" exists ...
	I0918 19:39:44.805565   15635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 19:39:44.805610   15635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:44.822607   15635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38481
	I0918 19:39:44.823258   15635 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:44.823818   15635 main.go:141] libmachine: Using API Version  1
	I0918 19:39:44.823842   15635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:44.824190   15635 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:44.824669   15635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 19:39:44.824704   15635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:44.840858   15635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46495
	I0918 19:39:44.841389   15635 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:44.841928   15635 main.go:141] libmachine: Using API Version  1
	I0918 19:39:44.841957   15635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:44.842262   15635 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:44.842449   15635 main.go:141] libmachine: (addons-815929) Calling .GetState
	I0918 19:39:44.844152   15635 main.go:141] libmachine: (addons-815929) Calling .DriverName
	I0918 19:39:44.844416   15635 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0918 19:39:44.844445   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHHostname
	I0918 19:39:44.847034   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:44.847375   15635 main.go:141] libmachine: (addons-815929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b1:d6", ip: ""} in network mk-addons-815929: {Iface:virbr1 ExpiryTime:2024-09-18 20:39:08 +0000 UTC Type:0 Mac:52:54:00:11:b1:d6 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-815929 Clientid:01:52:54:00:11:b1:d6}
	I0918 19:39:44.847408   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined IP address 192.168.39.158 and MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:44.847555   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHPort
	I0918 19:39:44.847716   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHKeyPath
	I0918 19:39:44.847869   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHUsername
	I0918 19:39:44.847967   15635 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/addons-815929/id_rsa Username:docker}
	I0918 19:39:45.554393   15635 pod_ready.go:103] pod "coredns-7c65d6cfc9-p6827" in "kube-system" namespace has status "Ready":"False"
	I0918 19:39:46.370997   15635 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.636984505s)
	I0918 19:39:46.371041   15635 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:46.371051   15635 main.go:141] libmachine: (addons-815929) Calling .Close
	I0918 19:39:46.371140   15635 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (8.542336234s)
	I0918 19:39:46.371200   15635 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:46.371213   15635 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.540023182s)
	I0918 19:39:46.371243   15635 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:46.371261   15635 main.go:141] libmachine: (addons-815929) Calling .Close
	I0918 19:39:46.371218   15635 main.go:141] libmachine: (addons-815929) Calling .Close
	I0918 19:39:46.371285   15635 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.498241115s)
	I0918 19:39:46.371313   15635 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:46.371329   15635 main.go:141] libmachine: (addons-815929) Calling .Close
	I0918 19:39:46.371344   15635 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.413335296s)
	I0918 19:39:46.371375   15635 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:46.371385   15635 main.go:141] libmachine: (addons-815929) Calling .Close
	I0918 19:39:46.371505   15635 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.121141782s)
	I0918 19:39:46.371534   15635 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:46.371545   15635 main.go:141] libmachine: (addons-815929) Calling .Close
	I0918 19:39:46.371623   15635 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.098312186s)
	I0918 19:39:46.371639   15635 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:46.371649   15635 main.go:141] libmachine: (addons-815929) Calling .Close
	I0918 19:39:46.371723   15635 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (8.088369116s)
	I0918 19:39:46.371745   15635 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:46.371754   15635 main.go:141] libmachine: (addons-815929) Calling .Close
	I0918 19:39:46.371830   15635 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.407202442s)
	I0918 19:39:46.371846   15635 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:46.371855   15635 main.go:141] libmachine: (addons-815929) Calling .Close
	I0918 19:39:46.371988   15635 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.399943132s)
	W0918 19:39:46.372035   15635 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0918 19:39:46.372077   15635 retry.go:31] will retry after 252.9912ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0918 19:39:46.372176   15635 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.936592442s)
	I0918 19:39:46.372198   15635 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:46.372207   15635 main.go:141] libmachine: (addons-815929) Calling .Close
	I0918 19:39:46.374316   15635 main.go:141] libmachine: (addons-815929) DBG | Closing plugin on server side
	I0918 19:39:46.374333   15635 main.go:141] libmachine: (addons-815929) DBG | Closing plugin on server side
	I0918 19:39:46.374334   15635 main.go:141] libmachine: (addons-815929) DBG | Closing plugin on server side
	I0918 19:39:46.374348   15635 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:46.374354   15635 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:46.374357   15635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 19:39:46.374361   15635 main.go:141] libmachine: (addons-815929) DBG | Closing plugin on server side
	I0918 19:39:46.374362   15635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 19:39:46.374370   15635 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:46.374376   15635 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:46.374379   15635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 19:39:46.374385   15635 main.go:141] libmachine: (addons-815929) Calling .Close
	I0918 19:39:46.374389   15635 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:46.374396   15635 main.go:141] libmachine: (addons-815929) Calling .Close
	I0918 19:39:46.374475   15635 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:46.374483   15635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 19:39:46.374492   15635 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:46.374499   15635 main.go:141] libmachine: (addons-815929) Calling .Close
	I0918 19:39:46.374549   15635 main.go:141] libmachine: (addons-815929) DBG | Closing plugin on server side
	I0918 19:39:46.374573   15635 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:46.374582   15635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 19:39:46.374590   15635 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:46.374596   15635 main.go:141] libmachine: (addons-815929) Calling .Close
	I0918 19:39:46.374639   15635 main.go:141] libmachine: (addons-815929) DBG | Closing plugin on server side
	I0918 19:39:46.374657   15635 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:46.374663   15635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 19:39:46.374670   15635 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:46.374676   15635 main.go:141] libmachine: (addons-815929) Calling .Close
	I0918 19:39:46.374713   15635 main.go:141] libmachine: (addons-815929) DBG | Closing plugin on server side
	I0918 19:39:46.374846   15635 main.go:141] libmachine: (addons-815929) DBG | Closing plugin on server side
	I0918 19:39:46.374866   15635 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:46.374873   15635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 19:39:46.374878   15635 main.go:141] libmachine: (addons-815929) DBG | Closing plugin on server side
	I0918 19:39:46.374883   15635 addons.go:475] Verifying addon registry=true in "addons-815929"
	I0918 19:39:46.374923   15635 main.go:141] libmachine: (addons-815929) DBG | Closing plugin on server side
	I0918 19:39:46.374366   15635 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:46.374938   15635 main.go:141] libmachine: (addons-815929) Calling .Close
	I0918 19:39:46.375214   15635 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:46.375231   15635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 19:39:46.375240   15635 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:46.375247   15635 main.go:141] libmachine: (addons-815929) Calling .Close
	I0918 19:39:46.375333   15635 main.go:141] libmachine: (addons-815929) DBG | Closing plugin on server side
	I0918 19:39:46.375384   15635 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:46.375393   15635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 19:39:46.375626   15635 main.go:141] libmachine: (addons-815929) DBG | Closing plugin on server side
	I0918 19:39:46.375651   15635 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:46.375660   15635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 19:39:46.375836   15635 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:46.375848   15635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 19:39:46.375961   15635 main.go:141] libmachine: (addons-815929) DBG | Closing plugin on server side
	I0918 19:39:46.375994   15635 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:46.376000   15635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 19:39:46.376222   15635 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:46.376235   15635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 19:39:46.376264   15635 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:46.376278   15635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 19:39:46.376287   15635 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:46.376294   15635 main.go:141] libmachine: (addons-815929) Calling .Close
	I0918 19:39:46.376316   15635 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:46.376350   15635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 19:39:46.376358   15635 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:46.376365   15635 main.go:141] libmachine: (addons-815929) Calling .Close
	I0918 19:39:46.376224   15635 main.go:141] libmachine: (addons-815929) DBG | Closing plugin on server side
	I0918 19:39:46.376564   15635 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:46.376572   15635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 19:39:46.376579   15635 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:46.376587   15635 main.go:141] libmachine: (addons-815929) Calling .Close
	I0918 19:39:46.376681   15635 main.go:141] libmachine: (addons-815929) DBG | Closing plugin on server side
	I0918 19:39:46.376710   15635 main.go:141] libmachine: (addons-815929) DBG | Closing plugin on server side
	I0918 19:39:46.376716   15635 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:46.376725   15635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 19:39:46.376730   15635 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:46.376737   15635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 19:39:46.376736   15635 addons.go:475] Verifying addon ingress=true in "addons-815929"
	I0918 19:39:46.376903   15635 main.go:141] libmachine: (addons-815929) DBG | Closing plugin on server side
	I0918 19:39:46.376938   15635 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:46.376950   15635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 19:39:46.377421   15635 main.go:141] libmachine: (addons-815929) DBG | Closing plugin on server side
	I0918 19:39:46.377456   15635 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:46.377466   15635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 19:39:46.377475   15635 addons.go:475] Verifying addon metrics-server=true in "addons-815929"
	I0918 19:39:46.379309   15635 out.go:177] * Verifying registry addon...
	I0918 19:39:46.380107   15635 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-815929 service yakd-dashboard -n yakd-dashboard
	
	I0918 19:39:46.380116   15635 out.go:177] * Verifying ingress addon...
	I0918 19:39:46.381716   15635 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0918 19:39:46.382703   15635 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0918 19:39:46.442984   15635 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0918 19:39:46.443008   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:46.444566   15635 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0918 19:39:46.444589   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:46.448430   15635 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:46.448452   15635 main.go:141] libmachine: (addons-815929) Calling .Close
	I0918 19:39:46.448784   15635 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:46.448805   15635 main.go:141] libmachine: Making call to close connection to plugin binary
	W0918 19:39:46.448896   15635 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0918 19:39:46.455634   15635 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:46.455659   15635 main.go:141] libmachine: (addons-815929) Calling .Close
	I0918 19:39:46.455916   15635 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:46.455934   15635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 19:39:46.625453   15635 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0918 19:39:46.891556   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:46.891905   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:47.249900   15635 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.935039531s)
	I0918 19:39:47.249959   15635 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:47.249978   15635 main.go:141] libmachine: (addons-815929) Calling .Close
	I0918 19:39:47.249996   15635 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.405553986s)
	I0918 19:39:47.250263   15635 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:47.250285   15635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 19:39:47.250291   15635 main.go:141] libmachine: (addons-815929) DBG | Closing plugin on server side
	I0918 19:39:47.250295   15635 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:47.250310   15635 main.go:141] libmachine: (addons-815929) Calling .Close
	I0918 19:39:47.250600   15635 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:47.250616   15635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 19:39:47.250626   15635 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-815929"
	I0918 19:39:47.250628   15635 main.go:141] libmachine: (addons-815929) DBG | Closing plugin on server side
	I0918 19:39:47.252725   15635 out.go:177] * Verifying csi-hostpath-driver addon...
	I0918 19:39:47.252729   15635 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0918 19:39:47.255488   15635 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0918 19:39:47.256476   15635 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0918 19:39:47.257160   15635 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0918 19:39:47.257179   15635 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0918 19:39:47.266081   15635 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0918 19:39:47.266118   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:47.352351   15635 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0918 19:39:47.352379   15635 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0918 19:39:47.382654   15635 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0918 19:39:47.382683   15635 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0918 19:39:47.400310   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:47.400779   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:47.466002   15635 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0918 19:39:47.762194   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:47.887466   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:47.888155   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:48.042818   15635 pod_ready.go:103] pod "coredns-7c65d6cfc9-p6827" in "kube-system" namespace has status "Ready":"False"
	I0918 19:39:48.150956   15635 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.525434984s)
	I0918 19:39:48.151014   15635 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:48.151031   15635 main.go:141] libmachine: (addons-815929) Calling .Close
	I0918 19:39:48.151273   15635 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:48.151328   15635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 19:39:48.151343   15635 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:48.151350   15635 main.go:141] libmachine: (addons-815929) Calling .Close
	I0918 19:39:48.151297   15635 main.go:141] libmachine: (addons-815929) DBG | Closing plugin on server side
	I0918 19:39:48.151627   15635 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:48.151645   15635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 19:39:48.262278   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:48.386162   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:48.388035   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:48.772137   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:48.928973   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:48.931748   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:49.012611   15635 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.546559646s)
	I0918 19:39:49.012680   15635 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:49.012710   15635 main.go:141] libmachine: (addons-815929) Calling .Close
	I0918 19:39:49.013006   15635 main.go:141] libmachine: (addons-815929) DBG | Closing plugin on server side
	I0918 19:39:49.013065   15635 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:49.013099   15635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 19:39:49.013113   15635 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:49.013124   15635 main.go:141] libmachine: (addons-815929) Calling .Close
	I0918 19:39:49.013450   15635 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:49.013486   15635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 19:39:49.015257   15635 addons.go:475] Verifying addon gcp-auth=true in "addons-815929"
	I0918 19:39:49.017437   15635 out.go:177] * Verifying gcp-auth addon...
	I0918 19:39:49.019355   15635 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0918 19:39:49.079848   15635 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0918 19:39:49.079876   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:39:49.263588   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:49.386599   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:49.386930   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:49.524123   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:39:49.546553   15635 pod_ready.go:98] pod "coredns-7c65d6cfc9-p6827" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-18 19:39:49 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-18 19:39:37 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-18 19:39:37 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-18 19:39:37 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-18 19:39:37 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.158 HostIPs:[{IP:192.168.39.158}] PodIP:10.244.0.3 PodIPs:[{IP:10.244.0.3}] StartTime:2024-09-18 19:39:37 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-18 19:39:42 +0000 UTC,FinishedAt:2024-09-18 19:39:48 +0000 UTC,ContainerID:cri-o://d30d881ad2efefa3975ae112f0b3f4ed34e2c3e9169e1bab215cb4cba955008e,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://d30d881ad2efefa3975ae112f0b3f4ed34e2c3e9169e1bab215cb4cba955008e Started:0xc001efb2c0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc0022a0c40} {Name:kube-api-access-cpn6n MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc0022a0c50}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0918 19:39:49.546588   15635 pod_ready.go:82] duration metric: took 6.009874416s for pod "coredns-7c65d6cfc9-p6827" in "kube-system" namespace to be "Ready" ...
	E0918 19:39:49.546603   15635 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-7c65d6cfc9-p6827" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-18 19:39:49 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-18 19:39:37 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-18 19:39:37 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-18 19:39:37 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-18 19:39:37 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.158 HostIPs:[{IP:192.168.39.158}] PodIP:10.244.0.3 PodIPs:[{IP:10.244.0.3}] StartTime:2024-09-18 19:39:37 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-18 19:39:42 +0000 UTC,FinishedAt:2024-09-18 19:39:48 +0000 UTC,ContainerID:cri-o://d30d881ad2efefa3975ae112f0b3f4ed34e2c3e9169e1bab215cb4cba955008e,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://d30d881ad2efefa3975ae112f0b3f4ed34e2c3e9169e1bab215cb4cba955008e Started:0xc001efb2c0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc0022a0c40} {Name:kube-api-access-cpn6n MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc0022a0c50}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0918 19:39:49.546621   15635 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-815929" in "kube-system" namespace to be "Ready" ...
	I0918 19:39:49.567559   15635 pod_ready.go:93] pod "etcd-addons-815929" in "kube-system" namespace has status "Ready":"True"
	I0918 19:39:49.567588   15635 pod_ready.go:82] duration metric: took 20.955221ms for pod "etcd-addons-815929" in "kube-system" namespace to be "Ready" ...
	I0918 19:39:49.567598   15635 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-815929" in "kube-system" namespace to be "Ready" ...
	I0918 19:39:49.574966   15635 pod_ready.go:93] pod "kube-apiserver-addons-815929" in "kube-system" namespace has status "Ready":"True"
	I0918 19:39:49.574994   15635 pod_ready.go:82] duration metric: took 7.38881ms for pod "kube-apiserver-addons-815929" in "kube-system" namespace to be "Ready" ...
	I0918 19:39:49.575009   15635 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-815929" in "kube-system" namespace to be "Ready" ...
	I0918 19:39:49.582171   15635 pod_ready.go:93] pod "kube-controller-manager-addons-815929" in "kube-system" namespace has status "Ready":"True"
	I0918 19:39:49.582197   15635 pod_ready.go:82] duration metric: took 7.179565ms for pod "kube-controller-manager-addons-815929" in "kube-system" namespace to be "Ready" ...
	I0918 19:39:49.582207   15635 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-pqt4n" in "kube-system" namespace to be "Ready" ...
	I0918 19:39:49.590756   15635 pod_ready.go:93] pod "kube-proxy-pqt4n" in "kube-system" namespace has status "Ready":"True"
	I0918 19:39:49.590786   15635 pod_ready.go:82] duration metric: took 8.57165ms for pod "kube-proxy-pqt4n" in "kube-system" namespace to be "Ready" ...
	I0918 19:39:49.590800   15635 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-815929" in "kube-system" namespace to be "Ready" ...
	I0918 19:39:49.761078   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:49.887586   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:49.887848   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:49.941378   15635 pod_ready.go:93] pod "kube-scheduler-addons-815929" in "kube-system" namespace has status "Ready":"True"
	I0918 19:39:49.941403   15635 pod_ready.go:82] duration metric: took 350.596076ms for pod "kube-scheduler-addons-815929" in "kube-system" namespace to be "Ready" ...
	I0918 19:39:49.941414   15635 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-rvssn" in "kube-system" namespace to be "Ready" ...
	I0918 19:39:50.023296   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:39:50.262472   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:50.386706   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:50.387374   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:50.523109   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:39:50.762340   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:50.885849   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:50.886679   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:51.023386   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:39:51.261021   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:51.386809   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:51.387671   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:51.524078   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:39:51.760280   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:51.886917   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:51.887197   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:51.949053   15635 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-rvssn" in "kube-system" namespace has status "Ready":"False"
	I0918 19:39:52.023505   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:39:52.261214   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:52.385448   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:52.387823   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:52.522732   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:39:52.977102   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:52.977482   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:52.977880   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:53.022497   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:39:53.262850   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:53.388253   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:53.389257   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:53.523172   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:39:53.766469   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:53.890155   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:53.890309   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:53.949275   15635 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-rvssn" in "kube-system" namespace has status "Ready":"False"
	I0918 19:39:54.023129   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:39:54.260967   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:54.387271   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:54.387324   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:54.522450   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:39:54.762114   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:54.886263   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:54.886718   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:55.023055   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:39:55.262254   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:55.387141   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:55.387313   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:55.522239   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:39:55.761296   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:55.886317   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:55.886679   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:56.023100   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:39:56.261495   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:56.385260   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:56.386259   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:56.447336   15635 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-rvssn" in "kube-system" namespace has status "Ready":"False"
	I0918 19:39:56.523265   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:39:56.761818   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:56.885802   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:56.887031   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:57.022996   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:39:57.261375   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:57.388082   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:57.389199   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:57.536872   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:39:57.762269   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:57.887305   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:57.889861   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:58.023455   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:39:58.262414   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:58.385419   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:58.387689   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:58.447488   15635 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-rvssn" in "kube-system" namespace has status "Ready":"False"
	I0918 19:39:58.523505   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:39:58.761358   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:58.887588   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:58.887675   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:59.023310   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:39:59.261446   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:59.387083   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:59.387736   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:59.523936   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:39:59.761153   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:59.886378   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:59.886953   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:00.023551   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:00.261578   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:00.385740   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:00.387538   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:00.523033   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:00.761124   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:00.901613   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:00.904385   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:00.949037   15635 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-rvssn" in "kube-system" namespace has status "Ready":"False"
	I0918 19:40:01.023854   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:01.344698   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:01.386813   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:01.387259   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:01.523778   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:01.760693   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:01.889999   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:01.899640   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:02.024352   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:02.261808   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:02.386899   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:02.388992   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:02.524521   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:02.762196   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:02.885282   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:02.886521   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:03.023357   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:03.261472   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:03.394612   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:03.395145   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:03.451681   15635 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-rvssn" in "kube-system" namespace has status "Ready":"False"
	I0918 19:40:03.522752   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:03.760533   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:03.885469   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:03.886354   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:04.023419   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:04.261547   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:04.386446   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:04.387820   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:04.524995   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:04.761777   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:04.887074   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:04.887525   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:05.022473   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:05.261925   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:05.386445   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:05.386949   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:05.522814   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:05.762538   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:05.884894   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:05.888202   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:05.947082   15635 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-rvssn" in "kube-system" namespace has status "Ready":"True"
	I0918 19:40:05.947114   15635 pod_ready.go:82] duration metric: took 16.005692748s for pod "nvidia-device-plugin-daemonset-rvssn" in "kube-system" namespace to be "Ready" ...
	I0918 19:40:05.947126   15635 pod_ready.go:39] duration metric: took 25.942342862s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0918 19:40:05.947145   15635 api_server.go:52] waiting for apiserver process to appear ...
	I0918 19:40:05.947207   15635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 19:40:05.964600   15635 api_server.go:72] duration metric: took 28.935412924s to wait for apiserver process to appear ...
	I0918 19:40:05.964629   15635 api_server.go:88] waiting for apiserver healthz status ...
	I0918 19:40:05.964653   15635 api_server.go:253] Checking apiserver healthz at https://192.168.39.158:8443/healthz ...
	I0918 19:40:05.971057   15635 api_server.go:279] https://192.168.39.158:8443/healthz returned 200:
	ok
	I0918 19:40:05.971991   15635 api_server.go:141] control plane version: v1.31.1
	I0918 19:40:05.972031   15635 api_server.go:131] duration metric: took 7.377749ms to wait for apiserver health ...
	I0918 19:40:05.972043   15635 system_pods.go:43] waiting for kube-system pods to appear ...
	I0918 19:40:05.981465   15635 system_pods.go:59] 18 kube-system pods found
	I0918 19:40:05.981498   15635 system_pods.go:61] "coredns-7c65d6cfc9-lr452" [ce99a83b-0924-4fe4-9a52-4c3400846319] Running
	I0918 19:40:05.981508   15635 system_pods.go:61] "csi-hostpath-attacher-0" [35c12d0e-5b48-4b7d-ba59-4a4c10501739] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0918 19:40:05.981516   15635 system_pods.go:61] "csi-hostpath-resizer-0" [888fc926-7f0f-445a-ad0d-196d1e4a131e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0918 19:40:05.981528   15635 system_pods.go:61] "csi-hostpathplugin-tndql" [f9b32e85-54dc-4219-b8f2-ccd81d61ca01] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0918 19:40:05.981534   15635 system_pods.go:61] "etcd-addons-815929" [74c62370-8b66-4518-8839-5ce337d8ed18] Running
	I0918 19:40:05.981538   15635 system_pods.go:61] "kube-apiserver-addons-815929" [4b802a7c-d79a-4778-b93c-fb2eddfd3103] Running
	I0918 19:40:05.981541   15635 system_pods.go:61] "kube-controller-manager-addons-815929" [0478f6c2-35df-4091-9be5-2f739c29a169] Running
	I0918 19:40:05.981545   15635 system_pods.go:61] "kube-ingress-dns-minikube" [9660f591-df30-4595-ad30-0b79d840f779] Running
	I0918 19:40:05.981549   15635 system_pods.go:61] "kube-proxy-pqt4n" [f0634583-edcc-434a-9062-5511ff79a084] Running
	I0918 19:40:05.981552   15635 system_pods.go:61] "kube-scheduler-addons-815929" [577bc872-21cc-4a90-82e1-7552ce7eeb7c] Running
	I0918 19:40:05.981558   15635 system_pods.go:61] "metrics-server-84c5f94fbc-fvm48" [20825ca5-3044-4221-84bc-6fc04d1038fd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0918 19:40:05.981564   15635 system_pods.go:61] "nvidia-device-plugin-daemonset-rvssn" [eed41d4f-f686-4590-959b-344ece686560] Running
	I0918 19:40:05.981570   15635 system_pods.go:61] "registry-66c9cd494c-96wcm" [170420dc-8ea6-4aba-99c1-9f61d4449fff] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0918 19:40:05.981575   15635 system_pods.go:61] "registry-proxy-jwxzj" [5ee21740-39f3-406e-bb72-65a28c5b5dde] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0918 19:40:05.981584   15635 system_pods.go:61] "snapshot-controller-56fcc65765-22mlv" [5197a1d6-f767-4030-b870-5fdd325589d2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0918 19:40:05.981590   15635 system_pods.go:61] "snapshot-controller-56fcc65765-dzxnk" [28036305-c26d-4a1d-aa47-04d577b32c35] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0918 19:40:05.981596   15635 system_pods.go:61] "storage-provisioner" [a1b4202d-fa9c-4f1b-9a34-40ef0da0d6e8] Running
	I0918 19:40:05.981601   15635 system_pods.go:61] "tiller-deploy-b48cc5f79-8nbwq" [4d5df6b3-4cf1-4ce5-9cb4-2983f9aa2728] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0918 19:40:05.981609   15635 system_pods.go:74] duration metric: took 9.560439ms to wait for pod list to return data ...
	I0918 19:40:05.981619   15635 default_sa.go:34] waiting for default service account to be created ...
	I0918 19:40:05.984361   15635 default_sa.go:45] found service account: "default"
	I0918 19:40:05.984393   15635 default_sa.go:55] duration metric: took 2.768053ms for default service account to be created ...
	I0918 19:40:05.984403   15635 system_pods.go:116] waiting for k8s-apps to be running ...
	I0918 19:40:05.992866   15635 system_pods.go:86] 18 kube-system pods found
	I0918 19:40:05.992896   15635 system_pods.go:89] "coredns-7c65d6cfc9-lr452" [ce99a83b-0924-4fe4-9a52-4c3400846319] Running
	I0918 19:40:05.992905   15635 system_pods.go:89] "csi-hostpath-attacher-0" [35c12d0e-5b48-4b7d-ba59-4a4c10501739] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0918 19:40:05.992913   15635 system_pods.go:89] "csi-hostpath-resizer-0" [888fc926-7f0f-445a-ad0d-196d1e4a131e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0918 19:40:05.992919   15635 system_pods.go:89] "csi-hostpathplugin-tndql" [f9b32e85-54dc-4219-b8f2-ccd81d61ca01] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0918 19:40:05.992924   15635 system_pods.go:89] "etcd-addons-815929" [74c62370-8b66-4518-8839-5ce337d8ed18] Running
	I0918 19:40:05.992928   15635 system_pods.go:89] "kube-apiserver-addons-815929" [4b802a7c-d79a-4778-b93c-fb2eddfd3103] Running
	I0918 19:40:05.992932   15635 system_pods.go:89] "kube-controller-manager-addons-815929" [0478f6c2-35df-4091-9be5-2f739c29a169] Running
	I0918 19:40:05.992937   15635 system_pods.go:89] "kube-ingress-dns-minikube" [9660f591-df30-4595-ad30-0b79d840f779] Running
	I0918 19:40:05.992940   15635 system_pods.go:89] "kube-proxy-pqt4n" [f0634583-edcc-434a-9062-5511ff79a084] Running
	I0918 19:40:05.992944   15635 system_pods.go:89] "kube-scheduler-addons-815929" [577bc872-21cc-4a90-82e1-7552ce7eeb7c] Running
	I0918 19:40:05.992949   15635 system_pods.go:89] "metrics-server-84c5f94fbc-fvm48" [20825ca5-3044-4221-84bc-6fc04d1038fd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0918 19:40:05.992956   15635 system_pods.go:89] "nvidia-device-plugin-daemonset-rvssn" [eed41d4f-f686-4590-959b-344ece686560] Running
	I0918 19:40:05.992962   15635 system_pods.go:89] "registry-66c9cd494c-96wcm" [170420dc-8ea6-4aba-99c1-9f61d4449fff] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0918 19:40:05.992970   15635 system_pods.go:89] "registry-proxy-jwxzj" [5ee21740-39f3-406e-bb72-65a28c5b5dde] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0918 19:40:05.992975   15635 system_pods.go:89] "snapshot-controller-56fcc65765-22mlv" [5197a1d6-f767-4030-b870-5fdd325589d2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0918 19:40:05.992982   15635 system_pods.go:89] "snapshot-controller-56fcc65765-dzxnk" [28036305-c26d-4a1d-aa47-04d577b32c35] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0918 19:40:05.992988   15635 system_pods.go:89] "storage-provisioner" [a1b4202d-fa9c-4f1b-9a34-40ef0da0d6e8] Running
	I0918 19:40:05.992993   15635 system_pods.go:89] "tiller-deploy-b48cc5f79-8nbwq" [4d5df6b3-4cf1-4ce5-9cb4-2983f9aa2728] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0918 19:40:05.993002   15635 system_pods.go:126] duration metric: took 8.592753ms to wait for k8s-apps to be running ...
	I0918 19:40:05.993011   15635 system_svc.go:44] waiting for kubelet service to be running ....
	I0918 19:40:05.993062   15635 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0918 19:40:06.007851   15635 system_svc.go:56] duration metric: took 14.818536ms WaitForService to wait for kubelet
	I0918 19:40:06.007886   15635 kubeadm.go:582] duration metric: took 28.978706928s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0918 19:40:06.007906   15635 node_conditions.go:102] verifying NodePressure condition ...
	I0918 19:40:06.010681   15635 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0918 19:40:06.010706   15635 node_conditions.go:123] node cpu capacity is 2
	I0918 19:40:06.010717   15635 node_conditions.go:105] duration metric: took 2.806111ms to run NodePressure ...
	I0918 19:40:06.010733   15635 start.go:241] waiting for startup goroutines ...
	I0918 19:40:06.023097   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:06.261044   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:06.386905   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:06.387938   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:06.523598   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:06.760607   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:06.885236   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:06.887183   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:07.023353   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:07.261244   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:07.387847   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:07.388133   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:07.523004   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:07.761314   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:07.886195   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:07.887026   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:08.022790   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:08.261966   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:08.386350   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:08.387977   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:08.522334   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:08.764428   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:08.887159   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:08.887636   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:09.023425   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:09.261458   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:09.386770   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:09.386931   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:09.523989   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:09.761715   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:09.888756   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:09.888913   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:10.022737   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:10.260843   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:10.385983   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:10.388375   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:10.523284   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:10.761667   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:10.886996   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:10.887478   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:11.023066   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:11.690574   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:11.691399   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:11.691415   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:11.692178   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:11.761412   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:11.886928   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:11.888082   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:12.023473   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:12.263133   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:12.386142   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:12.386662   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:12.525219   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:12.761693   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:12.886447   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:12.888253   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:13.022946   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:13.260971   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:13.386945   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:13.387172   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:13.522915   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:13.761554   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:13.885105   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:13.887694   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:14.028072   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:14.261504   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:14.385337   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:14.387622   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:14.523157   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:14.762317   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:14.886699   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:14.887653   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:15.023213   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:15.261539   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:15.386295   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:15.387692   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:15.523371   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:15.762030   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:15.887580   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:15.888087   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:16.024741   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:16.261036   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:16.385093   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:16.387141   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:16.523454   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:16.762326   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:16.890861   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:16.891242   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:17.022953   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:17.261544   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:17.386363   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:17.386458   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:17.523434   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:17.762229   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:17.889132   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:17.889326   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:18.028210   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:18.261194   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:18.385413   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:18.388574   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:18.523150   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:18.761054   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:18.887450   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:18.887779   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:19.024134   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:19.263289   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:19.385338   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:19.388348   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:19.523385   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:19.762917   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:19.885695   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:19.887582   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:20.022753   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:20.261175   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:20.385377   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:20.387295   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:20.522634   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:20.760753   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:20.887703   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:20.887712   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:21.235070   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:21.335251   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:21.387180   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:21.387296   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:21.523173   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:21.761619   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:21.885946   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:21.888761   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:22.023654   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:22.261327   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:22.385941   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:22.387514   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:22.524276   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:22.761455   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:22.889959   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:22.890148   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:23.023369   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:23.261803   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:23.385743   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:23.386867   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:23.523409   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:23.762815   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:23.889426   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:23.889754   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:24.023031   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:24.260909   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:24.385696   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:24.387779   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:24.523715   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:24.761870   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:24.887000   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:24.887192   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:25.025469   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:25.261744   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:25.385737   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:25.387836   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:25.523787   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:25.760667   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:25.886302   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:25.886864   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:26.023745   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:26.260372   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:26.387125   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:26.387622   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:26.522749   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:26.760728   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:26.887795   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:26.887929   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:27.022490   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:27.261285   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:27.387152   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:27.387208   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:27.526131   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:27.761567   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:27.885662   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:27.887226   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:28.023350   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:28.262005   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:28.386363   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:28.386533   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:28.523122   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:28.761308   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:28.886659   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:28.886766   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:29.025016   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:29.262720   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:29.385861   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:29.387067   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:29.523415   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:29.762456   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:29.889274   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:29.889409   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:30.022858   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:30.260706   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:30.385500   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:30.388109   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:30.523569   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:30.761409   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:30.887262   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:30.887513   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:31.022836   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:31.351673   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:31.631619   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:31.633821   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:31.634496   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:31.761680   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:31.886416   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:31.887521   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:32.022820   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:32.261775   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:32.385585   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:32.387110   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:32.522760   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:32.760560   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:32.887381   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:32.887779   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:33.023205   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:33.262433   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:33.386411   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:33.388473   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:33.522967   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:33.761145   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:33.885587   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:33.886411   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:34.024126   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:34.262336   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:34.386385   15635 kapi.go:107] duration metric: took 48.00466589s to wait for kubernetes.io/minikube-addons=registry ...
	I0918 19:40:34.387967   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:34.523142   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:34.761519   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:34.886743   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:35.023677   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:35.261534   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:35.386825   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:35.523475   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:35.761775   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:35.887530   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:36.024928   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:36.261912   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:36.389258   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:36.612910   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:36.760710   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:36.886570   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:37.023075   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:37.261566   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:37.386912   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:37.523858   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:37.761369   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:37.888082   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:38.023650   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:38.262241   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:38.387213   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:38.523095   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:38.761080   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:38.887662   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:39.022879   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:39.261795   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:39.388645   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:39.523629   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:39.764147   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:39.895243   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:40.023681   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:40.263820   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:40.388383   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:40.522769   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:40.760902   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:40.887214   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:41.024863   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:41.261355   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:41.388156   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:41.523189   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:41.763743   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:41.895229   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:42.024381   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:42.263606   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:42.388165   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:42.522769   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:42.760446   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:42.888084   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:43.022431   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:43.261740   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:43.387448   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:43.523089   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:43.761302   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:43.887688   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:44.023769   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:44.261649   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:44.388929   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:44.523353   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:44.761594   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:44.887209   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:45.022295   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:45.261575   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:45.386431   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:45.526748   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:45.761136   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:45.887483   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:46.023405   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:46.261710   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:46.386504   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:46.522766   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:46.760678   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:46.888552   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:47.408300   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:47.409204   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:47.409327   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:47.526541   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:47.762023   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:47.887476   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:48.024692   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:48.262281   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:48.387819   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:48.525990   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:48.761029   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:48.887048   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:49.022685   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:49.264666   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:49.387613   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:49.523501   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:49.762305   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:49.888742   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:50.023259   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:50.264411   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:50.391053   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:50.535411   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:50.763577   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:50.887602   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:51.022865   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:51.264264   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:51.398209   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:51.523440   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:51.762761   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:51.887030   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:52.022677   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:52.263450   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:52.388431   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:52.523149   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:52.763152   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:52.902779   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:53.024293   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:53.261509   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:53.386654   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:53.523350   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:53.790983   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:53.886920   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:54.029870   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:54.261998   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:54.386998   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:54.523404   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:54.762135   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:54.889645   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:55.023574   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:55.261586   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:55.799628   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:55.800153   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:55.800272   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:55.887540   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:56.023456   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:56.262164   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:56.387474   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:56.522936   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:56.760920   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:56.887129   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:57.022637   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:57.261192   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:57.387888   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:57.523659   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:57.761302   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:57.887216   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:58.022541   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:58.261223   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:58.386957   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:58.523331   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:58.762168   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:58.886618   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:59.023205   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:59.262141   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:59.387428   15635 kapi.go:107] duration metric: took 1m13.004718276s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0918 19:40:59.524360   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:59.762283   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:41:00.024053   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:00.262681   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:41:00.522704   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:00.760702   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:41:01.023661   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:01.260993   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:41:01.523442   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:01.762425   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:41:02.023110   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:02.265384   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:41:02.527771   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:02.761127   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:41:03.022885   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:03.260335   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:41:03.522913   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:03.761077   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:41:04.022763   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:04.263630   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:41:04.523144   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:04.761725   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:41:05.022991   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:05.261573   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:41:05.523927   15635 kapi.go:107] duration metric: took 1m16.504569327s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0918 19:41:05.526416   15635 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-815929 cluster.
	I0918 19:41:05.527994   15635 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0918 19:41:05.529367   15635 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
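	[editor's note] The three gcp-auth messages above describe an opt-out mechanism: pods carrying the `gcp-auth-skip-secret` label are skipped by the credential-mounting webhook. A minimal sketch of such a pod manifest is shown below; the pod name, image, and the label value "true" are illustrative assumptions, not taken from this test run (the log only states that the label key must be present).

	# Hypothetical pod manifest illustrating the gcp-auth-skip-secret label
	apiVersion: v1
	kind: Pod
	metadata:
	  name: no-gcp-creds-demo        # hypothetical name
	  labels:
	    # Presence of this label asks the gcp-auth webhook not to mount
	    # GCP credentials into the pod; the value "true" is an assumption.
	    gcp-auth-skip-secret: "true"
	spec:
	  containers:
	    - name: app
	      image: gcr.io/k8s-minikube/busybox   # placeholder image
	      command: ["sleep", "3600"]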
	I0918 19:41:05.761527   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:41:06.266297   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:41:06.761123   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:41:07.260618   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:41:07.761457   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:41:08.260850   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:41:08.761648   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:41:09.260937   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:41:09.763235   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:41:10.264930   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:41:10.762866   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:41:11.262554   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:41:11.762641   15635 kapi.go:107] duration metric: took 1m24.506164382s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0918 19:41:11.764555   15635 out.go:177] * Enabled addons: cloud-spanner, ingress-dns, inspektor-gadget, helm-tiller, storage-provisioner, nvidia-device-plugin, metrics-server, yakd, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0918 19:41:11.765613   15635 addons.go:510] duration metric: took 1m34.736385177s for enable addons: enabled=[cloud-spanner ingress-dns inspektor-gadget helm-tiller storage-provisioner nvidia-device-plugin metrics-server yakd default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0918 19:41:11.765657   15635 start.go:246] waiting for cluster config update ...
	I0918 19:41:11.765680   15635 start.go:255] writing updated cluster config ...
	I0918 19:41:11.765982   15635 ssh_runner.go:195] Run: rm -f paused
	I0918 19:41:11.816314   15635 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0918 19:41:11.818785   15635 out.go:177] * Done! kubectl is now configured to use "addons-815929" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 18 19:52:52 addons-815929 crio[660]: time="2024-09-18 19:52:52.409046930Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:dcda62e7939defac3570394097f84c23877573964da78205f5348d3f4c1f746d,Verbose:false,}" file="otel-collector/interceptors.go:62" id=1e9d15b6-8bc9-4235-9d0b-500d2ed578b5 name=/runtime.v1.RuntimeService/ContainerStatus
	Sep 18 19:52:52 addons-815929 crio[660]: time="2024-09-18 19:52:52.409147857Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:dcda62e7939defac3570394097f84c23877573964da78205f5348d3f4c1f746d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1726688366948707195,StartedAt:1726688367216298348,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-scheduler:v1.31.1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-815929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a98120b82566515a490f1d4014b63db2,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,i
o.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/a98120b82566515a490f1d4014b63db2/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/a98120b82566515a490f1d4014b63db2/containers/kube-scheduler/4f855853,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/kubernetes/scheduler.conf,HostPath:/etc/kubernetes/scheduler.conf,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-scheduler-addons-815929_a98120b82566515a490f1d4014b63db2/kube-scheduler/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,
CpuQuota:0,CpuShares:102,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=1e9d15b6-8bc9-4235-9d0b-500d2ed578b5 name=/runtime.v1.RuntimeService/ContainerStatus
	Sep 18 19:52:52 addons-815929 crio[660]: time="2024-09-18 19:52:52.409470341Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:bd304f4e9c5207664c3f5ae1d500d9898ffb98e6f5fbe2945d1a71f7d5f78e6c,Verbose:false,}" file="otel-collector/interceptors.go:62" id=f4ac613b-5f27-47fc-8c69-210cedee99bd name=/runtime.v1.RuntimeService/ContainerStatus
	Sep 18 19:52:52 addons-815929 crio[660]: time="2024-09-18 19:52:52.409572205Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:bd304f4e9c5207664c3f5ae1d500d9898ffb98e6f5fbe2945d1a71f7d5f78e6c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1726688366774420174,StartedAt:1726688367065874093,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-apiserver:v1.31.1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-815929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 098eb44a2bb0f4719ebb8fbbc9c0e2ef,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,i
o.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/098eb44a2bb0f4719ebb8fbbc9c0e2ef/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/098eb44a2bb0f4719ebb8fbbc9c0e2ef/containers/kube-apiserver/86d964c0,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/ssl/certs,HostPath:/etc/ssl/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/usr/share/ca-certificates,HostPath:/usr/share/ca-certificates,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/
var/lib/minikube/certs,HostPath:/var/lib/minikube/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-apiserver-addons-815929_098eb44a2bb0f4719ebb8fbbc9c0e2ef/kube-apiserver/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:256,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=f4ac613b-5f27-47fc-8c69-210cedee99bd name=/runtime.v1.RuntimeService/ContainerStatus
	Sep 18 19:52:52 addons-815929 crio[660]: time="2024-09-18 19:52:52.428195698Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=880a2745-66e3-4a40-a740-9971ef3c3263 name=/runtime.v1.RuntimeService/Version
	Sep 18 19:52:52 addons-815929 crio[660]: time="2024-09-18 19:52:52.428288069Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=880a2745-66e3-4a40-a740-9971ef3c3263 name=/runtime.v1.RuntimeService/Version
	Sep 18 19:52:52 addons-815929 crio[660]: time="2024-09-18 19:52:52.429721489Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a52f5603-d65e-4588-b31e-ab80fcca1344 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 19:52:52 addons-815929 crio[660]: time="2024-09-18 19:52:52.430899839Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726689172430867482,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579800,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a52f5603-d65e-4588-b31e-ab80fcca1344 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 19:52:52 addons-815929 crio[660]: time="2024-09-18 19:52:52.431376483Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a131986d-9330-4214-a734-dae5495e9a67 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 19:52:52 addons-815929 crio[660]: time="2024-09-18 19:52:52.431450412Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a131986d-9330-4214-a734-dae5495e9a67 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 19:52:52 addons-815929 crio[660]: time="2024-09-18 19:52:52.431973398Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a73634fe0b5696c3769ed8462b5108653e982571f2c36b6389db0ad4cf2d1e0d,PodSandboxId:691f84bd2d65dd7aa08e76637e924e2ae7daf15a84048a0e2c694243f398475b,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1726689165321119254,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-qqrwc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f887e2d6-f352-42de-b6c8-bf994f11b057,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee08a5ec3a5132463c0f585d72fb18c317c0a695f08f0e275e18e05ef4428cbf,PodSandboxId:1c669dd18bc972b33b71c0d9d6876a7f13ebd97ba81189204a3a3568718e94c1,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1726689026739213654,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c5107435-d0c7-4308-88a9-d0fc42111e5e,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b7f6341e501fa110fadd9a0a0001bcb33871d3bc4ab563f0aa03fc284fcd161,PodSandboxId:01e76804f1f1516fcb14a4c7e4c0a18197fc9ac156b68e935033da704200b78a,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:65a75b550f62ee651071b602f3d2c1daf0362638f620743c61b25eb4a1759f0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:41eb637aa779284762db7a79fac77894d8fc6d967404e9c7f0760cb4c97a4766,State:CONTAINER_RUNNING,CreatedAt:1726689020241923244,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7b5c95b59d-6t8xs,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.ui
d: f6f2d55f-b3e7-44c7-a00d-e99861a3846e,},Annotations:map[string]string{io.kubernetes.container.hash: 8f6f6c99,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:172ef2c9c611dd5748901516fac554cdb8f031212b96ee13467bda756557a347,PodSandboxId:915e30c1ffac764546e1016e7d844dba605fd5479681fe534c1f9029f5feba8b,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726688465027006829,Labels:map[string]string{io.kubernetes.container.name: gcp-aut
h,io.kubernetes.pod.name: gcp-auth-89d5ffd79-fm986,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: e2301096-da7b-42be-8816-73101bc30414,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecec83906a409cfeeef232dfb7a20554d36260eac2551a01367bd6377b88d166,PodSandboxId:4a8bc60acf00b11ddb5d25e33edbb3e33b8ce94cc3c04f15389673cbb5953cba,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINE
R_EXITED,CreatedAt:1726688449324839851,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-xp8xg,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 39c73617-0364-4264-ac44-066443ccd53b,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4c42a127325ce063944d7e1e0381be58ae7f7cc40e3942f9b4193372d0a4acc,PodSandboxId:39b348ff86c378fd76725b65ba5c001cf8aa82ba001bf8b8ad8e95063cf36c4d,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac
90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726688447524177867,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-r4nz6,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 5a8926a3-3d5b-45e1-b400-6f82f29835e1,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5437d1207356580b131f78a5ed6a838e146f94fbe52be5ca0620cd6ac81bdcc,PodSandboxId:a99ddd38ed1033622f0652deaa3e9940c40dadfb2f97de56ef4763ef83dce64b,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef
:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1726688416783940148,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-vr6hr,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 9c12919f-751d-4621-8776-2f28c168c022,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6109c3afb8acc80aa959e4eaf358de8318644f8165612661f8aab6ab0fc3b7fd,PodSandboxId:13e8766f7460e88ac5ebdfce2b89530694d3ef7061d9d210bd779e3cac2a787b,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:
map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1726688413970410070,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-fvm48,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20825ca5-3044-4221-84bc-6fc04d1038fd,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3759671f1017e4877f68e8f02b9e88508a0b5b788503476fa6663f5b152d0fa6,PodSandboxId:0e451edbd642fa9533a8ef1a3e5d92b07eb94e1822b2aa46ce889e6871271115,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&
ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726688384429539904,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1b4202d-fa9c-4f1b-9a34-40ef0da0d6e8,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe26b1e2b409b9001b9416d86ce3ddf38df363c27c9d60c0de10b74f8347ee69,PodSandboxId:ddc00d8b37d3ead4d58b559a6d620a95b2de499e662e1a022a40e0ef82db9ad7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9
cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726688381094396361,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-lr452,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce99a83b-0924-4fe4-9a52-4c3400846319,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c25ce10b42b68a6ed5d508643a26856b
be78ab45c824d9bb52b62536a54e94f6,PodSandboxId:4edb1f646199c36711a433e5ba508e20000410524d763079fd45fc841d2c4767,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726688378858176128,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pqt4n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0634583-edcc-434a-9062-5511ff79a084,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f287481be73d00137673fed540f11490c4a3a51a98e37282dab41b64635fe4f1,
PodSandboxId:ad2848e491363188387660c21e25cb62694852a7892c1dfb39f8f14e3f676ac1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726688366879277667,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-815929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 930a8ca0840484b267f0a98bf1169134,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af153f3716e56e0aba4d70e3ae86ff7d46933ad9b7f0dcd1f1ab547
6c014ec6c,PodSandboxId:a30d6c4574148a54601e52f8af341ec46a0590d50a249cdb3a2800f36a646ae9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726688366895990952,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-815929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8c33f2eeebde18e8b35412f814f8356,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcda62e7939defac3570394097f84c23877573964da78205f5348d3f4c1f746d,PodSandboxId:b7081c4721d58d2f5e4fed21
9eb212eff1d2a534a8a57261215a6e8241d71502,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726688366880581166,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-815929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a98120b82566515a490f1d4014b63db2,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd304f4e9c5207664c3f5ae1d500d9898ffb98e6f5fbe2945d1a71f7d5f78e6c,PodSandboxId:da55a8add53252b2a0b247aa0b9b5e0cb6baebb7a
b165ad9df6df7ebda9a0dfd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726688366733811882,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-815929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 098eb44a2bb0f4719ebb8fbbc9c0e2ef,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a131986d-9330-4214-a734-dae5495e9a67 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 19:52:52 addons-815929 crio[660]: time="2024-09-18 19:52:52.468785520Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9b20d62c-ecab-4c7f-b06d-b17ebede3808 name=/runtime.v1.RuntimeService/Version
	Sep 18 19:52:52 addons-815929 crio[660]: time="2024-09-18 19:52:52.468875275Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9b20d62c-ecab-4c7f-b06d-b17ebede3808 name=/runtime.v1.RuntimeService/Version
	Sep 18 19:52:52 addons-815929 crio[660]: time="2024-09-18 19:52:52.470002600Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7e416698-1a99-473f-aa21-7192c1d4a39e name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 19:52:52 addons-815929 crio[660]: time="2024-09-18 19:52:52.471184412Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726689172471149828,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579800,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7e416698-1a99-473f-aa21-7192c1d4a39e name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 19:52:52 addons-815929 crio[660]: time="2024-09-18 19:52:52.471778125Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c0d79911-0439-4748-91b3-e3db2ba7adf9 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 19:52:52 addons-815929 crio[660]: time="2024-09-18 19:52:52.471849829Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c0d79911-0439-4748-91b3-e3db2ba7adf9 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 19:52:52 addons-815929 crio[660]: time="2024-09-18 19:52:52.472188537Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a73634fe0b5696c3769ed8462b5108653e982571f2c36b6389db0ad4cf2d1e0d,PodSandboxId:691f84bd2d65dd7aa08e76637e924e2ae7daf15a84048a0e2c694243f398475b,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1726689165321119254,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-qqrwc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f887e2d6-f352-42de-b6c8-bf994f11b057,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee08a5ec3a5132463c0f585d72fb18c317c0a695f08f0e275e18e05ef4428cbf,PodSandboxId:1c669dd18bc972b33b71c0d9d6876a7f13ebd97ba81189204a3a3568718e94c1,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1726689026739213654,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c5107435-d0c7-4308-88a9-d0fc42111e5e,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b7f6341e501fa110fadd9a0a0001bcb33871d3bc4ab563f0aa03fc284fcd161,PodSandboxId:01e76804f1f1516fcb14a4c7e4c0a18197fc9ac156b68e935033da704200b78a,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:65a75b550f62ee651071b602f3d2c1daf0362638f620743c61b25eb4a1759f0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:41eb637aa779284762db7a79fac77894d8fc6d967404e9c7f0760cb4c97a4766,State:CONTAINER_RUNNING,CreatedAt:1726689020241923244,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7b5c95b59d-6t8xs,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.ui
d: f6f2d55f-b3e7-44c7-a00d-e99861a3846e,},Annotations:map[string]string{io.kubernetes.container.hash: 8f6f6c99,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:172ef2c9c611dd5748901516fac554cdb8f031212b96ee13467bda756557a347,PodSandboxId:915e30c1ffac764546e1016e7d844dba605fd5479681fe534c1f9029f5feba8b,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726688465027006829,Labels:map[string]string{io.kubernetes.container.name: gcp-aut
h,io.kubernetes.pod.name: gcp-auth-89d5ffd79-fm986,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: e2301096-da7b-42be-8816-73101bc30414,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecec83906a409cfeeef232dfb7a20554d36260eac2551a01367bd6377b88d166,PodSandboxId:4a8bc60acf00b11ddb5d25e33edbb3e33b8ce94cc3c04f15389673cbb5953cba,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINE
R_EXITED,CreatedAt:1726688449324839851,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-xp8xg,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 39c73617-0364-4264-ac44-066443ccd53b,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4c42a127325ce063944d7e1e0381be58ae7f7cc40e3942f9b4193372d0a4acc,PodSandboxId:39b348ff86c378fd76725b65ba5c001cf8aa82ba001bf8b8ad8e95063cf36c4d,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac
90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726688447524177867,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-r4nz6,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 5a8926a3-3d5b-45e1-b400-6f82f29835e1,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5437d1207356580b131f78a5ed6a838e146f94fbe52be5ca0620cd6ac81bdcc,PodSandboxId:a99ddd38ed1033622f0652deaa3e9940c40dadfb2f97de56ef4763ef83dce64b,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef
:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1726688416783940148,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-vr6hr,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 9c12919f-751d-4621-8776-2f28c168c022,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6109c3afb8acc80aa959e4eaf358de8318644f8165612661f8aab6ab0fc3b7fd,PodSandboxId:13e8766f7460e88ac5ebdfce2b89530694d3ef7061d9d210bd779e3cac2a787b,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:
map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1726688413970410070,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-fvm48,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20825ca5-3044-4221-84bc-6fc04d1038fd,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3759671f1017e4877f68e8f02b9e88508a0b5b788503476fa6663f5b152d0fa6,PodSandboxId:0e451edbd642fa9533a8ef1a3e5d92b07eb94e1822b2aa46ce889e6871271115,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&
ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726688384429539904,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1b4202d-fa9c-4f1b-9a34-40ef0da0d6e8,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe26b1e2b409b9001b9416d86ce3ddf38df363c27c9d60c0de10b74f8347ee69,PodSandboxId:ddc00d8b37d3ead4d58b559a6d620a95b2de499e662e1a022a40e0ef82db9ad7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9
cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726688381094396361,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-lr452,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce99a83b-0924-4fe4-9a52-4c3400846319,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c25ce10b42b68a6ed5d508643a26856b
be78ab45c824d9bb52b62536a54e94f6,PodSandboxId:4edb1f646199c36711a433e5ba508e20000410524d763079fd45fc841d2c4767,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726688378858176128,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pqt4n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0634583-edcc-434a-9062-5511ff79a084,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f287481be73d00137673fed540f11490c4a3a51a98e37282dab41b64635fe4f1,
PodSandboxId:ad2848e491363188387660c21e25cb62694852a7892c1dfb39f8f14e3f676ac1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726688366879277667,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-815929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 930a8ca0840484b267f0a98bf1169134,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af153f3716e56e0aba4d70e3ae86ff7d46933ad9b7f0dcd1f1ab547
6c014ec6c,PodSandboxId:a30d6c4574148a54601e52f8af341ec46a0590d50a249cdb3a2800f36a646ae9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726688366895990952,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-815929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8c33f2eeebde18e8b35412f814f8356,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcda62e7939defac3570394097f84c23877573964da78205f5348d3f4c1f746d,PodSandboxId:b7081c4721d58d2f5e4fed21
9eb212eff1d2a534a8a57261215a6e8241d71502,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726688366880581166,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-815929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a98120b82566515a490f1d4014b63db2,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd304f4e9c5207664c3f5ae1d500d9898ffb98e6f5fbe2945d1a71f7d5f78e6c,PodSandboxId:da55a8add53252b2a0b247aa0b9b5e0cb6baebb7a
b165ad9df6df7ebda9a0dfd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726688366733811882,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-815929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 098eb44a2bb0f4719ebb8fbbc9c0e2ef,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c0d79911-0439-4748-91b3-e3db2ba7adf9 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 19:52:52 addons-815929 crio[660]: time="2024-09-18 19:52:52.511700320Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=671e495b-c580-433b-a8c8-5bc16af9ca6f name=/runtime.v1.RuntimeService/Version
	Sep 18 19:52:52 addons-815929 crio[660]: time="2024-09-18 19:52:52.511803537Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=671e495b-c580-433b-a8c8-5bc16af9ca6f name=/runtime.v1.RuntimeService/Version
	Sep 18 19:52:52 addons-815929 crio[660]: time="2024-09-18 19:52:52.512903310Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=51dc2c77-d23d-4134-ad77-47dd485c2778 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 19:52:52 addons-815929 crio[660]: time="2024-09-18 19:52:52.514708927Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726689172514677448,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579800,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=51dc2c77-d23d-4134-ad77-47dd485c2778 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 19:52:52 addons-815929 crio[660]: time="2024-09-18 19:52:52.515340698Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=333f2c48-a3be-4aff-afcf-99e2124bd866 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 19:52:52 addons-815929 crio[660]: time="2024-09-18 19:52:52.515403361Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=333f2c48-a3be-4aff-afcf-99e2124bd866 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 19:52:52 addons-815929 crio[660]: time="2024-09-18 19:52:52.515766642Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a73634fe0b5696c3769ed8462b5108653e982571f2c36b6389db0ad4cf2d1e0d,PodSandboxId:691f84bd2d65dd7aa08e76637e924e2ae7daf15a84048a0e2c694243f398475b,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1726689165321119254,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-qqrwc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f887e2d6-f352-42de-b6c8-bf994f11b057,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee08a5ec3a5132463c0f585d72fb18c317c0a695f08f0e275e18e05ef4428cbf,PodSandboxId:1c669dd18bc972b33b71c0d9d6876a7f13ebd97ba81189204a3a3568718e94c1,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1726689026739213654,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c5107435-d0c7-4308-88a9-d0fc42111e5e,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b7f6341e501fa110fadd9a0a0001bcb33871d3bc4ab563f0aa03fc284fcd161,PodSandboxId:01e76804f1f1516fcb14a4c7e4c0a18197fc9ac156b68e935033da704200b78a,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:65a75b550f62ee651071b602f3d2c1daf0362638f620743c61b25eb4a1759f0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:41eb637aa779284762db7a79fac77894d8fc6d967404e9c7f0760cb4c97a4766,State:CONTAINER_RUNNING,CreatedAt:1726689020241923244,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7b5c95b59d-6t8xs,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.ui
d: f6f2d55f-b3e7-44c7-a00d-e99861a3846e,},Annotations:map[string]string{io.kubernetes.container.hash: 8f6f6c99,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:172ef2c9c611dd5748901516fac554cdb8f031212b96ee13467bda756557a347,PodSandboxId:915e30c1ffac764546e1016e7d844dba605fd5479681fe534c1f9029f5feba8b,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726688465027006829,Labels:map[string]string{io.kubernetes.container.name: gcp-aut
h,io.kubernetes.pod.name: gcp-auth-89d5ffd79-fm986,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: e2301096-da7b-42be-8816-73101bc30414,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecec83906a409cfeeef232dfb7a20554d36260eac2551a01367bd6377b88d166,PodSandboxId:4a8bc60acf00b11ddb5d25e33edbb3e33b8ce94cc3c04f15389673cbb5953cba,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINE
R_EXITED,CreatedAt:1726688449324839851,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-xp8xg,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 39c73617-0364-4264-ac44-066443ccd53b,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4c42a127325ce063944d7e1e0381be58ae7f7cc40e3942f9b4193372d0a4acc,PodSandboxId:39b348ff86c378fd76725b65ba5c001cf8aa82ba001bf8b8ad8e95063cf36c4d,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac
90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726688447524177867,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-r4nz6,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 5a8926a3-3d5b-45e1-b400-6f82f29835e1,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5437d1207356580b131f78a5ed6a838e146f94fbe52be5ca0620cd6ac81bdcc,PodSandboxId:a99ddd38ed1033622f0652deaa3e9940c40dadfb2f97de56ef4763ef83dce64b,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef
:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1726688416783940148,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-vr6hr,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 9c12919f-751d-4621-8776-2f28c168c022,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6109c3afb8acc80aa959e4eaf358de8318644f8165612661f8aab6ab0fc3b7fd,PodSandboxId:13e8766f7460e88ac5ebdfce2b89530694d3ef7061d9d210bd779e3cac2a787b,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:
map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1726688413970410070,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-fvm48,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20825ca5-3044-4221-84bc-6fc04d1038fd,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3759671f1017e4877f68e8f02b9e88508a0b5b788503476fa6663f5b152d0fa6,PodSandboxId:0e451edbd642fa9533a8ef1a3e5d92b07eb94e1822b2aa46ce889e6871271115,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&
ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726688384429539904,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1b4202d-fa9c-4f1b-9a34-40ef0da0d6e8,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe26b1e2b409b9001b9416d86ce3ddf38df363c27c9d60c0de10b74f8347ee69,PodSandboxId:ddc00d8b37d3ead4d58b559a6d620a95b2de499e662e1a022a40e0ef82db9ad7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9
cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726688381094396361,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-lr452,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce99a83b-0924-4fe4-9a52-4c3400846319,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c25ce10b42b68a6ed5d508643a26856b
be78ab45c824d9bb52b62536a54e94f6,PodSandboxId:4edb1f646199c36711a433e5ba508e20000410524d763079fd45fc841d2c4767,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726688378858176128,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pqt4n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0634583-edcc-434a-9062-5511ff79a084,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f287481be73d00137673fed540f11490c4a3a51a98e37282dab41b64635fe4f1,
PodSandboxId:ad2848e491363188387660c21e25cb62694852a7892c1dfb39f8f14e3f676ac1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726688366879277667,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-815929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 930a8ca0840484b267f0a98bf1169134,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af153f3716e56e0aba4d70e3ae86ff7d46933ad9b7f0dcd1f1ab547
6c014ec6c,PodSandboxId:a30d6c4574148a54601e52f8af341ec46a0590d50a249cdb3a2800f36a646ae9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726688366895990952,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-815929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8c33f2eeebde18e8b35412f814f8356,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcda62e7939defac3570394097f84c23877573964da78205f5348d3f4c1f746d,PodSandboxId:b7081c4721d58d2f5e4fed21
9eb212eff1d2a534a8a57261215a6e8241d71502,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726688366880581166,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-815929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a98120b82566515a490f1d4014b63db2,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd304f4e9c5207664c3f5ae1d500d9898ffb98e6f5fbe2945d1a71f7d5f78e6c,PodSandboxId:da55a8add53252b2a0b247aa0b9b5e0cb6baebb7a
b165ad9df6df7ebda9a0dfd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726688366733811882,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-815929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 098eb44a2bb0f4719ebb8fbbc9c0e2ef,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=333f2c48-a3be-4aff-afcf-99e2124bd866 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a73634fe0b569       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        7 seconds ago       Running             hello-world-app           0                   691f84bd2d65d       hello-world-app-55bf9c44b4-qqrwc
	ee08a5ec3a513       docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56                              2 minutes ago       Running             nginx                     0                   1c669dd18bc97       nginx
	0b7f6341e501f       ghcr.io/headlamp-k8s/headlamp@sha256:65a75b550f62ee651071b602f3d2c1daf0362638f620743c61b25eb4a1759f0a                        2 minutes ago       Running             headlamp                  0                   01e76804f1f15       headlamp-7b5c95b59d-6t8xs
	172ef2c9c611d       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b                 11 minutes ago      Running             gcp-auth                  0                   915e30c1ffac7       gcp-auth-89d5ffd79-fm986
	ecec83906a409       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012   12 minutes ago      Exited              patch                     0                   4a8bc60acf00b       ingress-nginx-admission-patch-xp8xg
	d4c42a127325c       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012   12 minutes ago      Exited              create                    0                   39b348ff86c37       ingress-nginx-admission-create-r4nz6
	a5437d1207356       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef             12 minutes ago      Running             local-path-provisioner    0                   a99ddd38ed103       local-path-provisioner-86d989889c-vr6hr
	6109c3afb8acc       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a        12 minutes ago      Running             metrics-server            0                   13e8766f7460e       metrics-server-84c5f94fbc-fvm48
	3759671f1017e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             13 minutes ago      Running             storage-provisioner       0                   0e451edbd642f       storage-provisioner
	fe26b1e2b409b       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                             13 minutes ago      Running             coredns                   0                   ddc00d8b37d3e       coredns-7c65d6cfc9-lr452
	c25ce10b42b68       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                             13 minutes ago      Running             kube-proxy                0                   4edb1f646199c       kube-proxy-pqt4n
	af153f3716e56       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                             13 minutes ago      Running             etcd                      0                   a30d6c4574148       etcd-addons-815929
	dcda62e7939de       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                             13 minutes ago      Running             kube-scheduler            0                   b7081c4721d58       kube-scheduler-addons-815929
	f287481be73d0       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                             13 minutes ago      Running             kube-controller-manager   0                   ad2848e491363       kube-controller-manager-addons-815929
	bd304f4e9c520       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                                             13 minutes ago      Running             kube-apiserver            0                   da55a8add5325       kube-apiserver-addons-815929
	
	
	==> coredns [fe26b1e2b409b9001b9416d86ce3ddf38df363c27c9d60c0de10b74f8347ee69] <==
	[INFO] 127.0.0.1:33911 - 37399 "HINFO IN 5747327246118162623.8020402030463234675. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.016819419s
	[INFO] 10.244.0.7:59262 - 43432 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000336053s
	[INFO] 10.244.0.7:59262 - 17322 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000150287s
	[INFO] 10.244.0.7:41687 - 18673 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000099786s
	[INFO] 10.244.0.7:41687 - 9207 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000065823s
	[INFO] 10.244.0.7:33094 - 24891 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000094342s
	[INFO] 10.244.0.7:33094 - 26173 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000059261s
	[INFO] 10.244.0.7:56632 - 33786 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000087163s
	[INFO] 10.244.0.7:56632 - 4856 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000058072s
	[INFO] 10.244.0.7:36451 - 41922 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000084154s
	[INFO] 10.244.0.7:36451 - 33727 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000092459s
	[INFO] 10.244.0.7:39340 - 30237 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000083666s
	[INFO] 10.244.0.7:39340 - 56611 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000065217s
	[INFO] 10.244.0.7:60263 - 43577 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000042731s
	[INFO] 10.244.0.7:60263 - 42043 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000060323s
	[INFO] 10.244.0.7:49317 - 26894 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000071504s
	[INFO] 10.244.0.7:49317 - 41231 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000053913s
	[INFO] 10.244.0.22:56096 - 25617 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000559428s
	[INFO] 10.244.0.22:46332 - 60333 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00009869s
	[INFO] 10.244.0.22:56500 - 14226 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000212602s
	[INFO] 10.244.0.22:49148 - 10468 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00009573s
	[INFO] 10.244.0.22:40941 - 26523 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000113485s
	[INFO] 10.244.0.22:37539 - 18925 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000348096s
	[INFO] 10.244.0.22:41445 - 2227 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.002727628s
	[INFO] 10.244.0.22:57259 - 2571 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.00255705s
	
	
	==> describe nodes <==
	Name:               addons-815929
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-815929
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=85073601a832bd4bbda5d11fa91feafff6ec6b91
	                    minikube.k8s.io/name=addons-815929
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_18T19_39_32_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-815929
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 18 Sep 2024 19:39:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-815929
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 18 Sep 2024 19:52:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 18 Sep 2024 19:50:35 +0000   Wed, 18 Sep 2024 19:39:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 18 Sep 2024 19:50:35 +0000   Wed, 18 Sep 2024 19:39:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 18 Sep 2024 19:50:35 +0000   Wed, 18 Sep 2024 19:39:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 18 Sep 2024 19:50:35 +0000   Wed, 18 Sep 2024 19:39:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.158
	  Hostname:    addons-815929
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 7e65d1c428634e33ae59c564f000aca1
	  System UUID:                7e65d1c4-2863-4e33-ae59-c564f000aca1
	  Boot ID:                    eb3346ec-958a-43c9-b91c-e6223f603868
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  default                     hello-world-app-55bf9c44b4-qqrwc           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m30s
	  gcp-auth                    gcp-auth-89d5ffd79-fm986                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  headlamp                    headlamp-7b5c95b59d-6t8xs                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m38s
	  kube-system                 coredns-7c65d6cfc9-lr452                   100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     13m
	  kube-system                 etcd-addons-815929                         100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         13m
	  kube-system                 kube-apiserver-addons-815929               250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-addons-815929      200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-pqt4n                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-addons-815929               100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 metrics-server-84c5f94fbc-fvm48            100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         13m
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  local-path-storage          local-path-provisioner-86d989889c-vr6hr    0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             370Mi (9%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 13m   kube-proxy       
	  Normal  Starting                 13m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  13m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  13m   kubelet          Node addons-815929 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m   kubelet          Node addons-815929 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m   kubelet          Node addons-815929 status is now: NodeHasSufficientPID
	  Normal  NodeReady                13m   kubelet          Node addons-815929 status is now: NodeReady
	  Normal  RegisteredNode           13m   node-controller  Node addons-815929 event: Registered Node addons-815929 in Controller
	
	
	==> dmesg <==
	[  +5.343608] kauditd_printk_skb: 127 callbacks suppressed
	[  +5.904152] kauditd_printk_skb: 83 callbacks suppressed
	[Sep18 19:40] kauditd_printk_skb: 9 callbacks suppressed
	[  +5.880845] kauditd_printk_skb: 34 callbacks suppressed
	[ +18.003706] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.043206] kauditd_printk_skb: 46 callbacks suppressed
	[  +8.731855] kauditd_printk_skb: 72 callbacks suppressed
	[Sep18 19:41] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.081169] kauditd_printk_skb: 44 callbacks suppressed
	[ +12.641013] kauditd_printk_skb: 12 callbacks suppressed
	[Sep18 19:42] kauditd_printk_skb: 28 callbacks suppressed
	[Sep18 19:43] kauditd_printk_skb: 28 callbacks suppressed
	[Sep18 19:46] kauditd_printk_skb: 28 callbacks suppressed
	[Sep18 19:49] kauditd_printk_skb: 28 callbacks suppressed
	[  +5.945047] kauditd_printk_skb: 15 callbacks suppressed
	[  +5.644536] kauditd_printk_skb: 25 callbacks suppressed
	[  +6.473959] kauditd_printk_skb: 17 callbacks suppressed
	[  +6.527768] kauditd_printk_skb: 14 callbacks suppressed
	[ +12.093612] kauditd_printk_skb: 3 callbacks suppressed
	[Sep18 19:50] kauditd_printk_skb: 24 callbacks suppressed
	[  +5.519549] kauditd_printk_skb: 10 callbacks suppressed
	[  +7.416140] kauditd_printk_skb: 55 callbacks suppressed
	[  +5.600964] kauditd_printk_skb: 31 callbacks suppressed
	[Sep18 19:52] kauditd_printk_skb: 25 callbacks suppressed
	[  +5.013586] kauditd_printk_skb: 19 callbacks suppressed
	
	
	==> etcd [af153f3716e56e0aba4d70e3ae86ff7d46933ad9b7f0dcd1f1ab5476c014ec6c] <==
	{"level":"info","ts":"2024-09-18T19:40:55.783491Z","caller":"traceutil/trace.go:171","msg":"trace[616005258] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1095; }","duration":"411.443513ms","start":"2024-09-18T19:40:55.372039Z","end":"2024-09-18T19:40:55.783482Z","steps":["trace[616005258] 'agreement among raft nodes before linearized reading'  (duration: 411.261686ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-18T19:40:55.783514Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-18T19:40:55.372005Z","time spent":"411.502511ms","remote":"127.0.0.1:37770","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2024-09-18T19:40:55.783555Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-18T19:40:55.320353Z","time spent":"463.071957ms","remote":"127.0.0.1:37658","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":764,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/events/gadget/gadget-56gjj.17f66dffc2f7c48d\" mod_revision:1086 > success:<request_put:<key:\"/registry/events/gadget/gadget-56gjj.17f66dffc2f7c48d\" value_size:693 lease:8396277637547487747 >> failure:<request_range:<key:\"/registry/events/gadget/gadget-56gjj.17f66dffc2f7c48d\" > >"}
	{"level":"info","ts":"2024-09-18T19:40:55.783273Z","caller":"traceutil/trace.go:171","msg":"trace[1329412000] linearizableReadLoop","detail":"{readStateIndex:1127; appliedIndex:1126; }","duration":"411.153549ms","start":"2024-09-18T19:40:55.372056Z","end":"2024-09-18T19:40:55.783210Z","steps":["trace[1329412000] 'read index received'  (duration: 410.953333ms)","trace[1329412000] 'applied index is now lower than readState.Index'  (duration: 199.615µs)"],"step_count":2}
	{"level":"warn","ts":"2024-09-18T19:40:55.783842Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"274.750813ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-18T19:40:55.783883Z","caller":"traceutil/trace.go:171","msg":"trace[1696276562] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1095; }","duration":"274.791129ms","start":"2024-09-18T19:40:55.509082Z","end":"2024-09-18T19:40:55.783873Z","steps":["trace[1696276562] 'agreement among raft nodes before linearized reading'  (duration: 274.733422ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-18T19:40:58.140649Z","caller":"traceutil/trace.go:171","msg":"trace[1269958253] transaction","detail":"{read_only:false; response_revision:1100; number_of_response:1; }","duration":"116.461138ms","start":"2024-09-18T19:40:58.024133Z","end":"2024-09-18T19:40:58.140595Z","steps":["trace[1269958253] 'process raft request'  (duration: 116.084244ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-18T19:49:27.720379Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1528}
	{"level":"info","ts":"2024-09-18T19:49:27.755828Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1528,"took":"34.749964ms","hash":189233142,"current-db-size-bytes":6471680,"current-db-size":"6.5 MB","current-db-size-in-use-bytes":3465216,"current-db-size-in-use":"3.5 MB"}
	{"level":"info","ts":"2024-09-18T19:49:27.755900Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":189233142,"revision":1528,"compact-revision":-1}
	{"level":"warn","ts":"2024-09-18T19:49:39.140178Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"372.134407ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1114"}
	{"level":"info","ts":"2024-09-18T19:49:39.140269Z","caller":"traceutil/trace.go:171","msg":"trace[44095741] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:2065; }","duration":"372.258566ms","start":"2024-09-18T19:49:38.767988Z","end":"2024-09-18T19:49:39.140247Z","steps":["trace[44095741] 'range keys from in-memory index tree'  (duration: 371.974715ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-18T19:49:39.140351Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-18T19:49:38.767903Z","time spent":"372.433662ms","remote":"127.0.0.1:37750","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":1,"response size":1137,"request content":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" "}
	{"level":"warn","ts":"2024-09-18T19:49:39.140594Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"366.852656ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/yakd-dashboard\" ","response":"range_response_count:1 size:2312"}
	{"level":"info","ts":"2024-09-18T19:49:39.140666Z","caller":"traceutil/trace.go:171","msg":"trace[955157213] range","detail":"{range_begin:/registry/namespaces/yakd-dashboard; range_end:; response_count:1; response_revision:2065; }","duration":"366.925812ms","start":"2024-09-18T19:49:38.773733Z","end":"2024-09-18T19:49:39.140659Z","steps":["trace[955157213] 'range keys from in-memory index tree'  (duration: 366.803518ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-18T19:49:39.140687Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-18T19:49:38.773695Z","time spent":"366.986639ms","remote":"127.0.0.1:37688","response type":"/etcdserverpb.KV/Range","request count":0,"request size":37,"response count":1,"response size":2335,"request content":"key:\"/registry/namespaces/yakd-dashboard\" "}
	{"level":"info","ts":"2024-09-18T19:49:39.140890Z","caller":"traceutil/trace.go:171","msg":"trace[1592645195] linearizableReadLoop","detail":"{readStateIndex:2214; appliedIndex:2213; }","duration":"186.300087ms","start":"2024-09-18T19:49:38.954572Z","end":"2024-09-18T19:49:39.140872Z","steps":["trace[1592645195] 'read index received'  (duration: 184.999995ms)","trace[1592645195] 'applied index is now lower than readState.Index'  (duration: 1.299584ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-18T19:49:39.141064Z","caller":"traceutil/trace.go:171","msg":"trace[1097499478] transaction","detail":"{read_only:false; response_revision:2066; number_of_response:1; }","duration":"254.38821ms","start":"2024-09-18T19:49:38.886663Z","end":"2024-09-18T19:49:39.141051Z","steps":["trace[1097499478] 'process raft request'  (duration: 252.880343ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-18T19:49:39.141181Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"186.598804ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-18T19:49:39.141221Z","caller":"traceutil/trace.go:171","msg":"trace[2143615549] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:2066; }","duration":"186.63848ms","start":"2024-09-18T19:49:38.954567Z","end":"2024-09-18T19:49:39.141206Z","steps":["trace[2143615549] 'agreement among raft nodes before linearized reading'  (duration: 186.586728ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-18T19:49:39.141319Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"163.150361ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-18T19:49:39.141350Z","caller":"traceutil/trace.go:171","msg":"trace[8159077] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:2066; }","duration":"163.180095ms","start":"2024-09-18T19:49:38.978162Z","end":"2024-09-18T19:49:39.141343Z","steps":["trace[8159077] 'agreement among raft nodes before linearized reading'  (duration: 163.144483ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-18T19:50:18.732088Z","caller":"traceutil/trace.go:171","msg":"trace[1049508816] transaction","detail":"{read_only:false; response_revision:2373; number_of_response:1; }","duration":"138.120687ms","start":"2024-09-18T19:50:18.593955Z","end":"2024-09-18T19:50:18.732075Z","steps":["trace[1049508816] 'process raft request'  (duration: 136.844604ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-18T19:50:26.593679Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"295.207794ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-18T19:50:26.593767Z","caller":"traceutil/trace.go:171","msg":"trace[1193075531] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:2438; }","duration":"295.306191ms","start":"2024-09-18T19:50:26.298443Z","end":"2024-09-18T19:50:26.593750Z","steps":["trace[1193075531] 'range keys from in-memory index tree'  (duration: 295.158117ms)"],"step_count":1}
	
	
	==> gcp-auth [172ef2c9c611dd5748901516fac554cdb8f031212b96ee13467bda756557a347] <==
	2024/09/18 19:41:12 Ready to write response ...
	2024/09/18 19:49:14 Ready to marshal response ...
	2024/09/18 19:49:14 Ready to write response ...
	2024/09/18 19:49:15 Ready to marshal response ...
	2024/09/18 19:49:15 Ready to write response ...
	2024/09/18 19:49:25 Ready to marshal response ...
	2024/09/18 19:49:25 Ready to write response ...
	2024/09/18 19:49:27 Ready to marshal response ...
	2024/09/18 19:49:27 Ready to write response ...
	2024/09/18 19:49:33 Ready to marshal response ...
	2024/09/18 19:49:33 Ready to write response ...
	2024/09/18 19:50:01 Ready to marshal response ...
	2024/09/18 19:50:01 Ready to write response ...
	2024/09/18 19:50:04 Ready to marshal response ...
	2024/09/18 19:50:04 Ready to write response ...
	2024/09/18 19:50:14 Ready to marshal response ...
	2024/09/18 19:50:14 Ready to write response ...
	2024/09/18 19:50:14 Ready to marshal response ...
	2024/09/18 19:50:14 Ready to write response ...
	2024/09/18 19:50:14 Ready to marshal response ...
	2024/09/18 19:50:14 Ready to write response ...
	2024/09/18 19:50:22 Ready to marshal response ...
	2024/09/18 19:50:22 Ready to write response ...
	2024/09/18 19:52:42 Ready to marshal response ...
	2024/09/18 19:52:42 Ready to write response ...
	
	
	==> kernel <==
	 19:52:52 up 13 min,  0 users,  load average: 0.26, 0.63, 0.52
	Linux addons-815929 5.10.207 #1 SMP Mon Sep 16 15:00:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [bd304f4e9c5207664c3f5ae1d500d9898ffb98e6f5fbe2945d1a71f7d5f78e6c] <==
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0918 19:41:22.794814       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.103.233.223:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.103.233.223:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.103.233.223:443: connect: connection refused" logger="UnhandledError"
	E0918 19:41:22.796712       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.103.233.223:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.103.233.223:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.103.233.223:443: connect: connection refused" logger="UnhandledError"
	E0918 19:41:22.802171       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.103.233.223:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.103.233.223:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.103.233.223:443: connect: connection refused" logger="UnhandledError"
	I0918 19:41:22.877204       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0918 19:49:46.122311       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0918 19:49:51.351654       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0918 19:49:52.486009       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0918 19:50:14.711287       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.96.178.208"}
	I0918 19:50:21.473006       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0918 19:50:21.475672       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0918 19:50:21.499495       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0918 19:50:21.499582       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0918 19:50:21.528355       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0918 19:50:21.528504       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0918 19:50:21.650040       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0918 19:50:21.650140       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0918 19:50:22.108280       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0918 19:50:22.290418       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.97.211.60"}
	W0918 19:50:22.650820       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0918 19:50:22.650941       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0918 19:50:22.664307       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0918 19:52:42.564269       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.98.247.40"}
	
	
	==> kube-controller-manager [f287481be73d00137673fed540f11490c4a3a51a98e37282dab41b64635fe4f1] <==
	W0918 19:51:32.633991       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0918 19:51:32.634061       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0918 19:51:45.514432       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0918 19:51:45.514523       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0918 19:51:52.620037       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0918 19:51:52.620167       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0918 19:52:11.589158       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0918 19:52:11.589298       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0918 19:52:25.937501       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0918 19:52:25.937688       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0918 19:52:26.862119       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0918 19:52:26.862232       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0918 19:52:42.410782       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="72.346254ms"
	I0918 19:52:42.420070       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="9.172267ms"
	I0918 19:52:42.420240       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="93.532µs"
	I0918 19:52:42.427858       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="160.594µs"
	I0918 19:52:44.506107       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create" delay="0s"
	I0918 19:52:44.515739       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-bc57996ff" duration="6.408µs"
	I0918 19:52:44.524182       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch" delay="0s"
	W0918 19:52:45.371047       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0918 19:52:45.371154       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0918 19:52:45.589091       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="9.724191ms"
	I0918 19:52:45.589170       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="35.96µs"
	W0918 19:52:47.503762       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0918 19:52:47.503894       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [c25ce10b42b68a6ed5d508643a26856bbe78ab45c824d9bb52b62536a54e94f6] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0918 19:39:39.772742       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0918 19:39:39.855112       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.158"]
	E0918 19:39:39.855197       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0918 19:39:39.943796       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0918 19:39:39.943838       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0918 19:39:39.943864       1 server_linux.go:169] "Using iptables Proxier"
	I0918 19:39:39.953935       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0918 19:39:39.954227       1 server.go:483] "Version info" version="v1.31.1"
	I0918 19:39:39.954239       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0918 19:39:39.958453       1 config.go:199] "Starting service config controller"
	I0918 19:39:39.958495       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0918 19:39:39.958560       1 config.go:105] "Starting endpoint slice config controller"
	I0918 19:39:39.958577       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0918 19:39:39.965954       1 config.go:328] "Starting node config controller"
	I0918 19:39:39.965978       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0918 19:39:40.059312       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0918 19:39:40.059385       1 shared_informer.go:320] Caches are synced for service config
	I0918 19:39:40.067090       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [dcda62e7939defac3570394097f84c23877573964da78205f5348d3f4c1f746d] <==
	W0918 19:39:30.259773       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0918 19:39:30.259828       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0918 19:39:30.260863       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0918 19:39:30.260937       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0918 19:39:30.316355       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0918 19:39:30.316410       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0918 19:39:30.325700       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0918 19:39:30.325748       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0918 19:39:30.384152       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0918 19:39:30.384201       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0918 19:39:30.388938       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0918 19:39:30.388996       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0918 19:39:30.471673       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0918 19:39:30.471719       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0918 19:39:30.484033       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0918 19:39:30.484082       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0918 19:39:30.491339       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0918 19:39:30.491383       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0918 19:39:30.519278       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0918 19:39:30.519335       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0918 19:39:30.634983       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0918 19:39:30.635043       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0918 19:39:30.839874       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0918 19:39:30.840702       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0918 19:39:32.951022       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 18 19:52:42 addons-815929 kubelet[1202]: I0918 19:52:42.538067    1202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/f887e2d6-f352-42de-b6c8-bf994f11b057-gcp-creds\") pod \"hello-world-app-55bf9c44b4-qqrwc\" (UID: \"f887e2d6-f352-42de-b6c8-bf994f11b057\") " pod="default/hello-world-app-55bf9c44b4-qqrwc"
	Sep 18 19:52:43 addons-815929 kubelet[1202]: I0918 19:52:43.547336    1202 scope.go:117] "RemoveContainer" containerID="bd2242b8b099dcd58389ad8290ca8148085fe17ab656a52ef2c8dc8ca4c31da9"
	Sep 18 19:52:43 addons-815929 kubelet[1202]: I0918 19:52:43.565994    1202 scope.go:117] "RemoveContainer" containerID="bd2242b8b099dcd58389ad8290ca8148085fe17ab656a52ef2c8dc8ca4c31da9"
	Sep 18 19:52:43 addons-815929 kubelet[1202]: E0918 19:52:43.566525    1202 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd2242b8b099dcd58389ad8290ca8148085fe17ab656a52ef2c8dc8ca4c31da9\": container with ID starting with bd2242b8b099dcd58389ad8290ca8148085fe17ab656a52ef2c8dc8ca4c31da9 not found: ID does not exist" containerID="bd2242b8b099dcd58389ad8290ca8148085fe17ab656a52ef2c8dc8ca4c31da9"
	Sep 18 19:52:43 addons-815929 kubelet[1202]: I0918 19:52:43.566577    1202 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd2242b8b099dcd58389ad8290ca8148085fe17ab656a52ef2c8dc8ca4c31da9"} err="failed to get container status \"bd2242b8b099dcd58389ad8290ca8148085fe17ab656a52ef2c8dc8ca4c31da9\": rpc error: code = NotFound desc = could not find container \"bd2242b8b099dcd58389ad8290ca8148085fe17ab656a52ef2c8dc8ca4c31da9\": container with ID starting with bd2242b8b099dcd58389ad8290ca8148085fe17ab656a52ef2c8dc8ca4c31da9 not found: ID does not exist"
	Sep 18 19:52:43 addons-815929 kubelet[1202]: I0918 19:52:43.645708    1202 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d2v2t\" (UniqueName: \"kubernetes.io/projected/9660f591-df30-4595-ad30-0b79d840f779-kube-api-access-d2v2t\") pod \"9660f591-df30-4595-ad30-0b79d840f779\" (UID: \"9660f591-df30-4595-ad30-0b79d840f779\") "
	Sep 18 19:52:43 addons-815929 kubelet[1202]: I0918 19:52:43.648740    1202 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9660f591-df30-4595-ad30-0b79d840f779-kube-api-access-d2v2t" (OuterVolumeSpecName: "kube-api-access-d2v2t") pod "9660f591-df30-4595-ad30-0b79d840f779" (UID: "9660f591-df30-4595-ad30-0b79d840f779"). InnerVolumeSpecName "kube-api-access-d2v2t". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 18 19:52:43 addons-815929 kubelet[1202]: I0918 19:52:43.747087    1202 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-d2v2t\" (UniqueName: \"kubernetes.io/projected/9660f591-df30-4595-ad30-0b79d840f779-kube-api-access-d2v2t\") on node \"addons-815929\" DevicePath \"\""
	Sep 18 19:52:44 addons-815929 kubelet[1202]: I0918 19:52:44.050896    1202 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9660f591-df30-4595-ad30-0b79d840f779" path="/var/lib/kubelet/pods/9660f591-df30-4595-ad30-0b79d840f779/volumes"
	Sep 18 19:52:46 addons-815929 kubelet[1202]: I0918 19:52:46.054377    1202 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39c73617-0364-4264-ac44-066443ccd53b" path="/var/lib/kubelet/pods/39c73617-0364-4264-ac44-066443ccd53b/volumes"
	Sep 18 19:52:46 addons-815929 kubelet[1202]: I0918 19:52:46.055029    1202 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a8926a3-3d5b-45e1-b400-6f82f29835e1" path="/var/lib/kubelet/pods/5a8926a3-3d5b-45e1-b400-6f82f29835e1/volumes"
	Sep 18 19:52:47 addons-815929 kubelet[1202]: I0918 19:52:47.777991    1202 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r25m6\" (UniqueName: \"kubernetes.io/projected/596a7d9b-9170-460e-989d-e89064bf965d-kube-api-access-r25m6\") pod \"596a7d9b-9170-460e-989d-e89064bf965d\" (UID: \"596a7d9b-9170-460e-989d-e89064bf965d\") "
	Sep 18 19:52:47 addons-815929 kubelet[1202]: I0918 19:52:47.778060    1202 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/596a7d9b-9170-460e-989d-e89064bf965d-webhook-cert\") pod \"596a7d9b-9170-460e-989d-e89064bf965d\" (UID: \"596a7d9b-9170-460e-989d-e89064bf965d\") "
	Sep 18 19:52:47 addons-815929 kubelet[1202]: I0918 19:52:47.780175    1202 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/596a7d9b-9170-460e-989d-e89064bf965d-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "596a7d9b-9170-460e-989d-e89064bf965d" (UID: "596a7d9b-9170-460e-989d-e89064bf965d"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Sep 18 19:52:47 addons-815929 kubelet[1202]: I0918 19:52:47.782066    1202 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/596a7d9b-9170-460e-989d-e89064bf965d-kube-api-access-r25m6" (OuterVolumeSpecName: "kube-api-access-r25m6") pod "596a7d9b-9170-460e-989d-e89064bf965d" (UID: "596a7d9b-9170-460e-989d-e89064bf965d"). InnerVolumeSpecName "kube-api-access-r25m6". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 18 19:52:47 addons-815929 kubelet[1202]: I0918 19:52:47.878532    1202 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-r25m6\" (UniqueName: \"kubernetes.io/projected/596a7d9b-9170-460e-989d-e89064bf965d-kube-api-access-r25m6\") on node \"addons-815929\" DevicePath \"\""
	Sep 18 19:52:47 addons-815929 kubelet[1202]: I0918 19:52:47.878568    1202 reconciler_common.go:288] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/596a7d9b-9170-460e-989d-e89064bf965d-webhook-cert\") on node \"addons-815929\" DevicePath \"\""
	Sep 18 19:52:48 addons-815929 kubelet[1202]: I0918 19:52:48.051189    1202 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="596a7d9b-9170-460e-989d-e89064bf965d" path="/var/lib/kubelet/pods/596a7d9b-9170-460e-989d-e89064bf965d/volumes"
	Sep 18 19:52:48 addons-815929 kubelet[1202]: I0918 19:52:48.586642    1202 scope.go:117] "RemoveContainer" containerID="62659a92289f5239862076bc0e6fc258e6797f13285157bf5e721db9a8560e68"
	Sep 18 19:52:48 addons-815929 kubelet[1202]: I0918 19:52:48.600500    1202 scope.go:117] "RemoveContainer" containerID="62659a92289f5239862076bc0e6fc258e6797f13285157bf5e721db9a8560e68"
	Sep 18 19:52:48 addons-815929 kubelet[1202]: E0918 19:52:48.601070    1202 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"62659a92289f5239862076bc0e6fc258e6797f13285157bf5e721db9a8560e68\": container with ID starting with 62659a92289f5239862076bc0e6fc258e6797f13285157bf5e721db9a8560e68 not found: ID does not exist" containerID="62659a92289f5239862076bc0e6fc258e6797f13285157bf5e721db9a8560e68"
	Sep 18 19:52:48 addons-815929 kubelet[1202]: I0918 19:52:48.601115    1202 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62659a92289f5239862076bc0e6fc258e6797f13285157bf5e721db9a8560e68"} err="failed to get container status \"62659a92289f5239862076bc0e6fc258e6797f13285157bf5e721db9a8560e68\": rpc error: code = NotFound desc = could not find container \"62659a92289f5239862076bc0e6fc258e6797f13285157bf5e721db9a8560e68\": container with ID starting with 62659a92289f5239862076bc0e6fc258e6797f13285157bf5e721db9a8560e68 not found: ID does not exist"
	Sep 18 19:52:49 addons-815929 kubelet[1202]: E0918 19:52:49.048148    1202 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="cb868ec9-73ea-446b-9a7e-aac3552bb3f6"
	Sep 18 19:52:52 addons-815929 kubelet[1202]: E0918 19:52:52.372250    1202 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726689172371908705,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579800,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 19:52:52 addons-815929 kubelet[1202]: E0918 19:52:52.372289    1202 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726689172371908705,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579800,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [3759671f1017e4877f68e8f02b9e88508a0b5b788503476fa6663f5b152d0fa6] <==
	I0918 19:39:45.052140       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0918 19:39:45.070541       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0918 19:39:45.070599       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0918 19:39:45.124575       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0918 19:39:45.124795       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-815929_cdd2de0e-7456-4024-9550-98c8060a35ab!
	I0918 19:39:45.133742       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ab4840eb-b79e-468b-af43-50c550ad69c5", APIVersion:"v1", ResourceVersion:"660", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-815929_cdd2de0e-7456-4024-9550-98c8060a35ab became leader
	I0918 19:39:45.237552       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-815929_cdd2de0e-7456-4024-9550-98c8060a35ab!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-815929 -n addons-815929
helpers_test.go:261: (dbg) Run:  kubectl --context addons-815929 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-815929 describe pod busybox
helpers_test.go:282: (dbg) kubectl --context addons-815929 describe pod busybox:

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-815929/192.168.39.158
	Start Time:       Wed, 18 Sep 2024 19:41:12 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.23
	IPs:
	  IP:  10.244.0.23
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kvbgq (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-kvbgq:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  11m                  default-scheduler  Successfully assigned default/busybox to addons-815929
	  Normal   Pulling    10m (x4 over 11m)    kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     10m (x4 over 11m)    kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
	  Warning  Failed     10m (x4 over 11m)    kubelet            Error: ErrImagePull
	  Warning  Failed     9m54s (x6 over 11m)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    92s (x43 over 11m)   kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (151.88s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (327.28s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 2.934554ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-fvm48" [20825ca5-3044-4221-84bc-6fc04d1038fd] Running
I0918 19:49:14.767187   14878 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0918 19:49:14.767208   14878 kapi.go:107] duration metric: took 24.529185ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.004221311s
addons_test.go:417: (dbg) Run:  kubectl --context addons-815929 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-815929 top pods -n kube-system: exit status 1 (65.583163ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-lr452, age: 9m43.814684151s

                                                
                                                
** /stderr **
I0918 19:49:20.816477   14878 retry.go:31] will retry after 2.420290528s: exit status 1
addons_test.go:417: (dbg) Run:  kubectl --context addons-815929 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-815929 top pods -n kube-system: exit status 1 (67.304345ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-lr452, age: 9m46.303087511s

                                                
                                                
** /stderr **
I0918 19:49:23.304919   14878 retry.go:31] will retry after 6.510714995s: exit status 1
addons_test.go:417: (dbg) Run:  kubectl --context addons-815929 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-815929 top pods -n kube-system: exit status 1 (63.682557ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-lr452, age: 9m52.877779337s

                                                
                                                
** /stderr **
I0918 19:49:29.879696   14878 retry.go:31] will retry after 9.051667667s: exit status 1
addons_test.go:417: (dbg) Run:  kubectl --context addons-815929 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-815929 top pods -n kube-system: exit status 1 (267.230375ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-lr452, age: 10m2.198295933s

                                                
                                                
** /stderr **
I0918 19:49:39.199766   14878 retry.go:31] will retry after 7.098225898s: exit status 1
addons_test.go:417: (dbg) Run:  kubectl --context addons-815929 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-815929 top pods -n kube-system: exit status 1 (64.642932ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-lr452, age: 10m9.361772188s

                                                
                                                
** /stderr **
I0918 19:49:46.363247   14878 retry.go:31] will retry after 13.2149131s: exit status 1
addons_test.go:417: (dbg) Run:  kubectl --context addons-815929 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-815929 top pods -n kube-system: exit status 1 (65.32615ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-lr452, age: 10m22.641944001s

                                                
                                                
** /stderr **
I0918 19:49:59.643744   14878 retry.go:31] will retry after 19.626641785s: exit status 1
addons_test.go:417: (dbg) Run:  kubectl --context addons-815929 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-815929 top pods -n kube-system: exit status 1 (72.884472ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-lr452, age: 10m42.342156149s

                                                
                                                
** /stderr **
I0918 19:50:19.343626   14878 retry.go:31] will retry after 22.032132312s: exit status 1
addons_test.go:417: (dbg) Run:  kubectl --context addons-815929 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-815929 top pods -n kube-system: exit status 1 (62.974899ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-lr452, age: 11m4.440973296s

                                                
                                                
** /stderr **
I0918 19:50:41.442785   14878 retry.go:31] will retry after 57.653076548s: exit status 1
addons_test.go:417: (dbg) Run:  kubectl --context addons-815929 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-815929 top pods -n kube-system: exit status 1 (67.691257ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-lr452, age: 12m2.163254951s

                                                
                                                
** /stderr **
I0918 19:51:39.165182   14878 retry.go:31] will retry after 54.872549906s: exit status 1
addons_test.go:417: (dbg) Run:  kubectl --context addons-815929 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-815929 top pods -n kube-system: exit status 1 (68.888173ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-lr452, age: 12m57.10574521s

                                                
                                                
** /stderr **
I0918 19:52:34.107896   14878 retry.go:31] will retry after 48.924098163s: exit status 1
addons_test.go:417: (dbg) Run:  kubectl --context addons-815929 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-815929 top pods -n kube-system: exit status 1 (61.499037ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-lr452, age: 13m46.092878783s

                                                
                                                
** /stderr **
I0918 19:53:23.094624   14878 retry.go:31] will retry after 43.586182818s: exit status 1
addons_test.go:417: (dbg) Run:  kubectl --context addons-815929 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-815929 top pods -n kube-system: exit status 1 (62.945652ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-lr452, age: 14m29.745647069s

                                                
                                                
** /stderr **
I0918 19:54:06.747267   14878 retry.go:31] will retry after 32.376507055s: exit status 1
addons_test.go:417: (dbg) Run:  kubectl --context addons-815929 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-815929 top pods -n kube-system: exit status 1 (66.336024ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-lr452, age: 15m2.190018058s

                                                
                                                
** /stderr **
addons_test.go:431: failed checking metric server: exit status 1
addons_test.go:434: (dbg) Run:  out/minikube-linux-amd64 -p addons-815929 addons disable metrics-server --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-815929 -n addons-815929
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-815929 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-815929 logs -n 25: (1.405614039s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-228031                                                                     | download-only-228031 | jenkins | v1.34.0 | 18 Sep 24 19:38 UTC | 18 Sep 24 19:38 UTC |
	| delete  | -p download-only-226542                                                                     | download-only-226542 | jenkins | v1.34.0 | 18 Sep 24 19:38 UTC | 18 Sep 24 19:38 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-930383 | jenkins | v1.34.0 | 18 Sep 24 19:38 UTC |                     |
	|         | binary-mirror-930383                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:32853                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-930383                                                                     | binary-mirror-930383 | jenkins | v1.34.0 | 18 Sep 24 19:38 UTC | 18 Sep 24 19:38 UTC |
	| addons  | enable dashboard -p                                                                         | addons-815929        | jenkins | v1.34.0 | 18 Sep 24 19:38 UTC |                     |
	|         | addons-815929                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-815929        | jenkins | v1.34.0 | 18 Sep 24 19:38 UTC |                     |
	|         | addons-815929                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-815929 --wait=true                                                                | addons-815929        | jenkins | v1.34.0 | 18 Sep 24 19:38 UTC | 18 Sep 24 19:41 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-815929 ssh cat                                                                       | addons-815929        | jenkins | v1.34.0 | 18 Sep 24 19:49 UTC | 18 Sep 24 19:49 UTC |
	|         | /opt/local-path-provisioner/pvc-640ef54b-981f-4e43-8493-c1fa2c048453_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-815929 addons disable                                                                | addons-815929        | jenkins | v1.34.0 | 18 Sep 24 19:49 UTC | 18 Sep 24 19:49 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-815929 addons disable                                                                | addons-815929        | jenkins | v1.34.0 | 18 Sep 24 19:49 UTC | 18 Sep 24 19:49 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-815929        | jenkins | v1.34.0 | 18 Sep 24 19:49 UTC | 18 Sep 24 19:49 UTC |
	|         | -p addons-815929                                                                            |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-815929        | jenkins | v1.34.0 | 18 Sep 24 19:49 UTC | 18 Sep 24 19:49 UTC |
	|         | addons-815929                                                                               |                      |         |         |                     |                     |
	| addons  | addons-815929 addons disable                                                                | addons-815929        | jenkins | v1.34.0 | 18 Sep 24 19:50 UTC | 18 Sep 24 19:50 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-815929        | jenkins | v1.34.0 | 18 Sep 24 19:50 UTC | 18 Sep 24 19:50 UTC |
	|         | addons-815929                                                                               |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-815929        | jenkins | v1.34.0 | 18 Sep 24 19:50 UTC | 18 Sep 24 19:50 UTC |
	|         | -p addons-815929                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-815929 addons                                                                        | addons-815929        | jenkins | v1.34.0 | 18 Sep 24 19:50 UTC | 18 Sep 24 19:50 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-815929 addons                                                                        | addons-815929        | jenkins | v1.34.0 | 18 Sep 24 19:50 UTC | 18 Sep 24 19:50 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-815929 ip                                                                            | addons-815929        | jenkins | v1.34.0 | 18 Sep 24 19:50 UTC | 18 Sep 24 19:50 UTC |
	| addons  | addons-815929 addons disable                                                                | addons-815929        | jenkins | v1.34.0 | 18 Sep 24 19:50 UTC | 18 Sep 24 19:50 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-815929 addons disable                                                                | addons-815929        | jenkins | v1.34.0 | 18 Sep 24 19:50 UTC | 18 Sep 24 19:50 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-815929 ssh curl -s                                                                   | addons-815929        | jenkins | v1.34.0 | 18 Sep 24 19:50 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| ip      | addons-815929 ip                                                                            | addons-815929        | jenkins | v1.34.0 | 18 Sep 24 19:52 UTC | 18 Sep 24 19:52 UTC |
	| addons  | addons-815929 addons disable                                                                | addons-815929        | jenkins | v1.34.0 | 18 Sep 24 19:52 UTC | 18 Sep 24 19:52 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-815929 addons disable                                                                | addons-815929        | jenkins | v1.34.0 | 18 Sep 24 19:52 UTC | 18 Sep 24 19:52 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-815929 addons                                                                        | addons-815929        | jenkins | v1.34.0 | 18 Sep 24 19:54 UTC | 18 Sep 24 19:54 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/18 19:38:53
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0918 19:38:53.118706   15635 out.go:345] Setting OutFile to fd 1 ...
	I0918 19:38:53.118965   15635 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 19:38:53.118975   15635 out.go:358] Setting ErrFile to fd 2...
	I0918 19:38:53.118980   15635 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 19:38:53.119217   15635 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19667-7671/.minikube/bin
	I0918 19:38:53.119878   15635 out.go:352] Setting JSON to false
	I0918 19:38:53.120737   15635 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1277,"bootTime":1726687056,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0918 19:38:53.120834   15635 start.go:139] virtualization: kvm guest
	I0918 19:38:53.123148   15635 out.go:177] * [addons-815929] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0918 19:38:53.124482   15635 notify.go:220] Checking for updates...
	I0918 19:38:53.124492   15635 out.go:177]   - MINIKUBE_LOCATION=19667
	I0918 19:38:53.125673   15635 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 19:38:53.126877   15635 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19667-7671/kubeconfig
	I0918 19:38:53.127987   15635 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19667-7671/.minikube
	I0918 19:38:53.129021   15635 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0918 19:38:53.130051   15635 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0918 19:38:53.131293   15635 driver.go:394] Setting default libvirt URI to qemu:///system
	I0918 19:38:53.163239   15635 out.go:177] * Using the kvm2 driver based on user configuration
	I0918 19:38:53.164302   15635 start.go:297] selected driver: kvm2
	I0918 19:38:53.164318   15635 start.go:901] validating driver "kvm2" against <nil>
	I0918 19:38:53.164342   15635 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 19:38:53.165066   15635 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 19:38:53.165151   15635 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19667-7671/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0918 19:38:53.179993   15635 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0918 19:38:53.180067   15635 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0918 19:38:53.180362   15635 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0918 19:38:53.180395   15635 cni.go:84] Creating CNI manager for ""
	I0918 19:38:53.180443   15635 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0918 19:38:53.180452   15635 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0918 19:38:53.180510   15635 start.go:340] cluster config:
	{Name:addons-815929 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-815929 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 19:38:53.180624   15635 iso.go:125] acquiring lock: {Name:mkbcf4d84f3091567a39802cd5e773cb10b8c545 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 19:38:53.182868   15635 out.go:177] * Starting "addons-815929" primary control-plane node in "addons-815929" cluster
	I0918 19:38:53.183982   15635 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0918 19:38:53.184039   15635 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0918 19:38:53.184052   15635 cache.go:56] Caching tarball of preloaded images
	I0918 19:38:53.184131   15635 preload.go:172] Found /home/jenkins/minikube-integration/19667-7671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0918 19:38:53.184144   15635 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0918 19:38:53.184489   15635 profile.go:143] Saving config to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/addons-815929/config.json ...
	I0918 19:38:53.184512   15635 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/addons-815929/config.json: {Name:mk126f196443338ecc21176132e0fd9e3cc4ae5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 19:38:53.184666   15635 start.go:360] acquireMachinesLock for addons-815929: {Name:mk1b0d68a0b7d9d4bc8204d5d81cb9eb77222526 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 19:38:53.184723   15635 start.go:364] duration metric: took 41.331µs to acquireMachinesLock for "addons-815929"
	I0918 19:38:53.184743   15635 start.go:93] Provisioning new machine with config: &{Name:addons-815929 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-815929 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0918 19:38:53.184805   15635 start.go:125] createHost starting for "" (driver="kvm2")
	I0918 19:38:53.186310   15635 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0918 19:38:53.186442   15635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 19:38:53.186488   15635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:38:53.200841   15635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32951
	I0918 19:38:53.201300   15635 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:38:53.201895   15635 main.go:141] libmachine: Using API Version  1
	I0918 19:38:53.201914   15635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:38:53.202258   15635 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:38:53.202436   15635 main.go:141] libmachine: (addons-815929) Calling .GetMachineName
	I0918 19:38:53.202591   15635 main.go:141] libmachine: (addons-815929) Calling .DriverName
	I0918 19:38:53.202765   15635 start.go:159] libmachine.API.Create for "addons-815929" (driver="kvm2")
	I0918 19:38:53.202793   15635 client.go:168] LocalClient.Create starting
	I0918 19:38:53.202832   15635 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem
	I0918 19:38:53.498664   15635 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem
	I0918 19:38:53.663477   15635 main.go:141] libmachine: Running pre-create checks...
	I0918 19:38:53.663499   15635 main.go:141] libmachine: (addons-815929) Calling .PreCreateCheck
	I0918 19:38:53.663965   15635 main.go:141] libmachine: (addons-815929) Calling .GetConfigRaw
	I0918 19:38:53.664477   15635 main.go:141] libmachine: Creating machine...
	I0918 19:38:53.664493   15635 main.go:141] libmachine: (addons-815929) Calling .Create
	I0918 19:38:53.664656   15635 main.go:141] libmachine: (addons-815929) Creating KVM machine...
	I0918 19:38:53.665882   15635 main.go:141] libmachine: (addons-815929) DBG | found existing default KVM network
	I0918 19:38:53.666727   15635 main.go:141] libmachine: (addons-815929) DBG | I0918 19:38:53.666575   15656 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002111f0}
	I0918 19:38:53.666778   15635 main.go:141] libmachine: (addons-815929) DBG | created network xml: 
	I0918 19:38:53.666798   15635 main.go:141] libmachine: (addons-815929) DBG | <network>
	I0918 19:38:53.666808   15635 main.go:141] libmachine: (addons-815929) DBG |   <name>mk-addons-815929</name>
	I0918 19:38:53.666813   15635 main.go:141] libmachine: (addons-815929) DBG |   <dns enable='no'/>
	I0918 19:38:53.666818   15635 main.go:141] libmachine: (addons-815929) DBG |   
	I0918 19:38:53.666825   15635 main.go:141] libmachine: (addons-815929) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0918 19:38:53.666831   15635 main.go:141] libmachine: (addons-815929) DBG |     <dhcp>
	I0918 19:38:53.666838   15635 main.go:141] libmachine: (addons-815929) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0918 19:38:53.666843   15635 main.go:141] libmachine: (addons-815929) DBG |     </dhcp>
	I0918 19:38:53.666848   15635 main.go:141] libmachine: (addons-815929) DBG |   </ip>
	I0918 19:38:53.666855   15635 main.go:141] libmachine: (addons-815929) DBG |   
	I0918 19:38:53.666859   15635 main.go:141] libmachine: (addons-815929) DBG | </network>
	I0918 19:38:53.666868   15635 main.go:141] libmachine: (addons-815929) DBG | 
	I0918 19:38:53.672175   15635 main.go:141] libmachine: (addons-815929) DBG | trying to create private KVM network mk-addons-815929 192.168.39.0/24...
	I0918 19:38:53.742842   15635 main.go:141] libmachine: (addons-815929) Setting up store path in /home/jenkins/minikube-integration/19667-7671/.minikube/machines/addons-815929 ...
	I0918 19:38:53.742874   15635 main.go:141] libmachine: (addons-815929) DBG | private KVM network mk-addons-815929 192.168.39.0/24 created
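
Editor's note: for readers unfamiliar with the KVM driver, creating this private network by hand would look roughly like the sketch below, which saves the XML logged above and feeds it to the virsh CLI. The file name mk-addons-815929.xml is invented for the example, and libmachine actually talks to libvirt through its API rather than shelling out, so this is an illustration only.

    package main

    import (
        "log"
        "os"
        "os/exec"
    )

    // The network definition printed in the DBG lines above.
    const networkXML = `<network>
      <name>mk-addons-815929</name>
      <dns enable='no'/>
      <ip address='192.168.39.1' netmask='255.255.255.0'>
        <dhcp>
          <range start='192.168.39.2' end='192.168.39.253'/>
        </dhcp>
      </ip>
    </network>`

    func main() {
        // Write the XML to a (hypothetical) file...
        if err := os.WriteFile("mk-addons-815929.xml", []byte(networkXML), 0o644); err != nil {
            log.Fatal(err)
        }
        // ...then define, start and autostart it, which is what "trying to create
        // private KVM network" ultimately amounts to via libvirt.
        for _, args := range [][]string{
            {"net-define", "mk-addons-815929.xml"},
            {"net-start", "mk-addons-815929"},
            {"net-autostart", "mk-addons-815929"},
        } {
            if out, err := exec.Command("virsh", args...).CombinedOutput(); err != nil {
                log.Fatalf("virsh %v: %v\n%s", args, err, out)
            }
        }
    }
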
	I0918 19:38:53.742891   15635 main.go:141] libmachine: (addons-815929) Building disk image from file:///home/jenkins/minikube-integration/19667-7671/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso
	I0918 19:38:53.742925   15635 main.go:141] libmachine: (addons-815929) DBG | I0918 19:38:53.742793   15656 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19667-7671/.minikube
	I0918 19:38:53.742951   15635 main.go:141] libmachine: (addons-815929) Downloading /home/jenkins/minikube-integration/19667-7671/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19667-7671/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso...
	I0918 19:38:54.002785   15635 main.go:141] libmachine: (addons-815929) DBG | I0918 19:38:54.002609   15656 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/addons-815929/id_rsa...
	I0918 19:38:54.238348   15635 main.go:141] libmachine: (addons-815929) DBG | I0918 19:38:54.238178   15656 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/addons-815929/addons-815929.rawdisk...
	I0918 19:38:54.238378   15635 main.go:141] libmachine: (addons-815929) DBG | Writing magic tar header
	I0918 19:38:54.238388   15635 main.go:141] libmachine: (addons-815929) DBG | Writing SSH key tar header
	I0918 19:38:54.238395   15635 main.go:141] libmachine: (addons-815929) DBG | I0918 19:38:54.238295   15656 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19667-7671/.minikube/machines/addons-815929 ...
	I0918 19:38:54.238406   15635 main.go:141] libmachine: (addons-815929) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/addons-815929
	I0918 19:38:54.238460   15635 main.go:141] libmachine: (addons-815929) Setting executable bit set on /home/jenkins/minikube-integration/19667-7671/.minikube/machines/addons-815929 (perms=drwx------)
	I0918 19:38:54.238483   15635 main.go:141] libmachine: (addons-815929) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19667-7671/.minikube/machines
	I0918 19:38:54.238491   15635 main.go:141] libmachine: (addons-815929) Setting executable bit set on /home/jenkins/minikube-integration/19667-7671/.minikube/machines (perms=drwxr-xr-x)
	I0918 19:38:54.238513   15635 main.go:141] libmachine: (addons-815929) Setting executable bit set on /home/jenkins/minikube-integration/19667-7671/.minikube (perms=drwxr-xr-x)
	I0918 19:38:54.238523   15635 main.go:141] libmachine: (addons-815929) Setting executable bit set on /home/jenkins/minikube-integration/19667-7671 (perms=drwxrwxr-x)
	I0918 19:38:54.238534   15635 main.go:141] libmachine: (addons-815929) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19667-7671/.minikube
	I0918 19:38:54.238548   15635 main.go:141] libmachine: (addons-815929) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19667-7671
	I0918 19:38:54.238559   15635 main.go:141] libmachine: (addons-815929) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0918 19:38:54.238565   15635 main.go:141] libmachine: (addons-815929) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0918 19:38:54.238571   15635 main.go:141] libmachine: (addons-815929) DBG | Checking permissions on dir: /home/jenkins
	I0918 19:38:54.238576   15635 main.go:141] libmachine: (addons-815929) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0918 19:38:54.238581   15635 main.go:141] libmachine: (addons-815929) DBG | Checking permissions on dir: /home
	I0918 19:38:54.238588   15635 main.go:141] libmachine: (addons-815929) DBG | Skipping /home - not owner
	I0918 19:38:54.238597   15635 main.go:141] libmachine: (addons-815929) Creating domain...
	I0918 19:38:54.239507   15635 main.go:141] libmachine: (addons-815929) define libvirt domain using xml: 
	I0918 19:38:54.239529   15635 main.go:141] libmachine: (addons-815929) <domain type='kvm'>
	I0918 19:38:54.239536   15635 main.go:141] libmachine: (addons-815929)   <name>addons-815929</name>
	I0918 19:38:54.239543   15635 main.go:141] libmachine: (addons-815929)   <memory unit='MiB'>4000</memory>
	I0918 19:38:54.239549   15635 main.go:141] libmachine: (addons-815929)   <vcpu>2</vcpu>
	I0918 19:38:54.239553   15635 main.go:141] libmachine: (addons-815929)   <features>
	I0918 19:38:54.239557   15635 main.go:141] libmachine: (addons-815929)     <acpi/>
	I0918 19:38:54.239561   15635 main.go:141] libmachine: (addons-815929)     <apic/>
	I0918 19:38:54.239566   15635 main.go:141] libmachine: (addons-815929)     <pae/>
	I0918 19:38:54.239569   15635 main.go:141] libmachine: (addons-815929)     
	I0918 19:38:54.239574   15635 main.go:141] libmachine: (addons-815929)   </features>
	I0918 19:38:54.239581   15635 main.go:141] libmachine: (addons-815929)   <cpu mode='host-passthrough'>
	I0918 19:38:54.239588   15635 main.go:141] libmachine: (addons-815929)   
	I0918 19:38:54.239596   15635 main.go:141] libmachine: (addons-815929)   </cpu>
	I0918 19:38:54.239608   15635 main.go:141] libmachine: (addons-815929)   <os>
	I0918 19:38:54.239618   15635 main.go:141] libmachine: (addons-815929)     <type>hvm</type>
	I0918 19:38:54.239629   15635 main.go:141] libmachine: (addons-815929)     <boot dev='cdrom'/>
	I0918 19:38:54.239633   15635 main.go:141] libmachine: (addons-815929)     <boot dev='hd'/>
	I0918 19:38:54.239640   15635 main.go:141] libmachine: (addons-815929)     <bootmenu enable='no'/>
	I0918 19:38:54.239643   15635 main.go:141] libmachine: (addons-815929)   </os>
	I0918 19:38:54.239648   15635 main.go:141] libmachine: (addons-815929)   <devices>
	I0918 19:38:54.239652   15635 main.go:141] libmachine: (addons-815929)     <disk type='file' device='cdrom'>
	I0918 19:38:54.239672   15635 main.go:141] libmachine: (addons-815929)       <source file='/home/jenkins/minikube-integration/19667-7671/.minikube/machines/addons-815929/boot2docker.iso'/>
	I0918 19:38:54.239681   15635 main.go:141] libmachine: (addons-815929)       <target dev='hdc' bus='scsi'/>
	I0918 19:38:54.239689   15635 main.go:141] libmachine: (addons-815929)       <readonly/>
	I0918 19:38:54.239699   15635 main.go:141] libmachine: (addons-815929)     </disk>
	I0918 19:38:54.239708   15635 main.go:141] libmachine: (addons-815929)     <disk type='file' device='disk'>
	I0918 19:38:54.239717   15635 main.go:141] libmachine: (addons-815929)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0918 19:38:54.239726   15635 main.go:141] libmachine: (addons-815929)       <source file='/home/jenkins/minikube-integration/19667-7671/.minikube/machines/addons-815929/addons-815929.rawdisk'/>
	I0918 19:38:54.239739   15635 main.go:141] libmachine: (addons-815929)       <target dev='hda' bus='virtio'/>
	I0918 19:38:54.239762   15635 main.go:141] libmachine: (addons-815929)     </disk>
	I0918 19:38:54.239780   15635 main.go:141] libmachine: (addons-815929)     <interface type='network'>
	I0918 19:38:54.239787   15635 main.go:141] libmachine: (addons-815929)       <source network='mk-addons-815929'/>
	I0918 19:38:54.239799   15635 main.go:141] libmachine: (addons-815929)       <model type='virtio'/>
	I0918 19:38:54.239804   15635 main.go:141] libmachine: (addons-815929)     </interface>
	I0918 19:38:54.239809   15635 main.go:141] libmachine: (addons-815929)     <interface type='network'>
	I0918 19:38:54.239815   15635 main.go:141] libmachine: (addons-815929)       <source network='default'/>
	I0918 19:38:54.239819   15635 main.go:141] libmachine: (addons-815929)       <model type='virtio'/>
	I0918 19:38:54.239824   15635 main.go:141] libmachine: (addons-815929)     </interface>
	I0918 19:38:54.239832   15635 main.go:141] libmachine: (addons-815929)     <serial type='pty'>
	I0918 19:38:54.239837   15635 main.go:141] libmachine: (addons-815929)       <target port='0'/>
	I0918 19:38:54.239844   15635 main.go:141] libmachine: (addons-815929)     </serial>
	I0918 19:38:54.239849   15635 main.go:141] libmachine: (addons-815929)     <console type='pty'>
	I0918 19:38:54.239868   15635 main.go:141] libmachine: (addons-815929)       <target type='serial' port='0'/>
	I0918 19:38:54.239879   15635 main.go:141] libmachine: (addons-815929)     </console>
	I0918 19:38:54.239883   15635 main.go:141] libmachine: (addons-815929)     <rng model='virtio'>
	I0918 19:38:54.239889   15635 main.go:141] libmachine: (addons-815929)       <backend model='random'>/dev/random</backend>
	I0918 19:38:54.239893   15635 main.go:141] libmachine: (addons-815929)     </rng>
	I0918 19:38:54.239897   15635 main.go:141] libmachine: (addons-815929)     
	I0918 19:38:54.239901   15635 main.go:141] libmachine: (addons-815929)     
	I0918 19:38:54.239913   15635 main.go:141] libmachine: (addons-815929)   </devices>
	I0918 19:38:54.239925   15635 main.go:141] libmachine: (addons-815929) </domain>
	I0918 19:38:54.239934   15635 main.go:141] libmachine: (addons-815929) 
	I0918 19:38:54.245827   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:cb:c3:cb in network default
	I0918 19:38:54.246274   15635 main.go:141] libmachine: (addons-815929) Ensuring networks are active...
	I0918 19:38:54.246289   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:38:54.246951   15635 main.go:141] libmachine: (addons-815929) Ensuring network default is active
	I0918 19:38:54.247192   15635 main.go:141] libmachine: (addons-815929) Ensuring network mk-addons-815929 is active
	I0918 19:38:54.247672   15635 main.go:141] libmachine: (addons-815929) Getting domain xml...
	I0918 19:38:54.248278   15635 main.go:141] libmachine: (addons-815929) Creating domain...
	I0918 19:38:55.697959   15635 main.go:141] libmachine: (addons-815929) Waiting to get IP...
	I0918 19:38:55.698757   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:38:55.699235   15635 main.go:141] libmachine: (addons-815929) DBG | unable to find current IP address of domain addons-815929 in network mk-addons-815929
	I0918 19:38:55.699284   15635 main.go:141] libmachine: (addons-815929) DBG | I0918 19:38:55.699220   15656 retry.go:31] will retry after 240.136101ms: waiting for machine to come up
	I0918 19:38:55.940564   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:38:55.941063   15635 main.go:141] libmachine: (addons-815929) DBG | unable to find current IP address of domain addons-815929 in network mk-addons-815929
	I0918 19:38:55.941095   15635 main.go:141] libmachine: (addons-815929) DBG | I0918 19:38:55.941001   15656 retry.go:31] will retry after 357.629453ms: waiting for machine to come up
	I0918 19:38:56.300779   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:38:56.301261   15635 main.go:141] libmachine: (addons-815929) DBG | unable to find current IP address of domain addons-815929 in network mk-addons-815929
	I0918 19:38:56.301288   15635 main.go:141] libmachine: (addons-815929) DBG | I0918 19:38:56.301210   15656 retry.go:31] will retry after 307.786585ms: waiting for machine to come up
	I0918 19:38:56.610678   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:38:56.611160   15635 main.go:141] libmachine: (addons-815929) DBG | unable to find current IP address of domain addons-815929 in network mk-addons-815929
	I0918 19:38:56.611191   15635 main.go:141] libmachine: (addons-815929) DBG | I0918 19:38:56.611111   15656 retry.go:31] will retry after 517.569687ms: waiting for machine to come up
	I0918 19:38:57.129855   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:38:57.130252   15635 main.go:141] libmachine: (addons-815929) DBG | unable to find current IP address of domain addons-815929 in network mk-addons-815929
	I0918 19:38:57.130293   15635 main.go:141] libmachine: (addons-815929) DBG | I0918 19:38:57.130200   15656 retry.go:31] will retry after 494.799445ms: waiting for machine to come up
	I0918 19:38:57.626875   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:38:57.627350   15635 main.go:141] libmachine: (addons-815929) DBG | unable to find current IP address of domain addons-815929 in network mk-addons-815929
	I0918 19:38:57.627378   15635 main.go:141] libmachine: (addons-815929) DBG | I0918 19:38:57.627307   15656 retry.go:31] will retry after 626.236714ms: waiting for machine to come up
	I0918 19:38:58.255770   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:38:58.256298   15635 main.go:141] libmachine: (addons-815929) DBG | unable to find current IP address of domain addons-815929 in network mk-addons-815929
	I0918 19:38:58.256317   15635 main.go:141] libmachine: (addons-815929) DBG | I0918 19:38:58.256214   15656 retry.go:31] will retry after 826.525241ms: waiting for machine to come up
	I0918 19:38:59.083830   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:38:59.084379   15635 main.go:141] libmachine: (addons-815929) DBG | unable to find current IP address of domain addons-815929 in network mk-addons-815929
	I0918 19:38:59.084413   15635 main.go:141] libmachine: (addons-815929) DBG | I0918 19:38:59.084316   15656 retry.go:31] will retry after 1.302088375s: waiting for machine to come up
	I0918 19:39:00.388874   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:00.389329   15635 main.go:141] libmachine: (addons-815929) DBG | unable to find current IP address of domain addons-815929 in network mk-addons-815929
	I0918 19:39:00.389357   15635 main.go:141] libmachine: (addons-815929) DBG | I0918 19:39:00.389259   15656 retry.go:31] will retry after 1.82403913s: waiting for machine to come up
	I0918 19:39:02.216192   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:02.216654   15635 main.go:141] libmachine: (addons-815929) DBG | unable to find current IP address of domain addons-815929 in network mk-addons-815929
	I0918 19:39:02.216681   15635 main.go:141] libmachine: (addons-815929) DBG | I0918 19:39:02.216609   15656 retry.go:31] will retry after 2.008231355s: waiting for machine to come up
	I0918 19:39:04.226837   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:04.227248   15635 main.go:141] libmachine: (addons-815929) DBG | unable to find current IP address of domain addons-815929 in network mk-addons-815929
	I0918 19:39:04.227278   15635 main.go:141] libmachine: (addons-815929) DBG | I0918 19:39:04.227201   15656 retry.go:31] will retry after 2.836403576s: waiting for machine to come up
	I0918 19:39:07.065332   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:07.065713   15635 main.go:141] libmachine: (addons-815929) DBG | unable to find current IP address of domain addons-815929 in network mk-addons-815929
	I0918 19:39:07.065748   15635 main.go:141] libmachine: (addons-815929) DBG | I0918 19:39:07.065691   15656 retry.go:31] will retry after 3.279472186s: waiting for machine to come up
	I0918 19:39:10.348133   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:10.348607   15635 main.go:141] libmachine: (addons-815929) DBG | unable to find current IP address of domain addons-815929 in network mk-addons-815929
	I0918 19:39:10.348632   15635 main.go:141] libmachine: (addons-815929) DBG | I0918 19:39:10.348560   15656 retry.go:31] will retry after 3.871116508s: waiting for machine to come up
	I0918 19:39:14.220928   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:14.221295   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has current primary IP address 192.168.39.158 and MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:14.221321   15635 main.go:141] libmachine: (addons-815929) Found IP for machine: 192.168.39.158
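
Editor's note: the string of "will retry after …" lines above is a backoff loop that keeps polling the mk-addons-815929 network's DHCP leases for the domain's MAC until an address appears. A minimal sketch of that idea, polling with the virsh CLI instead of the libvirt API minikube really uses (waitForIP and its parameters are invented for the example; the MAC and network name are taken from the log):

    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "strings"
        "time"
    )

    // waitForIP polls `virsh net-dhcp-leases` until a lease for the given MAC
    // shows up, roughly mirroring the growing retry intervals seen in the log.
    func waitForIP(network, mac string, timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        wait := 250 * time.Millisecond
        for time.Now().Before(deadline) {
            out, err := exec.Command("virsh", "net-dhcp-leases", network).CombinedOutput()
            if err == nil {
                for _, line := range strings.Split(string(out), "\n") {
                    if !strings.Contains(line, mac) {
                        continue
                    }
                    for _, field := range strings.Fields(line) {
                        if strings.Contains(field, "/") { // e.g. 192.168.39.158/24
                            return strings.SplitN(field, "/", 2)[0], nil
                        }
                    }
                }
            }
            log.Printf("no lease for %s yet, retrying in %s", mac, wait)
            time.Sleep(wait)
            if wait < 4*time.Second {
                wait *= 2 // back off, as retry.go does above
            }
        }
        return "", fmt.Errorf("timed out waiting for an IP on %s", network)
    }

    func main() {
        ip, err := waitForIP("mk-addons-815929", "52:54:00:11:b1:d6", 5*time.Minute)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("found IP:", ip)
    }
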
	I0918 19:39:14.221331   15635 main.go:141] libmachine: (addons-815929) Reserving static IP address...
	I0918 19:39:14.221782   15635 main.go:141] libmachine: (addons-815929) DBG | unable to find host DHCP lease matching {name: "addons-815929", mac: "52:54:00:11:b1:d6", ip: "192.168.39.158"} in network mk-addons-815929
	I0918 19:39:14.297555   15635 main.go:141] libmachine: (addons-815929) Reserved static IP address: 192.168.39.158
	I0918 19:39:14.297592   15635 main.go:141] libmachine: (addons-815929) DBG | Getting to WaitForSSH function...
	I0918 19:39:14.297601   15635 main.go:141] libmachine: (addons-815929) Waiting for SSH to be available...
	I0918 19:39:14.300410   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:14.300839   15635 main.go:141] libmachine: (addons-815929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b1:d6", ip: ""} in network mk-addons-815929: {Iface:virbr1 ExpiryTime:2024-09-18 20:39:08 +0000 UTC Type:0 Mac:52:54:00:11:b1:d6 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:minikube Clientid:01:52:54:00:11:b1:d6}
	I0918 19:39:14.300870   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined IP address 192.168.39.158 and MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:14.301080   15635 main.go:141] libmachine: (addons-815929) DBG | Using SSH client type: external
	I0918 19:39:14.301103   15635 main.go:141] libmachine: (addons-815929) DBG | Using SSH private key: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/addons-815929/id_rsa (-rw-------)
	I0918 19:39:14.301133   15635 main.go:141] libmachine: (addons-815929) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.158 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19667-7671/.minikube/machines/addons-815929/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0918 19:39:14.301145   15635 main.go:141] libmachine: (addons-815929) DBG | About to run SSH command:
	I0918 19:39:14.301158   15635 main.go:141] libmachine: (addons-815929) DBG | exit 0
	I0918 19:39:14.432076   15635 main.go:141] libmachine: (addons-815929) DBG | SSH cmd err, output: <nil>: 
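
Editor's note: WaitForSSH simply runs `exit 0` through an external ssh client with the options shown in the DBG line above and loops until the command succeeds. A self-contained sketch under the same assumptions (sshReady is a name made up here; the IP, key path and ssh flags are copied from the log):

    package main

    import (
        "log"
        "os/exec"
        "time"
    )

    // sshReady returns true once `exit 0` succeeds over SSH, using the same
    // non-interactive options that appear in the log's external-ssh command line.
    func sshReady(ip, keyPath string) bool {
        args := []string{
            "-F", "/dev/null",
            "-o", "ConnectionAttempts=3",
            "-o", "ConnectTimeout=10",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "PasswordAuthentication=no",
            "-o", "IdentitiesOnly=yes",
            "-i", keyPath,
            "-p", "22",
            "docker@" + ip,
            "exit 0",
        }
        return exec.Command("ssh", args...).Run() == nil
    }

    func main() {
        key := "/home/jenkins/minikube-integration/19667-7671/.minikube/machines/addons-815929/id_rsa"
        for !sshReady("192.168.39.158", key) {
            log.Println("SSH not ready yet, retrying...")
            time.Sleep(2 * time.Second)
        }
        log.Println("SSH is available")
    }
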
	I0918 19:39:14.432351   15635 main.go:141] libmachine: (addons-815929) KVM machine creation complete!
	I0918 19:39:14.432733   15635 main.go:141] libmachine: (addons-815929) Calling .GetConfigRaw
	I0918 19:39:14.433533   15635 main.go:141] libmachine: (addons-815929) Calling .DriverName
	I0918 19:39:14.433729   15635 main.go:141] libmachine: (addons-815929) Calling .DriverName
	I0918 19:39:14.433919   15635 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0918 19:39:14.433937   15635 main.go:141] libmachine: (addons-815929) Calling .GetState
	I0918 19:39:14.435144   15635 main.go:141] libmachine: Detecting operating system of created instance...
	I0918 19:39:14.435157   15635 main.go:141] libmachine: Waiting for SSH to be available...
	I0918 19:39:14.435162   15635 main.go:141] libmachine: Getting to WaitForSSH function...
	I0918 19:39:14.435167   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHHostname
	I0918 19:39:14.437837   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:14.438147   15635 main.go:141] libmachine: (addons-815929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b1:d6", ip: ""} in network mk-addons-815929: {Iface:virbr1 ExpiryTime:2024-09-18 20:39:08 +0000 UTC Type:0 Mac:52:54:00:11:b1:d6 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-815929 Clientid:01:52:54:00:11:b1:d6}
	I0918 19:39:14.438173   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined IP address 192.168.39.158 and MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:14.438353   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHPort
	I0918 19:39:14.438525   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHKeyPath
	I0918 19:39:14.438702   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHKeyPath
	I0918 19:39:14.438842   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHUsername
	I0918 19:39:14.439003   15635 main.go:141] libmachine: Using SSH client type: native
	I0918 19:39:14.439223   15635 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.158 22 <nil> <nil>}
	I0918 19:39:14.439238   15635 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0918 19:39:14.543283   15635 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0918 19:39:14.543308   15635 main.go:141] libmachine: Detecting the provisioner...
	I0918 19:39:14.543317   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHHostname
	I0918 19:39:14.545882   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:14.546221   15635 main.go:141] libmachine: (addons-815929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b1:d6", ip: ""} in network mk-addons-815929: {Iface:virbr1 ExpiryTime:2024-09-18 20:39:08 +0000 UTC Type:0 Mac:52:54:00:11:b1:d6 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-815929 Clientid:01:52:54:00:11:b1:d6}
	I0918 19:39:14.546253   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined IP address 192.168.39.158 and MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:14.546395   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHPort
	I0918 19:39:14.546623   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHKeyPath
	I0918 19:39:14.546775   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHKeyPath
	I0918 19:39:14.546892   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHUsername
	I0918 19:39:14.547035   15635 main.go:141] libmachine: Using SSH client type: native
	I0918 19:39:14.547232   15635 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.158 22 <nil> <nil>}
	I0918 19:39:14.547245   15635 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0918 19:39:14.652809   15635 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0918 19:39:14.652895   15635 main.go:141] libmachine: found compatible host: buildroot
	I0918 19:39:14.652905   15635 main.go:141] libmachine: Provisioning with buildroot...
	I0918 19:39:14.652912   15635 main.go:141] libmachine: (addons-815929) Calling .GetMachineName
	I0918 19:39:14.653238   15635 buildroot.go:166] provisioning hostname "addons-815929"
	I0918 19:39:14.653269   15635 main.go:141] libmachine: (addons-815929) Calling .GetMachineName
	I0918 19:39:14.653524   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHHostname
	I0918 19:39:14.656525   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:14.656903   15635 main.go:141] libmachine: (addons-815929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b1:d6", ip: ""} in network mk-addons-815929: {Iface:virbr1 ExpiryTime:2024-09-18 20:39:08 +0000 UTC Type:0 Mac:52:54:00:11:b1:d6 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-815929 Clientid:01:52:54:00:11:b1:d6}
	I0918 19:39:14.656925   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined IP address 192.168.39.158 and MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:14.657113   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHPort
	I0918 19:39:14.657313   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHKeyPath
	I0918 19:39:14.657465   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHKeyPath
	I0918 19:39:14.657637   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHUsername
	I0918 19:39:14.657763   15635 main.go:141] libmachine: Using SSH client type: native
	I0918 19:39:14.657923   15635 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.158 22 <nil> <nil>}
	I0918 19:39:14.657933   15635 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-815929 && echo "addons-815929" | sudo tee /etc/hostname
	I0918 19:39:14.778145   15635 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-815929
	
	I0918 19:39:14.778168   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHHostname
	I0918 19:39:14.782280   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:14.782681   15635 main.go:141] libmachine: (addons-815929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b1:d6", ip: ""} in network mk-addons-815929: {Iface:virbr1 ExpiryTime:2024-09-18 20:39:08 +0000 UTC Type:0 Mac:52:54:00:11:b1:d6 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-815929 Clientid:01:52:54:00:11:b1:d6}
	I0918 19:39:14.782707   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined IP address 192.168.39.158 and MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:14.782911   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHPort
	I0918 19:39:14.783128   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHKeyPath
	I0918 19:39:14.783294   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHKeyPath
	I0918 19:39:14.783416   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHUsername
	I0918 19:39:14.783559   15635 main.go:141] libmachine: Using SSH client type: native
	I0918 19:39:14.783758   15635 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.158 22 <nil> <nil>}
	I0918 19:39:14.783782   15635 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-815929' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-815929/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-815929' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0918 19:39:14.896628   15635 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0918 19:39:14.896658   15635 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19667-7671/.minikube CaCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19667-7671/.minikube}
	I0918 19:39:14.896682   15635 buildroot.go:174] setting up certificates
	I0918 19:39:14.896700   15635 provision.go:84] configureAuth start
	I0918 19:39:14.896715   15635 main.go:141] libmachine: (addons-815929) Calling .GetMachineName
	I0918 19:39:14.896993   15635 main.go:141] libmachine: (addons-815929) Calling .GetIP
	I0918 19:39:14.899455   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:14.899815   15635 main.go:141] libmachine: (addons-815929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b1:d6", ip: ""} in network mk-addons-815929: {Iface:virbr1 ExpiryTime:2024-09-18 20:39:08 +0000 UTC Type:0 Mac:52:54:00:11:b1:d6 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-815929 Clientid:01:52:54:00:11:b1:d6}
	I0918 19:39:14.899848   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined IP address 192.168.39.158 and MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:14.900060   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHHostname
	I0918 19:39:14.902022   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:14.902265   15635 main.go:141] libmachine: (addons-815929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b1:d6", ip: ""} in network mk-addons-815929: {Iface:virbr1 ExpiryTime:2024-09-18 20:39:08 +0000 UTC Type:0 Mac:52:54:00:11:b1:d6 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-815929 Clientid:01:52:54:00:11:b1:d6}
	I0918 19:39:14.902293   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined IP address 192.168.39.158 and MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:14.902392   15635 provision.go:143] copyHostCerts
	I0918 19:39:14.902479   15635 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem (1082 bytes)
	I0918 19:39:14.902600   15635 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem (1123 bytes)
	I0918 19:39:14.902671   15635 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem (1679 bytes)
	I0918 19:39:14.902724   15635 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem org=jenkins.addons-815929 san=[127.0.0.1 192.168.39.158 addons-815929 localhost minikube]
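
Editor's note: the server certificate generated at this step carries the SANs listed in the log (127.0.0.1, 192.168.39.158, addons-815929, localhost, minikube) and is signed by the minikube CA. The sketch below is a simplified, self-signed variant with the same SANs using Go's crypto/x509; the real provisioner signs with ca.pem/ca-key.pem and writes server.pem and server-key.pem, so treat this purely as an illustration.

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "log"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            log.Fatal(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.addons-815929"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration:26280h0m0s
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"addons-815929", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.158")},
        }
        // Self-signed for brevity; the real code signs with the cluster CA instead.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            log.Fatal(err)
        }
        _ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
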
	I0918 19:39:15.027079   15635 provision.go:177] copyRemoteCerts
	I0918 19:39:15.027139   15635 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0918 19:39:15.027161   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHHostname
	I0918 19:39:15.029651   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:15.029950   15635 main.go:141] libmachine: (addons-815929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b1:d6", ip: ""} in network mk-addons-815929: {Iface:virbr1 ExpiryTime:2024-09-18 20:39:08 +0000 UTC Type:0 Mac:52:54:00:11:b1:d6 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-815929 Clientid:01:52:54:00:11:b1:d6}
	I0918 19:39:15.029974   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined IP address 192.168.39.158 and MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:15.030191   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHPort
	I0918 19:39:15.030381   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHKeyPath
	I0918 19:39:15.030555   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHUsername
	I0918 19:39:15.030715   15635 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/addons-815929/id_rsa Username:docker}
	I0918 19:39:15.113743   15635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0918 19:39:15.137366   15635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0918 19:39:15.160840   15635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0918 19:39:15.184268   15635 provision.go:87] duration metric: took 287.554696ms to configureAuth
	I0918 19:39:15.184296   15635 buildroot.go:189] setting minikube options for container-runtime
	I0918 19:39:15.184488   15635 config.go:182] Loaded profile config "addons-815929": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0918 19:39:15.184570   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHHostname
	I0918 19:39:15.187055   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:15.187394   15635 main.go:141] libmachine: (addons-815929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b1:d6", ip: ""} in network mk-addons-815929: {Iface:virbr1 ExpiryTime:2024-09-18 20:39:08 +0000 UTC Type:0 Mac:52:54:00:11:b1:d6 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-815929 Clientid:01:52:54:00:11:b1:d6}
	I0918 19:39:15.187422   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined IP address 192.168.39.158 and MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:15.187614   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHPort
	I0918 19:39:15.187812   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHKeyPath
	I0918 19:39:15.187967   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHKeyPath
	I0918 19:39:15.188117   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHUsername
	I0918 19:39:15.188300   15635 main.go:141] libmachine: Using SSH client type: native
	I0918 19:39:15.188467   15635 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.158 22 <nil> <nil>}
	I0918 19:39:15.188480   15635 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0918 19:39:15.422203   15635 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0918 19:39:15.422228   15635 main.go:141] libmachine: Checking connection to Docker...
	I0918 19:39:15.422236   15635 main.go:141] libmachine: (addons-815929) Calling .GetURL
	I0918 19:39:15.423388   15635 main.go:141] libmachine: (addons-815929) DBG | Using libvirt version 6000000
	I0918 19:39:15.425708   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:15.426166   15635 main.go:141] libmachine: (addons-815929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b1:d6", ip: ""} in network mk-addons-815929: {Iface:virbr1 ExpiryTime:2024-09-18 20:39:08 +0000 UTC Type:0 Mac:52:54:00:11:b1:d6 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-815929 Clientid:01:52:54:00:11:b1:d6}
	I0918 19:39:15.426200   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined IP address 192.168.39.158 and MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:15.426400   15635 main.go:141] libmachine: Docker is up and running!
	I0918 19:39:15.426415   15635 main.go:141] libmachine: Reticulating splines...
	I0918 19:39:15.426421   15635 client.go:171] duration metric: took 22.223621675s to LocalClient.Create
	I0918 19:39:15.426449   15635 start.go:167] duration metric: took 22.22368243s to libmachine.API.Create "addons-815929"
	I0918 19:39:15.426462   15635 start.go:293] postStartSetup for "addons-815929" (driver="kvm2")
	I0918 19:39:15.426475   15635 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0918 19:39:15.426497   15635 main.go:141] libmachine: (addons-815929) Calling .DriverName
	I0918 19:39:15.426717   15635 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0918 19:39:15.426747   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHHostname
	I0918 19:39:15.429165   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:15.429467   15635 main.go:141] libmachine: (addons-815929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b1:d6", ip: ""} in network mk-addons-815929: {Iface:virbr1 ExpiryTime:2024-09-18 20:39:08 +0000 UTC Type:0 Mac:52:54:00:11:b1:d6 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-815929 Clientid:01:52:54:00:11:b1:d6}
	I0918 19:39:15.429493   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined IP address 192.168.39.158 and MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:15.429654   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHPort
	I0918 19:39:15.429831   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHKeyPath
	I0918 19:39:15.429969   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHUsername
	I0918 19:39:15.430118   15635 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/addons-815929/id_rsa Username:docker}
	I0918 19:39:15.514784   15635 ssh_runner.go:195] Run: cat /etc/os-release
	I0918 19:39:15.519847   15635 info.go:137] Remote host: Buildroot 2023.02.9
	I0918 19:39:15.519878   15635 filesync.go:126] Scanning /home/jenkins/minikube-integration/19667-7671/.minikube/addons for local assets ...
	I0918 19:39:15.519966   15635 filesync.go:126] Scanning /home/jenkins/minikube-integration/19667-7671/.minikube/files for local assets ...
	I0918 19:39:15.519998   15635 start.go:296] duration metric: took 93.528833ms for postStartSetup
	I0918 19:39:15.520064   15635 main.go:141] libmachine: (addons-815929) Calling .GetConfigRaw
	I0918 19:39:15.520653   15635 main.go:141] libmachine: (addons-815929) Calling .GetIP
	I0918 19:39:15.523455   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:15.523846   15635 main.go:141] libmachine: (addons-815929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b1:d6", ip: ""} in network mk-addons-815929: {Iface:virbr1 ExpiryTime:2024-09-18 20:39:08 +0000 UTC Type:0 Mac:52:54:00:11:b1:d6 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-815929 Clientid:01:52:54:00:11:b1:d6}
	I0918 19:39:15.523874   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined IP address 192.168.39.158 and MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:15.524124   15635 profile.go:143] Saving config to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/addons-815929/config.json ...
	I0918 19:39:15.524332   15635 start.go:128] duration metric: took 22.339516337s to createHost
	I0918 19:39:15.524360   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHHostname
	I0918 19:39:15.526732   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:15.527041   15635 main.go:141] libmachine: (addons-815929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b1:d6", ip: ""} in network mk-addons-815929: {Iface:virbr1 ExpiryTime:2024-09-18 20:39:08 +0000 UTC Type:0 Mac:52:54:00:11:b1:d6 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-815929 Clientid:01:52:54:00:11:b1:d6}
	I0918 19:39:15.527070   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined IP address 192.168.39.158 and MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:15.527313   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHPort
	I0918 19:39:15.527542   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHKeyPath
	I0918 19:39:15.527709   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHKeyPath
	I0918 19:39:15.527867   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHUsername
	I0918 19:39:15.528155   15635 main.go:141] libmachine: Using SSH client type: native
	I0918 19:39:15.528375   15635 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.158 22 <nil> <nil>}
	I0918 19:39:15.528388   15635 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0918 19:39:15.632644   15635 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726688355.604291671
	
	I0918 19:39:15.632664   15635 fix.go:216] guest clock: 1726688355.604291671
	I0918 19:39:15.632671   15635 fix.go:229] Guest: 2024-09-18 19:39:15.604291671 +0000 UTC Remote: 2024-09-18 19:39:15.524343859 +0000 UTC m=+22.440132340 (delta=79.947812ms)
	I0918 19:39:15.632711   15635 fix.go:200] guest clock delta is within tolerance: 79.947812ms
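
Editor's note: the delta is simply guest time minus remote time, 1726688355.604291671 − 1726688355.524343859 ≈ 0.079947812 s, i.e. the 79.947812ms reported, which is why fix.go logs it as within tolerance and leaves the guest clock alone.
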
	I0918 19:39:15.632716   15635 start.go:83] releasing machines lock for "addons-815929", held for 22.447981743s
	I0918 19:39:15.632734   15635 main.go:141] libmachine: (addons-815929) Calling .DriverName
	I0918 19:39:15.632989   15635 main.go:141] libmachine: (addons-815929) Calling .GetIP
	I0918 19:39:15.635689   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:15.636073   15635 main.go:141] libmachine: (addons-815929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b1:d6", ip: ""} in network mk-addons-815929: {Iface:virbr1 ExpiryTime:2024-09-18 20:39:08 +0000 UTC Type:0 Mac:52:54:00:11:b1:d6 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-815929 Clientid:01:52:54:00:11:b1:d6}
	I0918 19:39:15.636100   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined IP address 192.168.39.158 and MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:15.636232   15635 main.go:141] libmachine: (addons-815929) Calling .DriverName
	I0918 19:39:15.636698   15635 main.go:141] libmachine: (addons-815929) Calling .DriverName
	I0918 19:39:15.636877   15635 main.go:141] libmachine: (addons-815929) Calling .DriverName
	I0918 19:39:15.636982   15635 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0918 19:39:15.637025   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHHostname
	I0918 19:39:15.637083   15635 ssh_runner.go:195] Run: cat /version.json
	I0918 19:39:15.637103   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHHostname
	I0918 19:39:15.639906   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:15.640052   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:15.640306   15635 main.go:141] libmachine: (addons-815929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b1:d6", ip: ""} in network mk-addons-815929: {Iface:virbr1 ExpiryTime:2024-09-18 20:39:08 +0000 UTC Type:0 Mac:52:54:00:11:b1:d6 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-815929 Clientid:01:52:54:00:11:b1:d6}
	I0918 19:39:15.640333   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined IP address 192.168.39.158 and MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:15.640430   15635 main.go:141] libmachine: (addons-815929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b1:d6", ip: ""} in network mk-addons-815929: {Iface:virbr1 ExpiryTime:2024-09-18 20:39:08 +0000 UTC Type:0 Mac:52:54:00:11:b1:d6 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-815929 Clientid:01:52:54:00:11:b1:d6}
	I0918 19:39:15.640449   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHPort
	I0918 19:39:15.640456   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined IP address 192.168.39.158 and MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:15.640658   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHPort
	I0918 19:39:15.640662   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHKeyPath
	I0918 19:39:15.640846   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHUsername
	I0918 19:39:15.640865   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHKeyPath
	I0918 19:39:15.640960   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHUsername
	I0918 19:39:15.640964   15635 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/addons-815929/id_rsa Username:docker}
	I0918 19:39:15.641064   15635 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/addons-815929/id_rsa Username:docker}
	I0918 19:39:15.724678   15635 ssh_runner.go:195] Run: systemctl --version
	I0918 19:39:15.769924   15635 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0918 19:39:15.924625   15635 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0918 19:39:15.930995   15635 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0918 19:39:15.931078   15635 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0918 19:39:15.946257   15635 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0918 19:39:15.946282   15635 start.go:495] detecting cgroup driver to use...
	I0918 19:39:15.946349   15635 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0918 19:39:15.962493   15635 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0918 19:39:15.976970   15635 docker.go:217] disabling cri-docker service (if available) ...
	I0918 19:39:15.977037   15635 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0918 19:39:15.990730   15635 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0918 19:39:16.004287   15635 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0918 19:39:16.120456   15635 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0918 19:39:16.273269   15635 docker.go:233] disabling docker service ...
	I0918 19:39:16.273355   15635 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0918 19:39:16.287263   15635 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0918 19:39:16.300054   15635 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0918 19:39:16.431534   15635 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0918 19:39:16.542730   15635 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0918 19:39:16.556593   15635 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0918 19:39:16.574110   15635 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0918 19:39:16.574168   15635 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 19:39:16.584364   15635 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0918 19:39:16.584433   15635 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 19:39:16.595648   15635 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 19:39:16.605606   15635 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 19:39:16.615817   15635 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0918 19:39:16.625545   15635 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 19:39:16.635288   15635 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 19:39:16.651799   15635 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
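
Editor's note: taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following settings. This fragment is reconstructed from the commands rather than copied from the VM, and the section headers are the stock CRI-O ones, so the actual drop-in may differ slightly in layout.

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
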
	I0918 19:39:16.662018   15635 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0918 19:39:16.671973   15635 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0918 19:39:16.672038   15635 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0918 19:39:16.684348   15635 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0918 19:39:16.694527   15635 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 19:39:16.806557   15635 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0918 19:39:16.893853   15635 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0918 19:39:16.893979   15635 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0918 19:39:16.898741   15635 start.go:563] Will wait 60s for crictl version
	I0918 19:39:16.898823   15635 ssh_runner.go:195] Run: which crictl
	I0918 19:39:16.903203   15635 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0918 19:39:16.954060   15635 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0918 19:39:16.954193   15635 ssh_runner.go:195] Run: crio --version
	I0918 19:39:16.982884   15635 ssh_runner.go:195] Run: crio --version
	I0918 19:39:17.014729   15635 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0918 19:39:17.016149   15635 main.go:141] libmachine: (addons-815929) Calling .GetIP
	I0918 19:39:17.018519   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:17.018848   15635 main.go:141] libmachine: (addons-815929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b1:d6", ip: ""} in network mk-addons-815929: {Iface:virbr1 ExpiryTime:2024-09-18 20:39:08 +0000 UTC Type:0 Mac:52:54:00:11:b1:d6 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-815929 Clientid:01:52:54:00:11:b1:d6}
	I0918 19:39:17.018881   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined IP address 192.168.39.158 and MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:17.019079   15635 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0918 19:39:17.022910   15635 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0918 19:39:17.034489   15635 kubeadm.go:883] updating cluster {Name:addons-815929 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-815929 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.158 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0918 19:39:17.034619   15635 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0918 19:39:17.034683   15635 ssh_runner.go:195] Run: sudo crictl images --output json
	I0918 19:39:17.066943   15635 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0918 19:39:17.067023   15635 ssh_runner.go:195] Run: which lz4
	I0918 19:39:17.071020   15635 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0918 19:39:17.075441   15635 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0918 19:39:17.075480   15635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0918 19:39:18.279753   15635 crio.go:462] duration metric: took 1.208762257s to copy over tarball
	I0918 19:39:18.279822   15635 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0918 19:39:20.398594   15635 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.118749248s)
	I0918 19:39:20.398620   15635 crio.go:469] duration metric: took 2.11883848s to extract the tarball
	I0918 19:39:20.398627   15635 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0918 19:39:20.434881   15635 ssh_runner.go:195] Run: sudo crictl images --output json
	I0918 19:39:20.475778   15635 crio.go:514] all images are preloaded for cri-o runtime.
	I0918 19:39:20.475806   15635 cache_images.go:84] Images are preloaded, skipping loading
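The second crictl run above is what flips the verdict from "images are not preloaded" to "all images are preloaded": extracting the tarball into /var populated CRI-O's image store. The same check can be done by hand (sketch):

  $ sudo crictl images | grep registry.k8s.io/kube-apiserver   # present once the preload has been extracted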
	I0918 19:39:20.475816   15635 kubeadm.go:934] updating node { 192.168.39.158 8443 v1.31.1 crio true true} ...
	I0918 19:39:20.475923   15635 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-815929 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.158
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-815929 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0918 19:39:20.475986   15635 ssh_runner.go:195] Run: crio config
	I0918 19:39:20.519952   15635 cni.go:84] Creating CNI manager for ""
	I0918 19:39:20.519977   15635 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0918 19:39:20.519986   15635 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0918 19:39:20.520005   15635 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.158 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-815929 NodeName:addons-815929 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.158"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.158 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0918 19:39:20.520160   15635 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.158
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-815929"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.158
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.158"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0918 19:39:20.520220   15635 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0918 19:39:20.530115   15635 binaries.go:44] Found k8s binaries, skipping transfer
	I0918 19:39:20.530193   15635 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0918 19:39:20.539110   15635 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0918 19:39:20.554855   15635 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0918 19:39:20.570703   15635 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0918 19:39:20.586047   15635 ssh_runner.go:195] Run: grep 192.168.39.158	control-plane.minikube.internal$ /etc/hosts
	I0918 19:39:20.589512   15635 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.158	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0918 19:39:20.600947   15635 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 19:39:20.714800   15635 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0918 19:39:20.731863   15635 certs.go:68] Setting up /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/addons-815929 for IP: 192.168.39.158
	I0918 19:39:20.731895   15635 certs.go:194] generating shared ca certs ...
	I0918 19:39:20.731916   15635 certs.go:226] acquiring lock for ca certs: {Name:mkf54e0e71ed884c4372f9d3d4cb308ea4600185 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 19:39:20.732126   15635 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.key
	I0918 19:39:20.903635   15635 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt ...
	I0918 19:39:20.903669   15635 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt: {Name:mk5ab9af521edad191e1df188ac5d1ec102df64d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 19:39:20.903847   15635 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19667-7671/.minikube/ca.key ...
	I0918 19:39:20.903857   15635 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/.minikube/ca.key: {Name:mk39487a69c8f19d5c09499199945d3411122eb5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 19:39:20.903924   15635 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.key
	I0918 19:39:21.222001   15635 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.crt ...
	I0918 19:39:21.222033   15635 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.crt: {Name:mk216a92c8e5c2cc109551a33de4057317853d8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 19:39:21.222192   15635 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.key ...
	I0918 19:39:21.222203   15635 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.key: {Name:mk5acd984a1bdd683ae18bb5abd36964f6b7c3c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 19:39:21.222274   15635 certs.go:256] generating profile certs ...
	I0918 19:39:21.222328   15635 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/addons-815929/client.key
	I0918 19:39:21.222353   15635 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/addons-815929/client.crt with IP's: []
	I0918 19:39:21.427586   15635 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/addons-815929/client.crt ...
	I0918 19:39:21.427617   15635 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/addons-815929/client.crt: {Name:mka7942c1a0a773e2c8b5c86112e9c1ca7fd5d08 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 19:39:21.427767   15635 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/addons-815929/client.key ...
	I0918 19:39:21.427782   15635 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/addons-815929/client.key: {Name:mk0bb80ad3a72e414322fa8381dc0c9ca95a04d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 19:39:21.427845   15635 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/addons-815929/apiserver.key.1207c200
	I0918 19:39:21.427862   15635 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/addons-815929/apiserver.crt.1207c200 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.158]
	I0918 19:39:21.547680   15635 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/addons-815929/apiserver.crt.1207c200 ...
	I0918 19:39:21.547712   15635 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/addons-815929/apiserver.crt.1207c200: {Name:mk8a17d4138be2d4aed650c4aadb0e9b8271625f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 19:39:21.547864   15635 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/addons-815929/apiserver.key.1207c200 ...
	I0918 19:39:21.547877   15635 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/addons-815929/apiserver.key.1207c200: {Name:mkca16a53905ed18fa3435c13c0144e57c60188b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 19:39:21.547942   15635 certs.go:381] copying /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/addons-815929/apiserver.crt.1207c200 -> /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/addons-815929/apiserver.crt
	I0918 19:39:21.548029   15635 certs.go:385] copying /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/addons-815929/apiserver.key.1207c200 -> /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/addons-815929/apiserver.key
	I0918 19:39:21.548077   15635 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/addons-815929/proxy-client.key
	I0918 19:39:21.548094   15635 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/addons-815929/proxy-client.crt with IP's: []
	I0918 19:39:21.746355   15635 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/addons-815929/proxy-client.crt ...
	I0918 19:39:21.746391   15635 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/addons-815929/proxy-client.crt: {Name:mk72f125b96fe55f295e7ce9376879b898e47f36 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 19:39:21.746557   15635 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/addons-815929/proxy-client.key ...
	I0918 19:39:21.746567   15635 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/addons-815929/proxy-client.key: {Name:mk6d5f5778449275cb7d437edd936b0c1235f081 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 19:39:21.746748   15635 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem (1675 bytes)
	I0918 19:39:21.746783   15635 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem (1082 bytes)
	I0918 19:39:21.746808   15635 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem (1123 bytes)
	I0918 19:39:21.746830   15635 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem (1679 bytes)
	I0918 19:39:21.747359   15635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0918 19:39:21.774678   15635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0918 19:39:21.798559   15635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0918 19:39:21.824550   15635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0918 19:39:21.856972   15635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/addons-815929/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0918 19:39:21.881486   15635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/addons-815929/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0918 19:39:21.905485   15635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/addons-815929/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0918 19:39:21.929966   15635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/addons-815929/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0918 19:39:21.954634   15635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0918 19:39:21.979726   15635 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
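At this point every certificate and the kubeconfig are in place under /var/lib/minikube. If a later step fails, the SANs baked into the apiserver certificate (the service IP, loopback addresses, and the node IP listed above) can be inspected directly on the node, for example:

  $ openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt | grep -A2 'Subject Alternative Name'
    # expect 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.39.158, matching the IPs used at generation time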
	I0918 19:39:21.996220   15635 ssh_runner.go:195] Run: openssl version
	I0918 19:39:22.002125   15635 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0918 19:39:22.012616   15635 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0918 19:39:22.016717   15635 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 18 19:39 /usr/share/ca-certificates/minikubeCA.pem
	I0918 19:39:22.016780   15635 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0918 19:39:22.022337   15635 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
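The b5213941.0 link created above follows the OpenSSL trust-store convention: the file name is the subject hash of the CA certificate plus a ".0" suffix. An illustrative recreation of the same step:

  $ HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints the hash, e.g. b5213941
  $ sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
  $ openssl verify -CApath /etc/ssl/certs /usr/share/ca-certificates/minikubeCA.pem   # should report: OK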
	I0918 19:39:22.032855   15635 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0918 19:39:22.039081   15635 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0918 19:39:22.039137   15635 kubeadm.go:392] StartCluster: {Name:addons-815929 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-815929 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.158 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 19:39:22.039203   15635 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0918 19:39:22.039252   15635 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0918 19:39:22.077128   15635 cri.go:89] found id: ""
	I0918 19:39:22.077203   15635 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0918 19:39:22.087133   15635 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0918 19:39:22.096945   15635 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0918 19:39:22.106483   15635 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0918 19:39:22.106519   15635 kubeadm.go:157] found existing configuration files:
	
	I0918 19:39:22.106562   15635 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0918 19:39:22.115601   15635 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0918 19:39:22.115658   15635 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0918 19:39:22.125000   15635 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0918 19:39:22.134204   15635 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0918 19:39:22.134259   15635 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0918 19:39:22.143745   15635 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0918 19:39:22.152804   15635 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0918 19:39:22.152866   15635 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0918 19:39:22.162802   15635 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0918 19:39:22.173020   15635 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0918 19:39:22.173087   15635 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0918 19:39:22.184200   15635 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0918 19:39:22.239157   15635 kubeadm.go:310] W0918 19:39:22.219472     812 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0918 19:39:22.239864   15635 kubeadm.go:310] W0918 19:39:22.220484     812 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0918 19:39:22.375715   15635 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
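The kubeadm warnings above are actionable when reproducing this setup outside the test harness: the deprecated v1beta3 spec can be migrated with kubeadm's own tooling (as the warning suggests), and the kubelet unit can simply be enabled. A sketch, where the new-config filename is a placeholder:

  $ /var/lib/minikube/binaries/v1.31.1/kubeadm config migrate \
        --old-config /var/tmp/minikube/kubeadm.yaml \
        --new-config /var/tmp/minikube/kubeadm-migrated.yaml   # rewrites the config with the current API version
  $ sudo systemctl enable kubelet.service                      # clears the Service-Kubelet preflight warning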
	I0918 19:39:32.745678   15635 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0918 19:39:32.745741   15635 kubeadm.go:310] [preflight] Running pre-flight checks
	I0918 19:39:32.745827   15635 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0918 19:39:32.745932   15635 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0918 19:39:32.746038   15635 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0918 19:39:32.746135   15635 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0918 19:39:32.747995   15635 out.go:235]   - Generating certificates and keys ...
	I0918 19:39:32.748120   15635 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0918 19:39:32.748185   15635 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0918 19:39:32.748309   15635 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0918 19:39:32.748397   15635 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0918 19:39:32.748486   15635 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0918 19:39:32.748581   15635 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0918 19:39:32.748667   15635 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0918 19:39:32.748784   15635 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-815929 localhost] and IPs [192.168.39.158 127.0.0.1 ::1]
	I0918 19:39:32.748865   15635 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0918 19:39:32.748977   15635 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-815929 localhost] and IPs [192.168.39.158 127.0.0.1 ::1]
	I0918 19:39:32.749034   15635 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0918 19:39:32.749100   15635 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0918 19:39:32.749149   15635 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0918 19:39:32.749202   15635 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0918 19:39:32.749248   15635 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0918 19:39:32.749300   15635 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0918 19:39:32.749346   15635 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0918 19:39:32.749404   15635 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0918 19:39:32.749451   15635 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0918 19:39:32.749533   15635 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0918 19:39:32.749608   15635 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0918 19:39:32.751199   15635 out.go:235]   - Booting up control plane ...
	I0918 19:39:32.751299   15635 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0918 19:39:32.751390   15635 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0918 19:39:32.751462   15635 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0918 19:39:32.751561   15635 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0918 19:39:32.751639   15635 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0918 19:39:32.751678   15635 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0918 19:39:32.751805   15635 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0918 19:39:32.751940   15635 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0918 19:39:32.751993   15635 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.248865ms
	I0918 19:39:32.752083   15635 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0918 19:39:32.752136   15635 kubeadm.go:310] [api-check] The API server is healthy after 5.5020976s
	I0918 19:39:32.752230   15635 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0918 19:39:32.752341   15635 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0918 19:39:32.752393   15635 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0918 19:39:32.752553   15635 kubeadm.go:310] [mark-control-plane] Marking the node addons-815929 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0918 19:39:32.752613   15635 kubeadm.go:310] [bootstrap-token] Using token: 67qfck.xhy2rt9vuaaqal6w
	I0918 19:39:32.755162   15635 out.go:235]   - Configuring RBAC rules ...
	I0918 19:39:32.755272   15635 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0918 19:39:32.755391   15635 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0918 19:39:32.755583   15635 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0918 19:39:32.755697   15635 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0918 19:39:32.755824   15635 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0918 19:39:32.755931   15635 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0918 19:39:32.756094   15635 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0918 19:39:32.756170   15635 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0918 19:39:32.756238   15635 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0918 19:39:32.756250   15635 kubeadm.go:310] 
	I0918 19:39:32.756306   15635 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0918 19:39:32.756314   15635 kubeadm.go:310] 
	I0918 19:39:32.756394   15635 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0918 19:39:32.756403   15635 kubeadm.go:310] 
	I0918 19:39:32.756429   15635 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0918 19:39:32.756479   15635 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0918 19:39:32.756523   15635 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0918 19:39:32.756530   15635 kubeadm.go:310] 
	I0918 19:39:32.756585   15635 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0918 19:39:32.756595   15635 kubeadm.go:310] 
	I0918 19:39:32.756638   15635 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0918 19:39:32.756643   15635 kubeadm.go:310] 
	I0918 19:39:32.756686   15635 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0918 19:39:32.756750   15635 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0918 19:39:32.756808   15635 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0918 19:39:32.756814   15635 kubeadm.go:310] 
	I0918 19:39:32.756887   15635 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0918 19:39:32.756954   15635 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0918 19:39:32.756960   15635 kubeadm.go:310] 
	I0918 19:39:32.757031   15635 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 67qfck.xhy2rt9vuaaqal6w \
	I0918 19:39:32.757120   15635 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:8ad46bf288e8cc2226f5a929a768e6c95cddecc97ffc4016be27ef0987b1e36f \
	I0918 19:39:32.757151   15635 kubeadm.go:310] 	--control-plane 
	I0918 19:39:32.757157   15635 kubeadm.go:310] 
	I0918 19:39:32.757248   15635 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0918 19:39:32.757257   15635 kubeadm.go:310] 
	I0918 19:39:32.757354   15635 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 67qfck.xhy2rt9vuaaqal6w \
	I0918 19:39:32.757490   15635 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:8ad46bf288e8cc2226f5a929a768e6c95cddecc97ffc4016be27ef0987b1e36f 
	I0918 19:39:32.757501   15635 cni.go:84] Creating CNI manager for ""
	I0918 19:39:32.757507   15635 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0918 19:39:32.760281   15635 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0918 19:39:32.761848   15635 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0918 19:39:32.772978   15635 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
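With the 496-byte conflist written, CRI-O picks the bridge network up from its default CNI directory. A quick on-node check (sketch):

  $ ls -l /etc/cni/net.d/                      # should show 1-k8s.conflist
  $ sudo cat /etc/cni/net.d/1-k8s.conflist     # the bridge CNI definition minikube generated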
	I0918 19:39:32.796231   15635 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0918 19:39:32.796347   15635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 19:39:32.796347   15635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-815929 minikube.k8s.io/updated_at=2024_09_18T19_39_32_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=85073601a832bd4bbda5d11fa91feafff6ec6b91 minikube.k8s.io/name=addons-815929 minikube.k8s.io/primary=true
	I0918 19:39:32.810093   15635 ops.go:34] apiserver oom_adj: -16
	I0918 19:39:32.947600   15635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 19:39:33.448372   15635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 19:39:33.947877   15635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 19:39:34.447886   15635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 19:39:34.948598   15635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 19:39:35.448280   15635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 19:39:35.947854   15635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 19:39:36.447710   15635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 19:39:36.948512   15635 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 19:39:37.028366   15635 kubeadm.go:1113] duration metric: took 4.232084306s to wait for elevateKubeSystemPrivileges
	I0918 19:39:37.028407   15635 kubeadm.go:394] duration metric: took 14.989273723s to StartCluster
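The repeated `kubectl get sa default` calls above, roughly every half second, are minikube waiting for the default ServiceAccount to be created by the controller-manager (the elevateKubeSystemPrivileges step timed above). As a standalone wait loop the step amounts to (sketch):

  $ until sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
        sleep 0.5                              # retry until the ServiceAccount exists
    done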
	I0918 19:39:37.028429   15635 settings.go:142] acquiring lock: {Name:mk6ae95a69dcbe00c28846409e9f46945a88de2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 19:39:37.028570   15635 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19667-7671/kubeconfig
	I0918 19:39:37.028921   15635 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/kubeconfig: {Name:mk3a3eecadb0164dfdd5d3c4392b5b473a2a9bb0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 19:39:37.029140   15635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0918 19:39:37.029150   15635 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.158 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0918 19:39:37.029221   15635 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
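The toEnable map above is the resolved addon set for this start. The same information can be inspected afterwards from the host with the addons subcommand (illustrative):

  $ minikube addons list -p addons-815929                    # per-addon enabled/disabled status for the profile
  $ minikube addons enable metrics-server -p addons-815929   # example of toggling one addon by hand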
	I0918 19:39:37.029346   15635 config.go:182] Loaded profile config "addons-815929": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0918 19:39:37.029362   15635 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-815929"
	I0918 19:39:37.029377   15635 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-815929"
	I0918 19:39:37.029386   15635 addons.go:69] Setting helm-tiller=true in profile "addons-815929"
	I0918 19:39:37.029349   15635 addons.go:69] Setting yakd=true in profile "addons-815929"
	I0918 19:39:37.029407   15635 addons.go:234] Setting addon helm-tiller=true in "addons-815929"
	I0918 19:39:37.029413   15635 addons.go:234] Setting addon yakd=true in "addons-815929"
	I0918 19:39:37.029425   15635 addons.go:69] Setting volcano=true in profile "addons-815929"
	I0918 19:39:37.029450   15635 addons.go:69] Setting default-storageclass=true in profile "addons-815929"
	I0918 19:39:37.029476   15635 addons.go:69] Setting ingress-dns=true in profile "addons-815929"
	I0918 19:39:37.029490   15635 host.go:66] Checking if "addons-815929" exists ...
	I0918 19:39:37.029496   15635 addons.go:234] Setting addon ingress-dns=true in "addons-815929"
	I0918 19:39:37.029460   15635 addons.go:69] Setting ingress=true in profile "addons-815929"
	I0918 19:39:37.029523   15635 host.go:66] Checking if "addons-815929" exists ...
	I0918 19:39:37.029373   15635 addons.go:69] Setting inspektor-gadget=true in profile "addons-815929"
	I0918 19:39:37.029658   15635 addons.go:234] Setting addon inspektor-gadget=true in "addons-815929"
	I0918 19:39:37.029673   15635 host.go:66] Checking if "addons-815929" exists ...
	I0918 19:39:37.029524   15635 addons.go:234] Setting addon ingress=true in "addons-815929"
	I0918 19:39:37.029797   15635 host.go:66] Checking if "addons-815929" exists ...
	I0918 19:39:37.029443   15635 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-815929"
	I0918 19:39:37.029906   15635 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-815929"
	I0918 19:39:37.029440   15635 host.go:66] Checking if "addons-815929" exists ...
	I0918 19:39:37.029984   15635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 19:39:37.030010   15635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:37.030050   15635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 19:39:37.030053   15635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 19:39:37.030084   15635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:37.030095   15635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:37.029415   15635 host.go:66] Checking if "addons-815929" exists ...
	I0918 19:39:37.030352   15635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 19:39:37.030383   15635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 19:39:37.030388   15635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:37.030405   15635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:37.029357   15635 addons.go:69] Setting metrics-server=true in profile "addons-815929"
	I0918 19:39:37.030536   15635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 19:39:37.030543   15635 addons.go:234] Setting addon metrics-server=true in "addons-815929"
	I0918 19:39:37.029434   15635 addons.go:69] Setting gcp-auth=true in profile "addons-815929"
	I0918 19:39:37.030567   15635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:37.030570   15635 mustload.go:65] Loading cluster: addons-815929
	I0918 19:39:37.029447   15635 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-815929"
	I0918 19:39:37.030611   15635 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-815929"
	I0918 19:39:37.029452   15635 addons.go:69] Setting volumesnapshots=true in profile "addons-815929"
	I0918 19:39:37.030625   15635 addons.go:234] Setting addon volumesnapshots=true in "addons-815929"
	I0918 19:39:37.029456   15635 addons.go:234] Setting addon volcano=true in "addons-815929"
	I0918 19:39:37.029457   15635 addons.go:69] Setting registry=true in profile "addons-815929"
	I0918 19:39:37.030642   15635 addons.go:234] Setting addon registry=true in "addons-815929"
	I0918 19:39:37.030669   15635 host.go:66] Checking if "addons-815929" exists ...
	I0918 19:39:37.029458   15635 addons.go:69] Setting cloud-spanner=true in profile "addons-815929"
	I0918 19:39:37.030800   15635 config.go:182] Loaded profile config "addons-815929": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0918 19:39:37.030815   15635 addons.go:234] Setting addon cloud-spanner=true in "addons-815929"
	I0918 19:39:37.030841   15635 host.go:66] Checking if "addons-815929" exists ...
	I0918 19:39:37.031041   15635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 19:39:37.031067   15635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:37.031110   15635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 19:39:37.031114   15635 host.go:66] Checking if "addons-815929" exists ...
	I0918 19:39:37.031133   15635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:37.031187   15635 host.go:66] Checking if "addons-815929" exists ...
	I0918 19:39:37.031267   15635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 19:39:37.031290   15635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:37.029462   15635 addons.go:69] Setting storage-provisioner=true in profile "addons-815929"
	I0918 19:39:37.031351   15635 addons.go:234] Setting addon storage-provisioner=true in "addons-815929"
	I0918 19:39:37.031456   15635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 19:39:37.031479   15635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:37.031509   15635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 19:39:37.029483   15635 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-815929"
	I0918 19:39:37.031530   15635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:37.031597   15635 host.go:66] Checking if "addons-815929" exists ...
	I0918 19:39:37.031865   15635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 19:39:37.031880   15635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:37.031919   15635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 19:39:37.031942   15635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:37.032180   15635 host.go:66] Checking if "addons-815929" exists ...
	I0918 19:39:37.032334   15635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 19:39:37.032367   15635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:37.032458   15635 host.go:66] Checking if "addons-815929" exists ...
	I0918 19:39:37.040881   15635 out.go:177] * Verifying Kubernetes components...
	I0918 19:39:37.042576   15635 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 19:39:37.051516   15635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43999
	I0918 19:39:37.052168   15635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33723
	I0918 19:39:37.052235   15635 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:37.052173   15635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35173
	I0918 19:39:37.052393   15635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39509
	I0918 19:39:37.052668   15635 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:37.052961   15635 main.go:141] libmachine: Using API Version  1
	I0918 19:39:37.052978   15635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:37.053395   15635 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:37.053567   15635 main.go:141] libmachine: Using API Version  1
	I0918 19:39:37.053580   15635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:37.053833   15635 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:37.053907   15635 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:37.054034   15635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 19:39:37.054084   15635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:37.054251   15635 main.go:141] libmachine: Using API Version  1
	I0918 19:39:37.054272   15635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:37.054491   15635 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:37.054656   15635 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:37.055051   15635 main.go:141] libmachine: Using API Version  1
	I0918 19:39:37.055180   15635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:37.055565   15635 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:37.062646   15635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34865
	I0918 19:39:37.064595   15635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 19:39:37.064636   15635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:37.064659   15635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 19:39:37.064700   15635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:37.064716   15635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 19:39:37.064740   15635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:37.064784   15635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 19:39:37.064821   15635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:37.065051   15635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43139
	I0918 19:39:37.065527   15635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 19:39:37.065555   15635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:37.066116   15635 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:37.066219   15635 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:37.066752   15635 main.go:141] libmachine: Using API Version  1
	I0918 19:39:37.066769   15635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:37.067162   15635 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:37.067703   15635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 19:39:37.067726   15635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:37.069000   15635 main.go:141] libmachine: Using API Version  1
	I0918 19:39:37.069018   15635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:37.069473   15635 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:37.070080   15635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 19:39:37.070105   15635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:37.098916   15635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37831
	I0918 19:39:37.099493   15635 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:37.100084   15635 main.go:141] libmachine: Using API Version  1
	I0918 19:39:37.100108   15635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:37.100477   15635 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:37.100643   15635 main.go:141] libmachine: (addons-815929) Calling .GetState
	I0918 19:39:37.103211   15635 host.go:66] Checking if "addons-815929" exists ...
	I0918 19:39:37.103677   15635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 19:39:37.103724   15635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:37.106175   15635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42007
	I0918 19:39:37.106455   15635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38193
	I0918 19:39:37.106629   15635 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:37.106732   15635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44527
	I0918 19:39:37.107318   15635 main.go:141] libmachine: Using API Version  1
	I0918 19:39:37.107333   15635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:37.107356   15635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45109
	I0918 19:39:37.107737   15635 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:37.107821   15635 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:37.107875   15635 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:37.108413   15635 main.go:141] libmachine: Using API Version  1
	I0918 19:39:37.108435   15635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:37.108877   15635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 19:39:37.108909   15635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:37.109176   15635 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:37.109264   15635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38691
	I0918 19:39:37.109861   15635 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:37.109995   15635 main.go:141] libmachine: Using API Version  1
	I0918 19:39:37.110005   15635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:37.110065   15635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40721
	I0918 19:39:37.110320   15635 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:37.110484   15635 main.go:141] libmachine: (addons-815929) Calling .GetState
	I0918 19:39:37.110838   15635 main.go:141] libmachine: Using API Version  1
	I0918 19:39:37.110854   15635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:37.111189   15635 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:37.111701   15635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 19:39:37.111733   15635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:37.112042   15635 main.go:141] libmachine: Using API Version  1
	I0918 19:39:37.112058   15635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:37.112122   15635 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:37.112177   15635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36143
	I0918 19:39:37.112340   15635 main.go:141] libmachine: (addons-815929) Calling .DriverName
	I0918 19:39:37.112872   15635 main.go:141] libmachine: Using API Version  1
	I0918 19:39:37.112893   15635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:37.112958   15635 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:37.112994   15635 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:37.113426   15635 main.go:141] libmachine: Using API Version  1
	I0918 19:39:37.113442   15635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:37.113536   15635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 19:39:37.113555   15635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:37.113791   15635 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:37.113948   15635 main.go:141] libmachine: (addons-815929) Calling .GetState
	I0918 19:39:37.114523   15635 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:37.114567   15635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40673
	I0918 19:39:37.114766   15635 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0918 19:39:37.114880   15635 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:37.115093   15635 main.go:141] libmachine: (addons-815929) Calling .GetState
	I0918 19:39:37.115461   15635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 19:39:37.115486   15635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:37.115987   15635 main.go:141] libmachine: (addons-815929) Calling .DriverName
	I0918 19:39:37.116102   15635 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0918 19:39:37.116125   15635 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0918 19:39:37.116144   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHHostname
	I0918 19:39:37.116423   15635 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:37.116861   15635 main.go:141] libmachine: Using API Version  1
	I0918 19:39:37.116878   15635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:37.117587   15635 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:37.117675   15635 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0918 19:39:37.117765   15635 main.go:141] libmachine: (addons-815929) Calling .GetState
	I0918 19:39:37.118810   15635 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0918 19:39:37.118832   15635 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0918 19:39:37.118853   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHHostname
	I0918 19:39:37.119472   15635 main.go:141] libmachine: (addons-815929) Calling .DriverName
	I0918 19:39:37.120244   15635 main.go:141] libmachine: (addons-815929) Calling .DriverName
	I0918 19:39:37.122036   15635 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0918 19:39:37.122153   15635 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0918 19:39:37.122370   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:37.123115   15635 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0918 19:39:37.123133   15635 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0918 19:39:37.123160   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHHostname
	I0918 19:39:37.123192   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:37.123838   15635 main.go:141] libmachine: (addons-815929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b1:d6", ip: ""} in network mk-addons-815929: {Iface:virbr1 ExpiryTime:2024-09-18 20:39:08 +0000 UTC Type:0 Mac:52:54:00:11:b1:d6 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-815929 Clientid:01:52:54:00:11:b1:d6}
	I0918 19:39:37.123859   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined IP address 192.168.39.158 and MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:37.123881   15635 main.go:141] libmachine: (addons-815929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b1:d6", ip: ""} in network mk-addons-815929: {Iface:virbr1 ExpiryTime:2024-09-18 20:39:08 +0000 UTC Type:0 Mac:52:54:00:11:b1:d6 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-815929 Clientid:01:52:54:00:11:b1:d6}
	I0918 19:39:37.123894   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined IP address 192.168.39.158 and MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:37.124062   15635 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0918 19:39:37.124077   15635 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0918 19:39:37.124093   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHHostname
	I0918 19:39:37.124109   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHPort
	I0918 19:39:37.124224   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHPort
	I0918 19:39:37.124275   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHKeyPath
	I0918 19:39:37.124424   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHUsername
	I0918 19:39:37.124477   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHKeyPath
	I0918 19:39:37.124532   15635 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/addons-815929/id_rsa Username:docker}
	I0918 19:39:37.124835   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHUsername
	I0918 19:39:37.125242   15635 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/addons-815929/id_rsa Username:docker}
	I0918 19:39:37.128252   15635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46023
	I0918 19:39:37.128414   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:37.128663   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:37.128712   15635 main.go:141] libmachine: (addons-815929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b1:d6", ip: ""} in network mk-addons-815929: {Iface:virbr1 ExpiryTime:2024-09-18 20:39:08 +0000 UTC Type:0 Mac:52:54:00:11:b1:d6 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-815929 Clientid:01:52:54:00:11:b1:d6}
	I0918 19:39:37.128728   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined IP address 192.168.39.158 and MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:37.129043   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHPort
	I0918 19:39:37.129183   15635 main.go:141] libmachine: (addons-815929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b1:d6", ip: ""} in network mk-addons-815929: {Iface:virbr1 ExpiryTime:2024-09-18 20:39:08 +0000 UTC Type:0 Mac:52:54:00:11:b1:d6 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-815929 Clientid:01:52:54:00:11:b1:d6}
	I0918 19:39:37.129197   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined IP address 192.168.39.158 and MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:37.129373   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHKeyPath
	I0918 19:39:37.129430   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHPort
	I0918 19:39:37.129662   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHUsername
	I0918 19:39:37.129717   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHKeyPath
	I0918 19:39:37.130003   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHUsername
	I0918 19:39:37.130044   15635 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/addons-815929/id_rsa Username:docker}
	I0918 19:39:37.130291   15635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33081
	I0918 19:39:37.130581   15635 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:37.130635   15635 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/addons-815929/id_rsa Username:docker}
	I0918 19:39:37.131050   15635 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:37.131646   15635 main.go:141] libmachine: Using API Version  1
	I0918 19:39:37.131664   15635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:37.132051   15635 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:37.132594   15635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 19:39:37.132634   15635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:37.133414   15635 main.go:141] libmachine: Using API Version  1
	I0918 19:39:37.133432   15635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:37.133778   15635 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:37.134298   15635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 19:39:37.134332   15635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:37.134555   15635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42457
	I0918 19:39:37.140829   15635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39169
	I0918 19:39:37.140852   15635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38671
	I0918 19:39:37.141363   15635 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:37.141476   15635 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:37.142020   15635 main.go:141] libmachine: Using API Version  1
	I0918 19:39:37.142041   15635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:37.142402   15635 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:37.143109   15635 main.go:141] libmachine: (addons-815929) Calling .GetState
	I0918 19:39:37.143714   15635 main.go:141] libmachine: Using API Version  1
	I0918 19:39:37.143732   15635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:37.144237   15635 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:37.144935   15635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 19:39:37.144977   15635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:37.147981   15635 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-815929"
	I0918 19:39:37.148036   15635 host.go:66] Checking if "addons-815929" exists ...
	I0918 19:39:37.148428   15635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 19:39:37.148465   15635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:37.150809   15635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44217
	I0918 19:39:37.151218   15635 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:37.152360   15635 main.go:141] libmachine: Using API Version  1
	I0918 19:39:37.152379   15635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:37.152751   15635 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:37.152876   15635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36965
	I0918 19:39:37.153170   15635 main.go:141] libmachine: (addons-815929) Calling .GetState
	I0918 19:39:37.153972   15635 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:37.154591   15635 main.go:141] libmachine: Using API Version  1
	I0918 19:39:37.154608   15635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:37.155107   15635 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:37.155379   15635 main.go:141] libmachine: (addons-815929) Calling .GetState
	I0918 19:39:37.155626   15635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45955
	I0918 19:39:37.155835   15635 main.go:141] libmachine: (addons-815929) Calling .DriverName
	I0918 19:39:37.156440   15635 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:37.156559   15635 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:37.157000   15635 main.go:141] libmachine: Using API Version  1
	I0918 19:39:37.157022   15635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:37.157078   15635 main.go:141] libmachine: (addons-815929) Calling .DriverName
	I0918 19:39:37.157468   15635 main.go:141] libmachine: Using API Version  1
	I0918 19:39:37.157923   15635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:37.157782   15635 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:37.158172   15635 main.go:141] libmachine: (addons-815929) Calling .GetState
	I0918 19:39:37.158484   15635 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0918 19:39:37.158806   15635 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:37.159195   15635 main.go:141] libmachine: (addons-815929) Calling .GetState
	I0918 19:39:37.159405   15635 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0918 19:39:37.159842   15635 main.go:141] libmachine: (addons-815929) Calling .DriverName
	I0918 19:39:37.161182   15635 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 19:39:37.161249   15635 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0918 19:39:37.161287   15635 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0918 19:39:37.161304   15635 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0918 19:39:37.161324   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHHostname
	I0918 19:39:37.162704   15635 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0918 19:39:37.162728   15635 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0918 19:39:37.162748   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHHostname
	I0918 19:39:37.163160   15635 main.go:141] libmachine: (addons-815929) Calling .DriverName
	I0918 19:39:37.164031   15635 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0918 19:39:37.164902   15635 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0918 19:39:37.165184   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:37.165515   15635 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0918 19:39:37.165545   15635 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0918 19:39:37.165565   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHHostname
	I0918 19:39:37.166590   15635 main.go:141] libmachine: (addons-815929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b1:d6", ip: ""} in network mk-addons-815929: {Iface:virbr1 ExpiryTime:2024-09-18 20:39:08 +0000 UTC Type:0 Mac:52:54:00:11:b1:d6 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-815929 Clientid:01:52:54:00:11:b1:d6}
	I0918 19:39:37.166613   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:37.166620   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined IP address 192.168.39.158 and MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:37.166869   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHPort
	I0918 19:39:37.166933   15635 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0918 19:39:37.167076   15635 main.go:141] libmachine: (addons-815929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b1:d6", ip: ""} in network mk-addons-815929: {Iface:virbr1 ExpiryTime:2024-09-18 20:39:08 +0000 UTC Type:0 Mac:52:54:00:11:b1:d6 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-815929 Clientid:01:52:54:00:11:b1:d6}
	I0918 19:39:37.167093   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined IP address 192.168.39.158 and MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:37.167133   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHKeyPath
	I0918 19:39:37.167258   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHPort
	I0918 19:39:37.167299   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHUsername
	I0918 19:39:37.167409   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHKeyPath
	I0918 19:39:37.167455   15635 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/addons-815929/id_rsa Username:docker}
	I0918 19:39:37.167541   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHUsername
	I0918 19:39:37.167654   15635 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/addons-815929/id_rsa Username:docker}
	I0918 19:39:37.169351   15635 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0918 19:39:37.169830   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:37.169871   15635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37677
	I0918 19:39:37.170291   15635 main.go:141] libmachine: (addons-815929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b1:d6", ip: ""} in network mk-addons-815929: {Iface:virbr1 ExpiryTime:2024-09-18 20:39:08 +0000 UTC Type:0 Mac:52:54:00:11:b1:d6 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-815929 Clientid:01:52:54:00:11:b1:d6}
	I0918 19:39:37.170343   15635 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:37.170442   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined IP address 192.168.39.158 and MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:37.170594   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHPort
	I0918 19:39:37.170684   15635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43447
	I0918 19:39:37.170837   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHKeyPath
	I0918 19:39:37.170943   15635 main.go:141] libmachine: Using API Version  1
	I0918 19:39:37.170956   15635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:37.171006   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHUsername
	I0918 19:39:37.171021   15635 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:37.171174   15635 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/addons-815929/id_rsa Username:docker}
	I0918 19:39:37.171426   15635 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:37.172178   15635 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0918 19:39:37.172541   15635 main.go:141] libmachine: Using API Version  1
	I0918 19:39:37.172561   15635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:37.173066   15635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46769
	I0918 19:39:37.173090   15635 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:37.173137   15635 main.go:141] libmachine: (addons-815929) Calling .GetState
	I0918 19:39:37.173352   15635 main.go:141] libmachine: (addons-815929) Calling .GetState
	I0918 19:39:37.174570   15635 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0918 19:39:37.175288   15635 main.go:141] libmachine: (addons-815929) Calling .DriverName
	I0918 19:39:37.175665   15635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37609
	I0918 19:39:37.175894   15635 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:37.175993   15635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45511
	I0918 19:39:37.176139   15635 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:37.176458   15635 main.go:141] libmachine: Using API Version  1
	I0918 19:39:37.176473   15635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:37.176509   15635 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:37.176536   15635 main.go:141] libmachine: (addons-815929) Calling .DriverName
	I0918 19:39:37.176688   15635 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:37.176717   15635 main.go:141] libmachine: (addons-815929) Calling .Close
	I0918 19:39:37.176818   15635 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0918 19:39:37.176941   15635 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0918 19:39:37.178051   15635 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0918 19:39:37.178163   15635 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:37.178175   15635 main.go:141] libmachine: (addons-815929) DBG | Closing plugin on server side
	I0918 19:39:37.178206   15635 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:37.178214   15635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 19:39:37.178235   15635 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:37.178250   15635 main.go:141] libmachine: (addons-815929) Calling .Close
	I0918 19:39:37.178254   15635 main.go:141] libmachine: Using API Version  1
	I0918 19:39:37.178294   15635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:37.178261   15635 main.go:141] libmachine: Using API Version  1
	I0918 19:39:37.178333   15635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:37.178542   15635 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0918 19:39:37.178556   15635 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0918 19:39:37.178574   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHHostname
	I0918 19:39:37.178597   15635 main.go:141] libmachine: (addons-815929) Calling .GetState
	I0918 19:39:37.178616   15635 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:37.179193   15635 main.go:141] libmachine: (addons-815929) DBG | Closing plugin on server side
	I0918 19:39:37.179197   15635 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:37.179230   15635 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:37.179243   15635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 19:39:37.179280   15635 main.go:141] libmachine: (addons-815929) Calling .DriverName
	W0918 19:39:37.179328   15635 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0918 19:39:37.179639   15635 main.go:141] libmachine: (addons-815929) Calling .GetState
	I0918 19:39:37.181366   15635 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0918 19:39:37.181535   15635 addons.go:234] Setting addon default-storageclass=true in "addons-815929"
	I0918 19:39:37.181576   15635 host.go:66] Checking if "addons-815929" exists ...
	I0918 19:39:37.181669   15635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37473
	I0918 19:39:37.181924   15635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 19:39:37.181945   15635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:37.181961   15635 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:37.182145   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:37.182275   15635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33577
	I0918 19:39:37.182398   15635 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0918 19:39:37.182418   15635 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0918 19:39:37.182441   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHHostname
	I0918 19:39:37.182531   15635 main.go:141] libmachine: Using API Version  1
	I0918 19:39:37.182548   15635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:37.182977   15635 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:37.183061   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHPort
	I0918 19:39:37.183086   15635 main.go:141] libmachine: (addons-815929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b1:d6", ip: ""} in network mk-addons-815929: {Iface:virbr1 ExpiryTime:2024-09-18 20:39:08 +0000 UTC Type:0 Mac:52:54:00:11:b1:d6 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-815929 Clientid:01:52:54:00:11:b1:d6}
	I0918 19:39:37.183117   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined IP address 192.168.39.158 and MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:37.183231   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHKeyPath
	I0918 19:39:37.183392   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHUsername
	I0918 19:39:37.183461   15635 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:37.183556   15635 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/addons-815929/id_rsa Username:docker}
	I0918 19:39:37.184190   15635 main.go:141] libmachine: Using API Version  1
	I0918 19:39:37.184195   15635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 19:39:37.184205   15635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:37.184232   15635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:37.184619   15635 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:37.184791   15635 main.go:141] libmachine: (addons-815929) Calling .GetState
	I0918 19:39:37.185344   15635 main.go:141] libmachine: (addons-815929) Calling .DriverName
	I0918 19:39:37.186225   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:37.186672   15635 main.go:141] libmachine: (addons-815929) Calling .DriverName
	I0918 19:39:37.186971   15635 main.go:141] libmachine: (addons-815929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b1:d6", ip: ""} in network mk-addons-815929: {Iface:virbr1 ExpiryTime:2024-09-18 20:39:08 +0000 UTC Type:0 Mac:52:54:00:11:b1:d6 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-815929 Clientid:01:52:54:00:11:b1:d6}
	I0918 19:39:37.186997   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined IP address 192.168.39.158 and MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:37.187115   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHPort
	I0918 19:39:37.187255   15635 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0918 19:39:37.187310   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHKeyPath
	I0918 19:39:37.187453   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHUsername
	I0918 19:39:37.187632   15635 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/addons-815929/id_rsa Username:docker}
	I0918 19:39:37.189574   15635 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0918 19:39:37.189599   15635 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0918 19:39:37.189633   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHHostname
	I0918 19:39:37.189708   15635 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0918 19:39:37.191068   15635 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0918 19:39:37.191091   15635 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0918 19:39:37.191118   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHHostname
	I0918 19:39:37.193163   15635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37369
	I0918 19:39:37.193512   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:37.193809   15635 main.go:141] libmachine: (addons-815929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b1:d6", ip: ""} in network mk-addons-815929: {Iface:virbr1 ExpiryTime:2024-09-18 20:39:08 +0000 UTC Type:0 Mac:52:54:00:11:b1:d6 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-815929 Clientid:01:52:54:00:11:b1:d6}
	I0918 19:39:37.193886   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHPort
	I0918 19:39:37.194018   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined IP address 192.168.39.158 and MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:37.194053   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHKeyPath
	I0918 19:39:37.194201   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHUsername
	I0918 19:39:37.194373   15635 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/addons-815929/id_rsa Username:docker}
	I0918 19:39:37.195021   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:37.195342   15635 main.go:141] libmachine: (addons-815929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b1:d6", ip: ""} in network mk-addons-815929: {Iface:virbr1 ExpiryTime:2024-09-18 20:39:08 +0000 UTC Type:0 Mac:52:54:00:11:b1:d6 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-815929 Clientid:01:52:54:00:11:b1:d6}
	I0918 19:39:37.195382   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined IP address 192.168.39.158 and MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:37.195574   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHPort
	I0918 19:39:37.195743   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHKeyPath
	I0918 19:39:37.195909   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHUsername
	I0918 19:39:37.195982   15635 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:37.196141   15635 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/addons-815929/id_rsa Username:docker}
	I0918 19:39:37.196584   15635 main.go:141] libmachine: Using API Version  1
	I0918 19:39:37.196605   15635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:37.197020   15635 main.go:141] libmachine: () Calling .GetMachineName
	W0918 19:39:37.197204   15635 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:46722->192.168.39.158:22: read: connection reset by peer
	I0918 19:39:37.197234   15635 retry.go:31] will retry after 174.790635ms: ssh: handshake failed: read tcp 192.168.39.1:46722->192.168.39.158:22: read: connection reset by peer
	I0918 19:39:37.197279   15635 main.go:141] libmachine: (addons-815929) Calling .GetState
	I0918 19:39:37.198708   15635 main.go:141] libmachine: (addons-815929) Calling .DriverName
	I0918 19:39:37.200881   15635 out.go:177]   - Using image docker.io/registry:2.8.3
	I0918 19:39:37.202072   15635 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0918 19:39:37.203833   15635 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0918 19:39:37.203851   15635 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0918 19:39:37.203875   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHHostname
	I0918 19:39:37.205608   15635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35855
	I0918 19:39:37.206094   15635 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:37.206615   15635 main.go:141] libmachine: Using API Version  1
	I0918 19:39:37.206633   15635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:37.206776   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:37.206913   15635 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:37.207083   15635 main.go:141] libmachine: (addons-815929) Calling .GetState
	I0918 19:39:37.207141   15635 main.go:141] libmachine: (addons-815929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b1:d6", ip: ""} in network mk-addons-815929: {Iface:virbr1 ExpiryTime:2024-09-18 20:39:08 +0000 UTC Type:0 Mac:52:54:00:11:b1:d6 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-815929 Clientid:01:52:54:00:11:b1:d6}
	I0918 19:39:37.207157   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined IP address 192.168.39.158 and MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:37.207364   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHPort
	I0918 19:39:37.207561   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHKeyPath
	I0918 19:39:37.207717   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHUsername
	I0918 19:39:37.207864   15635 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/addons-815929/id_rsa Username:docker}
	I0918 19:39:37.208374   15635 main.go:141] libmachine: (addons-815929) Calling .DriverName
	I0918 19:39:37.209305   15635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33097
	I0918 19:39:37.209766   15635 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:37.210145   15635 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0918 19:39:37.210290   15635 main.go:141] libmachine: Using API Version  1
	I0918 19:39:37.210312   15635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:37.210763   15635 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:37.211277   15635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 19:39:37.211316   15635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:37.212277   15635 out.go:177]   - Using image docker.io/busybox:stable
	I0918 19:39:37.213533   15635 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0918 19:39:37.213560   15635 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0918 19:39:37.213578   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHHostname
	I0918 19:39:37.216384   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:37.216720   15635 main.go:141] libmachine: (addons-815929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b1:d6", ip: ""} in network mk-addons-815929: {Iface:virbr1 ExpiryTime:2024-09-18 20:39:08 +0000 UTC Type:0 Mac:52:54:00:11:b1:d6 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-815929 Clientid:01:52:54:00:11:b1:d6}
	I0918 19:39:37.216738   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined IP address 192.168.39.158 and MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:37.216779   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHPort
	I0918 19:39:37.216952   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHKeyPath
	I0918 19:39:37.217072   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHUsername
	I0918 19:39:37.217179   15635 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/addons-815929/id_rsa Username:docker}
	I0918 19:39:37.228878   15635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40725
	I0918 19:39:37.229369   15635 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:37.230046   15635 main.go:141] libmachine: Using API Version  1
	I0918 19:39:37.230074   15635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:37.230431   15635 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:37.230684   15635 main.go:141] libmachine: (addons-815929) Calling .GetState
	I0918 19:39:37.232228   15635 main.go:141] libmachine: (addons-815929) Calling .DriverName
	I0918 19:39:37.232509   15635 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0918 19:39:37.232528   15635 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0918 19:39:37.232547   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHHostname
	I0918 19:39:37.235855   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:37.236365   15635 main.go:141] libmachine: (addons-815929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b1:d6", ip: ""} in network mk-addons-815929: {Iface:virbr1 ExpiryTime:2024-09-18 20:39:08 +0000 UTC Type:0 Mac:52:54:00:11:b1:d6 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-815929 Clientid:01:52:54:00:11:b1:d6}
	I0918 19:39:37.236401   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined IP address 192.168.39.158 and MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:37.236588   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHPort
	I0918 19:39:37.236786   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHKeyPath
	I0918 19:39:37.236960   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHUsername
	I0918 19:39:37.237110   15635 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/addons-815929/id_rsa Username:docker}
	W0918 19:39:37.240137   15635 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:46756->192.168.39.158:22: read: connection reset by peer
	I0918 19:39:37.240169   15635 retry.go:31] will retry after 192.441386ms: ssh: handshake failed: read tcp 192.168.39.1:46756->192.168.39.158:22: read: connection reset by peer
	I0918 19:39:37.520783   15635 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0918 19:39:37.520783   15635 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0918 19:39:37.529703   15635 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0918 19:39:37.533235   15635 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0918 19:39:37.578015   15635 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0918 19:39:37.578038   15635 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0918 19:39:37.582283   15635 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0918 19:39:37.582310   15635 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0918 19:39:37.733970   15635 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0918 19:39:37.770032   15635 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0918 19:39:37.770057   15635 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0918 19:39:37.814514   15635 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0918 19:39:37.814546   15635 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0918 19:39:37.816619   15635 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0918 19:39:37.816636   15635 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0918 19:39:37.817489   15635 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0918 19:39:37.817508   15635 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0918 19:39:37.828765   15635 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0918 19:39:37.831161   15635 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0918 19:39:37.841293   15635 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0918 19:39:37.841341   15635 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0918 19:39:37.866270   15635 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0918 19:39:37.866300   15635 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0918 19:39:37.866300   15635 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0918 19:39:37.866320   15635 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0918 19:39:37.873023   15635 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0918 19:39:37.957968   15635 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0918 19:39:37.960217   15635 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0918 19:39:37.960242   15635 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0918 19:39:37.978264   15635 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0918 19:39:37.978296   15635 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0918 19:39:37.993929   15635 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0918 19:39:37.993959   15635 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0918 19:39:37.994429   15635 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0918 19:39:37.994444   15635 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0918 19:39:38.017387   15635 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0918 19:39:38.017418   15635 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0918 19:39:38.088277   15635 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0918 19:39:38.088303   15635 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0918 19:39:38.131818   15635 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0918 19:39:38.131848   15635 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0918 19:39:38.203126   15635 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0918 19:39:38.203154   15635 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0918 19:39:38.226489   15635 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0918 19:39:38.226526   15635 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0918 19:39:38.250324   15635 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0918 19:39:38.273276   15635 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0918 19:39:38.283323   15635 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0918 19:39:38.332008   15635 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0918 19:39:38.332058   15635 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0918 19:39:38.385633   15635 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0918 19:39:38.385664   15635 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0918 19:39:38.469197   15635 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0918 19:39:38.469230   15635 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0918 19:39:38.472759   15635 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0918 19:39:38.472785   15635 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0918 19:39:38.628857   15635 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0918 19:39:38.628886   15635 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0918 19:39:38.637712   15635 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0918 19:39:38.637741   15635 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0918 19:39:38.656333   15635 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0918 19:39:38.656366   15635 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0918 19:39:38.714144   15635 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0918 19:39:38.714168   15635 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0918 19:39:38.932471   15635 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0918 19:39:38.932511   15635 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0918 19:39:38.964592   15635 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0918 19:39:38.971990   15635 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0918 19:39:39.017042   15635 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0918 19:39:39.017073   15635 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0918 19:39:39.160724   15635 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0918 19:39:39.160756   15635 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0918 19:39:39.194791   15635 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0918 19:39:39.194821   15635 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0918 19:39:39.392439   15635 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0918 19:39:39.392461   15635 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0918 19:39:39.435551   15635 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0918 19:39:39.558272   15635 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0918 19:39:39.558296   15635 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0918 19:39:39.836142   15635 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0918 19:39:39.836167   15635 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0918 19:39:39.990546   15635 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.469638539s)
	I0918 19:39:39.990571   15635 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.469741333s)
	I0918 19:39:39.990600   15635 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0918 19:39:39.990604   15635 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.460853163s)
	I0918 19:39:39.990694   15635 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:39.990714   15635 main.go:141] libmachine: (addons-815929) Calling .Close
	I0918 19:39:39.990994   15635 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:39.991007   15635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 19:39:39.991015   15635 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:39.991022   15635 main.go:141] libmachine: (addons-815929) Calling .Close
	I0918 19:39:39.991348   15635 main.go:141] libmachine: (addons-815929) DBG | Closing plugin on server side
	I0918 19:39:39.991365   15635 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:39.991372   15635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 19:39:39.991593   15635 node_ready.go:35] waiting up to 6m0s for node "addons-815929" to be "Ready" ...
	I0918 19:39:40.004733   15635 node_ready.go:49] node "addons-815929" has status "Ready":"True"
	I0918 19:39:40.004757   15635 node_ready.go:38] duration metric: took 13.145596ms for node "addons-815929" to be "Ready" ...
	I0918 19:39:40.004768   15635 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0918 19:39:40.018964   15635 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-lr452" in "kube-system" namespace to be "Ready" ...
	I0918 19:39:40.314801   15635 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0918 19:39:40.509787   15635 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-815929" context rescaled to 1 replicas
	I0918 19:39:41.035157   15635 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.501885691s)
	I0918 19:39:41.035216   15635 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:41.035231   15635 main.go:141] libmachine: (addons-815929) Calling .Close
	I0918 19:39:41.035566   15635 main.go:141] libmachine: (addons-815929) DBG | Closing plugin on server side
	I0918 19:39:41.035605   15635 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:41.035619   15635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 19:39:41.035631   15635 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:41.035643   15635 main.go:141] libmachine: (addons-815929) Calling .Close
	I0918 19:39:41.035883   15635 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:41.035902   15635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 19:39:41.035907   15635 main.go:141] libmachine: (addons-815929) DBG | Closing plugin on server side
	I0918 19:39:42.108696   15635 pod_ready.go:103] pod "coredns-7c65d6cfc9-lr452" in "kube-system" namespace has status "Ready":"False"
	I0918 19:39:43.536656   15635 pod_ready.go:93] pod "coredns-7c65d6cfc9-lr452" in "kube-system" namespace has status "Ready":"True"
	I0918 19:39:43.536690   15635 pod_ready.go:82] duration metric: took 3.517697272s for pod "coredns-7c65d6cfc9-lr452" in "kube-system" namespace to be "Ready" ...
	I0918 19:39:43.536705   15635 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-p6827" in "kube-system" namespace to be "Ready" ...
	I0918 19:39:44.249408   15635 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0918 19:39:44.249450   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHHostname
	I0918 19:39:44.252925   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:44.253362   15635 main.go:141] libmachine: (addons-815929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b1:d6", ip: ""} in network mk-addons-815929: {Iface:virbr1 ExpiryTime:2024-09-18 20:39:08 +0000 UTC Type:0 Mac:52:54:00:11:b1:d6 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-815929 Clientid:01:52:54:00:11:b1:d6}
	I0918 19:39:44.253399   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined IP address 192.168.39.158 and MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:44.253700   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHPort
	I0918 19:39:44.253927   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHKeyPath
	I0918 19:39:44.254121   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHUsername
	I0918 19:39:44.254291   15635 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/addons-815929/id_rsa Username:docker}
	I0918 19:39:44.688107   15635 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0918 19:39:44.805145   15635 addons.go:234] Setting addon gcp-auth=true in "addons-815929"
	I0918 19:39:44.805206   15635 host.go:66] Checking if "addons-815929" exists ...
	I0918 19:39:44.805565   15635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 19:39:44.805610   15635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:44.822607   15635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38481
	I0918 19:39:44.823258   15635 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:44.823818   15635 main.go:141] libmachine: Using API Version  1
	I0918 19:39:44.823842   15635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:44.824190   15635 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:44.824669   15635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 19:39:44.824704   15635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 19:39:44.840858   15635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46495
	I0918 19:39:44.841389   15635 main.go:141] libmachine: () Calling .GetVersion
	I0918 19:39:44.841928   15635 main.go:141] libmachine: Using API Version  1
	I0918 19:39:44.841957   15635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 19:39:44.842262   15635 main.go:141] libmachine: () Calling .GetMachineName
	I0918 19:39:44.842449   15635 main.go:141] libmachine: (addons-815929) Calling .GetState
	I0918 19:39:44.844152   15635 main.go:141] libmachine: (addons-815929) Calling .DriverName
	I0918 19:39:44.844416   15635 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0918 19:39:44.844445   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHHostname
	I0918 19:39:44.847034   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:44.847375   15635 main.go:141] libmachine: (addons-815929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b1:d6", ip: ""} in network mk-addons-815929: {Iface:virbr1 ExpiryTime:2024-09-18 20:39:08 +0000 UTC Type:0 Mac:52:54:00:11:b1:d6 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-815929 Clientid:01:52:54:00:11:b1:d6}
	I0918 19:39:44.847408   15635 main.go:141] libmachine: (addons-815929) DBG | domain addons-815929 has defined IP address 192.168.39.158 and MAC address 52:54:00:11:b1:d6 in network mk-addons-815929
	I0918 19:39:44.847555   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHPort
	I0918 19:39:44.847716   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHKeyPath
	I0918 19:39:44.847869   15635 main.go:141] libmachine: (addons-815929) Calling .GetSSHUsername
	I0918 19:39:44.847967   15635 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/addons-815929/id_rsa Username:docker}
	I0918 19:39:45.554393   15635 pod_ready.go:103] pod "coredns-7c65d6cfc9-p6827" in "kube-system" namespace has status "Ready":"False"
	I0918 19:39:46.370997   15635 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.636984505s)
	I0918 19:39:46.371041   15635 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:46.371051   15635 main.go:141] libmachine: (addons-815929) Calling .Close
	I0918 19:39:46.371140   15635 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (8.542336234s)
	I0918 19:39:46.371200   15635 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:46.371213   15635 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.540023182s)
	I0918 19:39:46.371243   15635 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:46.371261   15635 main.go:141] libmachine: (addons-815929) Calling .Close
	I0918 19:39:46.371218   15635 main.go:141] libmachine: (addons-815929) Calling .Close
	I0918 19:39:46.371285   15635 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.498241115s)
	I0918 19:39:46.371313   15635 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:46.371329   15635 main.go:141] libmachine: (addons-815929) Calling .Close
	I0918 19:39:46.371344   15635 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.413335296s)
	I0918 19:39:46.371375   15635 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:46.371385   15635 main.go:141] libmachine: (addons-815929) Calling .Close
	I0918 19:39:46.371505   15635 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.121141782s)
	I0918 19:39:46.371534   15635 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:46.371545   15635 main.go:141] libmachine: (addons-815929) Calling .Close
	I0918 19:39:46.371623   15635 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.098312186s)
	I0918 19:39:46.371639   15635 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:46.371649   15635 main.go:141] libmachine: (addons-815929) Calling .Close
	I0918 19:39:46.371723   15635 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (8.088369116s)
	I0918 19:39:46.371745   15635 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:46.371754   15635 main.go:141] libmachine: (addons-815929) Calling .Close
	I0918 19:39:46.371830   15635 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.407202442s)
	I0918 19:39:46.371846   15635 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:46.371855   15635 main.go:141] libmachine: (addons-815929) Calling .Close
	I0918 19:39:46.371988   15635 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.399943132s)
	W0918 19:39:46.372035   15635 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0918 19:39:46.372077   15635 retry.go:31] will retry after 252.9912ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
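The "ensure CRDs are installed first" failure above is the usual race when a CRD and a custom resource of that kind are created in the same apply: the VolumeSnapshotClass is rejected because the API server has not yet registered the new kind, and the retry (and the later kubectl apply --force rerun) succeeds once it has. One standalone way to avoid the race is to apply the CRD manifests first, wait for their Established condition, and only then apply the snapshot class. The sketch below shells out to kubectl with the file paths from this log; it is illustrative and not minikube's own retry logic.

package main

import (
	"fmt"
	"os/exec"
)

// run shells out to kubectl and echoes its combined output.
func run(args ...string) error {
	out, err := exec.Command("kubectl", args...).CombinedOutput()
	fmt.Print(string(out))
	return err
}

func main() {
	// 1. Register the snapshot CRDs on their own first.
	crds := []string{
		"/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml",
		"/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml",
		"/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml",
	}
	for _, f := range crds {
		if err := run("apply", "-f", f); err != nil {
			panic(err)
		}
	}
	// 2. Wait until the API server marks the CRD Established, so the
	//    VolumeSnapshotClass kind is actually servable.
	if err := run("wait", "--for=condition=Established",
		"crd/volumesnapshotclasses.snapshot.storage.k8s.io", "--timeout=60s"); err != nil {
		panic(err)
	}
	// 3. Only now apply the resource that depends on the CRD.
	if err := run("apply", "-f", "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml"); err != nil {
		panic(err)
	}
}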
	I0918 19:39:46.372176   15635 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.936592442s)
	I0918 19:39:46.372198   15635 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:46.372207   15635 main.go:141] libmachine: (addons-815929) Calling .Close
	I0918 19:39:46.374316   15635 main.go:141] libmachine: (addons-815929) DBG | Closing plugin on server side
	I0918 19:39:46.374333   15635 main.go:141] libmachine: (addons-815929) DBG | Closing plugin on server side
	I0918 19:39:46.374334   15635 main.go:141] libmachine: (addons-815929) DBG | Closing plugin on server side
	I0918 19:39:46.374348   15635 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:46.374354   15635 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:46.374357   15635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 19:39:46.374361   15635 main.go:141] libmachine: (addons-815929) DBG | Closing plugin on server side
	I0918 19:39:46.374362   15635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 19:39:46.374370   15635 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:46.374376   15635 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:46.374379   15635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 19:39:46.374385   15635 main.go:141] libmachine: (addons-815929) Calling .Close
	I0918 19:39:46.374389   15635 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:46.374396   15635 main.go:141] libmachine: (addons-815929) Calling .Close
	I0918 19:39:46.374475   15635 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:46.374483   15635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 19:39:46.374492   15635 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:46.374499   15635 main.go:141] libmachine: (addons-815929) Calling .Close
	I0918 19:39:46.374549   15635 main.go:141] libmachine: (addons-815929) DBG | Closing plugin on server side
	I0918 19:39:46.374573   15635 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:46.374582   15635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 19:39:46.374590   15635 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:46.374596   15635 main.go:141] libmachine: (addons-815929) Calling .Close
	I0918 19:39:46.374639   15635 main.go:141] libmachine: (addons-815929) DBG | Closing plugin on server side
	I0918 19:39:46.374657   15635 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:46.374663   15635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 19:39:46.374670   15635 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:46.374676   15635 main.go:141] libmachine: (addons-815929) Calling .Close
	I0918 19:39:46.374713   15635 main.go:141] libmachine: (addons-815929) DBG | Closing plugin on server side
	I0918 19:39:46.374846   15635 main.go:141] libmachine: (addons-815929) DBG | Closing plugin on server side
	I0918 19:39:46.374866   15635 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:46.374873   15635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 19:39:46.374878   15635 main.go:141] libmachine: (addons-815929) DBG | Closing plugin on server side
	I0918 19:39:46.374883   15635 addons.go:475] Verifying addon registry=true in "addons-815929"
	I0918 19:39:46.374923   15635 main.go:141] libmachine: (addons-815929) DBG | Closing plugin on server side
	I0918 19:39:46.374366   15635 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:46.374938   15635 main.go:141] libmachine: (addons-815929) Calling .Close
	I0918 19:39:46.375214   15635 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:46.375231   15635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 19:39:46.375240   15635 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:46.375247   15635 main.go:141] libmachine: (addons-815929) Calling .Close
	I0918 19:39:46.375333   15635 main.go:141] libmachine: (addons-815929) DBG | Closing plugin on server side
	I0918 19:39:46.375384   15635 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:46.375393   15635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 19:39:46.375626   15635 main.go:141] libmachine: (addons-815929) DBG | Closing plugin on server side
	I0918 19:39:46.375651   15635 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:46.375660   15635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 19:39:46.375836   15635 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:46.375848   15635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 19:39:46.375961   15635 main.go:141] libmachine: (addons-815929) DBG | Closing plugin on server side
	I0918 19:39:46.375994   15635 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:46.376000   15635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 19:39:46.376222   15635 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:46.376235   15635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 19:39:46.376264   15635 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:46.376278   15635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 19:39:46.376287   15635 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:46.376294   15635 main.go:141] libmachine: (addons-815929) Calling .Close
	I0918 19:39:46.376316   15635 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:46.376350   15635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 19:39:46.376358   15635 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:46.376365   15635 main.go:141] libmachine: (addons-815929) Calling .Close
	I0918 19:39:46.376224   15635 main.go:141] libmachine: (addons-815929) DBG | Closing plugin on server side
	I0918 19:39:46.376564   15635 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:46.376572   15635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 19:39:46.376579   15635 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:46.376587   15635 main.go:141] libmachine: (addons-815929) Calling .Close
	I0918 19:39:46.376681   15635 main.go:141] libmachine: (addons-815929) DBG | Closing plugin on server side
	I0918 19:39:46.376710   15635 main.go:141] libmachine: (addons-815929) DBG | Closing plugin on server side
	I0918 19:39:46.376716   15635 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:46.376725   15635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 19:39:46.376730   15635 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:46.376737   15635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 19:39:46.376736   15635 addons.go:475] Verifying addon ingress=true in "addons-815929"
	I0918 19:39:46.376903   15635 main.go:141] libmachine: (addons-815929) DBG | Closing plugin on server side
	I0918 19:39:46.376938   15635 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:46.376950   15635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 19:39:46.377421   15635 main.go:141] libmachine: (addons-815929) DBG | Closing plugin on server side
	I0918 19:39:46.377456   15635 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:46.377466   15635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 19:39:46.377475   15635 addons.go:475] Verifying addon metrics-server=true in "addons-815929"
	I0918 19:39:46.379309   15635 out.go:177] * Verifying registry addon...
	I0918 19:39:46.380107   15635 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-815929 service yakd-dashboard -n yakd-dashboard
	
	I0918 19:39:46.380116   15635 out.go:177] * Verifying ingress addon...
	I0918 19:39:46.381716   15635 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0918 19:39:46.382703   15635 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0918 19:39:46.442984   15635 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0918 19:39:46.443008   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:46.444566   15635 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0918 19:39:46.444589   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:46.448430   15635 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:46.448452   15635 main.go:141] libmachine: (addons-815929) Calling .Close
	I0918 19:39:46.448784   15635 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:46.448805   15635 main.go:141] libmachine: Making call to close connection to plugin binary
	W0918 19:39:46.448896   15635 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
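The storage-provisioner-rancher warning is an ordinary optimistic-concurrency conflict: the local-path StorageClass was modified by something else between the addon's read and its write while being marked default. The usual client-go remedy is to re-read and retry the update on conflict. The sketch below uses the real is-default-class annotation key, but the rest (standalone program, kubeconfig path) is illustrative and not the addon's code.

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/retry"
)

// markDefault re-reads the StorageClass on every attempt, so a concurrent
// update by another controller just causes another round instead of the
// "object has been modified" failure seen above.
func markDefault(ctx context.Context, cs *kubernetes.Clientset, name string) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		sc, err := cs.StorageV1().StorageClasses().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		if sc.Annotations == nil {
			sc.Annotations = map[string]string{}
		}
		sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
		_, err = cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
		return err
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := markDefault(context.Background(), cs, "local-path"); err != nil {
		panic(err)
	}
}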
	I0918 19:39:46.455634   15635 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:46.455659   15635 main.go:141] libmachine: (addons-815929) Calling .Close
	I0918 19:39:46.455916   15635 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:46.455934   15635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 19:39:46.625453   15635 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0918 19:39:46.891556   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:46.891905   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:47.249900   15635 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.935039531s)
	I0918 19:39:47.249959   15635 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:47.249978   15635 main.go:141] libmachine: (addons-815929) Calling .Close
	I0918 19:39:47.249996   15635 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.405553986s)
	I0918 19:39:47.250263   15635 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:47.250285   15635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 19:39:47.250291   15635 main.go:141] libmachine: (addons-815929) DBG | Closing plugin on server side
	I0918 19:39:47.250295   15635 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:47.250310   15635 main.go:141] libmachine: (addons-815929) Calling .Close
	I0918 19:39:47.250600   15635 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:47.250616   15635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 19:39:47.250626   15635 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-815929"
	I0918 19:39:47.250628   15635 main.go:141] libmachine: (addons-815929) DBG | Closing plugin on server side
	I0918 19:39:47.252725   15635 out.go:177] * Verifying csi-hostpath-driver addon...
	I0918 19:39:47.252729   15635 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0918 19:39:47.255488   15635 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0918 19:39:47.256476   15635 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0918 19:39:47.257160   15635 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0918 19:39:47.257179   15635 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0918 19:39:47.266081   15635 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0918 19:39:47.266118   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:47.352351   15635 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0918 19:39:47.352379   15635 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0918 19:39:47.382654   15635 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0918 19:39:47.382683   15635 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0918 19:39:47.400310   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:47.400779   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:47.466002   15635 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0918 19:39:47.762194   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:47.887466   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:47.888155   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:48.042818   15635 pod_ready.go:103] pod "coredns-7c65d6cfc9-p6827" in "kube-system" namespace has status "Ready":"False"
	I0918 19:39:48.150956   15635 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.525434984s)
	I0918 19:39:48.151014   15635 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:48.151031   15635 main.go:141] libmachine: (addons-815929) Calling .Close
	I0918 19:39:48.151273   15635 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:48.151328   15635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 19:39:48.151343   15635 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:48.151350   15635 main.go:141] libmachine: (addons-815929) Calling .Close
	I0918 19:39:48.151297   15635 main.go:141] libmachine: (addons-815929) DBG | Closing plugin on server side
	I0918 19:39:48.151627   15635 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:48.151645   15635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 19:39:48.262278   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:48.386162   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:48.388035   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:48.772137   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:48.928973   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:48.931748   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:49.012611   15635 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.546559646s)
	I0918 19:39:49.012680   15635 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:49.012710   15635 main.go:141] libmachine: (addons-815929) Calling .Close
	I0918 19:39:49.013006   15635 main.go:141] libmachine: (addons-815929) DBG | Closing plugin on server side
	I0918 19:39:49.013065   15635 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:49.013099   15635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 19:39:49.013113   15635 main.go:141] libmachine: Making call to close driver server
	I0918 19:39:49.013124   15635 main.go:141] libmachine: (addons-815929) Calling .Close
	I0918 19:39:49.013450   15635 main.go:141] libmachine: Successfully made call to close driver server
	I0918 19:39:49.013486   15635 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 19:39:49.015257   15635 addons.go:475] Verifying addon gcp-auth=true in "addons-815929"
	I0918 19:39:49.017437   15635 out.go:177] * Verifying gcp-auth addon...
	I0918 19:39:49.019355   15635 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0918 19:39:49.079848   15635 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0918 19:39:49.079876   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:39:49.263588   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:49.386599   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:49.386930   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:49.524123   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:39:49.546553   15635 pod_ready.go:98] pod "coredns-7c65d6cfc9-p6827" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-18 19:39:49 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-18 19:39:37 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-18 19:39:37 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-18 19:39:37 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-18 19:39:37 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.158 HostIPs:[{IP:192.168.39.158}] PodIP:10.244.0.3 PodIPs:[{IP:10.244.0.3}] StartTime:2024-09-18 19:39:37 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-18 19:39:42 +0000 UTC,FinishedAt:2024-09-18 19:39:48 +0000 UTC,ContainerID:cri-o://d30d881ad2efefa3975ae112f0b3f4ed34e2c3e9169e1bab215cb4cba955008e,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://d30d881ad2efefa3975ae112f0b3f4ed34e2c3e9169e1bab215cb4cba955008e Started:0xc001efb2c0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc0022a0c40} {Name:kube-api-access-cpn6n MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc0022a0c50}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0918 19:39:49.546588   15635 pod_ready.go:82] duration metric: took 6.009874416s for pod "coredns-7c65d6cfc9-p6827" in "kube-system" namespace to be "Ready" ...
	E0918 19:39:49.546603   15635 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-7c65d6cfc9-p6827" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-18 19:39:49 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-18 19:39:37 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-18 19:39:37 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-18 19:39:37 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-18 19:39:37 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.158 HostIPs:[{IP:192.168.39.158}] PodIP:10.244.0.3 PodIPs:[{IP:10.244.0.3}] StartTime:2024-09-18 19:39:37 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-18 19:39:42 +0000 UTC,FinishedAt:2024-09-18 19:39:48 +0000 UTC,ContainerID:cri-o://d30d881ad2efefa3975ae112f0b3f4ed34e2c3e9169e1bab215cb4cba955008e,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://d30d881ad2efefa3975ae112f0b3f4ed34e2c3e9169e1bab215cb4cba955008e Started:0xc001efb2c0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc0022a0c40} {Name:kube-api-access-cpn6n MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc0022a0c50}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0918 19:39:49.546621   15635 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-815929" in "kube-system" namespace to be "Ready" ...
	I0918 19:39:49.567559   15635 pod_ready.go:93] pod "etcd-addons-815929" in "kube-system" namespace has status "Ready":"True"
	I0918 19:39:49.567588   15635 pod_ready.go:82] duration metric: took 20.955221ms for pod "etcd-addons-815929" in "kube-system" namespace to be "Ready" ...
	I0918 19:39:49.567598   15635 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-815929" in "kube-system" namespace to be "Ready" ...
	I0918 19:39:49.574966   15635 pod_ready.go:93] pod "kube-apiserver-addons-815929" in "kube-system" namespace has status "Ready":"True"
	I0918 19:39:49.574994   15635 pod_ready.go:82] duration metric: took 7.38881ms for pod "kube-apiserver-addons-815929" in "kube-system" namespace to be "Ready" ...
	I0918 19:39:49.575009   15635 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-815929" in "kube-system" namespace to be "Ready" ...
	I0918 19:39:49.582171   15635 pod_ready.go:93] pod "kube-controller-manager-addons-815929" in "kube-system" namespace has status "Ready":"True"
	I0918 19:39:49.582197   15635 pod_ready.go:82] duration metric: took 7.179565ms for pod "kube-controller-manager-addons-815929" in "kube-system" namespace to be "Ready" ...
	I0918 19:39:49.582207   15635 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-pqt4n" in "kube-system" namespace to be "Ready" ...
	I0918 19:39:49.590756   15635 pod_ready.go:93] pod "kube-proxy-pqt4n" in "kube-system" namespace has status "Ready":"True"
	I0918 19:39:49.590786   15635 pod_ready.go:82] duration metric: took 8.57165ms for pod "kube-proxy-pqt4n" in "kube-system" namespace to be "Ready" ...
	I0918 19:39:49.590800   15635 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-815929" in "kube-system" namespace to be "Ready" ...
	I0918 19:39:49.761078   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:49.887586   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:49.887848   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:49.941378   15635 pod_ready.go:93] pod "kube-scheduler-addons-815929" in "kube-system" namespace has status "Ready":"True"
	I0918 19:39:49.941403   15635 pod_ready.go:82] duration metric: took 350.596076ms for pod "kube-scheduler-addons-815929" in "kube-system" namespace to be "Ready" ...
	I0918 19:39:49.941414   15635 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-rvssn" in "kube-system" namespace to be "Ready" ...
	I0918 19:39:50.023296   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:39:50.262472   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:50.386706   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:50.387374   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:50.523109   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:39:50.762340   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:50.885849   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:50.886679   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:51.023386   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:39:51.261021   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:51.386809   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:51.387671   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:51.524078   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:39:51.760280   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:51.886917   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:51.887197   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:51.949053   15635 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-rvssn" in "kube-system" namespace has status "Ready":"False"
	I0918 19:39:52.023505   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:39:52.261214   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:52.385448   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:52.387823   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:52.522732   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:39:52.977102   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:52.977482   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:52.977880   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:53.022497   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:39:53.262850   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:53.388253   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:53.389257   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:53.523172   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:39:53.766469   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:53.890155   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:53.890309   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:53.949275   15635 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-rvssn" in "kube-system" namespace has status "Ready":"False"
	I0918 19:39:54.023129   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:39:54.260967   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:54.387271   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:54.387324   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:54.522450   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:39:54.762114   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:54.886263   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:54.886718   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:55.023055   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:39:55.262254   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:55.387141   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:55.387313   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:55.522239   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:39:55.761296   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:55.886317   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:55.886679   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:56.023100   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:39:56.261495   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:56.385260   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:56.386259   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:56.447336   15635 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-rvssn" in "kube-system" namespace has status "Ready":"False"
	I0918 19:39:56.523265   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:39:56.761818   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:56.885802   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:56.887031   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:57.022996   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:39:57.261375   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:57.388082   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:57.389199   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:57.536872   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:39:57.762269   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:57.887305   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:57.889861   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:58.023455   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:39:58.262414   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:58.385419   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:58.387689   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:58.447488   15635 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-rvssn" in "kube-system" namespace has status "Ready":"False"
	I0918 19:39:58.523505   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:39:58.761358   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:58.887588   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:58.887675   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:59.023310   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:39:59.261446   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:59.387083   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:59.387736   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:59.523936   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:39:59.761153   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:59.886378   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:59.886953   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:00.023551   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:00.261578   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:00.385740   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:00.387538   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:00.523033   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:00.761124   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:00.901613   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:40:00.904385   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:00.949037   15635 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-rvssn" in "kube-system" namespace has status "Ready":"False"
	(kapi.go:96, 19:40:01.023 to 19:40:03.395: same four selectors polled every ~250 ms, all still Pending: [<nil>])
	I0918 19:40:03.451681   15635 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-rvssn" in "kube-system" namespace has status "Ready":"False"
	(kapi.go:96, 19:40:03.522 to 19:40:05.888: same four selectors polled every ~250 ms, all still Pending: [<nil>])
	I0918 19:40:05.947082   15635 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-rvssn" in "kube-system" namespace has status "Ready":"True"
	I0918 19:40:05.947114   15635 pod_ready.go:82] duration metric: took 16.005692748s for pod "nvidia-device-plugin-daemonset-rvssn" in "kube-system" namespace to be "Ready" ...
	I0918 19:40:05.947126   15635 pod_ready.go:39] duration metric: took 25.942342862s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0918 19:40:05.947145   15635 api_server.go:52] waiting for apiserver process to appear ...
	I0918 19:40:05.947207   15635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 19:40:05.964600   15635 api_server.go:72] duration metric: took 28.935412924s to wait for apiserver process to appear ...
	I0918 19:40:05.964629   15635 api_server.go:88] waiting for apiserver healthz status ...
	I0918 19:40:05.964653   15635 api_server.go:253] Checking apiserver healthz at https://192.168.39.158:8443/healthz ...
	I0918 19:40:05.971057   15635 api_server.go:279] https://192.168.39.158:8443/healthz returned 200:
	ok
	I0918 19:40:05.971991   15635 api_server.go:141] control plane version: v1.31.1
	I0918 19:40:05.972031   15635 api_server.go:131] duration metric: took 7.377749ms to wait for apiserver health ...
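The healthz probe recorded just above is a plain HTTPS GET against the apiserver endpoint. A minimal sketch of the same check, using the address from this run and skipping CA verification purely for illustration (a real check should trust the cluster CA instead):

```go
// Sketch only: replays the healthz probe logged above (api_server.go:253).
// InsecureSkipVerify stands in for proper verification against the cluster CA.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.39.158:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.StatusCode, string(body)) // a healthy control plane answers: 200 ok
}
```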
	I0918 19:40:05.972043   15635 system_pods.go:43] waiting for kube-system pods to appear ...
	I0918 19:40:05.981465   15635 system_pods.go:59] 18 kube-system pods found
	I0918 19:40:05.981498   15635 system_pods.go:61] "coredns-7c65d6cfc9-lr452" [ce99a83b-0924-4fe4-9a52-4c3400846319] Running
	I0918 19:40:05.981508   15635 system_pods.go:61] "csi-hostpath-attacher-0" [35c12d0e-5b48-4b7d-ba59-4a4c10501739] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0918 19:40:05.981516   15635 system_pods.go:61] "csi-hostpath-resizer-0" [888fc926-7f0f-445a-ad0d-196d1e4a131e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0918 19:40:05.981528   15635 system_pods.go:61] "csi-hostpathplugin-tndql" [f9b32e85-54dc-4219-b8f2-ccd81d61ca01] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0918 19:40:05.981534   15635 system_pods.go:61] "etcd-addons-815929" [74c62370-8b66-4518-8839-5ce337d8ed18] Running
	I0918 19:40:05.981538   15635 system_pods.go:61] "kube-apiserver-addons-815929" [4b802a7c-d79a-4778-b93c-fb2eddfd3103] Running
	I0918 19:40:05.981541   15635 system_pods.go:61] "kube-controller-manager-addons-815929" [0478f6c2-35df-4091-9be5-2f739c29a169] Running
	I0918 19:40:05.981545   15635 system_pods.go:61] "kube-ingress-dns-minikube" [9660f591-df30-4595-ad30-0b79d840f779] Running
	I0918 19:40:05.981549   15635 system_pods.go:61] "kube-proxy-pqt4n" [f0634583-edcc-434a-9062-5511ff79a084] Running
	I0918 19:40:05.981552   15635 system_pods.go:61] "kube-scheduler-addons-815929" [577bc872-21cc-4a90-82e1-7552ce7eeb7c] Running
	I0918 19:40:05.981558   15635 system_pods.go:61] "metrics-server-84c5f94fbc-fvm48" [20825ca5-3044-4221-84bc-6fc04d1038fd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0918 19:40:05.981564   15635 system_pods.go:61] "nvidia-device-plugin-daemonset-rvssn" [eed41d4f-f686-4590-959b-344ece686560] Running
	I0918 19:40:05.981570   15635 system_pods.go:61] "registry-66c9cd494c-96wcm" [170420dc-8ea6-4aba-99c1-9f61d4449fff] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0918 19:40:05.981575   15635 system_pods.go:61] "registry-proxy-jwxzj" [5ee21740-39f3-406e-bb72-65a28c5b5dde] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0918 19:40:05.981584   15635 system_pods.go:61] "snapshot-controller-56fcc65765-22mlv" [5197a1d6-f767-4030-b870-5fdd325589d2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0918 19:40:05.981590   15635 system_pods.go:61] "snapshot-controller-56fcc65765-dzxnk" [28036305-c26d-4a1d-aa47-04d577b32c35] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0918 19:40:05.981596   15635 system_pods.go:61] "storage-provisioner" [a1b4202d-fa9c-4f1b-9a34-40ef0da0d6e8] Running
	I0918 19:40:05.981601   15635 system_pods.go:61] "tiller-deploy-b48cc5f79-8nbwq" [4d5df6b3-4cf1-4ce5-9cb4-2983f9aa2728] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0918 19:40:05.981609   15635 system_pods.go:74] duration metric: took 9.560439ms to wait for pod list to return data ...
	I0918 19:40:05.981619   15635 default_sa.go:34] waiting for default service account to be created ...
	I0918 19:40:05.984361   15635 default_sa.go:45] found service account: "default"
	I0918 19:40:05.984393   15635 default_sa.go:55] duration metric: took 2.768053ms for default service account to be created ...
	I0918 19:40:05.984403   15635 system_pods.go:116] waiting for k8s-apps to be running ...
	I0918 19:40:05.992866   15635 system_pods.go:86] 18 kube-system pods found
	I0918 19:40:05.992896   15635 system_pods.go:89] "coredns-7c65d6cfc9-lr452" [ce99a83b-0924-4fe4-9a52-4c3400846319] Running
	I0918 19:40:05.992905   15635 system_pods.go:89] "csi-hostpath-attacher-0" [35c12d0e-5b48-4b7d-ba59-4a4c10501739] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0918 19:40:05.992913   15635 system_pods.go:89] "csi-hostpath-resizer-0" [888fc926-7f0f-445a-ad0d-196d1e4a131e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0918 19:40:05.992919   15635 system_pods.go:89] "csi-hostpathplugin-tndql" [f9b32e85-54dc-4219-b8f2-ccd81d61ca01] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0918 19:40:05.992924   15635 system_pods.go:89] "etcd-addons-815929" [74c62370-8b66-4518-8839-5ce337d8ed18] Running
	I0918 19:40:05.992928   15635 system_pods.go:89] "kube-apiserver-addons-815929" [4b802a7c-d79a-4778-b93c-fb2eddfd3103] Running
	I0918 19:40:05.992932   15635 system_pods.go:89] "kube-controller-manager-addons-815929" [0478f6c2-35df-4091-9be5-2f739c29a169] Running
	I0918 19:40:05.992937   15635 system_pods.go:89] "kube-ingress-dns-minikube" [9660f591-df30-4595-ad30-0b79d840f779] Running
	I0918 19:40:05.992940   15635 system_pods.go:89] "kube-proxy-pqt4n" [f0634583-edcc-434a-9062-5511ff79a084] Running
	I0918 19:40:05.992944   15635 system_pods.go:89] "kube-scheduler-addons-815929" [577bc872-21cc-4a90-82e1-7552ce7eeb7c] Running
	I0918 19:40:05.992949   15635 system_pods.go:89] "metrics-server-84c5f94fbc-fvm48" [20825ca5-3044-4221-84bc-6fc04d1038fd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0918 19:40:05.992956   15635 system_pods.go:89] "nvidia-device-plugin-daemonset-rvssn" [eed41d4f-f686-4590-959b-344ece686560] Running
	I0918 19:40:05.992962   15635 system_pods.go:89] "registry-66c9cd494c-96wcm" [170420dc-8ea6-4aba-99c1-9f61d4449fff] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0918 19:40:05.992970   15635 system_pods.go:89] "registry-proxy-jwxzj" [5ee21740-39f3-406e-bb72-65a28c5b5dde] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0918 19:40:05.992975   15635 system_pods.go:89] "snapshot-controller-56fcc65765-22mlv" [5197a1d6-f767-4030-b870-5fdd325589d2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0918 19:40:05.992982   15635 system_pods.go:89] "snapshot-controller-56fcc65765-dzxnk" [28036305-c26d-4a1d-aa47-04d577b32c35] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0918 19:40:05.992988   15635 system_pods.go:89] "storage-provisioner" [a1b4202d-fa9c-4f1b-9a34-40ef0da0d6e8] Running
	I0918 19:40:05.992993   15635 system_pods.go:89] "tiller-deploy-b48cc5f79-8nbwq" [4d5df6b3-4cf1-4ce5-9cb4-2983f9aa2728] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0918 19:40:05.993002   15635 system_pods.go:126] duration metric: took 8.592753ms to wait for k8s-apps to be running ...
	I0918 19:40:05.993011   15635 system_svc.go:44] waiting for kubelet service to be running ....
	I0918 19:40:05.993062   15635 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0918 19:40:06.007851   15635 system_svc.go:56] duration metric: took 14.818536ms WaitForService to wait for kubelet
	I0918 19:40:06.007886   15635 kubeadm.go:582] duration metric: took 28.978706928s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0918 19:40:06.007906   15635 node_conditions.go:102] verifying NodePressure condition ...
	I0918 19:40:06.010681   15635 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0918 19:40:06.010706   15635 node_conditions.go:123] node cpu capacity is 2
	I0918 19:40:06.010717   15635 node_conditions.go:105] duration metric: took 2.806111ms to run NodePressure ...
	I0918 19:40:06.010733   15635 start.go:241] waiting for startup goroutines ...
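The kapi.go:96 entries throughout this log are minikube's label-selector readiness polls. A rough client-go sketch of that kind of loop follows; the selector, namespace, kubeconfig path and 250 ms cadence are illustrative assumptions, not values lifted from minikube's own code:

```go
// Sketch only: polls a label selector until every matching pod reports Ready.
// Assumes a reachable kubeconfig at the default location; the selector is illustrative.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func allReady(pods []corev1.Pod) bool {
	for _, p := range pods {
		ready := false
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		if !ready {
			return false
		}
	}
	return true
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	selector := "kubernetes.io/minikube-addons=registry" // illustrative
	for {
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
			metav1.ListOptions{LabelSelector: selector})
		if err == nil && len(pods.Items) > 0 && allReady(pods.Items) {
			fmt.Println("all pods Ready for", selector)
			return
		}
		time.Sleep(250 * time.Millisecond) // roughly the cadence seen in this log
	}
}
```

In this run the registry selector cleared at 19:40:34 (about 48 s, see below), while the gcp-auth, csi-hostpath-driver and ingress-nginx selectors were still Pending at the end of this excerpt.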
	(kapi.go:96, 19:40:06.023 to 19:40:34.262: the gcp-auth, csi-hostpath-driver, registry and ingress-nginx selectors kept polling every ~250 ms, all still Pending: [<nil>])
	I0918 19:40:34.386385   15635 kapi.go:107] duration metric: took 48.00466589s to wait for kubernetes.io/minikube-addons=registry ...
	(kapi.go:96, 19:40:34.387 to 19:40:51.887: with the registry wait finished, polling continued for gcp-auth, csi-hostpath-driver and ingress-nginx every ~250 ms, all still Pending: [<nil>])
	I0918 19:40:52.022677   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:52.263450   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:52.388431   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:52.523149   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:52.763152   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:52.902779   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:53.024293   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:53.261509   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:53.386654   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:53.523350   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:53.790983   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:53.886920   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:54.029870   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:54.261998   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:54.386998   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:54.523404   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:54.762135   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:54.889645   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:55.023574   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:55.261586   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:55.799628   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:55.800153   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:55.800272   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:55.887540   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:56.023456   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:56.262164   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:56.387474   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:56.522936   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:56.760920   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:56.887129   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:57.022637   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:57.261192   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:57.387888   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:57.523659   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:57.761302   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:57.887216   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:58.022541   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:58.261223   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:58.386957   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:58.523331   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:58.762168   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:58.886618   15635 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:59.023205   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:59.262141   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:59.387428   15635 kapi.go:107] duration metric: took 1m13.004718276s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0918 19:40:59.524360   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:40:59.762283   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:41:00.024053   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:00.262681   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:41:00.522704   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:00.760702   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:41:01.023661   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:01.260993   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:41:01.523442   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:01.762425   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:41:02.023110   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:02.265384   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:41:02.527771   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:02.761127   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:41:03.022885   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:03.260335   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:41:03.522913   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:03.761077   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:41:04.022763   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:04.263630   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:41:04.523144   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:04.761725   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:41:05.022991   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 19:41:05.261573   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:41:05.523927   15635 kapi.go:107] duration metric: took 1m16.504569327s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0918 19:41:05.526416   15635 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-815929 cluster.
	I0918 19:41:05.527994   15635 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0918 19:41:05.529367   15635 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0918 19:41:05.761527   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:41:06.266297   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:41:06.761123   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:41:07.260618   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:41:07.761457   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:41:08.260850   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:41:08.761648   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:41:09.260937   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:41:09.763235   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:41:10.264930   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:41:10.762866   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:41:11.262554   15635 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:41:11.762641   15635 kapi.go:107] duration metric: took 1m24.506164382s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0918 19:41:11.764555   15635 out.go:177] * Enabled addons: cloud-spanner, ingress-dns, inspektor-gadget, helm-tiller, storage-provisioner, nvidia-device-plugin, metrics-server, yakd, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0918 19:41:11.765613   15635 addons.go:510] duration metric: took 1m34.736385177s for enable addons: enabled=[cloud-spanner ingress-dns inspektor-gadget helm-tiller storage-provisioner nvidia-device-plugin metrics-server yakd default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0918 19:41:11.765657   15635 start.go:246] waiting for cluster config update ...
	I0918 19:41:11.765680   15635 start.go:255] writing updated cluster config ...
	I0918 19:41:11.765982   15635 ssh_runner.go:195] Run: rm -f paused
	I0918 19:41:11.816314   15635 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0918 19:41:11.818785   15635 out.go:177] * Done! kubectl is now configured to use "addons-815929" cluster and "default" namespace by default
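	Note on the gcp-auth messages above: once the addon is ready, the webhook mounts GCP credentials into every newly created pod unless that pod carries the `gcp-auth-skip-secret` label, and existing pods only pick up credentials after being recreated or after rerunning addons enable with --refresh. A minimal sketch of opting a pod out, using the Kubernetes Python client; the pod name, image, and the label value "true" are illustrative assumptions (only the label key comes from the log above):

	# Sketch: create a pod that the minikube gcp-auth webhook should skip.
	# Assumption: label value "true" is used here; only the key `gcp-auth-skip-secret`
	# is documented in the addon message above.
	from kubernetes import client, config

	def create_skipped_pod(namespace: str = "default") -> None:
	    # Load credentials from the local kubeconfig (e.g. the one minikube just wrote).
	    config.load_kube_config()
	    pod = client.V1Pod(
	        metadata=client.V1ObjectMeta(
	            name="no-gcp-creds",                      # hypothetical pod name
	            labels={"gcp-auth-skip-secret": "true"},  # opt this pod out of credential mounting
	        ),
	        spec=client.V1PodSpec(
	            containers=[
	                client.V1Container(name="app", image="busybox", command=["sleep", "3600"]),
	            ],
	        ),
	    )
	    client.CoreV1Api().create_namespaced_pod(namespace=namespace, body=pod)

	if __name__ == "__main__":
	    create_skipped_pod()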
	
	
	==> CRI-O <==
	Sep 18 19:54:40 addons-815929 crio[660]: time="2024-09-18 19:54:40.572333243Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726689280572293039,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579800,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=29dcd2b3-682e-46bf-aa61-c0c01be4685a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 19:54:40 addons-815929 crio[660]: time="2024-09-18 19:54:40.576809745Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bc1cd0b3-5f9d-4cc5-8d06-6d5c2fa72a77 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 19:54:40 addons-815929 crio[660]: time="2024-09-18 19:54:40.576972743Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bc1cd0b3-5f9d-4cc5-8d06-6d5c2fa72a77 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 19:54:40 addons-815929 crio[660]: time="2024-09-18 19:54:40.577304041Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a73634fe0b5696c3769ed8462b5108653e982571f2c36b6389db0ad4cf2d1e0d,PodSandboxId:691f84bd2d65dd7aa08e76637e924e2ae7daf15a84048a0e2c694243f398475b,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1726689165321119254,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-qqrwc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f887e2d6-f352-42de-b6c8-bf994f11b057,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee08a5ec3a5132463c0f585d72fb18c317c0a695f08f0e275e18e05ef4428cbf,PodSandboxId:1c669dd18bc972b33b71c0d9d6876a7f13ebd97ba81189204a3a3568718e94c1,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1726689026739213654,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c5107435-d0c7-4308-88a9-d0fc42111e5e,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b7f6341e501fa110fadd9a0a0001bcb33871d3bc4ab563f0aa03fc284fcd161,PodSandboxId:01e76804f1f1516fcb14a4c7e4c0a18197fc9ac156b68e935033da704200b78a,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:65a75b550f62ee651071b602f3d2c1daf0362638f620743c61b25eb4a1759f0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:41eb637aa779284762db7a79fac77894d8fc6d967404e9c7f0760cb4c97a4766,State:CONTAINER_RUNNING,CreatedAt:1726689020241923244,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7b5c95b59d-6t8xs,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.ui
d: f6f2d55f-b3e7-44c7-a00d-e99861a3846e,},Annotations:map[string]string{io.kubernetes.container.hash: 8f6f6c99,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:172ef2c9c611dd5748901516fac554cdb8f031212b96ee13467bda756557a347,PodSandboxId:915e30c1ffac764546e1016e7d844dba605fd5479681fe534c1f9029f5feba8b,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726688465027006829,Labels:map[string]string{io.kubernetes.container.name: gcp-aut
h,io.kubernetes.pod.name: gcp-auth-89d5ffd79-fm986,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: e2301096-da7b-42be-8816-73101bc30414,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5437d1207356580b131f78a5ed6a838e146f94fbe52be5ca0620cd6ac81bdcc,PodSandboxId:a99ddd38ed1033622f0652deaa3e9940c40dadfb2f97de56ef4763ef83dce64b,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:C
ONTAINER_RUNNING,CreatedAt:1726688416783940148,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-vr6hr,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 9c12919f-751d-4621-8776-2f28c168c022,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6109c3afb8acc80aa959e4eaf358de8318644f8165612661f8aab6ab0fc3b7fd,PodSandboxId:13e8766f7460e88ac5ebdfce2b89530694d3ef7061d9d210bd779e3cac2a787b,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cf
aaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1726688413970410070,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-fvm48,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20825ca5-3044-4221-84bc-6fc04d1038fd,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3759671f1017e4877f68e8f02b9e88508a0b5b788503476fa6663f5b152d0fa6,PodSandboxId:0e451edbd642fa9533a8ef1a3e5d92b07eb94e1822b2aa46ce889e6871271115,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a
302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726688384429539904,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1b4202d-fa9c-4f1b-9a34-40ef0da0d6e8,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe26b1e2b409b9001b9416d86ce3ddf38df363c27c9d60c0de10b74f8347ee69,PodSandboxId:ddc00d8b37d3ead4d58b559a6d620a95b2de499e662e1a022a40e0ef82db9ad7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[
string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726688381094396361,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-lr452,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce99a83b-0924-4fe4-9a52-4c3400846319,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c25ce10b42b68a6ed5d508643a26856bbe78ab45c824d9bb52b62536a54e94f6,PodSandboxId:4edb1f646199c36711a433e5ba5
08e20000410524d763079fd45fc841d2c4767,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726688378858176128,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pqt4n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0634583-edcc-434a-9062-5511ff79a084,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f287481be73d00137673fed540f11490c4a3a51a98e37282dab41b64635fe4f1,PodSandboxId:ad2848e491363188387660c21e25cb62694852a7892c1dfb39f8f14e3f67
6ac1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726688366879277667,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-815929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 930a8ca0840484b267f0a98bf1169134,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af153f3716e56e0aba4d70e3ae86ff7d46933ad9b7f0dcd1f1ab5476c014ec6c,PodSandboxId:a30d6c4574148a54601e52f8af341ec46a0590d50a249cdb3a
2800f36a646ae9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726688366895990952,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-815929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8c33f2eeebde18e8b35412f814f8356,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcda62e7939defac3570394097f84c23877573964da78205f5348d3f4c1f746d,PodSandboxId:b7081c4721d58d2f5e4fed219eb212eff1d2a534a8a57261215a6e8241d71502,Metadata:&ContainerMetadata{Name
:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726688366880581166,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-815929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a98120b82566515a490f1d4014b63db2,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd304f4e9c5207664c3f5ae1d500d9898ffb98e6f5fbe2945d1a71f7d5f78e6c,PodSandboxId:da55a8add53252b2a0b247aa0b9b5e0cb6baebb7ab165ad9df6df7ebda9a0dfd,Metadata:&ContainerMetadata{Name:kube-apiserver,A
ttempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726688366733811882,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-815929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 098eb44a2bb0f4719ebb8fbbc9c0e2ef,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bc1cd0b3-5f9d-4cc5-8d06-6d5c2fa72a77 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 19:54:40 addons-815929 crio[660]: time="2024-09-18 19:54:40.613567958Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b3fe37a8-5713-44ec-b1ac-3fab72ee97ad name=/runtime.v1.RuntimeService/Version
	Sep 18 19:54:40 addons-815929 crio[660]: time="2024-09-18 19:54:40.613696686Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b3fe37a8-5713-44ec-b1ac-3fab72ee97ad name=/runtime.v1.RuntimeService/Version
	Sep 18 19:54:40 addons-815929 crio[660]: time="2024-09-18 19:54:40.615358613Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fea6da0d-3ca3-4f3c-b3cc-fe943edd5608 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 19:54:40 addons-815929 crio[660]: time="2024-09-18 19:54:40.616569217Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726689280616537927,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579800,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fea6da0d-3ca3-4f3c-b3cc-fe943edd5608 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 19:54:40 addons-815929 crio[660]: time="2024-09-18 19:54:40.617365697Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=33ef996a-4395-4ee7-8790-0cda0fb16113 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 19:54:40 addons-815929 crio[660]: time="2024-09-18 19:54:40.617427214Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=33ef996a-4395-4ee7-8790-0cda0fb16113 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 19:54:40 addons-815929 crio[660]: time="2024-09-18 19:54:40.617772778Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a73634fe0b5696c3769ed8462b5108653e982571f2c36b6389db0ad4cf2d1e0d,PodSandboxId:691f84bd2d65dd7aa08e76637e924e2ae7daf15a84048a0e2c694243f398475b,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1726689165321119254,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-qqrwc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f887e2d6-f352-42de-b6c8-bf994f11b057,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee08a5ec3a5132463c0f585d72fb18c317c0a695f08f0e275e18e05ef4428cbf,PodSandboxId:1c669dd18bc972b33b71c0d9d6876a7f13ebd97ba81189204a3a3568718e94c1,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1726689026739213654,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c5107435-d0c7-4308-88a9-d0fc42111e5e,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b7f6341e501fa110fadd9a0a0001bcb33871d3bc4ab563f0aa03fc284fcd161,PodSandboxId:01e76804f1f1516fcb14a4c7e4c0a18197fc9ac156b68e935033da704200b78a,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:65a75b550f62ee651071b602f3d2c1daf0362638f620743c61b25eb4a1759f0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:41eb637aa779284762db7a79fac77894d8fc6d967404e9c7f0760cb4c97a4766,State:CONTAINER_RUNNING,CreatedAt:1726689020241923244,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7b5c95b59d-6t8xs,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.ui
d: f6f2d55f-b3e7-44c7-a00d-e99861a3846e,},Annotations:map[string]string{io.kubernetes.container.hash: 8f6f6c99,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:172ef2c9c611dd5748901516fac554cdb8f031212b96ee13467bda756557a347,PodSandboxId:915e30c1ffac764546e1016e7d844dba605fd5479681fe534c1f9029f5feba8b,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726688465027006829,Labels:map[string]string{io.kubernetes.container.name: gcp-aut
h,io.kubernetes.pod.name: gcp-auth-89d5ffd79-fm986,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: e2301096-da7b-42be-8816-73101bc30414,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5437d1207356580b131f78a5ed6a838e146f94fbe52be5ca0620cd6ac81bdcc,PodSandboxId:a99ddd38ed1033622f0652deaa3e9940c40dadfb2f97de56ef4763ef83dce64b,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:C
ONTAINER_RUNNING,CreatedAt:1726688416783940148,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-vr6hr,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 9c12919f-751d-4621-8776-2f28c168c022,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6109c3afb8acc80aa959e4eaf358de8318644f8165612661f8aab6ab0fc3b7fd,PodSandboxId:13e8766f7460e88ac5ebdfce2b89530694d3ef7061d9d210bd779e3cac2a787b,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cf
aaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1726688413970410070,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-fvm48,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20825ca5-3044-4221-84bc-6fc04d1038fd,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3759671f1017e4877f68e8f02b9e88508a0b5b788503476fa6663f5b152d0fa6,PodSandboxId:0e451edbd642fa9533a8ef1a3e5d92b07eb94e1822b2aa46ce889e6871271115,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a
302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726688384429539904,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1b4202d-fa9c-4f1b-9a34-40ef0da0d6e8,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe26b1e2b409b9001b9416d86ce3ddf38df363c27c9d60c0de10b74f8347ee69,PodSandboxId:ddc00d8b37d3ead4d58b559a6d620a95b2de499e662e1a022a40e0ef82db9ad7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[
string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726688381094396361,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-lr452,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce99a83b-0924-4fe4-9a52-4c3400846319,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c25ce10b42b68a6ed5d508643a26856bbe78ab45c824d9bb52b62536a54e94f6,PodSandboxId:4edb1f646199c36711a433e5ba5
08e20000410524d763079fd45fc841d2c4767,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726688378858176128,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pqt4n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0634583-edcc-434a-9062-5511ff79a084,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f287481be73d00137673fed540f11490c4a3a51a98e37282dab41b64635fe4f1,PodSandboxId:ad2848e491363188387660c21e25cb62694852a7892c1dfb39f8f14e3f67
6ac1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726688366879277667,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-815929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 930a8ca0840484b267f0a98bf1169134,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af153f3716e56e0aba4d70e3ae86ff7d46933ad9b7f0dcd1f1ab5476c014ec6c,PodSandboxId:a30d6c4574148a54601e52f8af341ec46a0590d50a249cdb3a
2800f36a646ae9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726688366895990952,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-815929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8c33f2eeebde18e8b35412f814f8356,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcda62e7939defac3570394097f84c23877573964da78205f5348d3f4c1f746d,PodSandboxId:b7081c4721d58d2f5e4fed219eb212eff1d2a534a8a57261215a6e8241d71502,Metadata:&ContainerMetadata{Name
:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726688366880581166,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-815929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a98120b82566515a490f1d4014b63db2,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd304f4e9c5207664c3f5ae1d500d9898ffb98e6f5fbe2945d1a71f7d5f78e6c,PodSandboxId:da55a8add53252b2a0b247aa0b9b5e0cb6baebb7ab165ad9df6df7ebda9a0dfd,Metadata:&ContainerMetadata{Name:kube-apiserver,A
ttempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726688366733811882,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-815929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 098eb44a2bb0f4719ebb8fbbc9c0e2ef,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=33ef996a-4395-4ee7-8790-0cda0fb16113 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 19:54:40 addons-815929 crio[660]: time="2024-09-18 19:54:40.650832929Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a52f1ea9-a65d-4f28-8e4a-b0e1d187cf1f name=/runtime.v1.RuntimeService/Version
	Sep 18 19:54:40 addons-815929 crio[660]: time="2024-09-18 19:54:40.650909796Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a52f1ea9-a65d-4f28-8e4a-b0e1d187cf1f name=/runtime.v1.RuntimeService/Version
	Sep 18 19:54:40 addons-815929 crio[660]: time="2024-09-18 19:54:40.652452497Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b0ed0891-5cb7-4dbb-a07c-3b244f759787 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 19:54:40 addons-815929 crio[660]: time="2024-09-18 19:54:40.653783684Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726689280653755561,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579800,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b0ed0891-5cb7-4dbb-a07c-3b244f759787 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 19:54:40 addons-815929 crio[660]: time="2024-09-18 19:54:40.654362262Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=82f8932c-7ace-442d-ae34-499a9e595be0 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 19:54:40 addons-815929 crio[660]: time="2024-09-18 19:54:40.654434900Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=82f8932c-7ace-442d-ae34-499a9e595be0 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 19:54:40 addons-815929 crio[660]: time="2024-09-18 19:54:40.654769447Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a73634fe0b5696c3769ed8462b5108653e982571f2c36b6389db0ad4cf2d1e0d,PodSandboxId:691f84bd2d65dd7aa08e76637e924e2ae7daf15a84048a0e2c694243f398475b,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1726689165321119254,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-qqrwc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f887e2d6-f352-42de-b6c8-bf994f11b057,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee08a5ec3a5132463c0f585d72fb18c317c0a695f08f0e275e18e05ef4428cbf,PodSandboxId:1c669dd18bc972b33b71c0d9d6876a7f13ebd97ba81189204a3a3568718e94c1,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1726689026739213654,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c5107435-d0c7-4308-88a9-d0fc42111e5e,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b7f6341e501fa110fadd9a0a0001bcb33871d3bc4ab563f0aa03fc284fcd161,PodSandboxId:01e76804f1f1516fcb14a4c7e4c0a18197fc9ac156b68e935033da704200b78a,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:65a75b550f62ee651071b602f3d2c1daf0362638f620743c61b25eb4a1759f0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:41eb637aa779284762db7a79fac77894d8fc6d967404e9c7f0760cb4c97a4766,State:CONTAINER_RUNNING,CreatedAt:1726689020241923244,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7b5c95b59d-6t8xs,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.ui
d: f6f2d55f-b3e7-44c7-a00d-e99861a3846e,},Annotations:map[string]string{io.kubernetes.container.hash: 8f6f6c99,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:172ef2c9c611dd5748901516fac554cdb8f031212b96ee13467bda756557a347,PodSandboxId:915e30c1ffac764546e1016e7d844dba605fd5479681fe534c1f9029f5feba8b,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726688465027006829,Labels:map[string]string{io.kubernetes.container.name: gcp-aut
h,io.kubernetes.pod.name: gcp-auth-89d5ffd79-fm986,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: e2301096-da7b-42be-8816-73101bc30414,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5437d1207356580b131f78a5ed6a838e146f94fbe52be5ca0620cd6ac81bdcc,PodSandboxId:a99ddd38ed1033622f0652deaa3e9940c40dadfb2f97de56ef4763ef83dce64b,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:C
ONTAINER_RUNNING,CreatedAt:1726688416783940148,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-vr6hr,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 9c12919f-751d-4621-8776-2f28c168c022,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6109c3afb8acc80aa959e4eaf358de8318644f8165612661f8aab6ab0fc3b7fd,PodSandboxId:13e8766f7460e88ac5ebdfce2b89530694d3ef7061d9d210bd779e3cac2a787b,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cf
aaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1726688413970410070,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-fvm48,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20825ca5-3044-4221-84bc-6fc04d1038fd,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3759671f1017e4877f68e8f02b9e88508a0b5b788503476fa6663f5b152d0fa6,PodSandboxId:0e451edbd642fa9533a8ef1a3e5d92b07eb94e1822b2aa46ce889e6871271115,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a
302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726688384429539904,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1b4202d-fa9c-4f1b-9a34-40ef0da0d6e8,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe26b1e2b409b9001b9416d86ce3ddf38df363c27c9d60c0de10b74f8347ee69,PodSandboxId:ddc00d8b37d3ead4d58b559a6d620a95b2de499e662e1a022a40e0ef82db9ad7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[
string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726688381094396361,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-lr452,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce99a83b-0924-4fe4-9a52-4c3400846319,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c25ce10b42b68a6ed5d508643a26856bbe78ab45c824d9bb52b62536a54e94f6,PodSandboxId:4edb1f646199c36711a433e5ba5
08e20000410524d763079fd45fc841d2c4767,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726688378858176128,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pqt4n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0634583-edcc-434a-9062-5511ff79a084,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f287481be73d00137673fed540f11490c4a3a51a98e37282dab41b64635fe4f1,PodSandboxId:ad2848e491363188387660c21e25cb62694852a7892c1dfb39f8f14e3f67
6ac1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726688366879277667,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-815929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 930a8ca0840484b267f0a98bf1169134,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af153f3716e56e0aba4d70e3ae86ff7d46933ad9b7f0dcd1f1ab5476c014ec6c,PodSandboxId:a30d6c4574148a54601e52f8af341ec46a0590d50a249cdb3a
2800f36a646ae9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726688366895990952,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-815929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8c33f2eeebde18e8b35412f814f8356,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcda62e7939defac3570394097f84c23877573964da78205f5348d3f4c1f746d,PodSandboxId:b7081c4721d58d2f5e4fed219eb212eff1d2a534a8a57261215a6e8241d71502,Metadata:&ContainerMetadata{Name
:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726688366880581166,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-815929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a98120b82566515a490f1d4014b63db2,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd304f4e9c5207664c3f5ae1d500d9898ffb98e6f5fbe2945d1a71f7d5f78e6c,PodSandboxId:da55a8add53252b2a0b247aa0b9b5e0cb6baebb7ab165ad9df6df7ebda9a0dfd,Metadata:&ContainerMetadata{Name:kube-apiserver,A
ttempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726688366733811882,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-815929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 098eb44a2bb0f4719ebb8fbbc9c0e2ef,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=82f8932c-7ace-442d-ae34-499a9e595be0 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 19:54:40 addons-815929 crio[660]: time="2024-09-18 19:54:40.692359481Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1d6355db-b356-4892-96f8-0ce6313bdf4f name=/runtime.v1.RuntimeService/Version
	Sep 18 19:54:40 addons-815929 crio[660]: time="2024-09-18 19:54:40.692436864Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1d6355db-b356-4892-96f8-0ce6313bdf4f name=/runtime.v1.RuntimeService/Version
	Sep 18 19:54:40 addons-815929 crio[660]: time="2024-09-18 19:54:40.693732922Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2cb20a4b-7038-4d59-b551-f3707c32ede8 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 19:54:40 addons-815929 crio[660]: time="2024-09-18 19:54:40.695283953Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726689280695255694,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579800,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2cb20a4b-7038-4d59-b551-f3707c32ede8 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 19:54:40 addons-815929 crio[660]: time="2024-09-18 19:54:40.696014009Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2a320734-4488-4a65-90d2-0dd5c34a90b7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 19:54:40 addons-815929 crio[660]: time="2024-09-18 19:54:40.696088324Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2a320734-4488-4a65-90d2-0dd5c34a90b7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 19:54:40 addons-815929 crio[660]: time="2024-09-18 19:54:40.696367943Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a73634fe0b5696c3769ed8462b5108653e982571f2c36b6389db0ad4cf2d1e0d,PodSandboxId:691f84bd2d65dd7aa08e76637e924e2ae7daf15a84048a0e2c694243f398475b,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1726689165321119254,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-qqrwc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f887e2d6-f352-42de-b6c8-bf994f11b057,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee08a5ec3a5132463c0f585d72fb18c317c0a695f08f0e275e18e05ef4428cbf,PodSandboxId:1c669dd18bc972b33b71c0d9d6876a7f13ebd97ba81189204a3a3568718e94c1,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1726689026739213654,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c5107435-d0c7-4308-88a9-d0fc42111e5e,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b7f6341e501fa110fadd9a0a0001bcb33871d3bc4ab563f0aa03fc284fcd161,PodSandboxId:01e76804f1f1516fcb14a4c7e4c0a18197fc9ac156b68e935033da704200b78a,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:65a75b550f62ee651071b602f3d2c1daf0362638f620743c61b25eb4a1759f0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:41eb637aa779284762db7a79fac77894d8fc6d967404e9c7f0760cb4c97a4766,State:CONTAINER_RUNNING,CreatedAt:1726689020241923244,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7b5c95b59d-6t8xs,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.ui
d: f6f2d55f-b3e7-44c7-a00d-e99861a3846e,},Annotations:map[string]string{io.kubernetes.container.hash: 8f6f6c99,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:172ef2c9c611dd5748901516fac554cdb8f031212b96ee13467bda756557a347,PodSandboxId:915e30c1ffac764546e1016e7d844dba605fd5479681fe534c1f9029f5feba8b,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726688465027006829,Labels:map[string]string{io.kubernetes.container.name: gcp-aut
h,io.kubernetes.pod.name: gcp-auth-89d5ffd79-fm986,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: e2301096-da7b-42be-8816-73101bc30414,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5437d1207356580b131f78a5ed6a838e146f94fbe52be5ca0620cd6ac81bdcc,PodSandboxId:a99ddd38ed1033622f0652deaa3e9940c40dadfb2f97de56ef4763ef83dce64b,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:C
ONTAINER_RUNNING,CreatedAt:1726688416783940148,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-vr6hr,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 9c12919f-751d-4621-8776-2f28c168c022,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6109c3afb8acc80aa959e4eaf358de8318644f8165612661f8aab6ab0fc3b7fd,PodSandboxId:13e8766f7460e88ac5ebdfce2b89530694d3ef7061d9d210bd779e3cac2a787b,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cf
aaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1726688413970410070,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-fvm48,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20825ca5-3044-4221-84bc-6fc04d1038fd,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3759671f1017e4877f68e8f02b9e88508a0b5b788503476fa6663f5b152d0fa6,PodSandboxId:0e451edbd642fa9533a8ef1a3e5d92b07eb94e1822b2aa46ce889e6871271115,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a
302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726688384429539904,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1b4202d-fa9c-4f1b-9a34-40ef0da0d6e8,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe26b1e2b409b9001b9416d86ce3ddf38df363c27c9d60c0de10b74f8347ee69,PodSandboxId:ddc00d8b37d3ead4d58b559a6d620a95b2de499e662e1a022a40e0ef82db9ad7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[
string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726688381094396361,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-lr452,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce99a83b-0924-4fe4-9a52-4c3400846319,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c25ce10b42b68a6ed5d508643a26856bbe78ab45c824d9bb52b62536a54e94f6,PodSandboxId:4edb1f646199c36711a433e5ba5
08e20000410524d763079fd45fc841d2c4767,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726688378858176128,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pqt4n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0634583-edcc-434a-9062-5511ff79a084,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f287481be73d00137673fed540f11490c4a3a51a98e37282dab41b64635fe4f1,PodSandboxId:ad2848e491363188387660c21e25cb62694852a7892c1dfb39f8f14e3f67
6ac1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726688366879277667,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-815929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 930a8ca0840484b267f0a98bf1169134,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af153f3716e56e0aba4d70e3ae86ff7d46933ad9b7f0dcd1f1ab5476c014ec6c,PodSandboxId:a30d6c4574148a54601e52f8af341ec46a0590d50a249cdb3a
2800f36a646ae9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726688366895990952,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-815929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8c33f2eeebde18e8b35412f814f8356,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcda62e7939defac3570394097f84c23877573964da78205f5348d3f4c1f746d,PodSandboxId:b7081c4721d58d2f5e4fed219eb212eff1d2a534a8a57261215a6e8241d71502,Metadata:&ContainerMetadata{Name
:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726688366880581166,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-815929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a98120b82566515a490f1d4014b63db2,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd304f4e9c5207664c3f5ae1d500d9898ffb98e6f5fbe2945d1a71f7d5f78e6c,PodSandboxId:da55a8add53252b2a0b247aa0b9b5e0cb6baebb7ab165ad9df6df7ebda9a0dfd,Metadata:&ContainerMetadata{Name:kube-apiserver,A
ttempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726688366733811882,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-815929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 098eb44a2bb0f4719ebb8fbbc9c0e2ef,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2a320734-4488-4a65-90d2-0dd5c34a90b7 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	a73634fe0b569       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   About a minute ago   Running             hello-world-app           0                   691f84bd2d65d       hello-world-app-55bf9c44b4-qqrwc
	ee08a5ec3a513       docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56                         4 minutes ago        Running             nginx                     0                   1c669dd18bc97       nginx
	0b7f6341e501f       ghcr.io/headlamp-k8s/headlamp@sha256:65a75b550f62ee651071b602f3d2c1daf0362638f620743c61b25eb4a1759f0a                   4 minutes ago        Running             headlamp                  0                   01e76804f1f15       headlamp-7b5c95b59d-6t8xs
	172ef2c9c611d       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b            13 minutes ago       Running             gcp-auth                  0                   915e30c1ffac7       gcp-auth-89d5ffd79-fm986
	a5437d1207356       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef        14 minutes ago       Running             local-path-provisioner    0                   a99ddd38ed103       local-path-provisioner-86d989889c-vr6hr
	6109c3afb8acc       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a   14 minutes ago       Running             metrics-server            0                   13e8766f7460e       metrics-server-84c5f94fbc-fvm48
	3759671f1017e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        14 minutes ago       Running             storage-provisioner       0                   0e451edbd642f       storage-provisioner
	fe26b1e2b409b       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                        14 minutes ago       Running             coredns                   0                   ddc00d8b37d3e       coredns-7c65d6cfc9-lr452
	c25ce10b42b68       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                        15 minutes ago       Running             kube-proxy                0                   4edb1f646199c       kube-proxy-pqt4n
	af153f3716e56       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                        15 minutes ago       Running             etcd                      0                   a30d6c4574148       etcd-addons-815929
	dcda62e7939de       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                        15 minutes ago       Running             kube-scheduler            0                   b7081c4721d58       kube-scheduler-addons-815929
	f287481be73d0       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                        15 minutes ago       Running             kube-controller-manager   0                   ad2848e491363       kube-controller-manager-addons-815929
	bd304f4e9c520       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                                        15 minutes ago       Running             kube-apiserver            0                   da55a8add5325       kube-apiserver-addons-815929
	
	
	==> coredns [fe26b1e2b409b9001b9416d86ce3ddf38df363c27c9d60c0de10b74f8347ee69] <==
	[INFO] 127.0.0.1:33911 - 37399 "HINFO IN 5747327246118162623.8020402030463234675. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.016819419s
	[INFO] 10.244.0.7:59262 - 43432 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000336053s
	[INFO] 10.244.0.7:59262 - 17322 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000150287s
	[INFO] 10.244.0.7:41687 - 18673 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000099786s
	[INFO] 10.244.0.7:41687 - 9207 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000065823s
	[INFO] 10.244.0.7:33094 - 24891 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000094342s
	[INFO] 10.244.0.7:33094 - 26173 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000059261s
	[INFO] 10.244.0.7:56632 - 33786 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000087163s
	[INFO] 10.244.0.7:56632 - 4856 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000058072s
	[INFO] 10.244.0.7:36451 - 41922 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000084154s
	[INFO] 10.244.0.7:36451 - 33727 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000092459s
	[INFO] 10.244.0.7:39340 - 30237 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000083666s
	[INFO] 10.244.0.7:39340 - 56611 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000065217s
	[INFO] 10.244.0.7:60263 - 43577 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000042731s
	[INFO] 10.244.0.7:60263 - 42043 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000060323s
	[INFO] 10.244.0.7:49317 - 26894 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000071504s
	[INFO] 10.244.0.7:49317 - 41231 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000053913s
	[INFO] 10.244.0.22:56096 - 25617 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000559428s
	[INFO] 10.244.0.22:46332 - 60333 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00009869s
	[INFO] 10.244.0.22:56500 - 14226 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000212602s
	[INFO] 10.244.0.22:49148 - 10468 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00009573s
	[INFO] 10.244.0.22:40941 - 26523 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000113485s
	[INFO] 10.244.0.22:37539 - 18925 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000348096s
	[INFO] 10.244.0.22:41445 - 2227 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.002727628s
	[INFO] 10.244.0.22:57259 - 2571 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.00255705s
	
	
	==> describe nodes <==
	Name:               addons-815929
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-815929
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=85073601a832bd4bbda5d11fa91feafff6ec6b91
	                    minikube.k8s.io/name=addons-815929
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_18T19_39_32_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-815929
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 18 Sep 2024 19:39:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-815929
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 18 Sep 2024 19:54:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 18 Sep 2024 19:53:07 +0000   Wed, 18 Sep 2024 19:39:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 18 Sep 2024 19:53:07 +0000   Wed, 18 Sep 2024 19:39:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 18 Sep 2024 19:53:07 +0000   Wed, 18 Sep 2024 19:39:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 18 Sep 2024 19:53:07 +0000   Wed, 18 Sep 2024 19:39:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.158
	  Hostname:    addons-815929
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 7e65d1c428634e33ae59c564f000aca1
	  System UUID:                7e65d1c4-2863-4e33-ae59-c564f000aca1
	  Boot ID:                    eb3346ec-958a-43c9-b91c-e6223f603868
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  default                     hello-world-app-55bf9c44b4-qqrwc           0 (0%)        0 (0%)      0 (0%)           0 (0%)         119s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m19s
	  gcp-auth                    gcp-auth-89d5ffd79-fm986                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  headlamp                    headlamp-7b5c95b59d-6t8xs                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m27s
	  kube-system                 coredns-7c65d6cfc9-lr452                   100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     15m
	  kube-system                 etcd-addons-815929                         100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         15m
	  kube-system                 kube-apiserver-addons-815929               250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-addons-815929      200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-pqt4n                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-addons-815929               100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  local-path-storage          local-path-provisioner-86d989889c-vr6hr    0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 15m   kube-proxy       
	  Normal  Starting                 15m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  15m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  15m   kubelet          Node addons-815929 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m   kubelet          Node addons-815929 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m   kubelet          Node addons-815929 status is now: NodeHasSufficientPID
	  Normal  NodeReady                15m   kubelet          Node addons-815929 status is now: NodeReady
	  Normal  RegisteredNode           15m   node-controller  Node addons-815929 event: Registered Node addons-815929 in Controller
	
	
	==> dmesg <==
	[  +5.343608] kauditd_printk_skb: 127 callbacks suppressed
	[  +5.904152] kauditd_printk_skb: 83 callbacks suppressed
	[Sep18 19:40] kauditd_printk_skb: 9 callbacks suppressed
	[  +5.880845] kauditd_printk_skb: 34 callbacks suppressed
	[ +18.003706] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.043206] kauditd_printk_skb: 46 callbacks suppressed
	[  +8.731855] kauditd_printk_skb: 72 callbacks suppressed
	[Sep18 19:41] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.081169] kauditd_printk_skb: 44 callbacks suppressed
	[ +12.641013] kauditd_printk_skb: 12 callbacks suppressed
	[Sep18 19:42] kauditd_printk_skb: 28 callbacks suppressed
	[Sep18 19:43] kauditd_printk_skb: 28 callbacks suppressed
	[Sep18 19:46] kauditd_printk_skb: 28 callbacks suppressed
	[Sep18 19:49] kauditd_printk_skb: 28 callbacks suppressed
	[  +5.945047] kauditd_printk_skb: 15 callbacks suppressed
	[  +5.644536] kauditd_printk_skb: 25 callbacks suppressed
	[  +6.473959] kauditd_printk_skb: 17 callbacks suppressed
	[  +6.527768] kauditd_printk_skb: 14 callbacks suppressed
	[ +12.093612] kauditd_printk_skb: 3 callbacks suppressed
	[Sep18 19:50] kauditd_printk_skb: 24 callbacks suppressed
	[  +5.519549] kauditd_printk_skb: 10 callbacks suppressed
	[  +7.416140] kauditd_printk_skb: 55 callbacks suppressed
	[  +5.600964] kauditd_printk_skb: 31 callbacks suppressed
	[Sep18 19:52] kauditd_printk_skb: 25 callbacks suppressed
	[  +5.013586] kauditd_printk_skb: 19 callbacks suppressed
	
	
	==> etcd [af153f3716e56e0aba4d70e3ae86ff7d46933ad9b7f0dcd1f1ab5476c014ec6c] <==
	{"level":"info","ts":"2024-09-18T19:40:55.783273Z","caller":"traceutil/trace.go:171","msg":"trace[1329412000] linearizableReadLoop","detail":"{readStateIndex:1127; appliedIndex:1126; }","duration":"411.153549ms","start":"2024-09-18T19:40:55.372056Z","end":"2024-09-18T19:40:55.783210Z","steps":["trace[1329412000] 'read index received'  (duration: 410.953333ms)","trace[1329412000] 'applied index is now lower than readState.Index'  (duration: 199.615µs)"],"step_count":2}
	{"level":"warn","ts":"2024-09-18T19:40:55.783842Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"274.750813ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-18T19:40:55.783883Z","caller":"traceutil/trace.go:171","msg":"trace[1696276562] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1095; }","duration":"274.791129ms","start":"2024-09-18T19:40:55.509082Z","end":"2024-09-18T19:40:55.783873Z","steps":["trace[1696276562] 'agreement among raft nodes before linearized reading'  (duration: 274.733422ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-18T19:40:58.140649Z","caller":"traceutil/trace.go:171","msg":"trace[1269958253] transaction","detail":"{read_only:false; response_revision:1100; number_of_response:1; }","duration":"116.461138ms","start":"2024-09-18T19:40:58.024133Z","end":"2024-09-18T19:40:58.140595Z","steps":["trace[1269958253] 'process raft request'  (duration: 116.084244ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-18T19:49:27.720379Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1528}
	{"level":"info","ts":"2024-09-18T19:49:27.755828Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1528,"took":"34.749964ms","hash":189233142,"current-db-size-bytes":6471680,"current-db-size":"6.5 MB","current-db-size-in-use-bytes":3465216,"current-db-size-in-use":"3.5 MB"}
	{"level":"info","ts":"2024-09-18T19:49:27.755900Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":189233142,"revision":1528,"compact-revision":-1}
	{"level":"warn","ts":"2024-09-18T19:49:39.140178Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"372.134407ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1114"}
	{"level":"info","ts":"2024-09-18T19:49:39.140269Z","caller":"traceutil/trace.go:171","msg":"trace[44095741] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:2065; }","duration":"372.258566ms","start":"2024-09-18T19:49:38.767988Z","end":"2024-09-18T19:49:39.140247Z","steps":["trace[44095741] 'range keys from in-memory index tree'  (duration: 371.974715ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-18T19:49:39.140351Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-18T19:49:38.767903Z","time spent":"372.433662ms","remote":"127.0.0.1:37750","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":1,"response size":1137,"request content":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" "}
	{"level":"warn","ts":"2024-09-18T19:49:39.140594Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"366.852656ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/yakd-dashboard\" ","response":"range_response_count:1 size:2312"}
	{"level":"info","ts":"2024-09-18T19:49:39.140666Z","caller":"traceutil/trace.go:171","msg":"trace[955157213] range","detail":"{range_begin:/registry/namespaces/yakd-dashboard; range_end:; response_count:1; response_revision:2065; }","duration":"366.925812ms","start":"2024-09-18T19:49:38.773733Z","end":"2024-09-18T19:49:39.140659Z","steps":["trace[955157213] 'range keys from in-memory index tree'  (duration: 366.803518ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-18T19:49:39.140687Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-18T19:49:38.773695Z","time spent":"366.986639ms","remote":"127.0.0.1:37688","response type":"/etcdserverpb.KV/Range","request count":0,"request size":37,"response count":1,"response size":2335,"request content":"key:\"/registry/namespaces/yakd-dashboard\" "}
	{"level":"info","ts":"2024-09-18T19:49:39.140890Z","caller":"traceutil/trace.go:171","msg":"trace[1592645195] linearizableReadLoop","detail":"{readStateIndex:2214; appliedIndex:2213; }","duration":"186.300087ms","start":"2024-09-18T19:49:38.954572Z","end":"2024-09-18T19:49:39.140872Z","steps":["trace[1592645195] 'read index received'  (duration: 184.999995ms)","trace[1592645195] 'applied index is now lower than readState.Index'  (duration: 1.299584ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-18T19:49:39.141064Z","caller":"traceutil/trace.go:171","msg":"trace[1097499478] transaction","detail":"{read_only:false; response_revision:2066; number_of_response:1; }","duration":"254.38821ms","start":"2024-09-18T19:49:38.886663Z","end":"2024-09-18T19:49:39.141051Z","steps":["trace[1097499478] 'process raft request'  (duration: 252.880343ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-18T19:49:39.141181Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"186.598804ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-18T19:49:39.141221Z","caller":"traceutil/trace.go:171","msg":"trace[2143615549] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:2066; }","duration":"186.63848ms","start":"2024-09-18T19:49:38.954567Z","end":"2024-09-18T19:49:39.141206Z","steps":["trace[2143615549] 'agreement among raft nodes before linearized reading'  (duration: 186.586728ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-18T19:49:39.141319Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"163.150361ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-18T19:49:39.141350Z","caller":"traceutil/trace.go:171","msg":"trace[8159077] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:2066; }","duration":"163.180095ms","start":"2024-09-18T19:49:38.978162Z","end":"2024-09-18T19:49:39.141343Z","steps":["trace[8159077] 'agreement among raft nodes before linearized reading'  (duration: 163.144483ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-18T19:50:18.732088Z","caller":"traceutil/trace.go:171","msg":"trace[1049508816] transaction","detail":"{read_only:false; response_revision:2373; number_of_response:1; }","duration":"138.120687ms","start":"2024-09-18T19:50:18.593955Z","end":"2024-09-18T19:50:18.732075Z","steps":["trace[1049508816] 'process raft request'  (duration: 136.844604ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-18T19:50:26.593679Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"295.207794ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-18T19:50:26.593767Z","caller":"traceutil/trace.go:171","msg":"trace[1193075531] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:2438; }","duration":"295.306191ms","start":"2024-09-18T19:50:26.298443Z","end":"2024-09-18T19:50:26.593750Z","steps":["trace[1193075531] 'range keys from in-memory index tree'  (duration: 295.158117ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-18T19:54:27.730109Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":2002}
	{"level":"info","ts":"2024-09-18T19:54:27.750715Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":2002,"took":"19.884847ms","hash":3886011600,"current-db-size-bytes":6471680,"current-db-size":"6.5 MB","current-db-size-in-use-bytes":4808704,"current-db-size-in-use":"4.8 MB"}
	{"level":"info","ts":"2024-09-18T19:54:27.750786Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3886011600,"revision":2002,"compact-revision":1528}
	
	
	==> gcp-auth [172ef2c9c611dd5748901516fac554cdb8f031212b96ee13467bda756557a347] <==
	2024/09/18 19:41:12 Ready to write response ...
	2024/09/18 19:49:14 Ready to marshal response ...
	2024/09/18 19:49:14 Ready to write response ...
	2024/09/18 19:49:15 Ready to marshal response ...
	2024/09/18 19:49:15 Ready to write response ...
	2024/09/18 19:49:25 Ready to marshal response ...
	2024/09/18 19:49:25 Ready to write response ...
	2024/09/18 19:49:27 Ready to marshal response ...
	2024/09/18 19:49:27 Ready to write response ...
	2024/09/18 19:49:33 Ready to marshal response ...
	2024/09/18 19:49:33 Ready to write response ...
	2024/09/18 19:50:01 Ready to marshal response ...
	2024/09/18 19:50:01 Ready to write response ...
	2024/09/18 19:50:04 Ready to marshal response ...
	2024/09/18 19:50:04 Ready to write response ...
	2024/09/18 19:50:14 Ready to marshal response ...
	2024/09/18 19:50:14 Ready to write response ...
	2024/09/18 19:50:14 Ready to marshal response ...
	2024/09/18 19:50:14 Ready to write response ...
	2024/09/18 19:50:14 Ready to marshal response ...
	2024/09/18 19:50:14 Ready to write response ...
	2024/09/18 19:50:22 Ready to marshal response ...
	2024/09/18 19:50:22 Ready to write response ...
	2024/09/18 19:52:42 Ready to marshal response ...
	2024/09/18 19:52:42 Ready to write response ...
	
	
	==> kernel <==
	 19:54:41 up 15 min,  0 users,  load average: 0.58, 0.58, 0.51
	Linux addons-815929 5.10.207 #1 SMP Mon Sep 16 15:00:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [bd304f4e9c5207664c3f5ae1d500d9898ffb98e6f5fbe2945d1a71f7d5f78e6c] <==
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0918 19:41:22.794814       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.103.233.223:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.103.233.223:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.103.233.223:443: connect: connection refused" logger="UnhandledError"
	E0918 19:41:22.796712       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.103.233.223:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.103.233.223:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.103.233.223:443: connect: connection refused" logger="UnhandledError"
	E0918 19:41:22.802171       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.103.233.223:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.103.233.223:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.103.233.223:443: connect: connection refused" logger="UnhandledError"
	I0918 19:41:22.877204       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0918 19:49:46.122311       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0918 19:49:51.351654       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0918 19:49:52.486009       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0918 19:50:14.711287       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.96.178.208"}
	I0918 19:50:21.473006       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0918 19:50:21.475672       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0918 19:50:21.499495       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0918 19:50:21.499582       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0918 19:50:21.528355       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0918 19:50:21.528504       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0918 19:50:21.650040       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0918 19:50:21.650140       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0918 19:50:22.108280       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0918 19:50:22.290418       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.97.211.60"}
	W0918 19:50:22.650820       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0918 19:50:22.650941       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0918 19:50:22.664307       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0918 19:52:42.564269       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.98.247.40"}
	
	
	==> kube-controller-manager [f287481be73d00137673fed540f11490c4a3a51a98e37282dab41b64635fe4f1] <==
	I0918 19:52:44.515739       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-bc57996ff" duration="6.408µs"
	I0918 19:52:44.524182       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch" delay="0s"
	W0918 19:52:45.371047       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0918 19:52:45.371154       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0918 19:52:45.589091       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="9.724191ms"
	I0918 19:52:45.589170       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="35.96µs"
	W0918 19:52:47.503762       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0918 19:52:47.503894       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0918 19:52:54.568194       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="ingress-nginx"
	I0918 19:53:07.930461       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-815929"
	W0918 19:53:16.790033       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0918 19:53:16.790085       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0918 19:53:19.546149       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0918 19:53:19.546213       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0918 19:53:25.685215       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0918 19:53:25.685337       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0918 19:53:42.169570       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0918 19:53:42.169770       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0918 19:53:53.194368       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0918 19:53:53.194450       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0918 19:54:08.213734       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0918 19:54:08.213784       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0918 19:54:11.746796       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0918 19:54:11.746838       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0918 19:54:39.626549       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-84c5f94fbc" duration="12.933µs"
	
	
	==> kube-proxy [c25ce10b42b68a6ed5d508643a26856bbe78ab45c824d9bb52b62536a54e94f6] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0918 19:39:39.772742       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0918 19:39:39.855112       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.158"]
	E0918 19:39:39.855197       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0918 19:39:39.943796       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0918 19:39:39.943838       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0918 19:39:39.943864       1 server_linux.go:169] "Using iptables Proxier"
	I0918 19:39:39.953935       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0918 19:39:39.954227       1 server.go:483] "Version info" version="v1.31.1"
	I0918 19:39:39.954239       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0918 19:39:39.958453       1 config.go:199] "Starting service config controller"
	I0918 19:39:39.958495       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0918 19:39:39.958560       1 config.go:105] "Starting endpoint slice config controller"
	I0918 19:39:39.958577       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0918 19:39:39.965954       1 config.go:328] "Starting node config controller"
	I0918 19:39:39.965978       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0918 19:39:40.059312       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0918 19:39:40.059385       1 shared_informer.go:320] Caches are synced for service config
	I0918 19:39:40.067090       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [dcda62e7939defac3570394097f84c23877573964da78205f5348d3f4c1f746d] <==
	W0918 19:39:30.259773       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0918 19:39:30.259828       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0918 19:39:30.260863       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0918 19:39:30.260937       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0918 19:39:30.316355       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0918 19:39:30.316410       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0918 19:39:30.325700       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0918 19:39:30.325748       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0918 19:39:30.384152       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0918 19:39:30.384201       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0918 19:39:30.388938       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0918 19:39:30.388996       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0918 19:39:30.471673       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0918 19:39:30.471719       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0918 19:39:30.484033       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0918 19:39:30.484082       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0918 19:39:30.491339       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0918 19:39:30.491383       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0918 19:39:30.519278       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0918 19:39:30.519335       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0918 19:39:30.634983       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0918 19:39:30.635043       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0918 19:39:30.839874       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0918 19:39:30.840702       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0918 19:39:32.951022       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 18 19:54:05 addons-815929 kubelet[1202]: E0918 19:54:05.048554    1202 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="cb868ec9-73ea-446b-9a7e-aac3552bb3f6"
	Sep 18 19:54:12 addons-815929 kubelet[1202]: E0918 19:54:12.396433    1202 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726689252396057919,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579800,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 19:54:12 addons-815929 kubelet[1202]: E0918 19:54:12.396557    1202 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726689252396057919,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579800,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 19:54:19 addons-815929 kubelet[1202]: E0918 19:54:19.048279    1202 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="cb868ec9-73ea-446b-9a7e-aac3552bb3f6"
	Sep 18 19:54:22 addons-815929 kubelet[1202]: E0918 19:54:22.399202    1202 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726689262398840668,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579800,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 19:54:22 addons-815929 kubelet[1202]: E0918 19:54:22.399241    1202 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726689262398840668,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579800,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 19:54:32 addons-815929 kubelet[1202]: E0918 19:54:32.070905    1202 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 18 19:54:32 addons-815929 kubelet[1202]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 18 19:54:32 addons-815929 kubelet[1202]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 18 19:54:32 addons-815929 kubelet[1202]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 18 19:54:32 addons-815929 kubelet[1202]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 18 19:54:32 addons-815929 kubelet[1202]: E0918 19:54:32.401487    1202 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726689272401062484,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579800,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 19:54:32 addons-815929 kubelet[1202]: E0918 19:54:32.401522    1202 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726689272401062484,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579800,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 19:54:34 addons-815929 kubelet[1202]: E0918 19:54:34.049808    1202 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="cb868ec9-73ea-446b-9a7e-aac3552bb3f6"
	Sep 18 19:54:39 addons-815929 kubelet[1202]: I0918 19:54:39.649845    1202 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-55bf9c44b4-qqrwc" podStartSLOduration=115.31184374 podStartE2EDuration="1m57.649796547s" podCreationTimestamp="2024-09-18 19:52:42 +0000 UTC" firstStartedPulling="2024-09-18 19:52:42.969115521 +0000 UTC m=+791.041419132" lastFinishedPulling="2024-09-18 19:52:45.307068326 +0000 UTC m=+793.379371939" observedRunningTime="2024-09-18 19:52:45.578271704 +0000 UTC m=+793.650575331" watchObservedRunningTime="2024-09-18 19:54:39.649796547 +0000 UTC m=+907.722100179"
	Sep 18 19:54:41 addons-815929 kubelet[1202]: I0918 19:54:41.042538    1202 scope.go:117] "RemoveContainer" containerID="6109c3afb8acc80aa959e4eaf358de8318644f8165612661f8aab6ab0fc3b7fd"
	Sep 18 19:54:41 addons-815929 kubelet[1202]: I0918 19:54:41.054298    1202 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q5nfn\" (UniqueName: \"kubernetes.io/projected/20825ca5-3044-4221-84bc-6fc04d1038fd-kube-api-access-q5nfn\") pod \"20825ca5-3044-4221-84bc-6fc04d1038fd\" (UID: \"20825ca5-3044-4221-84bc-6fc04d1038fd\") "
	Sep 18 19:54:41 addons-815929 kubelet[1202]: I0918 19:54:41.054357    1202 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/20825ca5-3044-4221-84bc-6fc04d1038fd-tmp-dir\") pod \"20825ca5-3044-4221-84bc-6fc04d1038fd\" (UID: \"20825ca5-3044-4221-84bc-6fc04d1038fd\") "
	Sep 18 19:54:41 addons-815929 kubelet[1202]: I0918 19:54:41.054859    1202 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20825ca5-3044-4221-84bc-6fc04d1038fd-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "20825ca5-3044-4221-84bc-6fc04d1038fd" (UID: "20825ca5-3044-4221-84bc-6fc04d1038fd"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Sep 18 19:54:41 addons-815929 kubelet[1202]: I0918 19:54:41.064264    1202 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20825ca5-3044-4221-84bc-6fc04d1038fd-kube-api-access-q5nfn" (OuterVolumeSpecName: "kube-api-access-q5nfn") pod "20825ca5-3044-4221-84bc-6fc04d1038fd" (UID: "20825ca5-3044-4221-84bc-6fc04d1038fd"). InnerVolumeSpecName "kube-api-access-q5nfn". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 18 19:54:41 addons-815929 kubelet[1202]: I0918 19:54:41.066579    1202 scope.go:117] "RemoveContainer" containerID="6109c3afb8acc80aa959e4eaf358de8318644f8165612661f8aab6ab0fc3b7fd"
	Sep 18 19:54:41 addons-815929 kubelet[1202]: E0918 19:54:41.067325    1202 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6109c3afb8acc80aa959e4eaf358de8318644f8165612661f8aab6ab0fc3b7fd\": container with ID starting with 6109c3afb8acc80aa959e4eaf358de8318644f8165612661f8aab6ab0fc3b7fd not found: ID does not exist" containerID="6109c3afb8acc80aa959e4eaf358de8318644f8165612661f8aab6ab0fc3b7fd"
	Sep 18 19:54:41 addons-815929 kubelet[1202]: I0918 19:54:41.067359    1202 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6109c3afb8acc80aa959e4eaf358de8318644f8165612661f8aab6ab0fc3b7fd"} err="failed to get container status \"6109c3afb8acc80aa959e4eaf358de8318644f8165612661f8aab6ab0fc3b7fd\": rpc error: code = NotFound desc = could not find container \"6109c3afb8acc80aa959e4eaf358de8318644f8165612661f8aab6ab0fc3b7fd\": container with ID starting with 6109c3afb8acc80aa959e4eaf358de8318644f8165612661f8aab6ab0fc3b7fd not found: ID does not exist"
	Sep 18 19:54:41 addons-815929 kubelet[1202]: I0918 19:54:41.155262    1202 reconciler_common.go:288] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/20825ca5-3044-4221-84bc-6fc04d1038fd-tmp-dir\") on node \"addons-815929\" DevicePath \"\""
	Sep 18 19:54:41 addons-815929 kubelet[1202]: I0918 19:54:41.155295    1202 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-q5nfn\" (UniqueName: \"kubernetes.io/projected/20825ca5-3044-4221-84bc-6fc04d1038fd-kube-api-access-q5nfn\") on node \"addons-815929\" DevicePath \"\""
	
	
	==> storage-provisioner [3759671f1017e4877f68e8f02b9e88508a0b5b788503476fa6663f5b152d0fa6] <==
	I0918 19:39:45.052140       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0918 19:39:45.070541       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0918 19:39:45.070599       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0918 19:39:45.124575       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0918 19:39:45.124795       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-815929_cdd2de0e-7456-4024-9550-98c8060a35ab!
	I0918 19:39:45.133742       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ab4840eb-b79e-468b-af43-50c550ad69c5", APIVersion:"v1", ResourceVersion:"660", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-815929_cdd2de0e-7456-4024-9550-98c8060a35ab became leader
	I0918 19:39:45.237552       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-815929_cdd2de0e-7456-4024-9550-98c8060a35ab!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-815929 -n addons-815929
helpers_test.go:261: (dbg) Run:  kubectl --context addons-815929 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/MetricsServer]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-815929 describe pod busybox
helpers_test.go:282: (dbg) kubectl --context addons-815929 describe pod busybox:

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-815929/192.168.39.158
	Start Time:       Wed, 18 Sep 2024 19:41:12 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.23
	IPs:
	  IP:  10.244.0.23
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kvbgq (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-kvbgq:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  13m                   default-scheduler  Successfully assigned default/busybox to addons-815929
	  Normal   Pulling    11m (x4 over 13m)     kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     11m (x4 over 13m)     kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
	  Warning  Failed     11m (x4 over 13m)     kubelet            Error: ErrImagePull
	  Warning  Failed     11m (x6 over 13m)     kubelet            Error: ImagePullBackOff
	  Normal   BackOff    3m21s (x43 over 13m)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/MetricsServer (327.28s)

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (141.44s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-091565 node stop m02 -v=7 --alsologtostderr
E0918 20:05:42.263755   14878 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/functional-790989/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:06:12.175631   14878 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/addons-815929/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:06:23.225177   14878 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/functional-790989/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:06:39.878663   14878 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/addons-815929/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:363: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-091565 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.477019796s)

                                                
                                                
-- stdout --
	* Stopping node "ha-091565-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0918 20:05:42.100810   30919 out.go:345] Setting OutFile to fd 1 ...
	I0918 20:05:42.100962   30919 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 20:05:42.100972   30919 out.go:358] Setting ErrFile to fd 2...
	I0918 20:05:42.100979   30919 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 20:05:42.101200   30919 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19667-7671/.minikube/bin
	I0918 20:05:42.101468   30919 mustload.go:65] Loading cluster: ha-091565
	I0918 20:05:42.101890   30919 config.go:182] Loaded profile config "ha-091565": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0918 20:05:42.101906   30919 stop.go:39] StopHost: ha-091565-m02
	I0918 20:05:42.102285   30919 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 20:05:42.102333   30919 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 20:05:42.117825   30919 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44419
	I0918 20:05:42.118324   30919 main.go:141] libmachine: () Calling .GetVersion
	I0918 20:05:42.118847   30919 main.go:141] libmachine: Using API Version  1
	I0918 20:05:42.118868   30919 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 20:05:42.119157   30919 main.go:141] libmachine: () Calling .GetMachineName
	I0918 20:05:42.121417   30919 out.go:177] * Stopping node "ha-091565-m02"  ...
	I0918 20:05:42.122404   30919 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0918 20:05:42.122430   30919 main.go:141] libmachine: (ha-091565-m02) Calling .DriverName
	I0918 20:05:42.122676   30919 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0918 20:05:42.122701   30919 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHHostname
	I0918 20:05:42.125763   30919 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:05:42.126248   30919 main.go:141] libmachine: (ha-091565-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:2b:96", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:02:02 +0000 UTC Type:0 Mac:52:54:00:21:2b:96 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-091565-m02 Clientid:01:52:54:00:21:2b:96}
	I0918 20:05:42.126282   30919 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined IP address 192.168.39.92 and MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:05:42.126427   30919 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHPort
	I0918 20:05:42.126622   30919 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHKeyPath
	I0918 20:05:42.126757   30919 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHUsername
	I0918 20:05:42.126871   30919 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565-m02/id_rsa Username:docker}
	I0918 20:05:42.219402   30919 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0918 20:05:42.273269   30919 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0918 20:05:42.329702   30919 main.go:141] libmachine: Stopping "ha-091565-m02"...
	I0918 20:05:42.329754   30919 main.go:141] libmachine: (ha-091565-m02) Calling .GetState
	I0918 20:05:42.331425   30919 main.go:141] libmachine: (ha-091565-m02) Calling .Stop
	I0918 20:05:42.335432   30919 main.go:141] libmachine: (ha-091565-m02) Waiting for machine to stop 0/120
	I0918 20:05:43.336920   30919 main.go:141] libmachine: (ha-091565-m02) Waiting for machine to stop 1/120
	I0918 20:05:44.338387   30919 main.go:141] libmachine: (ha-091565-m02) Waiting for machine to stop 2/120
	I0918 20:05:45.339992   30919 main.go:141] libmachine: (ha-091565-m02) Waiting for machine to stop 3/120
	I0918 20:05:46.341363   30919 main.go:141] libmachine: (ha-091565-m02) Waiting for machine to stop 4/120
	I0918 20:05:47.342943   30919 main.go:141] libmachine: (ha-091565-m02) Waiting for machine to stop 5/120
	I0918 20:05:48.344353   30919 main.go:141] libmachine: (ha-091565-m02) Waiting for machine to stop 6/120
	I0918 20:05:49.346460   30919 main.go:141] libmachine: (ha-091565-m02) Waiting for machine to stop 7/120
	I0918 20:05:50.347801   30919 main.go:141] libmachine: (ha-091565-m02) Waiting for machine to stop 8/120
	I0918 20:05:51.349796   30919 main.go:141] libmachine: (ha-091565-m02) Waiting for machine to stop 9/120
	I0918 20:05:52.351654   30919 main.go:141] libmachine: (ha-091565-m02) Waiting for machine to stop 10/120
	I0918 20:05:53.353040   30919 main.go:141] libmachine: (ha-091565-m02) Waiting for machine to stop 11/120
	I0918 20:05:54.354309   30919 main.go:141] libmachine: (ha-091565-m02) Waiting for machine to stop 12/120
	I0918 20:05:55.355941   30919 main.go:141] libmachine: (ha-091565-m02) Waiting for machine to stop 13/120
	I0918 20:05:56.357328   30919 main.go:141] libmachine: (ha-091565-m02) Waiting for machine to stop 14/120
	I0918 20:05:57.359529   30919 main.go:141] libmachine: (ha-091565-m02) Waiting for machine to stop 15/120
	I0918 20:05:58.361136   30919 main.go:141] libmachine: (ha-091565-m02) Waiting for machine to stop 16/120
	I0918 20:05:59.362947   30919 main.go:141] libmachine: (ha-091565-m02) Waiting for machine to stop 17/120
	I0918 20:06:00.364394   30919 main.go:141] libmachine: (ha-091565-m02) Waiting for machine to stop 18/120
	I0918 20:06:01.365843   30919 main.go:141] libmachine: (ha-091565-m02) Waiting for machine to stop 19/120
	I0918 20:06:02.367816   30919 main.go:141] libmachine: (ha-091565-m02) Waiting for machine to stop 20/120
	I0918 20:06:03.369421   30919 main.go:141] libmachine: (ha-091565-m02) Waiting for machine to stop 21/120
	I0918 20:06:04.370911   30919 main.go:141] libmachine: (ha-091565-m02) Waiting for machine to stop 22/120
	I0918 20:06:05.373047   30919 main.go:141] libmachine: (ha-091565-m02) Waiting for machine to stop 23/120
	I0918 20:06:06.374407   30919 main.go:141] libmachine: (ha-091565-m02) Waiting for machine to stop 24/120
	I0918 20:06:07.376261   30919 main.go:141] libmachine: (ha-091565-m02) Waiting for machine to stop 25/120
	I0918 20:06:08.377631   30919 main.go:141] libmachine: (ha-091565-m02) Waiting for machine to stop 26/120
	I0918 20:06:09.378845   30919 main.go:141] libmachine: (ha-091565-m02) Waiting for machine to stop 27/120
	I0918 20:06:10.380292   30919 main.go:141] libmachine: (ha-091565-m02) Waiting for machine to stop 28/120
	I0918 20:06:11.382638   30919 main.go:141] libmachine: (ha-091565-m02) Waiting for machine to stop 29/120
	I0918 20:06:12.384572   30919 main.go:141] libmachine: (ha-091565-m02) Waiting for machine to stop 30/120
	I0918 20:06:13.386588   30919 main.go:141] libmachine: (ha-091565-m02) Waiting for machine to stop 31/120
	I0918 20:06:14.388067   30919 main.go:141] libmachine: (ha-091565-m02) Waiting for machine to stop 32/120
	I0918 20:06:15.389266   30919 main.go:141] libmachine: (ha-091565-m02) Waiting for machine to stop 33/120
	I0918 20:06:16.390964   30919 main.go:141] libmachine: (ha-091565-m02) Waiting for machine to stop 34/120
	I0918 20:06:17.393046   30919 main.go:141] libmachine: (ha-091565-m02) Waiting for machine to stop 35/120
	I0918 20:06:18.394436   30919 main.go:141] libmachine: (ha-091565-m02) Waiting for machine to stop 36/120
	I0918 20:06:19.396247   30919 main.go:141] libmachine: (ha-091565-m02) Waiting for machine to stop 37/120
	I0918 20:06:20.398515   30919 main.go:141] libmachine: (ha-091565-m02) Waiting for machine to stop 38/120
	I0918 20:06:21.399935   30919 main.go:141] libmachine: (ha-091565-m02) Waiting for machine to stop 39/120
	I0918 20:06:22.401743   30919 main.go:141] libmachine: (ha-091565-m02) Waiting for machine to stop 40/120
	I0918 20:06:23.403396   30919 main.go:141] libmachine: (ha-091565-m02) Waiting for machine to stop 41/120
	I0918 20:06:24.404968   30919 main.go:141] libmachine: (ha-091565-m02) Waiting for machine to stop 42/120
	I0918 20:06:25.406550   30919 main.go:141] libmachine: (ha-091565-m02) Waiting for machine to stop 43/120
	I0918 20:06:26.408399   30919 main.go:141] libmachine: (ha-091565-m02) Waiting for machine to stop 44/120
	I0918 20:06:27.410134   30919 main.go:141] libmachine: (ha-091565-m02) Waiting for machine to stop 45/120
	I0918 20:06:28.411649   30919 main.go:141] libmachine: (ha-091565-m02) Waiting for machine to stop 46/120
	I0918 20:06:29.413077   30919 main.go:141] libmachine: (ha-091565-m02) Waiting for machine to stop 47/120
	I0918 20:06:30.414698   30919 main.go:141] libmachine: (ha-091565-m02) Waiting for machine to stop 48/120
	I0918 20:06:31.416190   30919 main.go:141] libmachine: (ha-091565-m02) Waiting for machine to stop 49/120
	I0918 20:06:32.418429   30919 main.go:141] libmachine: (ha-091565-m02) Waiting for machine to stop 50/120
	I0918 20:06:33.419762   30919 main.go:141] libmachine: (ha-091565-m02) Waiting for machine to stop 51/120
	I0918 20:06:34.421042   30919 main.go:141] libmachine: (ha-091565-m02) Waiting for machine to stop 52/120
	I0918 20:06:35.422621   30919 main.go:141] libmachine: (ha-091565-m02) Waiting for machine to stop 53/120
	I0918 20:06:36.424180   30919 main.go:141] libmachine: (ha-091565-m02) Waiting for machine to stop 54/120
	I0918 20:06:37.425981   30919 main.go:141] libmachine: (ha-091565-m02) Waiting for machine to stop 55/120
	I0918 20:06:38.427274   30919 main.go:141] libmachine: (ha-091565-m02) Waiting for machine to stop 56/120
	I0918 20:06:39.428951   30919 main.go:141] libmachine: (ha-091565-m02) Waiting for machine to stop 57/120
	I0918 20:06:40.431182   30919 main.go:141] libmachine: (ha-091565-m02) Waiting for machine to stop 58/120
	I0918 20:06:41.432863   30919 main.go:141] libmachine: (ha-091565-m02) Waiting for machine to stop 59/120
	I0918 20:06:42.434896   30919 main.go:141] libmachine: (ha-091565-m02) Waiting for machine to stop 60/120
	I0918 20:06:43.436398   30919 main.go:141] libmachine: (ha-091565-m02) Waiting for machine to stop 61/120
	I0918 20:06:44.438534   30919 main.go:141] libmachine: (ha-091565-m02) Waiting for machine to stop 62/120
	I0918 20:06:45.439941   30919 main.go:141] libmachine: (ha-091565-m02) Waiting for machine to stop 63/120
	I0918 20:06:46.441611   30919 main.go:141] libmachine: (ha-091565-m02) Waiting for machine to stop 64/120
	I0918 20:06:47.443532   30919 main.go:141] libmachine: (ha-091565-m02) Waiting for machine to stop 65/120
	I0918 20:06:48.444980   30919 main.go:141] libmachine: (ha-091565-m02) Waiting for machine to stop 66/120
	I0918 20:06:49.446425   30919 main.go:141] libmachine: (ha-091565-m02) Waiting for machine to stop 67/120
	I0918 20:06:50.447936   30919 main.go:141] libmachine: (ha-091565-m02) Waiting for machine to stop 68/120
	I0918 20:06:51.449238   30919 main.go:141] libmachine: (ha-091565-m02) Waiting for machine to stop 69/120
	I0918 20:06:52.450707   30919 main.go:141] libmachine: (ha-091565-m02) Waiting for machine to stop 70/120
	I0918 20:06:53.452199   30919 main.go:141] libmachine: (ha-091565-m02) Waiting for machine to stop 71/120
	I0918 20:06:54.453775   30919 main.go:141] libmachine: (ha-091565-m02) Waiting for machine to stop 72/120
	I0918 20:06:55.455153   30919 main.go:141] libmachine: (ha-091565-m02) Waiting for machine to stop 73/120
	I0918 20:06:56.456302   30919 main.go:141] libmachine: (ha-091565-m02) Waiting for machine to stop 74/120
	I0918 20:06:57.458203   30919 main.go:141] libmachine: (ha-091565-m02) Waiting for machine to stop 75/120
	I0918 20:06:58.459689   30919 main.go:141] libmachine: (ha-091565-m02) Waiting for machine to stop 76/120
	I0918 20:06:59.461290   30919 main.go:141] libmachine: (ha-091565-m02) Waiting for machine to stop 77/120
	I0918 20:07:00.463339   30919 main.go:141] libmachine: (ha-091565-m02) Waiting for machine to stop 78/120
	I0918 20:07:01.465570   30919 main.go:141] libmachine: (ha-091565-m02) Waiting for machine to stop 79/120
	I0918 20:07:02.467479   30919 main.go:141] libmachine: (ha-091565-m02) Waiting for machine to stop 80/120
	I0918 20:07:03.468960   30919 main.go:141] libmachine: (ha-091565-m02) Waiting for machine to stop 81/120
	I0918 20:07:04.470189   30919 main.go:141] libmachine: (ha-091565-m02) Waiting for machine to stop 82/120
	I0918 20:07:05.471646   30919 main.go:141] libmachine: (ha-091565-m02) Waiting for machine to stop 83/120
	I0918 20:07:06.473018   30919 main.go:141] libmachine: (ha-091565-m02) Waiting for machine to stop 84/120
	I0918 20:07:07.475183   30919 main.go:141] libmachine: (ha-091565-m02) Waiting for machine to stop 85/120
	I0918 20:07:08.476593   30919 main.go:141] libmachine: (ha-091565-m02) Waiting for machine to stop 86/120
	I0918 20:07:09.477948   30919 main.go:141] libmachine: (ha-091565-m02) Waiting for machine to stop 87/120
	I0918 20:07:10.479674   30919 main.go:141] libmachine: (ha-091565-m02) Waiting for machine to stop 88/120
	I0918 20:07:11.480980   30919 main.go:141] libmachine: (ha-091565-m02) Waiting for machine to stop 89/120
	I0918 20:07:12.483399   30919 main.go:141] libmachine: (ha-091565-m02) Waiting for machine to stop 90/120
	I0918 20:07:13.484785   30919 main.go:141] libmachine: (ha-091565-m02) Waiting for machine to stop 91/120
	I0918 20:07:14.486581   30919 main.go:141] libmachine: (ha-091565-m02) Waiting for machine to stop 92/120
	I0918 20:07:15.488257   30919 main.go:141] libmachine: (ha-091565-m02) Waiting for machine to stop 93/120
	I0918 20:07:16.490904   30919 main.go:141] libmachine: (ha-091565-m02) Waiting for machine to stop 94/120
	I0918 20:07:17.492993   30919 main.go:141] libmachine: (ha-091565-m02) Waiting for machine to stop 95/120
	I0918 20:07:18.494572   30919 main.go:141] libmachine: (ha-091565-m02) Waiting for machine to stop 96/120
	I0918 20:07:19.496360   30919 main.go:141] libmachine: (ha-091565-m02) Waiting for machine to stop 97/120
	I0918 20:07:20.498667   30919 main.go:141] libmachine: (ha-091565-m02) Waiting for machine to stop 98/120
	I0918 20:07:21.501169   30919 main.go:141] libmachine: (ha-091565-m02) Waiting for machine to stop 99/120
	I0918 20:07:22.503335   30919 main.go:141] libmachine: (ha-091565-m02) Waiting for machine to stop 100/120
	I0918 20:07:23.505508   30919 main.go:141] libmachine: (ha-091565-m02) Waiting for machine to stop 101/120
	I0918 20:07:24.506918   30919 main.go:141] libmachine: (ha-091565-m02) Waiting for machine to stop 102/120
	I0918 20:07:25.508351   30919 main.go:141] libmachine: (ha-091565-m02) Waiting for machine to stop 103/120
	I0918 20:07:26.509821   30919 main.go:141] libmachine: (ha-091565-m02) Waiting for machine to stop 104/120
	I0918 20:07:27.511205   30919 main.go:141] libmachine: (ha-091565-m02) Waiting for machine to stop 105/120
	I0918 20:07:28.512735   30919 main.go:141] libmachine: (ha-091565-m02) Waiting for machine to stop 106/120
	I0918 20:07:29.514112   30919 main.go:141] libmachine: (ha-091565-m02) Waiting for machine to stop 107/120
	I0918 20:07:30.515653   30919 main.go:141] libmachine: (ha-091565-m02) Waiting for machine to stop 108/120
	I0918 20:07:31.516961   30919 main.go:141] libmachine: (ha-091565-m02) Waiting for machine to stop 109/120
	I0918 20:07:32.518952   30919 main.go:141] libmachine: (ha-091565-m02) Waiting for machine to stop 110/120
	I0918 20:07:33.520260   30919 main.go:141] libmachine: (ha-091565-m02) Waiting for machine to stop 111/120
	I0918 20:07:34.521944   30919 main.go:141] libmachine: (ha-091565-m02) Waiting for machine to stop 112/120
	I0918 20:07:35.523351   30919 main.go:141] libmachine: (ha-091565-m02) Waiting for machine to stop 113/120
	I0918 20:07:36.524496   30919 main.go:141] libmachine: (ha-091565-m02) Waiting for machine to stop 114/120
	I0918 20:07:37.526082   30919 main.go:141] libmachine: (ha-091565-m02) Waiting for machine to stop 115/120
	I0918 20:07:38.527411   30919 main.go:141] libmachine: (ha-091565-m02) Waiting for machine to stop 116/120
	I0918 20:07:39.529081   30919 main.go:141] libmachine: (ha-091565-m02) Waiting for machine to stop 117/120
	I0918 20:07:40.530535   30919 main.go:141] libmachine: (ha-091565-m02) Waiting for machine to stop 118/120
	I0918 20:07:41.532035   30919 main.go:141] libmachine: (ha-091565-m02) Waiting for machine to stop 119/120
	I0918 20:07:42.532748   30919 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0918 20:07:42.532867   30919 out.go:270] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-091565 node stop m02 -v=7 --alsologtostderr": exit status 30
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-091565 status -v=7 --alsologtostderr
E0918 20:07:45.148206   14878 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/functional-790989/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:369: (dbg) Done: out/minikube-linux-amd64 -p ha-091565 status -v=7 --alsologtostderr: (18.646583181s)
ha_test.go:375: status says not all three control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-091565 status -v=7 --alsologtostderr": 
ha_test.go:378: status says not three hosts are running: args "out/minikube-linux-amd64 -p ha-091565 status -v=7 --alsologtostderr": 
ha_test.go:381: status says not three kubelets are running: args "out/minikube-linux-amd64 -p ha-091565 status -v=7 --alsologtostderr": 
ha_test.go:384: status says not two apiservers are running: args "out/minikube-linux-amd64 -p ha-091565 status -v=7 --alsologtostderr": 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-091565 -n ha-091565
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-091565 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-091565 logs -n 25: (1.418708044s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-091565 cp ha-091565-m03:/home/docker/cp-test.txt                              | ha-091565 | jenkins | v1.34.0 | 18 Sep 24 20:05 UTC | 18 Sep 24 20:05 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3131576438/001/cp-test_ha-091565-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-091565 ssh -n                                                                 | ha-091565 | jenkins | v1.34.0 | 18 Sep 24 20:05 UTC | 18 Sep 24 20:05 UTC |
	|         | ha-091565-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-091565 cp ha-091565-m03:/home/docker/cp-test.txt                              | ha-091565 | jenkins | v1.34.0 | 18 Sep 24 20:05 UTC | 18 Sep 24 20:05 UTC |
	|         | ha-091565:/home/docker/cp-test_ha-091565-m03_ha-091565.txt                       |           |         |         |                     |                     |
	| ssh     | ha-091565 ssh -n                                                                 | ha-091565 | jenkins | v1.34.0 | 18 Sep 24 20:05 UTC | 18 Sep 24 20:05 UTC |
	|         | ha-091565-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-091565 ssh -n ha-091565 sudo cat                                              | ha-091565 | jenkins | v1.34.0 | 18 Sep 24 20:05 UTC | 18 Sep 24 20:05 UTC |
	|         | /home/docker/cp-test_ha-091565-m03_ha-091565.txt                                 |           |         |         |                     |                     |
	| cp      | ha-091565 cp ha-091565-m03:/home/docker/cp-test.txt                              | ha-091565 | jenkins | v1.34.0 | 18 Sep 24 20:05 UTC | 18 Sep 24 20:05 UTC |
	|         | ha-091565-m02:/home/docker/cp-test_ha-091565-m03_ha-091565-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-091565 ssh -n                                                                 | ha-091565 | jenkins | v1.34.0 | 18 Sep 24 20:05 UTC | 18 Sep 24 20:05 UTC |
	|         | ha-091565-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-091565 ssh -n ha-091565-m02 sudo cat                                          | ha-091565 | jenkins | v1.34.0 | 18 Sep 24 20:05 UTC | 18 Sep 24 20:05 UTC |
	|         | /home/docker/cp-test_ha-091565-m03_ha-091565-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-091565 cp ha-091565-m03:/home/docker/cp-test.txt                              | ha-091565 | jenkins | v1.34.0 | 18 Sep 24 20:05 UTC | 18 Sep 24 20:05 UTC |
	|         | ha-091565-m04:/home/docker/cp-test_ha-091565-m03_ha-091565-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-091565 ssh -n                                                                 | ha-091565 | jenkins | v1.34.0 | 18 Sep 24 20:05 UTC | 18 Sep 24 20:05 UTC |
	|         | ha-091565-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-091565 ssh -n ha-091565-m04 sudo cat                                          | ha-091565 | jenkins | v1.34.0 | 18 Sep 24 20:05 UTC | 18 Sep 24 20:05 UTC |
	|         | /home/docker/cp-test_ha-091565-m03_ha-091565-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-091565 cp testdata/cp-test.txt                                                | ha-091565 | jenkins | v1.34.0 | 18 Sep 24 20:05 UTC | 18 Sep 24 20:05 UTC |
	|         | ha-091565-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-091565 ssh -n                                                                 | ha-091565 | jenkins | v1.34.0 | 18 Sep 24 20:05 UTC | 18 Sep 24 20:05 UTC |
	|         | ha-091565-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-091565 cp ha-091565-m04:/home/docker/cp-test.txt                              | ha-091565 | jenkins | v1.34.0 | 18 Sep 24 20:05 UTC | 18 Sep 24 20:05 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3131576438/001/cp-test_ha-091565-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-091565 ssh -n                                                                 | ha-091565 | jenkins | v1.34.0 | 18 Sep 24 20:05 UTC | 18 Sep 24 20:05 UTC |
	|         | ha-091565-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-091565 cp ha-091565-m04:/home/docker/cp-test.txt                              | ha-091565 | jenkins | v1.34.0 | 18 Sep 24 20:05 UTC | 18 Sep 24 20:05 UTC |
	|         | ha-091565:/home/docker/cp-test_ha-091565-m04_ha-091565.txt                       |           |         |         |                     |                     |
	| ssh     | ha-091565 ssh -n                                                                 | ha-091565 | jenkins | v1.34.0 | 18 Sep 24 20:05 UTC | 18 Sep 24 20:05 UTC |
	|         | ha-091565-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-091565 ssh -n ha-091565 sudo cat                                              | ha-091565 | jenkins | v1.34.0 | 18 Sep 24 20:05 UTC | 18 Sep 24 20:05 UTC |
	|         | /home/docker/cp-test_ha-091565-m04_ha-091565.txt                                 |           |         |         |                     |                     |
	| cp      | ha-091565 cp ha-091565-m04:/home/docker/cp-test.txt                              | ha-091565 | jenkins | v1.34.0 | 18 Sep 24 20:05 UTC | 18 Sep 24 20:05 UTC |
	|         | ha-091565-m02:/home/docker/cp-test_ha-091565-m04_ha-091565-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-091565 ssh -n                                                                 | ha-091565 | jenkins | v1.34.0 | 18 Sep 24 20:05 UTC | 18 Sep 24 20:05 UTC |
	|         | ha-091565-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-091565 ssh -n ha-091565-m02 sudo cat                                          | ha-091565 | jenkins | v1.34.0 | 18 Sep 24 20:05 UTC | 18 Sep 24 20:05 UTC |
	|         | /home/docker/cp-test_ha-091565-m04_ha-091565-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-091565 cp ha-091565-m04:/home/docker/cp-test.txt                              | ha-091565 | jenkins | v1.34.0 | 18 Sep 24 20:05 UTC | 18 Sep 24 20:05 UTC |
	|         | ha-091565-m03:/home/docker/cp-test_ha-091565-m04_ha-091565-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-091565 ssh -n                                                                 | ha-091565 | jenkins | v1.34.0 | 18 Sep 24 20:05 UTC | 18 Sep 24 20:05 UTC |
	|         | ha-091565-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-091565 ssh -n ha-091565-m03 sudo cat                                          | ha-091565 | jenkins | v1.34.0 | 18 Sep 24 20:05 UTC | 18 Sep 24 20:05 UTC |
	|         | /home/docker/cp-test_ha-091565-m04_ha-091565-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-091565 node stop m02 -v=7                                                     | ha-091565 | jenkins | v1.34.0 | 18 Sep 24 20:05 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/18 20:00:57
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0918 20:00:57.640467   26827 out.go:345] Setting OutFile to fd 1 ...
	I0918 20:00:57.640561   26827 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 20:00:57.640569   26827 out.go:358] Setting ErrFile to fd 2...
	I0918 20:00:57.640573   26827 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 20:00:57.640761   26827 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19667-7671/.minikube/bin
	I0918 20:00:57.641318   26827 out.go:352] Setting JSON to false
	I0918 20:00:57.642141   26827 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2602,"bootTime":1726687056,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0918 20:00:57.642239   26827 start.go:139] virtualization: kvm guest
	I0918 20:00:57.644428   26827 out.go:177] * [ha-091565] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0918 20:00:57.645728   26827 notify.go:220] Checking for updates...
	I0918 20:00:57.645758   26827 out.go:177]   - MINIKUBE_LOCATION=19667
	I0918 20:00:57.647179   26827 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 20:00:57.648500   26827 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19667-7671/kubeconfig
	I0918 20:00:57.649839   26827 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19667-7671/.minikube
	I0918 20:00:57.651097   26827 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0918 20:00:57.652502   26827 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0918 20:00:57.653976   26827 driver.go:394] Setting default libvirt URI to qemu:///system
	I0918 20:00:57.687513   26827 out.go:177] * Using the kvm2 driver based on user configuration
	I0918 20:00:57.688577   26827 start.go:297] selected driver: kvm2
	I0918 20:00:57.688601   26827 start.go:901] validating driver "kvm2" against <nil>
	I0918 20:00:57.688623   26827 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 20:00:57.689634   26827 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 20:00:57.689741   26827 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19667-7671/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0918 20:00:57.704974   26827 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0918 20:00:57.705031   26827 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0918 20:00:57.705320   26827 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0918 20:00:57.705370   26827 cni.go:84] Creating CNI manager for ""
	I0918 20:00:57.705425   26827 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0918 20:00:57.705440   26827 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0918 20:00:57.705520   26827 start.go:340] cluster config:
	{Name:ha-091565 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-091565 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 20:00:57.705651   26827 iso.go:125] acquiring lock: {Name:mkbcf4d84f3091567a39802cd5e773cb10b8c545 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 20:00:57.707426   26827 out.go:177] * Starting "ha-091565" primary control-plane node in "ha-091565" cluster
	I0918 20:00:57.708558   26827 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0918 20:00:57.708602   26827 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0918 20:00:57.708622   26827 cache.go:56] Caching tarball of preloaded images
	I0918 20:00:57.708700   26827 preload.go:172] Found /home/jenkins/minikube-integration/19667-7671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0918 20:00:57.708710   26827 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0918 20:00:57.708999   26827 profile.go:143] Saving config to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/config.json ...
	I0918 20:00:57.709019   26827 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/config.json: {Name:mk6751feb5fedaf9ba97f9b527df45d961607c7d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 20:00:57.709176   26827 start.go:360] acquireMachinesLock for ha-091565: {Name:mk1b0d68a0b7d9d4bc8204d5d81cb9eb77222526 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 20:00:57.709206   26827 start.go:364] duration metric: took 18.41µs to acquireMachinesLock for "ha-091565"
	I0918 20:00:57.709221   26827 start.go:93] Provisioning new machine with config: &{Name:ha-091565 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-091565 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0918 20:00:57.709299   26827 start.go:125] createHost starting for "" (driver="kvm2")
	I0918 20:00:57.710894   26827 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0918 20:00:57.711003   26827 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 20:00:57.711035   26827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 20:00:57.725443   26827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33447
	I0918 20:00:57.725903   26827 main.go:141] libmachine: () Calling .GetVersion
	I0918 20:00:57.726425   26827 main.go:141] libmachine: Using API Version  1
	I0918 20:00:57.726445   26827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 20:00:57.726722   26827 main.go:141] libmachine: () Calling .GetMachineName
	I0918 20:00:57.726883   26827 main.go:141] libmachine: (ha-091565) Calling .GetMachineName
	I0918 20:00:57.727025   26827 main.go:141] libmachine: (ha-091565) Calling .DriverName
	I0918 20:00:57.727181   26827 start.go:159] libmachine.API.Create for "ha-091565" (driver="kvm2")
	I0918 20:00:57.727222   26827 client.go:168] LocalClient.Create starting
	I0918 20:00:57.727261   26827 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem
	I0918 20:00:57.727293   26827 main.go:141] libmachine: Decoding PEM data...
	I0918 20:00:57.727312   26827 main.go:141] libmachine: Parsing certificate...
	I0918 20:00:57.727377   26827 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem
	I0918 20:00:57.727407   26827 main.go:141] libmachine: Decoding PEM data...
	I0918 20:00:57.727427   26827 main.go:141] libmachine: Parsing certificate...
	I0918 20:00:57.727451   26827 main.go:141] libmachine: Running pre-create checks...
	I0918 20:00:57.727462   26827 main.go:141] libmachine: (ha-091565) Calling .PreCreateCheck
	I0918 20:00:57.727741   26827 main.go:141] libmachine: (ha-091565) Calling .GetConfigRaw
	I0918 20:00:57.728143   26827 main.go:141] libmachine: Creating machine...
	I0918 20:00:57.728157   26827 main.go:141] libmachine: (ha-091565) Calling .Create
	I0918 20:00:57.728286   26827 main.go:141] libmachine: (ha-091565) Creating KVM machine...
	I0918 20:00:57.729703   26827 main.go:141] libmachine: (ha-091565) DBG | found existing default KVM network
	I0918 20:00:57.730516   26827 main.go:141] libmachine: (ha-091565) DBG | I0918 20:00:57.730387   26850 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002211e0}
	I0918 20:00:57.730578   26827 main.go:141] libmachine: (ha-091565) DBG | created network xml: 
	I0918 20:00:57.730605   26827 main.go:141] libmachine: (ha-091565) DBG | <network>
	I0918 20:00:57.730618   26827 main.go:141] libmachine: (ha-091565) DBG |   <name>mk-ha-091565</name>
	I0918 20:00:57.730631   26827 main.go:141] libmachine: (ha-091565) DBG |   <dns enable='no'/>
	I0918 20:00:57.730660   26827 main.go:141] libmachine: (ha-091565) DBG |   
	I0918 20:00:57.730680   26827 main.go:141] libmachine: (ha-091565) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0918 20:00:57.730693   26827 main.go:141] libmachine: (ha-091565) DBG |     <dhcp>
	I0918 20:00:57.730703   26827 main.go:141] libmachine: (ha-091565) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0918 20:00:57.730715   26827 main.go:141] libmachine: (ha-091565) DBG |     </dhcp>
	I0918 20:00:57.730736   26827 main.go:141] libmachine: (ha-091565) DBG |   </ip>
	I0918 20:00:57.730748   26827 main.go:141] libmachine: (ha-091565) DBG |   
	I0918 20:00:57.730757   26827 main.go:141] libmachine: (ha-091565) DBG | </network>
	I0918 20:00:57.730768   26827 main.go:141] libmachine: (ha-091565) DBG | 
	I0918 20:00:57.735618   26827 main.go:141] libmachine: (ha-091565) DBG | trying to create private KVM network mk-ha-091565 192.168.39.0/24...
	I0918 20:00:57.800998   26827 main.go:141] libmachine: (ha-091565) DBG | private KVM network mk-ha-091565 192.168.39.0/24 created
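
Editor's note: the XML echoed above is the libvirt network definition minikube generated for this cluster: an isolated 192.168.39.0/24 network with DHCP and DNS disabled. As a hedged illustration only (the template text, struct, and field names below are assumptions for this sketch, not minikube's source), a comparable definition can be rendered with Go's standard text/template package and then handed to libvirt:

package main

import (
	"os"
	"text/template"
)

// networkTmpl mirrors the shape of the network XML shown in the log:
// an isolated /24 with a DHCP range and DNS turned off.
const networkTmpl = `<network>
  <name>{{.Name}}</name>
  <dns enable='no'/>
  <ip address='{{.Gateway}}' netmask='{{.Netmask}}'>
    <dhcp>
      <range start='{{.ClientMin}}' end='{{.ClientMax}}'/>
    </dhcp>
  </ip>
</network>
`

type kvmNetwork struct {
	Name      string
	Gateway   string
	Netmask   string
	ClientMin string
	ClientMax string
}

func main() {
	// Values taken from the log above (subnet 192.168.39.0/24, gateway .1, DHCP .2-.253).
	n := kvmNetwork{
		Name:      "mk-ha-091565",
		Gateway:   "192.168.39.1",
		Netmask:   "255.255.255.0",
		ClientMin: "192.168.39.2",
		ClientMax: "192.168.39.253",
	}
	// Render the XML to stdout; the result could then be defined with libvirt.
	tmpl := template.Must(template.New("net").Parse(networkTmpl))
	if err := tmpl.Execute(os.Stdout, n); err != nil {
		panic(err)
	}
}
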
	I0918 20:00:57.801029   26827 main.go:141] libmachine: (ha-091565) Setting up store path in /home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565 ...
	I0918 20:00:57.801041   26827 main.go:141] libmachine: (ha-091565) DBG | I0918 20:00:57.800989   26850 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19667-7671/.minikube
	I0918 20:00:57.801133   26827 main.go:141] libmachine: (ha-091565) Building disk image from file:///home/jenkins/minikube-integration/19667-7671/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso
	I0918 20:00:57.801206   26827 main.go:141] libmachine: (ha-091565) Downloading /home/jenkins/minikube-integration/19667-7671/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19667-7671/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso...
	I0918 20:00:58.046606   26827 main.go:141] libmachine: (ha-091565) DBG | I0918 20:00:58.046472   26850 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565/id_rsa...
	I0918 20:00:58.328818   26827 main.go:141] libmachine: (ha-091565) DBG | I0918 20:00:58.328673   26850 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565/ha-091565.rawdisk...
	I0918 20:00:58.328844   26827 main.go:141] libmachine: (ha-091565) DBG | Writing magic tar header
	I0918 20:00:58.328853   26827 main.go:141] libmachine: (ha-091565) DBG | Writing SSH key tar header
	I0918 20:00:58.328860   26827 main.go:141] libmachine: (ha-091565) DBG | I0918 20:00:58.328794   26850 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565 ...
	I0918 20:00:58.328961   26827 main.go:141] libmachine: (ha-091565) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565
	I0918 20:00:58.328984   26827 main.go:141] libmachine: (ha-091565) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19667-7671/.minikube/machines
	I0918 20:00:58.328999   26827 main.go:141] libmachine: (ha-091565) Setting executable bit set on /home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565 (perms=drwx------)
	I0918 20:00:58.329013   26827 main.go:141] libmachine: (ha-091565) Setting executable bit set on /home/jenkins/minikube-integration/19667-7671/.minikube/machines (perms=drwxr-xr-x)
	I0918 20:00:58.329024   26827 main.go:141] libmachine: (ha-091565) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19667-7671/.minikube
	I0918 20:00:58.329034   26827 main.go:141] libmachine: (ha-091565) Setting executable bit set on /home/jenkins/minikube-integration/19667-7671/.minikube (perms=drwxr-xr-x)
	I0918 20:00:58.329045   26827 main.go:141] libmachine: (ha-091565) Setting executable bit set on /home/jenkins/minikube-integration/19667-7671 (perms=drwxrwxr-x)
	I0918 20:00:58.329050   26827 main.go:141] libmachine: (ha-091565) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0918 20:00:58.329063   26827 main.go:141] libmachine: (ha-091565) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0918 20:00:58.329069   26827 main.go:141] libmachine: (ha-091565) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19667-7671
	I0918 20:00:58.329081   26827 main.go:141] libmachine: (ha-091565) Creating domain...
	I0918 20:00:58.329099   26827 main.go:141] libmachine: (ha-091565) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0918 20:00:58.329114   26827 main.go:141] libmachine: (ha-091565) DBG | Checking permissions on dir: /home/jenkins
	I0918 20:00:58.329136   26827 main.go:141] libmachine: (ha-091565) DBG | Checking permissions on dir: /home
	I0918 20:00:58.329143   26827 main.go:141] libmachine: (ha-091565) DBG | Skipping /home - not owner
	I0918 20:00:58.330265   26827 main.go:141] libmachine: (ha-091565) define libvirt domain using xml: 
	I0918 20:00:58.330282   26827 main.go:141] libmachine: (ha-091565) <domain type='kvm'>
	I0918 20:00:58.330289   26827 main.go:141] libmachine: (ha-091565)   <name>ha-091565</name>
	I0918 20:00:58.330298   26827 main.go:141] libmachine: (ha-091565)   <memory unit='MiB'>2200</memory>
	I0918 20:00:58.330305   26827 main.go:141] libmachine: (ha-091565)   <vcpu>2</vcpu>
	I0918 20:00:58.330311   26827 main.go:141] libmachine: (ha-091565)   <features>
	I0918 20:00:58.330318   26827 main.go:141] libmachine: (ha-091565)     <acpi/>
	I0918 20:00:58.330326   26827 main.go:141] libmachine: (ha-091565)     <apic/>
	I0918 20:00:58.330334   26827 main.go:141] libmachine: (ha-091565)     <pae/>
	I0918 20:00:58.330345   26827 main.go:141] libmachine: (ha-091565)     
	I0918 20:00:58.330353   26827 main.go:141] libmachine: (ha-091565)   </features>
	I0918 20:00:58.330358   26827 main.go:141] libmachine: (ha-091565)   <cpu mode='host-passthrough'>
	I0918 20:00:58.330364   26827 main.go:141] libmachine: (ha-091565)   
	I0918 20:00:58.330372   26827 main.go:141] libmachine: (ha-091565)   </cpu>
	I0918 20:00:58.330400   26827 main.go:141] libmachine: (ha-091565)   <os>
	I0918 20:00:58.330421   26827 main.go:141] libmachine: (ha-091565)     <type>hvm</type>
	I0918 20:00:58.330446   26827 main.go:141] libmachine: (ha-091565)     <boot dev='cdrom'/>
	I0918 20:00:58.330464   26827 main.go:141] libmachine: (ha-091565)     <boot dev='hd'/>
	I0918 20:00:58.330471   26827 main.go:141] libmachine: (ha-091565)     <bootmenu enable='no'/>
	I0918 20:00:58.330481   26827 main.go:141] libmachine: (ha-091565)   </os>
	I0918 20:00:58.330492   26827 main.go:141] libmachine: (ha-091565)   <devices>
	I0918 20:00:58.330501   26827 main.go:141] libmachine: (ha-091565)     <disk type='file' device='cdrom'>
	I0918 20:00:58.330523   26827 main.go:141] libmachine: (ha-091565)       <source file='/home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565/boot2docker.iso'/>
	I0918 20:00:58.330530   26827 main.go:141] libmachine: (ha-091565)       <target dev='hdc' bus='scsi'/>
	I0918 20:00:58.330535   26827 main.go:141] libmachine: (ha-091565)       <readonly/>
	I0918 20:00:58.330541   26827 main.go:141] libmachine: (ha-091565)     </disk>
	I0918 20:00:58.330546   26827 main.go:141] libmachine: (ha-091565)     <disk type='file' device='disk'>
	I0918 20:00:58.330551   26827 main.go:141] libmachine: (ha-091565)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0918 20:00:58.330560   26827 main.go:141] libmachine: (ha-091565)       <source file='/home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565/ha-091565.rawdisk'/>
	I0918 20:00:58.330569   26827 main.go:141] libmachine: (ha-091565)       <target dev='hda' bus='virtio'/>
	I0918 20:00:58.330586   26827 main.go:141] libmachine: (ha-091565)     </disk>
	I0918 20:00:58.330591   26827 main.go:141] libmachine: (ha-091565)     <interface type='network'>
	I0918 20:00:58.330601   26827 main.go:141] libmachine: (ha-091565)       <source network='mk-ha-091565'/>
	I0918 20:00:58.330608   26827 main.go:141] libmachine: (ha-091565)       <model type='virtio'/>
	I0918 20:00:58.330612   26827 main.go:141] libmachine: (ha-091565)     </interface>
	I0918 20:00:58.330618   26827 main.go:141] libmachine: (ha-091565)     <interface type='network'>
	I0918 20:00:58.330625   26827 main.go:141] libmachine: (ha-091565)       <source network='default'/>
	I0918 20:00:58.330635   26827 main.go:141] libmachine: (ha-091565)       <model type='virtio'/>
	I0918 20:00:58.330641   26827 main.go:141] libmachine: (ha-091565)     </interface>
	I0918 20:00:58.330646   26827 main.go:141] libmachine: (ha-091565)     <serial type='pty'>
	I0918 20:00:58.330652   26827 main.go:141] libmachine: (ha-091565)       <target port='0'/>
	I0918 20:00:58.330656   26827 main.go:141] libmachine: (ha-091565)     </serial>
	I0918 20:00:58.330664   26827 main.go:141] libmachine: (ha-091565)     <console type='pty'>
	I0918 20:00:58.330671   26827 main.go:141] libmachine: (ha-091565)       <target type='serial' port='0'/>
	I0918 20:00:58.330684   26827 main.go:141] libmachine: (ha-091565)     </console>
	I0918 20:00:58.330693   26827 main.go:141] libmachine: (ha-091565)     <rng model='virtio'>
	I0918 20:00:58.330702   26827 main.go:141] libmachine: (ha-091565)       <backend model='random'>/dev/random</backend>
	I0918 20:00:58.330710   26827 main.go:141] libmachine: (ha-091565)     </rng>
	I0918 20:00:58.330716   26827 main.go:141] libmachine: (ha-091565)     
	I0918 20:00:58.330722   26827 main.go:141] libmachine: (ha-091565)     
	I0918 20:00:58.330726   26827 main.go:141] libmachine: (ha-091565)   </devices>
	I0918 20:00:58.330730   26827 main.go:141] libmachine: (ha-091565) </domain>
	I0918 20:00:58.330736   26827 main.go:141] libmachine: (ha-091565) 
	I0918 20:00:58.335391   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:62:68:64 in network default
	I0918 20:00:58.335905   26827 main.go:141] libmachine: (ha-091565) Ensuring networks are active...
	I0918 20:00:58.335918   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:00:58.336784   26827 main.go:141] libmachine: (ha-091565) Ensuring network default is active
	I0918 20:00:58.337204   26827 main.go:141] libmachine: (ha-091565) Ensuring network mk-ha-091565 is active
	I0918 20:00:58.337781   26827 main.go:141] libmachine: (ha-091565) Getting domain xml...
	I0918 20:00:58.338545   26827 main.go:141] libmachine: (ha-091565) Creating domain...
	I0918 20:00:59.533947   26827 main.go:141] libmachine: (ha-091565) Waiting to get IP...
	I0918 20:00:59.534657   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:00:59.535035   26827 main.go:141] libmachine: (ha-091565) DBG | unable to find current IP address of domain ha-091565 in network mk-ha-091565
	I0918 20:00:59.535072   26827 main.go:141] libmachine: (ha-091565) DBG | I0918 20:00:59.535025   26850 retry.go:31] will retry after 237.916234ms: waiting for machine to come up
	I0918 20:00:59.774780   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:00:59.775260   26827 main.go:141] libmachine: (ha-091565) DBG | unable to find current IP address of domain ha-091565 in network mk-ha-091565
	I0918 20:00:59.775295   26827 main.go:141] libmachine: (ha-091565) DBG | I0918 20:00:59.775205   26850 retry.go:31] will retry after 262.842806ms: waiting for machine to come up
	I0918 20:01:00.039656   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:00.040069   26827 main.go:141] libmachine: (ha-091565) DBG | unable to find current IP address of domain ha-091565 in network mk-ha-091565
	I0918 20:01:00.040093   26827 main.go:141] libmachine: (ha-091565) DBG | I0918 20:01:00.040046   26850 retry.go:31] will retry after 393.798982ms: waiting for machine to come up
	I0918 20:01:00.435673   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:00.436127   26827 main.go:141] libmachine: (ha-091565) DBG | unable to find current IP address of domain ha-091565 in network mk-ha-091565
	I0918 20:01:00.436161   26827 main.go:141] libmachine: (ha-091565) DBG | I0918 20:01:00.436100   26850 retry.go:31] will retry after 446.519452ms: waiting for machine to come up
	I0918 20:01:00.883844   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:00.884367   26827 main.go:141] libmachine: (ha-091565) DBG | unable to find current IP address of domain ha-091565 in network mk-ha-091565
	I0918 20:01:00.884396   26827 main.go:141] libmachine: (ha-091565) DBG | I0918 20:01:00.884301   26850 retry.go:31] will retry after 528.125995ms: waiting for machine to come up
	I0918 20:01:01.414131   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:01.414641   26827 main.go:141] libmachine: (ha-091565) DBG | unable to find current IP address of domain ha-091565 in network mk-ha-091565
	I0918 20:01:01.414662   26827 main.go:141] libmachine: (ha-091565) DBG | I0918 20:01:01.414600   26850 retry.go:31] will retry after 935.867422ms: waiting for machine to come up
	I0918 20:01:02.352501   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:02.353101   26827 main.go:141] libmachine: (ha-091565) DBG | unable to find current IP address of domain ha-091565 in network mk-ha-091565
	I0918 20:01:02.353136   26827 main.go:141] libmachine: (ha-091565) DBG | I0918 20:01:02.353036   26850 retry.go:31] will retry after 916.470629ms: waiting for machine to come up
	I0918 20:01:03.270901   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:03.271592   26827 main.go:141] libmachine: (ha-091565) DBG | unable to find current IP address of domain ha-091565 in network mk-ha-091565
	I0918 20:01:03.271617   26827 main.go:141] libmachine: (ha-091565) DBG | I0918 20:01:03.271544   26850 retry.go:31] will retry after 1.230905631s: waiting for machine to come up
	I0918 20:01:04.504061   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:04.504573   26827 main.go:141] libmachine: (ha-091565) DBG | unable to find current IP address of domain ha-091565 in network mk-ha-091565
	I0918 20:01:04.504600   26827 main.go:141] libmachine: (ha-091565) DBG | I0918 20:01:04.504501   26850 retry.go:31] will retry after 1.334656049s: waiting for machine to come up
	I0918 20:01:05.841225   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:05.841603   26827 main.go:141] libmachine: (ha-091565) DBG | unable to find current IP address of domain ha-091565 in network mk-ha-091565
	I0918 20:01:05.841627   26827 main.go:141] libmachine: (ha-091565) DBG | I0918 20:01:05.841542   26850 retry.go:31] will retry after 1.509327207s: waiting for machine to come up
	I0918 20:01:07.353477   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:07.353907   26827 main.go:141] libmachine: (ha-091565) DBG | unable to find current IP address of domain ha-091565 in network mk-ha-091565
	I0918 20:01:07.353958   26827 main.go:141] libmachine: (ha-091565) DBG | I0918 20:01:07.353878   26850 retry.go:31] will retry after 2.403908861s: waiting for machine to come up
	I0918 20:01:09.760703   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:09.761285   26827 main.go:141] libmachine: (ha-091565) DBG | unable to find current IP address of domain ha-091565 in network mk-ha-091565
	I0918 20:01:09.761311   26827 main.go:141] libmachine: (ha-091565) DBG | I0918 20:01:09.761245   26850 retry.go:31] will retry after 3.18859433s: waiting for machine to come up
	I0918 20:01:12.951021   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:12.951436   26827 main.go:141] libmachine: (ha-091565) DBG | unable to find current IP address of domain ha-091565 in network mk-ha-091565
	I0918 20:01:12.951466   26827 main.go:141] libmachine: (ha-091565) DBG | I0918 20:01:12.951387   26850 retry.go:31] will retry after 4.080420969s: waiting for machine to come up
	I0918 20:01:17.036664   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:17.037090   26827 main.go:141] libmachine: (ha-091565) DBG | unable to find current IP address of domain ha-091565 in network mk-ha-091565
	I0918 20:01:17.037112   26827 main.go:141] libmachine: (ha-091565) DBG | I0918 20:01:17.037044   26850 retry.go:31] will retry after 5.244932355s: waiting for machine to come up
	I0918 20:01:22.287118   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:22.287574   26827 main.go:141] libmachine: (ha-091565) Found IP for machine: 192.168.39.215
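
Editor's note: the repeated "will retry after …: waiting for machine to come up" lines above are a polling loop that waits for the new VM to obtain a DHCP lease, sleeping a little longer before each attempt. A minimal stdlib-only sketch of that retry-with-growing-backoff pattern follows; the function name, growth factor, and jitter choice are assumptions for illustration, not minikube's retry.go:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff keeps calling fn until it succeeds or attempts run out,
// sleeping an increasing, jittered delay between tries, similar to the
// "will retry after ..." pattern in the log above.
func retryWithBackoff(attempts int, initial time.Duration, fn func() error) error {
	delay := initial
	var lastErr error
	for i := 0; i < attempts; i++ {
		if lastErr = fn(); lastErr == nil {
			return nil
		}
		// Add up to ~50% random jitter so concurrent waiters do not poll in lockstep.
		sleep := delay + time.Duration(rand.Int63n(int64(delay)/2+1))
		fmt.Printf("will retry after %v: %v\n", sleep, lastErr)
		time.Sleep(sleep)
		delay = delay * 3 / 2 // grow the base delay for the next attempt
	}
	return fmt.Errorf("gave up after %d attempts: %w", attempts, lastErr)
}

func main() {
	start := time.Now()
	err := retryWithBackoff(15, 250*time.Millisecond, func() error {
		// Stand-in for "look up the domain's DHCP lease"; succeeds after ~3s here.
		if time.Since(start) < 3*time.Second {
			return errors.New("waiting for machine to come up")
		}
		return nil
	})
	fmt.Println("result:", err)
}
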
	I0918 20:01:22.287594   26827 main.go:141] libmachine: (ha-091565) Reserving static IP address...
	I0918 20:01:22.287606   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has current primary IP address 192.168.39.215 and MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:22.287959   26827 main.go:141] libmachine: (ha-091565) DBG | unable to find host DHCP lease matching {name: "ha-091565", mac: "52:54:00:2a:13:d8", ip: "192.168.39.215"} in network mk-ha-091565
	I0918 20:01:22.360495   26827 main.go:141] libmachine: (ha-091565) DBG | Getting to WaitForSSH function...
	I0918 20:01:22.360523   26827 main.go:141] libmachine: (ha-091565) Reserved static IP address: 192.168.39.215
	I0918 20:01:22.360535   26827 main.go:141] libmachine: (ha-091565) Waiting for SSH to be available...
	I0918 20:01:22.362885   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:22.363193   26827 main.go:141] libmachine: (ha-091565) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:2a:13:d8", ip: ""} in network mk-ha-091565
	I0918 20:01:22.363217   26827 main.go:141] libmachine: (ha-091565) DBG | unable to find defined IP address of network mk-ha-091565 interface with MAC address 52:54:00:2a:13:d8
	I0918 20:01:22.363387   26827 main.go:141] libmachine: (ha-091565) DBG | Using SSH client type: external
	I0918 20:01:22.363410   26827 main.go:141] libmachine: (ha-091565) DBG | Using SSH private key: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565/id_rsa (-rw-------)
	I0918 20:01:22.363445   26827 main.go:141] libmachine: (ha-091565) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0918 20:01:22.363470   26827 main.go:141] libmachine: (ha-091565) DBG | About to run SSH command:
	I0918 20:01:22.363487   26827 main.go:141] libmachine: (ha-091565) DBG | exit 0
	I0918 20:01:22.367035   26827 main.go:141] libmachine: (ha-091565) DBG | SSH cmd err, output: exit status 255: 
	I0918 20:01:22.367062   26827 main.go:141] libmachine: (ha-091565) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0918 20:01:22.367069   26827 main.go:141] libmachine: (ha-091565) DBG | command : exit 0
	I0918 20:01:22.367074   26827 main.go:141] libmachine: (ha-091565) DBG | err     : exit status 255
	I0918 20:01:22.367081   26827 main.go:141] libmachine: (ha-091565) DBG | output  : 
	I0918 20:01:25.368924   26827 main.go:141] libmachine: (ha-091565) DBG | Getting to WaitForSSH function...
	I0918 20:01:25.371732   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:25.372247   26827 main.go:141] libmachine: (ha-091565) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:13:d8", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:01:12 +0000 UTC Type:0 Mac:52:54:00:2a:13:d8 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-091565 Clientid:01:52:54:00:2a:13:d8}
	I0918 20:01:25.372276   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined IP address 192.168.39.215 and MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:25.372360   26827 main.go:141] libmachine: (ha-091565) DBG | Using SSH client type: external
	I0918 20:01:25.372393   26827 main.go:141] libmachine: (ha-091565) DBG | Using SSH private key: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565/id_rsa (-rw-------)
	I0918 20:01:25.372430   26827 main.go:141] libmachine: (ha-091565) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.215 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0918 20:01:25.372447   26827 main.go:141] libmachine: (ha-091565) DBG | About to run SSH command:
	I0918 20:01:25.372458   26827 main.go:141] libmachine: (ha-091565) DBG | exit 0
	I0918 20:01:25.500108   26827 main.go:141] libmachine: (ha-091565) DBG | SSH cmd err, output: <nil>: 
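
Editor's note: the SSH probe above simply runs `exit 0` over SSH, with host-key checking disabled, until the command succeeds (the first attempt fails with exit status 255 because sshd is not up yet). A rough equivalent using the system ssh client via Go's os/exec is sketched below; the key path and address are the ones printed in the log, and the helper is illustrative rather than libmachine's implementation:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// sshReady returns nil once `ssh ... exit 0` succeeds, i.e. once sshd on the
// guest accepts the key and runs a command, mirroring the WaitForSSH probe above.
func sshReady(user, addr, keyPath string) error {
	cmd := exec.Command("ssh",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "ConnectTimeout=10",
		"-o", "PasswordAuthentication=no",
		"-i", keyPath,
		fmt.Sprintf("%s@%s", user, addr),
		"exit", "0")
	return cmd.Run() // a non-nil error corresponds to the "exit status 255" seen above
}

func main() {
	key := "/home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565/id_rsa"
	for {
		if err := sshReady("docker", "192.168.39.215", key); err != nil {
			fmt.Println("ssh not ready yet:", err)
			time.Sleep(3 * time.Second)
			continue
		}
		fmt.Println("ssh is available")
		return
	}
}
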
	I0918 20:01:25.500382   26827 main.go:141] libmachine: (ha-091565) KVM machine creation complete!
	I0918 20:01:25.500836   26827 main.go:141] libmachine: (ha-091565) Calling .GetConfigRaw
	I0918 20:01:25.501392   26827 main.go:141] libmachine: (ha-091565) Calling .DriverName
	I0918 20:01:25.501585   26827 main.go:141] libmachine: (ha-091565) Calling .DriverName
	I0918 20:01:25.501791   26827 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0918 20:01:25.501803   26827 main.go:141] libmachine: (ha-091565) Calling .GetState
	I0918 20:01:25.503113   26827 main.go:141] libmachine: Detecting operating system of created instance...
	I0918 20:01:25.503144   26827 main.go:141] libmachine: Waiting for SSH to be available...
	I0918 20:01:25.503151   26827 main.go:141] libmachine: Getting to WaitForSSH function...
	I0918 20:01:25.503163   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHHostname
	I0918 20:01:25.505584   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:25.505981   26827 main.go:141] libmachine: (ha-091565) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:13:d8", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:01:12 +0000 UTC Type:0 Mac:52:54:00:2a:13:d8 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-091565 Clientid:01:52:54:00:2a:13:d8}
	I0918 20:01:25.506016   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined IP address 192.168.39.215 and MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:25.506132   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHPort
	I0918 20:01:25.506286   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHKeyPath
	I0918 20:01:25.506450   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHKeyPath
	I0918 20:01:25.506567   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHUsername
	I0918 20:01:25.506705   26827 main.go:141] libmachine: Using SSH client type: native
	I0918 20:01:25.506964   26827 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I0918 20:01:25.506980   26827 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0918 20:01:25.615489   26827 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0918 20:01:25.615512   26827 main.go:141] libmachine: Detecting the provisioner...
	I0918 20:01:25.615519   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHHostname
	I0918 20:01:25.618058   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:25.618343   26827 main.go:141] libmachine: (ha-091565) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:13:d8", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:01:12 +0000 UTC Type:0 Mac:52:54:00:2a:13:d8 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-091565 Clientid:01:52:54:00:2a:13:d8}
	I0918 20:01:25.618365   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined IP address 192.168.39.215 and MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:25.618476   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHPort
	I0918 20:01:25.618650   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHKeyPath
	I0918 20:01:25.618786   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHKeyPath
	I0918 20:01:25.618935   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHUsername
	I0918 20:01:25.619044   26827 main.go:141] libmachine: Using SSH client type: native
	I0918 20:01:25.619200   26827 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I0918 20:01:25.619210   26827 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0918 20:01:25.732502   26827 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0918 20:01:25.732589   26827 main.go:141] libmachine: found compatible host: buildroot
	I0918 20:01:25.732599   26827 main.go:141] libmachine: Provisioning with buildroot...
	I0918 20:01:25.732606   26827 main.go:141] libmachine: (ha-091565) Calling .GetMachineName
	I0918 20:01:25.732852   26827 buildroot.go:166] provisioning hostname "ha-091565"
	I0918 20:01:25.732880   26827 main.go:141] libmachine: (ha-091565) Calling .GetMachineName
	I0918 20:01:25.733067   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHHostname
	I0918 20:01:25.735789   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:25.736134   26827 main.go:141] libmachine: (ha-091565) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:13:d8", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:01:12 +0000 UTC Type:0 Mac:52:54:00:2a:13:d8 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-091565 Clientid:01:52:54:00:2a:13:d8}
	I0918 20:01:25.736170   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined IP address 192.168.39.215 and MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:25.736303   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHPort
	I0918 20:01:25.736498   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHKeyPath
	I0918 20:01:25.736664   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHKeyPath
	I0918 20:01:25.736815   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHUsername
	I0918 20:01:25.736962   26827 main.go:141] libmachine: Using SSH client type: native
	I0918 20:01:25.737181   26827 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I0918 20:01:25.737194   26827 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-091565 && echo "ha-091565" | sudo tee /etc/hostname
	I0918 20:01:25.862508   26827 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-091565
	
	I0918 20:01:25.862540   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHHostname
	I0918 20:01:25.866613   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:25.867074   26827 main.go:141] libmachine: (ha-091565) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:13:d8", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:01:12 +0000 UTC Type:0 Mac:52:54:00:2a:13:d8 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-091565 Clientid:01:52:54:00:2a:13:d8}
	I0918 20:01:25.867104   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined IP address 192.168.39.215 and MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:25.867538   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHPort
	I0918 20:01:25.867789   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHKeyPath
	I0918 20:01:25.867962   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHKeyPath
	I0918 20:01:25.868230   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHUsername
	I0918 20:01:25.868389   26827 main.go:141] libmachine: Using SSH client type: native
	I0918 20:01:25.868588   26827 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I0918 20:01:25.868607   26827 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-091565' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-091565/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-091565' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0918 20:01:25.988748   26827 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0918 20:01:25.988798   26827 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19667-7671/.minikube CaCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19667-7671/.minikube}
	I0918 20:01:25.988838   26827 buildroot.go:174] setting up certificates
	I0918 20:01:25.988848   26827 provision.go:84] configureAuth start
	I0918 20:01:25.988857   26827 main.go:141] libmachine: (ha-091565) Calling .GetMachineName
	I0918 20:01:25.989144   26827 main.go:141] libmachine: (ha-091565) Calling .GetIP
	I0918 20:01:25.991863   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:25.992270   26827 main.go:141] libmachine: (ha-091565) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:13:d8", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:01:12 +0000 UTC Type:0 Mac:52:54:00:2a:13:d8 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-091565 Clientid:01:52:54:00:2a:13:d8}
	I0918 20:01:25.992315   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined IP address 192.168.39.215 and MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:25.992456   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHHostname
	I0918 20:01:25.994511   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:25.994809   26827 main.go:141] libmachine: (ha-091565) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:13:d8", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:01:12 +0000 UTC Type:0 Mac:52:54:00:2a:13:d8 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-091565 Clientid:01:52:54:00:2a:13:d8}
	I0918 20:01:25.994834   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined IP address 192.168.39.215 and MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:25.994954   26827 provision.go:143] copyHostCerts
	I0918 20:01:25.994981   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem
	I0918 20:01:25.995025   26827 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem, removing ...
	I0918 20:01:25.995039   26827 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem
	I0918 20:01:25.995103   26827 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem (1082 bytes)
	I0918 20:01:25.995191   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem
	I0918 20:01:25.995209   26827 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem, removing ...
	I0918 20:01:25.995213   26827 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem
	I0918 20:01:25.995242   26827 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem (1123 bytes)
	I0918 20:01:25.995301   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem
	I0918 20:01:25.995316   26827 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem, removing ...
	I0918 20:01:25.995322   26827 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem
	I0918 20:01:25.995343   26827 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem (1679 bytes)
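
Editor's note: the copyHostCerts steps above refresh ca.pem, cert.pem, and key.pem in the .minikube root by removing any stale copy and re-copying the source file. A small stdlib-only sketch of that "remove, then copy with a mode" step follows; the helper name is hypothetical and the paths are simply the ones from the log, so treat this as an illustration rather than minikube's exec_runner:

package main

import (
	"fmt"
	"io"
	"os"
	"path/filepath"
)

// refreshCert removes any existing copy of dst and rewrites it from src with
// the given mode, like the "found ..., removing ..." / "cp: ..." steps above.
func refreshCert(src, dst string, mode os.FileMode) error {
	if _, err := os.Stat(dst); err == nil {
		if err := os.Remove(dst); err != nil {
			return fmt.Errorf("rm %s: %w", dst, err)
		}
	}
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()
	out, err := os.OpenFile(dst, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, mode)
	if err != nil {
		return err
	}
	defer out.Close()
	_, err = io.Copy(out, in)
	return err
}

func main() {
	home := "/home/jenkins/minikube-integration/19667-7671/.minikube"
	for _, name := range []string{"ca.pem", "cert.pem", "key.pem"} {
		src := filepath.Join(home, "certs", name)
		dst := filepath.Join(home, name)
		if err := refreshCert(src, dst, 0o644); err != nil {
			fmt.Println("copy failed:", err)
		}
	}
}
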
	I0918 20:01:25.995405   26827 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem org=jenkins.ha-091565 san=[127.0.0.1 192.168.39.215 ha-091565 localhost minikube]
	I0918 20:01:26.117902   26827 provision.go:177] copyRemoteCerts
	I0918 20:01:26.117954   26827 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0918 20:01:26.117977   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHHostname
	I0918 20:01:26.120733   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:26.121075   26827 main.go:141] libmachine: (ha-091565) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:13:d8", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:01:12 +0000 UTC Type:0 Mac:52:54:00:2a:13:d8 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-091565 Clientid:01:52:54:00:2a:13:d8}
	I0918 20:01:26.121091   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined IP address 192.168.39.215 and MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:26.121297   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHPort
	I0918 20:01:26.121502   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHKeyPath
	I0918 20:01:26.121666   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHUsername
	I0918 20:01:26.121786   26827 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565/id_rsa Username:docker}
	I0918 20:01:26.205619   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0918 20:01:26.205705   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0918 20:01:26.228613   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0918 20:01:26.228682   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0918 20:01:26.252879   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0918 20:01:26.252953   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0918 20:01:26.277029   26827 provision.go:87] duration metric: took 288.170096ms to configureAuth
	I0918 20:01:26.277056   26827 buildroot.go:189] setting minikube options for container-runtime
	I0918 20:01:26.277264   26827 config.go:182] Loaded profile config "ha-091565": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0918 20:01:26.277380   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHHostname
	I0918 20:01:26.279749   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:26.280128   26827 main.go:141] libmachine: (ha-091565) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:13:d8", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:01:12 +0000 UTC Type:0 Mac:52:54:00:2a:13:d8 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-091565 Clientid:01:52:54:00:2a:13:d8}
	I0918 20:01:26.280154   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined IP address 192.168.39.215 and MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:26.280280   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHPort
	I0918 20:01:26.280444   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHKeyPath
	I0918 20:01:26.280617   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHKeyPath
	I0918 20:01:26.280788   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHUsername
	I0918 20:01:26.280946   26827 main.go:141] libmachine: Using SSH client type: native
	I0918 20:01:26.281114   26827 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I0918 20:01:26.281127   26827 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0918 20:01:26.505775   26827 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0918 20:01:26.505808   26827 main.go:141] libmachine: Checking connection to Docker...
	I0918 20:01:26.505817   26827 main.go:141] libmachine: (ha-091565) Calling .GetURL
	I0918 20:01:26.507070   26827 main.go:141] libmachine: (ha-091565) DBG | Using libvirt version 6000000
	I0918 20:01:26.509239   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:26.509623   26827 main.go:141] libmachine: (ha-091565) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:13:d8", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:01:12 +0000 UTC Type:0 Mac:52:54:00:2a:13:d8 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-091565 Clientid:01:52:54:00:2a:13:d8}
	I0918 20:01:26.509653   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined IP address 192.168.39.215 and MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:26.509837   26827 main.go:141] libmachine: Docker is up and running!
	I0918 20:01:26.509859   26827 main.go:141] libmachine: Reticulating splines...
	I0918 20:01:26.509874   26827 client.go:171] duration metric: took 28.782642826s to LocalClient.Create
	I0918 20:01:26.509892   26827 start.go:167] duration metric: took 28.782711953s to libmachine.API.Create "ha-091565"
	I0918 20:01:26.509901   26827 start.go:293] postStartSetup for "ha-091565" (driver="kvm2")
	I0918 20:01:26.509909   26827 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0918 20:01:26.509925   26827 main.go:141] libmachine: (ha-091565) Calling .DriverName
	I0918 20:01:26.510174   26827 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0918 20:01:26.510198   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHHostname
	I0918 20:01:26.512537   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:26.512896   26827 main.go:141] libmachine: (ha-091565) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:13:d8", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:01:12 +0000 UTC Type:0 Mac:52:54:00:2a:13:d8 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-091565 Clientid:01:52:54:00:2a:13:d8}
	I0918 20:01:26.512927   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined IP address 192.168.39.215 and MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:26.513099   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHPort
	I0918 20:01:26.513302   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHKeyPath
	I0918 20:01:26.513485   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHUsername
	I0918 20:01:26.513627   26827 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565/id_rsa Username:docker}
	I0918 20:01:26.598408   26827 ssh_runner.go:195] Run: cat /etc/os-release
	I0918 20:01:26.602627   26827 info.go:137] Remote host: Buildroot 2023.02.9
	I0918 20:01:26.602663   26827 filesync.go:126] Scanning /home/jenkins/minikube-integration/19667-7671/.minikube/addons for local assets ...
	I0918 20:01:26.602726   26827 filesync.go:126] Scanning /home/jenkins/minikube-integration/19667-7671/.minikube/files for local assets ...
	I0918 20:01:26.602800   26827 filesync.go:149] local asset: /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem -> 148782.pem in /etc/ssl/certs
	I0918 20:01:26.602810   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem -> /etc/ssl/certs/148782.pem
	I0918 20:01:26.602901   26827 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0918 20:01:26.612359   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem --> /etc/ssl/certs/148782.pem (1708 bytes)
	I0918 20:01:26.635555   26827 start.go:296] duration metric: took 125.639833ms for postStartSetup
	I0918 20:01:26.635626   26827 main.go:141] libmachine: (ha-091565) Calling .GetConfigRaw
	I0918 20:01:26.636227   26827 main.go:141] libmachine: (ha-091565) Calling .GetIP
	I0918 20:01:26.638938   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:26.639246   26827 main.go:141] libmachine: (ha-091565) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:13:d8", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:01:12 +0000 UTC Type:0 Mac:52:54:00:2a:13:d8 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-091565 Clientid:01:52:54:00:2a:13:d8}
	I0918 20:01:26.639274   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined IP address 192.168.39.215 and MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:26.639496   26827 profile.go:143] Saving config to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/config.json ...
	I0918 20:01:26.639737   26827 start.go:128] duration metric: took 28.930427667s to createHost
	I0918 20:01:26.639765   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHHostname
	I0918 20:01:26.642131   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:26.642460   26827 main.go:141] libmachine: (ha-091565) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:13:d8", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:01:12 +0000 UTC Type:0 Mac:52:54:00:2a:13:d8 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-091565 Clientid:01:52:54:00:2a:13:d8}
	I0918 20:01:26.642482   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined IP address 192.168.39.215 and MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:26.642675   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHPort
	I0918 20:01:26.642866   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHKeyPath
	I0918 20:01:26.643104   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHKeyPath
	I0918 20:01:26.643258   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHUsername
	I0918 20:01:26.643412   26827 main.go:141] libmachine: Using SSH client type: native
	I0918 20:01:26.643644   26827 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I0918 20:01:26.643661   26827 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0918 20:01:26.756537   26827 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726689686.738518611
	
	I0918 20:01:26.756561   26827 fix.go:216] guest clock: 1726689686.738518611
	I0918 20:01:26.756568   26827 fix.go:229] Guest: 2024-09-18 20:01:26.738518611 +0000 UTC Remote: 2024-09-18 20:01:26.639754618 +0000 UTC m=+29.034479506 (delta=98.763993ms)
	I0918 20:01:26.756587   26827 fix.go:200] guest clock delta is within tolerance: 98.763993ms
	I0918 20:01:26.756592   26827 start.go:83] releasing machines lock for "ha-091565", held for 29.047378188s
	I0918 20:01:26.756612   26827 main.go:141] libmachine: (ha-091565) Calling .DriverName
	I0918 20:01:26.756891   26827 main.go:141] libmachine: (ha-091565) Calling .GetIP
	I0918 20:01:26.759638   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:26.759950   26827 main.go:141] libmachine: (ha-091565) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:13:d8", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:01:12 +0000 UTC Type:0 Mac:52:54:00:2a:13:d8 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-091565 Clientid:01:52:54:00:2a:13:d8}
	I0918 20:01:26.759972   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined IP address 192.168.39.215 and MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:26.760128   26827 main.go:141] libmachine: (ha-091565) Calling .DriverName
	I0918 20:01:26.760656   26827 main.go:141] libmachine: (ha-091565) Calling .DriverName
	I0918 20:01:26.760816   26827 main.go:141] libmachine: (ha-091565) Calling .DriverName
	I0918 20:01:26.760919   26827 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0918 20:01:26.760970   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHHostname
	I0918 20:01:26.761017   26827 ssh_runner.go:195] Run: cat /version.json
	I0918 20:01:26.761043   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHHostname
	I0918 20:01:26.763588   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:26.763617   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:26.763927   26827 main.go:141] libmachine: (ha-091565) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:13:d8", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:01:12 +0000 UTC Type:0 Mac:52:54:00:2a:13:d8 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-091565 Clientid:01:52:54:00:2a:13:d8}
	I0918 20:01:26.763960   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined IP address 192.168.39.215 and MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:26.763986   26827 main.go:141] libmachine: (ha-091565) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:13:d8", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:01:12 +0000 UTC Type:0 Mac:52:54:00:2a:13:d8 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-091565 Clientid:01:52:54:00:2a:13:d8}
	I0918 20:01:26.764000   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined IP address 192.168.39.215 and MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:26.764093   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHPort
	I0918 20:01:26.764219   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHPort
	I0918 20:01:26.764334   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHKeyPath
	I0918 20:01:26.764352   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHKeyPath
	I0918 20:01:26.764485   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHUsername
	I0918 20:01:26.764503   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHUsername
	I0918 20:01:26.764654   26827 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565/id_rsa Username:docker}
	I0918 20:01:26.764655   26827 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565/id_rsa Username:docker}
	I0918 20:01:26.887790   26827 ssh_runner.go:195] Run: systemctl --version
	I0918 20:01:26.893767   26827 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0918 20:01:27.057963   26827 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0918 20:01:27.064172   26827 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0918 20:01:27.064252   26827 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0918 20:01:27.080537   26827 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0918 20:01:27.080566   26827 start.go:495] detecting cgroup driver to use...
	I0918 20:01:27.080726   26827 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0918 20:01:27.098904   26827 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0918 20:01:27.113999   26827 docker.go:217] disabling cri-docker service (if available) ...
	I0918 20:01:27.114063   26827 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0918 20:01:27.127448   26827 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0918 20:01:27.140971   26827 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0918 20:01:27.277092   26827 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0918 20:01:27.438944   26827 docker.go:233] disabling docker service ...
	I0918 20:01:27.439019   26827 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0918 20:01:27.452578   26827 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0918 20:01:27.465616   26827 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0918 20:01:27.576240   26827 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0918 20:01:27.692187   26827 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0918 20:01:27.706450   26827 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0918 20:01:27.724470   26827 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0918 20:01:27.724548   26827 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 20:01:27.734691   26827 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0918 20:01:27.734759   26827 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 20:01:27.744841   26827 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 20:01:27.754941   26827 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 20:01:27.765749   26827 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0918 20:01:27.776994   26827 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 20:01:27.787772   26827 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 20:01:27.805476   26827 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
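The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf: they pin the pause image, switch the cgroup manager to cgroupfs, put conmon in the pod cgroup, and open unprivileged ports via default_sysctls. A rough sketch of what the drop-in should contain afterwards, reconstructed from those commands rather than captured from the VM:

  $ sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls|unprivileged' /etc/crio/crio.conf.d/02-crio.conf
  pause_image = "registry.k8s.io/pause:3.10"
  cgroup_manager = "cgroupfs"
  conmon_cgroup = "pod"
  default_sysctls = [
    "net.ipv4.ip_unprivileged_port_start=0",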
	I0918 20:01:27.815577   26827 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0918 20:01:27.824923   26827 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0918 20:01:27.825000   26827 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0918 20:01:27.837394   26827 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
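The failed sysctl probe above only means the br_netfilter module was not loaded yet, which the subsequent modprobe fixes; the echo enables IPv4 forwarding. Both settings are what kube-proxy and a bridge-style CNI expect. Generic commands (not part of this run) to confirm them on the node:

  $ lsmod | grep br_netfilter
  $ sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward
  net.bridge.bridge-nf-call-iptables = 1    # expected once br_netfilter is loaded
  net.ipv4.ip_forward = 1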
	I0918 20:01:27.847278   26827 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 20:01:27.957450   26827 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0918 20:01:28.049268   26827 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0918 20:01:28.049347   26827 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0918 20:01:28.053609   26827 start.go:563] Will wait 60s for crictl version
	I0918 20:01:28.053664   26827 ssh_runner.go:195] Run: which crictl
	I0918 20:01:28.057561   26827 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0918 20:01:28.095781   26827 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0918 20:01:28.095855   26827 ssh_runner.go:195] Run: crio --version
	I0918 20:01:28.122990   26827 ssh_runner.go:195] Run: crio --version
	I0918 20:01:28.151689   26827 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0918 20:01:28.153185   26827 main.go:141] libmachine: (ha-091565) Calling .GetIP
	I0918 20:01:28.155727   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:28.156071   26827 main.go:141] libmachine: (ha-091565) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:13:d8", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:01:12 +0000 UTC Type:0 Mac:52:54:00:2a:13:d8 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-091565 Clientid:01:52:54:00:2a:13:d8}
	I0918 20:01:28.156102   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined IP address 192.168.39.215 and MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:28.156291   26827 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0918 20:01:28.160094   26827 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0918 20:01:28.172348   26827 kubeadm.go:883] updating cluster {Name:ha-091565 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-091565 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.215 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0918 20:01:28.172455   26827 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0918 20:01:28.172495   26827 ssh_runner.go:195] Run: sudo crictl images --output json
	I0918 20:01:28.202903   26827 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0918 20:01:28.202968   26827 ssh_runner.go:195] Run: which lz4
	I0918 20:01:28.206524   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0918 20:01:28.206640   26827 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0918 20:01:28.210309   26827 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0918 20:01:28.210346   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0918 20:01:29.428932   26827 crio.go:462] duration metric: took 1.222324485s to copy over tarball
	I0918 20:01:29.428998   26827 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0918 20:01:31.427670   26827 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.998650683s)
	I0918 20:01:31.427701   26827 crio.go:469] duration metric: took 1.998743987s to extract the tarball
	I0918 20:01:31.427710   26827 ssh_runner.go:146] rm: /preloaded.tar.lz4
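The preload tarball is copied to /preloaded.tar.lz4 and unpacked into /var so the CRI-O image store starts out populated, which is why the second `crictl images` call below finds everything. If one of these tarballs needs inspecting, plain tar with an lz4 filter is enough; the cache path shown is the one used on this CI host:

  $ tar -I lz4 -tf /home/jenkins/minikube-integration/19667-7671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 | head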
	I0918 20:01:31.465115   26827 ssh_runner.go:195] Run: sudo crictl images --output json
	I0918 20:01:31.512315   26827 crio.go:514] all images are preloaded for cri-o runtime.
	I0918 20:01:31.512340   26827 cache_images.go:84] Images are preloaded, skipping loading
	I0918 20:01:31.512349   26827 kubeadm.go:934] updating node { 192.168.39.215 8443 v1.31.1 crio true true} ...
	I0918 20:01:31.512489   26827 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-091565 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.215
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-091565 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0918 20:01:31.512625   26827 ssh_runner.go:195] Run: crio config
	I0918 20:01:31.557297   26827 cni.go:84] Creating CNI manager for ""
	I0918 20:01:31.557325   26827 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0918 20:01:31.557342   26827 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0918 20:01:31.557362   26827 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.215 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-091565 NodeName:ha-091565 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.215"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.215 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0918 20:01:31.557481   26827 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.215
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-091565"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.215
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.215"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
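The kubeadm configuration above is what gets written to /var/tmp/minikube/kubeadm.yaml.new a few lines later and then fed to kubeadm init. To sanity-check a config like this by hand, recent kubeadm releases ship a validator; whether it flags the deprecated v1beta3 API the same way the init warnings further down do is version-dependent, so treat this as a sketch:

  $ sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml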
	
	I0918 20:01:31.557515   26827 kube-vip.go:115] generating kube-vip config ...
	I0918 20:01:31.557571   26827 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0918 20:01:31.573497   26827 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0918 20:01:31.573622   26827 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
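kube-vip runs as a static pod on each control-plane node and holds the HA virtual IP 192.168.39.254 on eth0, which is the address behind control-plane.minikube.internal:8443 for this profile. Two generic checks, not captured in this run, for whether the VIP is actually being served:

  $ ip -4 addr show dev eth0 | grep 192.168.39.254     # on the node currently holding the plndr-cp-lock lease
  $ curl -sk https://192.168.39.254:8443/healthz        # should return "ok" once the API server is reachable through the VIP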
	I0918 20:01:31.573693   26827 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0918 20:01:31.583548   26827 binaries.go:44] Found k8s binaries, skipping transfer
	I0918 20:01:31.583630   26827 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0918 20:01:31.592787   26827 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0918 20:01:31.608721   26827 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0918 20:01:31.624827   26827 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0918 20:01:31.640691   26827 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0918 20:01:31.656477   26827 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0918 20:01:31.660115   26827 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0918 20:01:31.671977   26827 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 20:01:31.797641   26827 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0918 20:01:31.815122   26827 certs.go:68] Setting up /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565 for IP: 192.168.39.215
	I0918 20:01:31.815151   26827 certs.go:194] generating shared ca certs ...
	I0918 20:01:31.815173   26827 certs.go:226] acquiring lock for ca certs: {Name:mkf54e0e71ed884c4372f9d3d4cb308ea4600185 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 20:01:31.815382   26827 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.key
	I0918 20:01:31.815442   26827 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.key
	I0918 20:01:31.815465   26827 certs.go:256] generating profile certs ...
	I0918 20:01:31.815537   26827 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/client.key
	I0918 20:01:31.815566   26827 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/client.crt with IP's: []
	I0918 20:01:31.882711   26827 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/client.crt ...
	I0918 20:01:31.882735   26827 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/client.crt: {Name:mk22393d10a62db8be4ee96423eb8999dca92051 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 20:01:31.882908   26827 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/client.key ...
	I0918 20:01:31.882923   26827 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/client.key: {Name:mk40398d3c215962d47b7b1ac3b33466404e1ae5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 20:01:31.883062   26827 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.key.286a620e
	I0918 20:01:31.883085   26827 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.crt.286a620e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.215 192.168.39.254]
	I0918 20:01:32.176911   26827 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.crt.286a620e ...
	I0918 20:01:32.176938   26827 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.crt.286a620e: {Name:mk6e12e8d7297caa8349fc6fe030d9b3d69c43ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 20:01:32.177087   26827 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.key.286a620e ...
	I0918 20:01:32.177099   26827 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.key.286a620e: {Name:mkbac5b4ddde2084fa4364c4dee4c3ed0d321a5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 20:01:32.177161   26827 certs.go:381] copying /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.crt.286a620e -> /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.crt
	I0918 20:01:32.177247   26827 certs.go:385] copying /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.key.286a620e -> /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.key
	I0918 20:01:32.177297   26827 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/proxy-client.key
	I0918 20:01:32.177310   26827 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/proxy-client.crt with IP's: []
	I0918 20:01:32.272727   26827 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/proxy-client.crt ...
	I0918 20:01:32.272755   26827 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/proxy-client.crt: {Name:mk83a2402d1ff78c6dd742b96bf8c90e2537b4cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 20:01:32.272892   26827 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/proxy-client.key ...
	I0918 20:01:32.272902   26827 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/proxy-client.key: {Name:mk377a0949cdb8c08e373abce1488149f3aaff34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 20:01:32.272968   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0918 20:01:32.272985   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0918 20:01:32.272998   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0918 20:01:32.273010   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0918 20:01:32.273031   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0918 20:01:32.273043   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0918 20:01:32.273055   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0918 20:01:32.273066   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0918 20:01:32.273127   26827 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878.pem (1338 bytes)
	W0918 20:01:32.273161   26827 certs.go:480] ignoring /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878_empty.pem, impossibly tiny 0 bytes
	I0918 20:01:32.273170   26827 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem (1675 bytes)
	I0918 20:01:32.273195   26827 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem (1082 bytes)
	I0918 20:01:32.273219   26827 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem (1123 bytes)
	I0918 20:01:32.273239   26827 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem (1679 bytes)
	I0918 20:01:32.273274   26827 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem (1708 bytes)
	I0918 20:01:32.273302   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0918 20:01:32.273315   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878.pem -> /usr/share/ca-certificates/14878.pem
	I0918 20:01:32.273327   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem -> /usr/share/ca-certificates/148782.pem
	I0918 20:01:32.273874   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0918 20:01:32.300229   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0918 20:01:32.325896   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0918 20:01:32.351512   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0918 20:01:32.377318   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0918 20:01:32.402367   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0918 20:01:32.427668   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0918 20:01:32.452847   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0918 20:01:32.478252   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0918 20:01:32.502486   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878.pem --> /usr/share/ca-certificates/14878.pem (1338 bytes)
	I0918 20:01:32.525747   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem --> /usr/share/ca-certificates/148782.pem (1708 bytes)
	I0918 20:01:32.548776   26827 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0918 20:01:32.568576   26827 ssh_runner.go:195] Run: openssl version
	I0918 20:01:32.574892   26827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14878.pem && ln -fs /usr/share/ca-certificates/14878.pem /etc/ssl/certs/14878.pem"
	I0918 20:01:32.589112   26827 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14878.pem
	I0918 20:01:32.594154   26827 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 18 19:57 /usr/share/ca-certificates/14878.pem
	I0918 20:01:32.594216   26827 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14878.pem
	I0918 20:01:32.601293   26827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14878.pem /etc/ssl/certs/51391683.0"
	I0918 20:01:32.612847   26827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148782.pem && ln -fs /usr/share/ca-certificates/148782.pem /etc/ssl/certs/148782.pem"
	I0918 20:01:32.626745   26827 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148782.pem
	I0918 20:01:32.631036   26827 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 18 19:57 /usr/share/ca-certificates/148782.pem
	I0918 20:01:32.631097   26827 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148782.pem
	I0918 20:01:32.636840   26827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148782.pem /etc/ssl/certs/3ec20f2e.0"
	I0918 20:01:32.647396   26827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0918 20:01:32.658543   26827 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0918 20:01:32.663199   26827 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 18 19:39 /usr/share/ca-certificates/minikubeCA.pem
	I0918 20:01:32.663269   26827 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0918 20:01:32.669178   26827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
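The ln -fs calls above follow the standard OpenSSL CA directory convention: each CA installed under /usr/share/ca-certificates is also linked into /etc/ssl/certs as <subject-hash>.0, where the hash is exactly what `openssl x509 -hash -noout` prints. Checking one mapping by hand (the hash value is taken from this log):

  $ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
  b5213941
  $ ls -l /etc/ssl/certs/b5213941.0     # symlink chain ends at minikubeCA.pem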
	I0918 20:01:32.680536   26827 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0918 20:01:32.684596   26827 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0918 20:01:32.684652   26827 kubeadm.go:392] StartCluster: {Name:ha-091565 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-091565 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.215 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 20:01:32.684723   26827 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0918 20:01:32.684781   26827 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0918 20:01:32.725657   26827 cri.go:89] found id: ""
	I0918 20:01:32.725738   26827 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0918 20:01:32.736032   26827 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0918 20:01:32.745809   26827 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0918 20:01:32.755660   26827 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0918 20:01:32.755683   26827 kubeadm.go:157] found existing configuration files:
	
	I0918 20:01:32.755734   26827 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0918 20:01:32.765360   26827 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0918 20:01:32.765422   26827 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0918 20:01:32.774977   26827 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0918 20:01:32.784236   26827 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0918 20:01:32.784323   26827 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0918 20:01:32.794385   26827 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0918 20:01:32.803877   26827 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0918 20:01:32.803962   26827 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0918 20:01:32.813974   26827 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0918 20:01:32.824307   26827 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0918 20:01:32.824372   26827 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0918 20:01:32.833810   26827 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0918 20:01:32.930760   26827 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0918 20:01:32.930831   26827 kubeadm.go:310] [preflight] Running pre-flight checks
	I0918 20:01:33.036305   26827 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0918 20:01:33.036446   26827 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0918 20:01:33.036572   26827 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0918 20:01:33.048889   26827 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0918 20:01:33.216902   26827 out.go:235]   - Generating certificates and keys ...
	I0918 20:01:33.217021   26827 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0918 20:01:33.217118   26827 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0918 20:01:33.410022   26827 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0918 20:01:33.571042   26827 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0918 20:01:34.285080   26827 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0918 20:01:34.386506   26827 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0918 20:01:34.560257   26827 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0918 20:01:34.560457   26827 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-091565 localhost] and IPs [192.168.39.215 127.0.0.1 ::1]
	I0918 20:01:34.830386   26827 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0918 20:01:34.830530   26827 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-091565 localhost] and IPs [192.168.39.215 127.0.0.1 ::1]
	I0918 20:01:34.951453   26827 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0918 20:01:35.138903   26827 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0918 20:01:35.238989   26827 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0918 20:01:35.239055   26827 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0918 20:01:35.347180   26827 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0918 20:01:35.486849   26827 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0918 20:01:35.625355   26827 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0918 20:01:35.747961   26827 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0918 20:01:35.790004   26827 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0918 20:01:35.790529   26827 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0918 20:01:35.794055   26827 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0918 20:01:35.796153   26827 out.go:235]   - Booting up control plane ...
	I0918 20:01:35.796260   26827 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0918 20:01:35.796362   26827 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0918 20:01:35.796717   26827 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0918 20:01:35.811747   26827 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0918 20:01:35.820566   26827 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0918 20:01:35.820644   26827 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0918 20:01:35.959348   26827 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0918 20:01:35.959478   26827 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0918 20:01:36.960132   26827 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.00167882s
	I0918 20:01:36.960220   26827 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0918 20:01:42.633375   26827 kubeadm.go:310] [api-check] The API server is healthy after 5.675608776s
	I0918 20:01:42.646137   26827 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0918 20:01:42.670455   26827 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0918 20:01:42.705148   26827 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0918 20:01:42.705327   26827 kubeadm.go:310] [mark-control-plane] Marking the node ha-091565 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0918 20:01:42.722155   26827 kubeadm.go:310] [bootstrap-token] Using token: 1ejtyk.26hc6xxbyyyx578s
	I0918 20:01:42.723458   26827 out.go:235]   - Configuring RBAC rules ...
	I0918 20:01:42.723598   26827 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0918 20:01:42.732040   26827 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0918 20:01:42.744976   26827 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0918 20:01:42.750140   26827 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0918 20:01:42.755732   26827 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0918 20:01:42.762953   26827 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0918 20:01:43.043394   26827 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0918 20:01:43.485553   26827 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0918 20:01:44.041202   26827 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0918 20:01:44.041225   26827 kubeadm.go:310] 
	I0918 20:01:44.041318   26827 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0918 20:01:44.041338   26827 kubeadm.go:310] 
	I0918 20:01:44.041443   26827 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0918 20:01:44.041471   26827 kubeadm.go:310] 
	I0918 20:01:44.041497   26827 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0918 20:01:44.041547   26827 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0918 20:01:44.041640   26827 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0918 20:01:44.041659   26827 kubeadm.go:310] 
	I0918 20:01:44.041751   26827 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0918 20:01:44.041778   26827 kubeadm.go:310] 
	I0918 20:01:44.041846   26827 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0918 20:01:44.041857   26827 kubeadm.go:310] 
	I0918 20:01:44.041977   26827 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0918 20:01:44.042082   26827 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0918 20:01:44.042182   26827 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0918 20:01:44.042190   26827 kubeadm.go:310] 
	I0918 20:01:44.042302   26827 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0918 20:01:44.042416   26827 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0918 20:01:44.042425   26827 kubeadm.go:310] 
	I0918 20:01:44.042517   26827 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 1ejtyk.26hc6xxbyyyx578s \
	I0918 20:01:44.042666   26827 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:8ad46bf288e8cc2226f5a929a768e6c95cddecc97ffc4016be27ef0987b1e36f \
	I0918 20:01:44.042690   26827 kubeadm.go:310] 	--control-plane 
	I0918 20:01:44.042694   26827 kubeadm.go:310] 
	I0918 20:01:44.042795   26827 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0918 20:01:44.042811   26827 kubeadm.go:310] 
	I0918 20:01:44.042929   26827 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 1ejtyk.26hc6xxbyyyx578s \
	I0918 20:01:44.043079   26827 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:8ad46bf288e8cc2226f5a929a768e6c95cddecc97ffc4016be27ef0987b1e36f 
	I0918 20:01:44.043428   26827 kubeadm.go:310] W0918 20:01:32.914360     832 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0918 20:01:44.043697   26827 kubeadm.go:310] W0918 20:01:32.915480     832 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0918 20:01:44.043826   26827 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
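The join commands printed by kubeadm pin the cluster CA with --discovery-token-ca-cert-hash. If that sha256 value ever needs to be recomputed from the CA on disk (for example to hand-craft a join on an extra node), the standard recipe from the kubeadm docs applies; note that minikube keeps the CA at /var/lib/minikube/certs/ca.crt rather than /etc/kubernetes/pki/ca.crt:

  $ openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'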
	I0918 20:01:44.043856   26827 cni.go:84] Creating CNI manager for ""
	I0918 20:01:44.043867   26827 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0918 20:01:44.045606   26827 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0918 20:01:44.046719   26827 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0918 20:01:44.052565   26827 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0918 20:01:44.052591   26827 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0918 20:01:44.074207   26827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0918 20:01:44.422814   26827 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0918 20:01:44.422902   26827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 20:01:44.422924   26827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-091565 minikube.k8s.io/updated_at=2024_09_18T20_01_44_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=85073601a832bd4bbda5d11fa91feafff6ec6b91 minikube.k8s.io/name=ha-091565 minikube.k8s.io/primary=true
	I0918 20:01:44.659852   26827 ops.go:34] apiserver oom_adj: -16
	I0918 20:01:44.660163   26827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 20:01:45.160146   26827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 20:01:45.660152   26827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 20:01:46.161013   26827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 20:01:46.660936   26827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 20:01:47.160166   26827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 20:01:47.266634   26827 kubeadm.go:1113] duration metric: took 2.843807989s to wait for elevateKubeSystemPrivileges
	I0918 20:01:47.266673   26827 kubeadm.go:394] duration metric: took 14.582024612s to StartCluster
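
The repeated "kubectl get sa default" calls above are a poll: the default service account is created asynchronously after kubeadm init, and minikube waits for it before creating the minikube-rbac cluster-admin binding. A rough Go sketch of that wait, using the kubectl and kubeconfig paths shown in the log; waitForDefaultSA is illustrative, not minikube's actual function:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA polls "kubectl get sa default" until the default service
// account exists or the timeout expires, matching the ~500ms cadence above.
func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command(kubectl, "get", "sa", "default", "--kubeconfig", kubeconfig)
		if err := cmd.Run(); err == nil {
			return nil // service account exists; the RBAC binding can proceed
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not ready after %s", timeout)
}

func main() {
	err := waitForDefaultSA("/var/lib/minikube/binaries/v1.31.1/kubectl",
		"/var/lib/minikube/kubeconfig", 2*time.Minute)
	if err != nil {
		fmt.Println(err)
	}
}
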
	I0918 20:01:47.266695   26827 settings.go:142] acquiring lock: {Name:mk6ae95a69dcbe00c28846409e9f46945a88de2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 20:01:47.266765   26827 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19667-7671/kubeconfig
	I0918 20:01:47.267982   26827 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/kubeconfig: {Name:mk3a3eecadb0164dfdd5d3c4392b5b473a2a9bb0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 20:01:47.268278   26827 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.215 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0918 20:01:47.268306   26827 start.go:241] waiting for startup goroutines ...
	I0918 20:01:47.268323   26827 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0918 20:01:47.268480   26827 addons.go:69] Setting storage-provisioner=true in profile "ha-091565"
	I0918 20:01:47.268500   26827 addons.go:234] Setting addon storage-provisioner=true in "ha-091565"
	I0918 20:01:47.268535   26827 host.go:66] Checking if "ha-091565" exists ...
	I0918 20:01:47.268594   26827 addons.go:69] Setting default-storageclass=true in profile "ha-091565"
	I0918 20:01:47.268631   26827 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-091565"
	I0918 20:01:47.268658   26827 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0918 20:01:47.268843   26827 config.go:182] Loaded profile config "ha-091565": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0918 20:01:47.269530   26827 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 20:01:47.269576   26827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 20:01:47.269584   26827 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 20:01:47.269740   26827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 20:01:47.284536   26827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39723
	I0918 20:01:47.284536   26827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40865
	I0918 20:01:47.285102   26827 main.go:141] libmachine: () Calling .GetVersion
	I0918 20:01:47.285215   26827 main.go:141] libmachine: () Calling .GetVersion
	I0918 20:01:47.285649   26827 main.go:141] libmachine: Using API Version  1
	I0918 20:01:47.285665   26827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 20:01:47.285788   26827 main.go:141] libmachine: Using API Version  1
	I0918 20:01:47.285813   26827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 20:01:47.286000   26827 main.go:141] libmachine: () Calling .GetMachineName
	I0918 20:01:47.286165   26827 main.go:141] libmachine: () Calling .GetMachineName
	I0918 20:01:47.286188   26827 main.go:141] libmachine: (ha-091565) Calling .GetState
	I0918 20:01:47.286733   26827 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 20:01:47.286779   26827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 20:01:47.288227   26827 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19667-7671/kubeconfig
	I0918 20:01:47.288530   26827 kapi.go:59] client config for ha-091565: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/client.crt", KeyFile:"/home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/client.key", CAFile:"/home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0918 20:01:47.289088   26827 cert_rotation.go:140] Starting client certificate rotation controller
	I0918 20:01:47.289302   26827 addons.go:234] Setting addon default-storageclass=true in "ha-091565"
	I0918 20:01:47.289329   26827 host.go:66] Checking if "ha-091565" exists ...
	I0918 20:01:47.289569   26827 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 20:01:47.289600   26827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 20:01:47.302279   26827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33947
	I0918 20:01:47.302845   26827 main.go:141] libmachine: () Calling .GetVersion
	I0918 20:01:47.303361   26827 main.go:141] libmachine: Using API Version  1
	I0918 20:01:47.303390   26827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 20:01:47.303730   26827 main.go:141] libmachine: () Calling .GetMachineName
	I0918 20:01:47.303943   26827 main.go:141] libmachine: (ha-091565) Calling .GetState
	I0918 20:01:47.304502   26827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33887
	I0918 20:01:47.304796   26827 main.go:141] libmachine: () Calling .GetVersion
	I0918 20:01:47.305341   26827 main.go:141] libmachine: Using API Version  1
	I0918 20:01:47.305367   26827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 20:01:47.305641   26827 main.go:141] libmachine: () Calling .GetMachineName
	I0918 20:01:47.305684   26827 main.go:141] libmachine: (ha-091565) Calling .DriverName
	I0918 20:01:47.306081   26827 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 20:01:47.306112   26827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 20:01:47.307722   26827 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 20:01:47.309002   26827 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0918 20:01:47.309023   26827 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0918 20:01:47.309041   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHHostname
	I0918 20:01:47.311945   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:47.312427   26827 main.go:141] libmachine: (ha-091565) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:13:d8", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:01:12 +0000 UTC Type:0 Mac:52:54:00:2a:13:d8 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-091565 Clientid:01:52:54:00:2a:13:d8}
	I0918 20:01:47.312448   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined IP address 192.168.39.215 and MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:47.312599   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHPort
	I0918 20:01:47.312781   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHKeyPath
	I0918 20:01:47.312931   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHUsername
	I0918 20:01:47.313072   26827 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565/id_rsa Username:docker}
	I0918 20:01:47.321291   26827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43097
	I0918 20:01:47.321760   26827 main.go:141] libmachine: () Calling .GetVersion
	I0918 20:01:47.322322   26827 main.go:141] libmachine: Using API Version  1
	I0918 20:01:47.322343   26827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 20:01:47.322630   26827 main.go:141] libmachine: () Calling .GetMachineName
	I0918 20:01:47.322807   26827 main.go:141] libmachine: (ha-091565) Calling .GetState
	I0918 20:01:47.324450   26827 main.go:141] libmachine: (ha-091565) Calling .DriverName
	I0918 20:01:47.324624   26827 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0918 20:01:47.324639   26827 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0918 20:01:47.324656   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHHostname
	I0918 20:01:47.327553   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:47.328031   26827 main.go:141] libmachine: (ha-091565) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:13:d8", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:01:12 +0000 UTC Type:0 Mac:52:54:00:2a:13:d8 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-091565 Clientid:01:52:54:00:2a:13:d8}
	I0918 20:01:47.328103   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined IP address 192.168.39.215 and MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:47.328319   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHPort
	I0918 20:01:47.328490   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHKeyPath
	I0918 20:01:47.328627   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHUsername
	I0918 20:01:47.328755   26827 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565/id_rsa Username:docker}
	I0918 20:01:47.399915   26827 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0918 20:01:47.490020   26827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0918 20:01:47.507383   26827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0918 20:01:47.769102   26827 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
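
The long sed pipeline above injects a CoreDNS hosts block mapping host.minikube.internal to the host-side gateway (192.168.39.1) into the coredns ConfigMap, immediately before the "forward . /etc/resolv.conf" directive. A small Go sketch of the same string edit; injectHostRecord is illustrative, not minikube's code:

package main

import (
	"fmt"
	"strings"
)

// injectHostRecord inserts a hosts block for host.minikube.internal just
// before the "forward . /etc/resolv.conf" line of a Corefile.
func injectHostRecord(corefile, hostIP string) string {
	hostsBlock := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
	var out strings.Builder
	for _, line := range strings.SplitAfter(corefile, "\n") {
		if strings.Contains(line, "forward . /etc/resolv.conf") {
			out.WriteString(hostsBlock)
		}
		out.WriteString(line)
	}
	return out.String()
}

func main() {
	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf\n        cache 30\n}\n"
	fmt.Print(injectHostRecord(corefile, "192.168.39.1"))
}
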
	I0918 20:01:48.124518   26827 main.go:141] libmachine: Making call to close driver server
	I0918 20:01:48.124546   26827 main.go:141] libmachine: (ha-091565) Calling .Close
	I0918 20:01:48.124566   26827 main.go:141] libmachine: Making call to close driver server
	I0918 20:01:48.124582   26827 main.go:141] libmachine: (ha-091565) Calling .Close
	I0918 20:01:48.124826   26827 main.go:141] libmachine: Successfully made call to close driver server
	I0918 20:01:48.124838   26827 main.go:141] libmachine: Successfully made call to close driver server
	I0918 20:01:48.124842   26827 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 20:01:48.124851   26827 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 20:01:48.124852   26827 main.go:141] libmachine: Making call to close driver server
	I0918 20:01:48.124854   26827 main.go:141] libmachine: (ha-091565) DBG | Closing plugin on server side
	I0918 20:01:48.124854   26827 main.go:141] libmachine: (ha-091565) DBG | Closing plugin on server side
	I0918 20:01:48.124860   26827 main.go:141] libmachine: Making call to close driver server
	I0918 20:01:48.124891   26827 main.go:141] libmachine: (ha-091565) Calling .Close
	I0918 20:01:48.124906   26827 main.go:141] libmachine: (ha-091565) Calling .Close
	I0918 20:01:48.125117   26827 main.go:141] libmachine: (ha-091565) DBG | Closing plugin on server side
	I0918 20:01:48.125151   26827 main.go:141] libmachine: Successfully made call to close driver server
	I0918 20:01:48.125160   26827 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 20:01:48.125197   26827 main.go:141] libmachine: Successfully made call to close driver server
	I0918 20:01:48.125206   26827 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 20:01:48.125293   26827 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0918 20:01:48.125321   26827 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0918 20:01:48.125410   26827 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0918 20:01:48.125420   26827 round_trippers.go:469] Request Headers:
	I0918 20:01:48.125433   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:01:48.125438   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:01:48.140920   26827 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0918 20:01:48.141439   26827 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0918 20:01:48.141452   26827 round_trippers.go:469] Request Headers:
	I0918 20:01:48.141459   26827 round_trippers.go:473]     Content-Type: application/json
	I0918 20:01:48.141463   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:01:48.141466   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:01:48.144763   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:01:48.144914   26827 main.go:141] libmachine: Making call to close driver server
	I0918 20:01:48.144928   26827 main.go:141] libmachine: (ha-091565) Calling .Close
	I0918 20:01:48.145191   26827 main.go:141] libmachine: Successfully made call to close driver server
	I0918 20:01:48.145213   26827 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 20:01:48.145197   26827 main.go:141] libmachine: (ha-091565) DBG | Closing plugin on server side
	I0918 20:01:48.146835   26827 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0918 20:01:48.148231   26827 addons.go:510] duration metric: took 879.91145ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0918 20:01:48.148269   26827 start.go:246] waiting for cluster config update ...
	I0918 20:01:48.148286   26827 start.go:255] writing updated cluster config ...
	I0918 20:01:48.150246   26827 out.go:201] 
	I0918 20:01:48.151820   26827 config.go:182] Loaded profile config "ha-091565": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0918 20:01:48.151905   26827 profile.go:143] Saving config to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/config.json ...
	I0918 20:01:48.153514   26827 out.go:177] * Starting "ha-091565-m02" control-plane node in "ha-091565" cluster
	I0918 20:01:48.154560   26827 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0918 20:01:48.154580   26827 cache.go:56] Caching tarball of preloaded images
	I0918 20:01:48.154669   26827 preload.go:172] Found /home/jenkins/minikube-integration/19667-7671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0918 20:01:48.154681   26827 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0918 20:01:48.154748   26827 profile.go:143] Saving config to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/config.json ...
	I0918 20:01:48.154916   26827 start.go:360] acquireMachinesLock for ha-091565-m02: {Name:mk1b0d68a0b7d9d4bc8204d5d81cb9eb77222526 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 20:01:48.154979   26827 start.go:364] duration metric: took 35.44µs to acquireMachinesLock for "ha-091565-m02"
	I0918 20:01:48.155003   26827 start.go:93] Provisioning new machine with config: &{Name:ha-091565 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-091565 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.215 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0918 20:01:48.155077   26827 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0918 20:01:48.156472   26827 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0918 20:01:48.156553   26827 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 20:01:48.156597   26827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 20:01:48.171048   26827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41535
	I0918 20:01:48.171579   26827 main.go:141] libmachine: () Calling .GetVersion
	I0918 20:01:48.172102   26827 main.go:141] libmachine: Using API Version  1
	I0918 20:01:48.172121   26827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 20:01:48.172468   26827 main.go:141] libmachine: () Calling .GetMachineName
	I0918 20:01:48.172651   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetMachineName
	I0918 20:01:48.172786   26827 main.go:141] libmachine: (ha-091565-m02) Calling .DriverName
	I0918 20:01:48.172987   26827 start.go:159] libmachine.API.Create for "ha-091565" (driver="kvm2")
	I0918 20:01:48.173015   26827 client.go:168] LocalClient.Create starting
	I0918 20:01:48.173044   26827 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem
	I0918 20:01:48.173085   26827 main.go:141] libmachine: Decoding PEM data...
	I0918 20:01:48.173100   26827 main.go:141] libmachine: Parsing certificate...
	I0918 20:01:48.173147   26827 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem
	I0918 20:01:48.173164   26827 main.go:141] libmachine: Decoding PEM data...
	I0918 20:01:48.173174   26827 main.go:141] libmachine: Parsing certificate...
	I0918 20:01:48.173189   26827 main.go:141] libmachine: Running pre-create checks...
	I0918 20:01:48.173197   26827 main.go:141] libmachine: (ha-091565-m02) Calling .PreCreateCheck
	I0918 20:01:48.173330   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetConfigRaw
	I0918 20:01:48.173685   26827 main.go:141] libmachine: Creating machine...
	I0918 20:01:48.173707   26827 main.go:141] libmachine: (ha-091565-m02) Calling .Create
	I0918 20:01:48.173849   26827 main.go:141] libmachine: (ha-091565-m02) Creating KVM machine...
	I0918 20:01:48.175160   26827 main.go:141] libmachine: (ha-091565-m02) DBG | found existing default KVM network
	I0918 20:01:48.175336   26827 main.go:141] libmachine: (ha-091565-m02) DBG | found existing private KVM network mk-ha-091565
	I0918 20:01:48.175456   26827 main.go:141] libmachine: (ha-091565-m02) Setting up store path in /home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565-m02 ...
	I0918 20:01:48.175493   26827 main.go:141] libmachine: (ha-091565-m02) Building disk image from file:///home/jenkins/minikube-integration/19667-7671/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso
	I0918 20:01:48.175585   26827 main.go:141] libmachine: (ha-091565-m02) DBG | I0918 20:01:48.175471   27201 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19667-7671/.minikube
	I0918 20:01:48.175662   26827 main.go:141] libmachine: (ha-091565-m02) Downloading /home/jenkins/minikube-integration/19667-7671/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19667-7671/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso...
	I0918 20:01:48.401510   26827 main.go:141] libmachine: (ha-091565-m02) DBG | I0918 20:01:48.401363   27201 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565-m02/id_rsa...
	I0918 20:01:48.608450   26827 main.go:141] libmachine: (ha-091565-m02) DBG | I0918 20:01:48.608312   27201 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565-m02/ha-091565-m02.rawdisk...
	I0918 20:01:48.608478   26827 main.go:141] libmachine: (ha-091565-m02) DBG | Writing magic tar header
	I0918 20:01:48.608491   26827 main.go:141] libmachine: (ha-091565-m02) DBG | Writing SSH key tar header
	I0918 20:01:48.608498   26827 main.go:141] libmachine: (ha-091565-m02) DBG | I0918 20:01:48.608419   27201 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565-m02 ...
	I0918 20:01:48.608508   26827 main.go:141] libmachine: (ha-091565-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565-m02
	I0918 20:01:48.608550   26827 main.go:141] libmachine: (ha-091565-m02) Setting executable bit set on /home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565-m02 (perms=drwx------)
	I0918 20:01:48.608571   26827 main.go:141] libmachine: (ha-091565-m02) Setting executable bit set on /home/jenkins/minikube-integration/19667-7671/.minikube/machines (perms=drwxr-xr-x)
	I0918 20:01:48.608596   26827 main.go:141] libmachine: (ha-091565-m02) Setting executable bit set on /home/jenkins/minikube-integration/19667-7671/.minikube (perms=drwxr-xr-x)
	I0918 20:01:48.608618   26827 main.go:141] libmachine: (ha-091565-m02) Setting executable bit set on /home/jenkins/minikube-integration/19667-7671 (perms=drwxrwxr-x)
	I0918 20:01:48.608631   26827 main.go:141] libmachine: (ha-091565-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19667-7671/.minikube/machines
	I0918 20:01:48.608650   26827 main.go:141] libmachine: (ha-091565-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19667-7671/.minikube
	I0918 20:01:48.608662   26827 main.go:141] libmachine: (ha-091565-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19667-7671
	I0918 20:01:48.608675   26827 main.go:141] libmachine: (ha-091565-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0918 20:01:48.608686   26827 main.go:141] libmachine: (ha-091565-m02) DBG | Checking permissions on dir: /home/jenkins
	I0918 20:01:48.608698   26827 main.go:141] libmachine: (ha-091565-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0918 20:01:48.608710   26827 main.go:141] libmachine: (ha-091565-m02) DBG | Checking permissions on dir: /home
	I0918 20:01:48.608728   26827 main.go:141] libmachine: (ha-091565-m02) DBG | Skipping /home - not owner
	I0918 20:01:48.608744   26827 main.go:141] libmachine: (ha-091565-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0918 20:01:48.608754   26827 main.go:141] libmachine: (ha-091565-m02) Creating domain...
	I0918 20:01:48.609781   26827 main.go:141] libmachine: (ha-091565-m02) define libvirt domain using xml: 
	I0918 20:01:48.609802   26827 main.go:141] libmachine: (ha-091565-m02) <domain type='kvm'>
	I0918 20:01:48.609813   26827 main.go:141] libmachine: (ha-091565-m02)   <name>ha-091565-m02</name>
	I0918 20:01:48.609825   26827 main.go:141] libmachine: (ha-091565-m02)   <memory unit='MiB'>2200</memory>
	I0918 20:01:48.609846   26827 main.go:141] libmachine: (ha-091565-m02)   <vcpu>2</vcpu>
	I0918 20:01:48.609855   26827 main.go:141] libmachine: (ha-091565-m02)   <features>
	I0918 20:01:48.609866   26827 main.go:141] libmachine: (ha-091565-m02)     <acpi/>
	I0918 20:01:48.609874   26827 main.go:141] libmachine: (ha-091565-m02)     <apic/>
	I0918 20:01:48.609884   26827 main.go:141] libmachine: (ha-091565-m02)     <pae/>
	I0918 20:01:48.609891   26827 main.go:141] libmachine: (ha-091565-m02)     
	I0918 20:01:48.609898   26827 main.go:141] libmachine: (ha-091565-m02)   </features>
	I0918 20:01:48.609911   26827 main.go:141] libmachine: (ha-091565-m02)   <cpu mode='host-passthrough'>
	I0918 20:01:48.609932   26827 main.go:141] libmachine: (ha-091565-m02)   
	I0918 20:01:48.609948   26827 main.go:141] libmachine: (ha-091565-m02)   </cpu>
	I0918 20:01:48.609957   26827 main.go:141] libmachine: (ha-091565-m02)   <os>
	I0918 20:01:48.609972   26827 main.go:141] libmachine: (ha-091565-m02)     <type>hvm</type>
	I0918 20:01:48.609984   26827 main.go:141] libmachine: (ha-091565-m02)     <boot dev='cdrom'/>
	I0918 20:01:48.609994   26827 main.go:141] libmachine: (ha-091565-m02)     <boot dev='hd'/>
	I0918 20:01:48.610006   26827 main.go:141] libmachine: (ha-091565-m02)     <bootmenu enable='no'/>
	I0918 20:01:48.610016   26827 main.go:141] libmachine: (ha-091565-m02)   </os>
	I0918 20:01:48.610031   26827 main.go:141] libmachine: (ha-091565-m02)   <devices>
	I0918 20:01:48.610042   26827 main.go:141] libmachine: (ha-091565-m02)     <disk type='file' device='cdrom'>
	I0918 20:01:48.610058   26827 main.go:141] libmachine: (ha-091565-m02)       <source file='/home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565-m02/boot2docker.iso'/>
	I0918 20:01:48.610074   26827 main.go:141] libmachine: (ha-091565-m02)       <target dev='hdc' bus='scsi'/>
	I0918 20:01:48.610086   26827 main.go:141] libmachine: (ha-091565-m02)       <readonly/>
	I0918 20:01:48.610096   26827 main.go:141] libmachine: (ha-091565-m02)     </disk>
	I0918 20:01:48.610106   26827 main.go:141] libmachine: (ha-091565-m02)     <disk type='file' device='disk'>
	I0918 20:01:48.610120   26827 main.go:141] libmachine: (ha-091565-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0918 20:01:48.610136   26827 main.go:141] libmachine: (ha-091565-m02)       <source file='/home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565-m02/ha-091565-m02.rawdisk'/>
	I0918 20:01:48.610147   26827 main.go:141] libmachine: (ha-091565-m02)       <target dev='hda' bus='virtio'/>
	I0918 20:01:48.610170   26827 main.go:141] libmachine: (ha-091565-m02)     </disk>
	I0918 20:01:48.610187   26827 main.go:141] libmachine: (ha-091565-m02)     <interface type='network'>
	I0918 20:01:48.610207   26827 main.go:141] libmachine: (ha-091565-m02)       <source network='mk-ha-091565'/>
	I0918 20:01:48.610225   26827 main.go:141] libmachine: (ha-091565-m02)       <model type='virtio'/>
	I0918 20:01:48.610237   26827 main.go:141] libmachine: (ha-091565-m02)     </interface>
	I0918 20:01:48.610247   26827 main.go:141] libmachine: (ha-091565-m02)     <interface type='network'>
	I0918 20:01:48.610255   26827 main.go:141] libmachine: (ha-091565-m02)       <source network='default'/>
	I0918 20:01:48.610265   26827 main.go:141] libmachine: (ha-091565-m02)       <model type='virtio'/>
	I0918 20:01:48.610275   26827 main.go:141] libmachine: (ha-091565-m02)     </interface>
	I0918 20:01:48.610285   26827 main.go:141] libmachine: (ha-091565-m02)     <serial type='pty'>
	I0918 20:01:48.610296   26827 main.go:141] libmachine: (ha-091565-m02)       <target port='0'/>
	I0918 20:01:48.610310   26827 main.go:141] libmachine: (ha-091565-m02)     </serial>
	I0918 20:01:48.610325   26827 main.go:141] libmachine: (ha-091565-m02)     <console type='pty'>
	I0918 20:01:48.610342   26827 main.go:141] libmachine: (ha-091565-m02)       <target type='serial' port='0'/>
	I0918 20:01:48.610353   26827 main.go:141] libmachine: (ha-091565-m02)     </console>
	I0918 20:01:48.610360   26827 main.go:141] libmachine: (ha-091565-m02)     <rng model='virtio'>
	I0918 20:01:48.610371   26827 main.go:141] libmachine: (ha-091565-m02)       <backend model='random'>/dev/random</backend>
	I0918 20:01:48.610380   26827 main.go:141] libmachine: (ha-091565-m02)     </rng>
	I0918 20:01:48.610390   26827 main.go:141] libmachine: (ha-091565-m02)     
	I0918 20:01:48.610396   26827 main.go:141] libmachine: (ha-091565-m02)     
	I0918 20:01:48.610409   26827 main.go:141] libmachine: (ha-091565-m02)   </devices>
	I0918 20:01:48.610423   26827 main.go:141] libmachine: (ha-091565-m02) </domain>
	I0918 20:01:48.610436   26827 main.go:141] libmachine: (ha-091565-m02) 
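
The XML printed above is handed to libvirt to define and boot the ha-091565-m02 VM. A minimal sketch of that step, assuming the libvirt Go bindings (libvirt.org/go/libvirt); the real kvm2 driver does more (volume setup, network checks), this only shows define-then-start:

package vmdefine

import (
	"fmt"

	libvirt "libvirt.org/go/libvirt"
)

// defineAndStart registers a domain from its XML definition on the local
// qemu:///system hypervisor and boots it (roughly "virsh define" plus "virsh start").
func defineAndStart(domainXML string) error {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		return fmt.Errorf("connect to libvirt: %w", err)
	}
	defer conn.Close()

	dom, err := conn.DomainDefineXML(domainXML)
	if err != nil {
		return fmt.Errorf("define domain: %w", err)
	}
	defer dom.Free()

	return dom.Create() // boots the defined domain
}
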
	I0918 20:01:48.617221   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined MAC address 52:54:00:15:ec:ae in network default
	I0918 20:01:48.617722   26827 main.go:141] libmachine: (ha-091565-m02) Ensuring networks are active...
	I0918 20:01:48.617752   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:01:48.618492   26827 main.go:141] libmachine: (ha-091565-m02) Ensuring network default is active
	I0918 20:01:48.618796   26827 main.go:141] libmachine: (ha-091565-m02) Ensuring network mk-ha-091565 is active
	I0918 20:01:48.619157   26827 main.go:141] libmachine: (ha-091565-m02) Getting domain xml...
	I0918 20:01:48.619865   26827 main.go:141] libmachine: (ha-091565-m02) Creating domain...
	I0918 20:01:49.853791   26827 main.go:141] libmachine: (ha-091565-m02) Waiting to get IP...
	I0918 20:01:49.854650   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:01:49.855084   26827 main.go:141] libmachine: (ha-091565-m02) DBG | unable to find current IP address of domain ha-091565-m02 in network mk-ha-091565
	I0918 20:01:49.855112   26827 main.go:141] libmachine: (ha-091565-m02) DBG | I0918 20:01:49.855067   27201 retry.go:31] will retry after 283.999691ms: waiting for machine to come up
	I0918 20:01:50.140266   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:01:50.140696   26827 main.go:141] libmachine: (ha-091565-m02) DBG | unable to find current IP address of domain ha-091565-m02 in network mk-ha-091565
	I0918 20:01:50.140718   26827 main.go:141] libmachine: (ha-091565-m02) DBG | I0918 20:01:50.140668   27201 retry.go:31] will retry after 243.982504ms: waiting for machine to come up
	I0918 20:01:50.386066   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:01:50.386487   26827 main.go:141] libmachine: (ha-091565-m02) DBG | unable to find current IP address of domain ha-091565-m02 in network mk-ha-091565
	I0918 20:01:50.386515   26827 main.go:141] libmachine: (ha-091565-m02) DBG | I0918 20:01:50.386440   27201 retry.go:31] will retry after 384.970289ms: waiting for machine to come up
	I0918 20:01:50.773049   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:01:50.773463   26827 main.go:141] libmachine: (ha-091565-m02) DBG | unable to find current IP address of domain ha-091565-m02 in network mk-ha-091565
	I0918 20:01:50.773490   26827 main.go:141] libmachine: (ha-091565-m02) DBG | I0918 20:01:50.773419   27201 retry.go:31] will retry after 383.687698ms: waiting for machine to come up
	I0918 20:01:51.158968   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:01:51.159478   26827 main.go:141] libmachine: (ha-091565-m02) DBG | unable to find current IP address of domain ha-091565-m02 in network mk-ha-091565
	I0918 20:01:51.159506   26827 main.go:141] libmachine: (ha-091565-m02) DBG | I0918 20:01:51.159430   27201 retry.go:31] will retry after 708.286443ms: waiting for machine to come up
	I0918 20:01:51.869406   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:01:51.869911   26827 main.go:141] libmachine: (ha-091565-m02) DBG | unable to find current IP address of domain ha-091565-m02 in network mk-ha-091565
	I0918 20:01:51.869932   26827 main.go:141] libmachine: (ha-091565-m02) DBG | I0918 20:01:51.869871   27201 retry.go:31] will retry after 693.038682ms: waiting for machine to come up
	I0918 20:01:52.564866   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:01:52.565352   26827 main.go:141] libmachine: (ha-091565-m02) DBG | unable to find current IP address of domain ha-091565-m02 in network mk-ha-091565
	I0918 20:01:52.565380   26827 main.go:141] libmachine: (ha-091565-m02) DBG | I0918 20:01:52.565257   27201 retry.go:31] will retry after 736.537004ms: waiting for machine to come up
	I0918 20:01:53.303205   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:01:53.303598   26827 main.go:141] libmachine: (ha-091565-m02) DBG | unable to find current IP address of domain ha-091565-m02 in network mk-ha-091565
	I0918 20:01:53.303630   26827 main.go:141] libmachine: (ha-091565-m02) DBG | I0918 20:01:53.303562   27201 retry.go:31] will retry after 1.042865785s: waiting for machine to come up
	I0918 20:01:54.347669   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:01:54.348067   26827 main.go:141] libmachine: (ha-091565-m02) DBG | unable to find current IP address of domain ha-091565-m02 in network mk-ha-091565
	I0918 20:01:54.348094   26827 main.go:141] libmachine: (ha-091565-m02) DBG | I0918 20:01:54.348054   27201 retry.go:31] will retry after 1.167725142s: waiting for machine to come up
	I0918 20:01:55.517065   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:01:55.517432   26827 main.go:141] libmachine: (ha-091565-m02) DBG | unable to find current IP address of domain ha-091565-m02 in network mk-ha-091565
	I0918 20:01:55.517468   26827 main.go:141] libmachine: (ha-091565-m02) DBG | I0918 20:01:55.517401   27201 retry.go:31] will retry after 1.527504069s: waiting for machine to come up
	I0918 20:01:57.046257   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:01:57.046707   26827 main.go:141] libmachine: (ha-091565-m02) DBG | unable to find current IP address of domain ha-091565-m02 in network mk-ha-091565
	I0918 20:01:57.046734   26827 main.go:141] libmachine: (ha-091565-m02) DBG | I0918 20:01:57.046662   27201 retry.go:31] will retry after 2.687348908s: waiting for machine to come up
	I0918 20:01:59.735480   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:01:59.736079   26827 main.go:141] libmachine: (ha-091565-m02) DBG | unable to find current IP address of domain ha-091565-m02 in network mk-ha-091565
	I0918 20:01:59.736176   26827 main.go:141] libmachine: (ha-091565-m02) DBG | I0918 20:01:59.736024   27201 retry.go:31] will retry after 2.655283124s: waiting for machine to come up
	I0918 20:02:02.393219   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:02:02.393704   26827 main.go:141] libmachine: (ha-091565-m02) DBG | unable to find current IP address of domain ha-091565-m02 in network mk-ha-091565
	I0918 20:02:02.393725   26827 main.go:141] libmachine: (ha-091565-m02) DBG | I0918 20:02:02.393678   27201 retry.go:31] will retry after 3.65154054s: waiting for machine to come up
	I0918 20:02:06.048480   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:02:06.048911   26827 main.go:141] libmachine: (ha-091565-m02) DBG | unable to find current IP address of domain ha-091565-m02 in network mk-ha-091565
	I0918 20:02:06.048932   26827 main.go:141] libmachine: (ha-091565-m02) DBG | I0918 20:02:06.048885   27201 retry.go:31] will retry after 4.061870544s: waiting for machine to come up
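
The retry.go lines above show the driver polling the libvirt DHCP leases for the new VM's MAC address, sleeping a little longer after each miss. A generic sketch of that pattern, with lookup standing in for the lease query; waitForIP and its parameters are illustrative, not minikube's retry helper:

package machinewait

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP retries lookup with a randomized, growing delay until it
// returns an IP or the overall deadline passes.
func waitForIP(lookup func() (string, error), maxWait time.Duration) (string, error) {
	deadline := time.Now().Add(maxWait)
	delay := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookup(); err == nil {
			return ip, nil
		}
		// Grow the delay and add jitter so repeated misses back off.
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %s: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		delay = delay * 3 / 2
	}
	return "", errors.New("timed out waiting for machine IP")
}
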
	I0918 20:02:10.113660   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:02:10.114089   26827 main.go:141] libmachine: (ha-091565-m02) Found IP for machine: 192.168.39.92
	I0918 20:02:10.114110   26827 main.go:141] libmachine: (ha-091565-m02) Reserving static IP address...
	I0918 20:02:10.114118   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has current primary IP address 192.168.39.92 and MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:02:10.114476   26827 main.go:141] libmachine: (ha-091565-m02) DBG | unable to find host DHCP lease matching {name: "ha-091565-m02", mac: "52:54:00:21:2b:96", ip: "192.168.39.92"} in network mk-ha-091565
	I0918 20:02:10.190986   26827 main.go:141] libmachine: (ha-091565-m02) DBG | Getting to WaitForSSH function...
	I0918 20:02:10.191024   26827 main.go:141] libmachine: (ha-091565-m02) Reserved static IP address: 192.168.39.92
	I0918 20:02:10.191040   26827 main.go:141] libmachine: (ha-091565-m02) Waiting for SSH to be available...
	I0918 20:02:10.193580   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:02:10.194009   26827 main.go:141] libmachine: (ha-091565-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:2b:96", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:02:02 +0000 UTC Type:0 Mac:52:54:00:21:2b:96 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:minikube Clientid:01:52:54:00:21:2b:96}
	I0918 20:02:10.194037   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined IP address 192.168.39.92 and MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:02:10.194132   26827 main.go:141] libmachine: (ha-091565-m02) DBG | Using SSH client type: external
	I0918 20:02:10.194161   26827 main.go:141] libmachine: (ha-091565-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565-m02/id_rsa (-rw-------)
	I0918 20:02:10.194197   26827 main.go:141] libmachine: (ha-091565-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.92 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0918 20:02:10.194215   26827 main.go:141] libmachine: (ha-091565-m02) DBG | About to run SSH command:
	I0918 20:02:10.194223   26827 main.go:141] libmachine: (ha-091565-m02) DBG | exit 0
	I0918 20:02:10.323932   26827 main.go:141] libmachine: (ha-091565-m02) DBG | SSH cmd err, output: <nil>: 
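
The probe above simply runs "exit 0" over SSH until it succeeds, which is how the driver decides the new VM is reachable. A self-contained Go sketch of the same check using golang.org/x/crypto/ssh, with the address, user and key path taken from the log; waitForSSH is illustrative:

package sshwait

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// waitForSSH dials addr and runs "exit 0" until it succeeds or the timeout expires.
func waitForSSH(addr, user, keyPath string, timeout time.Duration) error {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM
		Timeout:         10 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		client, err := ssh.Dial("tcp", addr, cfg)
		if err == nil {
			sess, serr := client.NewSession()
			if serr == nil {
				runErr := sess.Run("exit 0")
				sess.Close()
				client.Close()
				if runErr == nil {
					return nil
				}
			} else {
				client.Close()
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("ssh not available on %s after %s", addr, timeout)
}

// Example: waitForSSH("192.168.39.92:22", "docker",
//   "/home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565-m02/id_rsa", 3*time.Minute)
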
	I0918 20:02:10.324269   26827 main.go:141] libmachine: (ha-091565-m02) KVM machine creation complete!
	I0918 20:02:10.324574   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetConfigRaw
	I0918 20:02:10.325151   26827 main.go:141] libmachine: (ha-091565-m02) Calling .DriverName
	I0918 20:02:10.325341   26827 main.go:141] libmachine: (ha-091565-m02) Calling .DriverName
	I0918 20:02:10.325477   26827 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0918 20:02:10.325492   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetState
	I0918 20:02:10.326893   26827 main.go:141] libmachine: Detecting operating system of created instance...
	I0918 20:02:10.326917   26827 main.go:141] libmachine: Waiting for SSH to be available...
	I0918 20:02:10.326923   26827 main.go:141] libmachine: Getting to WaitForSSH function...
	I0918 20:02:10.326931   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHHostname
	I0918 20:02:10.329564   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:02:10.330006   26827 main.go:141] libmachine: (ha-091565-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:2b:96", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:02:02 +0000 UTC Type:0 Mac:52:54:00:21:2b:96 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-091565-m02 Clientid:01:52:54:00:21:2b:96}
	I0918 20:02:10.330033   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined IP address 192.168.39.92 and MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:02:10.330172   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHPort
	I0918 20:02:10.330344   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHKeyPath
	I0918 20:02:10.330500   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHKeyPath
	I0918 20:02:10.330636   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHUsername
	I0918 20:02:10.330796   26827 main.go:141] libmachine: Using SSH client type: native
	I0918 20:02:10.331010   26827 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I0918 20:02:10.331023   26827 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0918 20:02:10.443345   26827 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0918 20:02:10.443373   26827 main.go:141] libmachine: Detecting the provisioner...
	I0918 20:02:10.443397   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHHostname
	I0918 20:02:10.446214   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:02:10.446561   26827 main.go:141] libmachine: (ha-091565-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:2b:96", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:02:02 +0000 UTC Type:0 Mac:52:54:00:21:2b:96 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-091565-m02 Clientid:01:52:54:00:21:2b:96}
	I0918 20:02:10.446609   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined IP address 192.168.39.92 and MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:02:10.446805   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHPort
	I0918 20:02:10.447003   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHKeyPath
	I0918 20:02:10.447152   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHKeyPath
	I0918 20:02:10.447299   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHUsername
	I0918 20:02:10.447466   26827 main.go:141] libmachine: Using SSH client type: native
	I0918 20:02:10.447651   26827 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I0918 20:02:10.447661   26827 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0918 20:02:10.560498   26827 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0918 20:02:10.560569   26827 main.go:141] libmachine: found compatible host: buildroot
	I0918 20:02:10.560579   26827 main.go:141] libmachine: Provisioning with buildroot...
	I0918 20:02:10.560587   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetMachineName
	I0918 20:02:10.560807   26827 buildroot.go:166] provisioning hostname "ha-091565-m02"
	I0918 20:02:10.560829   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetMachineName
	I0918 20:02:10.561019   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHHostname
	I0918 20:02:10.563200   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:02:10.563504   26827 main.go:141] libmachine: (ha-091565-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:2b:96", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:02:02 +0000 UTC Type:0 Mac:52:54:00:21:2b:96 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-091565-m02 Clientid:01:52:54:00:21:2b:96}
	I0918 20:02:10.563529   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined IP address 192.168.39.92 and MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:02:10.563719   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHPort
	I0918 20:02:10.563862   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHKeyPath
	I0918 20:02:10.564010   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHKeyPath
	I0918 20:02:10.564147   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHUsername
	I0918 20:02:10.564297   26827 main.go:141] libmachine: Using SSH client type: native
	I0918 20:02:10.564453   26827 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I0918 20:02:10.564464   26827 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-091565-m02 && echo "ha-091565-m02" | sudo tee /etc/hostname
	I0918 20:02:10.691295   26827 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-091565-m02
	
	I0918 20:02:10.691325   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHHostname
	I0918 20:02:10.693996   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:02:10.694327   26827 main.go:141] libmachine: (ha-091565-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:2b:96", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:02:02 +0000 UTC Type:0 Mac:52:54:00:21:2b:96 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-091565-m02 Clientid:01:52:54:00:21:2b:96}
	I0918 20:02:10.694365   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined IP address 192.168.39.92 and MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:02:10.694501   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHPort
	I0918 20:02:10.694688   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHKeyPath
	I0918 20:02:10.694846   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHKeyPath
	I0918 20:02:10.694979   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHUsername
	I0918 20:02:10.695122   26827 main.go:141] libmachine: Using SSH client type: native
	I0918 20:02:10.695275   26827 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I0918 20:02:10.695290   26827 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-091565-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-091565-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-091565-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0918 20:02:10.816522   26827 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0918 20:02:10.816548   26827 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19667-7671/.minikube CaCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19667-7671/.minikube}
	I0918 20:02:10.816563   26827 buildroot.go:174] setting up certificates
	I0918 20:02:10.816571   26827 provision.go:84] configureAuth start
	I0918 20:02:10.816581   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetMachineName
	I0918 20:02:10.816839   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetIP
	I0918 20:02:10.819595   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:02:10.819999   26827 main.go:141] libmachine: (ha-091565-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:2b:96", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:02:02 +0000 UTC Type:0 Mac:52:54:00:21:2b:96 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-091565-m02 Clientid:01:52:54:00:21:2b:96}
	I0918 20:02:10.820045   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined IP address 192.168.39.92 and MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:02:10.820197   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHHostname
	I0918 20:02:10.822853   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:02:10.823229   26827 main.go:141] libmachine: (ha-091565-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:2b:96", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:02:02 +0000 UTC Type:0 Mac:52:54:00:21:2b:96 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-091565-m02 Clientid:01:52:54:00:21:2b:96}
	I0918 20:02:10.823283   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined IP address 192.168.39.92 and MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:02:10.823418   26827 provision.go:143] copyHostCerts
	I0918 20:02:10.823446   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem
	I0918 20:02:10.823472   26827 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem, removing ...
	I0918 20:02:10.823482   26827 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem
	I0918 20:02:10.823549   26827 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem (1679 bytes)
	I0918 20:02:10.823626   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem
	I0918 20:02:10.823644   26827 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem, removing ...
	I0918 20:02:10.823651   26827 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem
	I0918 20:02:10.823674   26827 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem (1082 bytes)
	I0918 20:02:10.823715   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem
	I0918 20:02:10.823731   26827 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem, removing ...
	I0918 20:02:10.823737   26827 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem
	I0918 20:02:10.823757   26827 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem (1123 bytes)
	I0918 20:02:10.823804   26827 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem org=jenkins.ha-091565-m02 san=[127.0.0.1 192.168.39.92 ha-091565-m02 localhost minikube]
	I0918 20:02:11.057033   26827 provision.go:177] copyRemoteCerts
	I0918 20:02:11.057095   26827 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0918 20:02:11.057117   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHHostname
	I0918 20:02:11.059721   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:02:11.060054   26827 main.go:141] libmachine: (ha-091565-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:2b:96", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:02:02 +0000 UTC Type:0 Mac:52:54:00:21:2b:96 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-091565-m02 Clientid:01:52:54:00:21:2b:96}
	I0918 20:02:11.060083   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined IP address 192.168.39.92 and MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:02:11.060241   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHPort
	I0918 20:02:11.060442   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHKeyPath
	I0918 20:02:11.060560   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHUsername
	I0918 20:02:11.060670   26827 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565-m02/id_rsa Username:docker}
	I0918 20:02:11.145946   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0918 20:02:11.146020   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0918 20:02:11.169808   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0918 20:02:11.169883   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0918 20:02:11.192067   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0918 20:02:11.192133   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0918 20:02:11.213945   26827 provision.go:87] duration metric: took 397.362437ms to configureAuth
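	(Aside, not part of the logged run: the server certificate copied to /etc/docker/server.pem above is expected to carry the SANs chosen at generation time — 127.0.0.1, 192.168.39.92, ha-091565-m02, localhost, minikube. Assuming openssl is present on the buildroot guest, the SANs can be spot-checked from a shell on the node with something like:
	  sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'
	This is only an illustrative check; the test itself does not run it.)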
	I0918 20:02:11.213974   26827 buildroot.go:189] setting minikube options for container-runtime
	I0918 20:02:11.214161   26827 config.go:182] Loaded profile config "ha-091565": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0918 20:02:11.214232   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHHostname
	I0918 20:02:11.216594   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:02:11.216996   26827 main.go:141] libmachine: (ha-091565-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:2b:96", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:02:02 +0000 UTC Type:0 Mac:52:54:00:21:2b:96 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-091565-m02 Clientid:01:52:54:00:21:2b:96}
	I0918 20:02:11.217027   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined IP address 192.168.39.92 and MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:02:11.217192   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHPort
	I0918 20:02:11.217382   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHKeyPath
	I0918 20:02:11.217568   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHKeyPath
	I0918 20:02:11.217782   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHUsername
	I0918 20:02:11.217991   26827 main.go:141] libmachine: Using SSH client type: native
	I0918 20:02:11.218183   26827 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I0918 20:02:11.218201   26827 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0918 20:02:11.450199   26827 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0918 20:02:11.450222   26827 main.go:141] libmachine: Checking connection to Docker...
	I0918 20:02:11.450231   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetURL
	I0918 20:02:11.451440   26827 main.go:141] libmachine: (ha-091565-m02) DBG | Using libvirt version 6000000
	I0918 20:02:11.453501   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:02:11.453892   26827 main.go:141] libmachine: (ha-091565-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:2b:96", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:02:02 +0000 UTC Type:0 Mac:52:54:00:21:2b:96 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-091565-m02 Clientid:01:52:54:00:21:2b:96}
	I0918 20:02:11.453920   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined IP address 192.168.39.92 and MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:02:11.454034   26827 main.go:141] libmachine: Docker is up and running!
	I0918 20:02:11.454051   26827 main.go:141] libmachine: Reticulating splines...
	I0918 20:02:11.454059   26827 client.go:171] duration metric: took 23.281034632s to LocalClient.Create
	I0918 20:02:11.454083   26827 start.go:167] duration metric: took 23.281096503s to libmachine.API.Create "ha-091565"
	I0918 20:02:11.454095   26827 start.go:293] postStartSetup for "ha-091565-m02" (driver="kvm2")
	I0918 20:02:11.454108   26827 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0918 20:02:11.454129   26827 main.go:141] libmachine: (ha-091565-m02) Calling .DriverName
	I0918 20:02:11.454363   26827 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0918 20:02:11.454391   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHHostname
	I0918 20:02:11.456695   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:02:11.457025   26827 main.go:141] libmachine: (ha-091565-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:2b:96", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:02:02 +0000 UTC Type:0 Mac:52:54:00:21:2b:96 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-091565-m02 Clientid:01:52:54:00:21:2b:96}
	I0918 20:02:11.457053   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined IP address 192.168.39.92 and MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:02:11.457216   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHPort
	I0918 20:02:11.457393   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHKeyPath
	I0918 20:02:11.457548   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHUsername
	I0918 20:02:11.457664   26827 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565-m02/id_rsa Username:docker}
	I0918 20:02:11.543806   26827 ssh_runner.go:195] Run: cat /etc/os-release
	I0918 20:02:11.548176   26827 info.go:137] Remote host: Buildroot 2023.02.9
	I0918 20:02:11.548212   26827 filesync.go:126] Scanning /home/jenkins/minikube-integration/19667-7671/.minikube/addons for local assets ...
	I0918 20:02:11.548288   26827 filesync.go:126] Scanning /home/jenkins/minikube-integration/19667-7671/.minikube/files for local assets ...
	I0918 20:02:11.548387   26827 filesync.go:149] local asset: /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem -> 148782.pem in /etc/ssl/certs
	I0918 20:02:11.548401   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem -> /etc/ssl/certs/148782.pem
	I0918 20:02:11.548509   26827 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0918 20:02:11.557991   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem --> /etc/ssl/certs/148782.pem (1708 bytes)
	I0918 20:02:11.580809   26827 start.go:296] duration metric: took 126.700515ms for postStartSetup
	I0918 20:02:11.580869   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetConfigRaw
	I0918 20:02:11.581461   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetIP
	I0918 20:02:11.583798   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:02:11.584145   26827 main.go:141] libmachine: (ha-091565-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:2b:96", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:02:02 +0000 UTC Type:0 Mac:52:54:00:21:2b:96 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-091565-m02 Clientid:01:52:54:00:21:2b:96}
	I0918 20:02:11.584166   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined IP address 192.168.39.92 and MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:02:11.584397   26827 profile.go:143] Saving config to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/config.json ...
	I0918 20:02:11.584590   26827 start.go:128] duration metric: took 23.429501872s to createHost
	I0918 20:02:11.584610   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHHostname
	I0918 20:02:11.586789   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:02:11.587088   26827 main.go:141] libmachine: (ha-091565-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:2b:96", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:02:02 +0000 UTC Type:0 Mac:52:54:00:21:2b:96 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-091565-m02 Clientid:01:52:54:00:21:2b:96}
	I0918 20:02:11.587104   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined IP address 192.168.39.92 and MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:02:11.587289   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHPort
	I0918 20:02:11.587470   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHKeyPath
	I0918 20:02:11.587595   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHKeyPath
	I0918 20:02:11.587738   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHUsername
	I0918 20:02:11.587870   26827 main.go:141] libmachine: Using SSH client type: native
	I0918 20:02:11.588036   26827 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I0918 20:02:11.588047   26827 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0918 20:02:11.700738   26827 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726689731.662490371
	
	I0918 20:02:11.700765   26827 fix.go:216] guest clock: 1726689731.662490371
	I0918 20:02:11.700775   26827 fix.go:229] Guest: 2024-09-18 20:02:11.662490371 +0000 UTC Remote: 2024-09-18 20:02:11.584601507 +0000 UTC m=+73.979326396 (delta=77.888864ms)
	I0918 20:02:11.700793   26827 fix.go:200] guest clock delta is within tolerance: 77.888864ms
	I0918 20:02:11.700797   26827 start.go:83] releasing machines lock for "ha-091565-m02", held for 23.545807984s
	I0918 20:02:11.700814   26827 main.go:141] libmachine: (ha-091565-m02) Calling .DriverName
	I0918 20:02:11.701084   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetIP
	I0918 20:02:11.703834   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:02:11.704301   26827 main.go:141] libmachine: (ha-091565-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:2b:96", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:02:02 +0000 UTC Type:0 Mac:52:54:00:21:2b:96 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-091565-m02 Clientid:01:52:54:00:21:2b:96}
	I0918 20:02:11.704332   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined IP address 192.168.39.92 and MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:02:11.706825   26827 out.go:177] * Found network options:
	I0918 20:02:11.708191   26827 out.go:177]   - NO_PROXY=192.168.39.215
	W0918 20:02:11.709336   26827 proxy.go:119] fail to check proxy env: Error ip not in block
	I0918 20:02:11.709382   26827 main.go:141] libmachine: (ha-091565-m02) Calling .DriverName
	I0918 20:02:11.710083   26827 main.go:141] libmachine: (ha-091565-m02) Calling .DriverName
	I0918 20:02:11.710311   26827 main.go:141] libmachine: (ha-091565-m02) Calling .DriverName
	I0918 20:02:11.710420   26827 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0918 20:02:11.710463   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHHostname
	W0918 20:02:11.710532   26827 proxy.go:119] fail to check proxy env: Error ip not in block
	I0918 20:02:11.710615   26827 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0918 20:02:11.710636   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHHostname
	I0918 20:02:11.714007   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:02:11.714090   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:02:11.714449   26827 main.go:141] libmachine: (ha-091565-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:2b:96", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:02:02 +0000 UTC Type:0 Mac:52:54:00:21:2b:96 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-091565-m02 Clientid:01:52:54:00:21:2b:96}
	I0918 20:02:11.714474   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined IP address 192.168.39.92 and MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:02:11.714500   26827 main.go:141] libmachine: (ha-091565-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:2b:96", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:02:02 +0000 UTC Type:0 Mac:52:54:00:21:2b:96 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-091565-m02 Clientid:01:52:54:00:21:2b:96}
	I0918 20:02:11.714515   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined IP address 192.168.39.92 and MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:02:11.714602   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHPort
	I0918 20:02:11.714757   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHPort
	I0918 20:02:11.714809   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHKeyPath
	I0918 20:02:11.714897   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHKeyPath
	I0918 20:02:11.714955   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHUsername
	I0918 20:02:11.715014   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHUsername
	I0918 20:02:11.715075   26827 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565-m02/id_rsa Username:docker}
	I0918 20:02:11.715103   26827 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565-m02/id_rsa Username:docker}
	I0918 20:02:11.951540   26827 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0918 20:02:11.958397   26827 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0918 20:02:11.958472   26827 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0918 20:02:11.975402   26827 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0918 20:02:11.975429   26827 start.go:495] detecting cgroup driver to use...
	I0918 20:02:11.975517   26827 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0918 20:02:11.992284   26827 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0918 20:02:12.006780   26827 docker.go:217] disabling cri-docker service (if available) ...
	I0918 20:02:12.006835   26827 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0918 20:02:12.021223   26827 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0918 20:02:12.035137   26827 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0918 20:02:12.152314   26827 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0918 20:02:12.308984   26827 docker.go:233] disabling docker service ...
	I0918 20:02:12.309056   26827 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0918 20:02:12.322897   26827 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0918 20:02:12.336617   26827 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0918 20:02:12.473473   26827 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0918 20:02:12.584374   26827 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0918 20:02:12.597923   26827 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0918 20:02:12.615683   26827 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0918 20:02:12.615759   26827 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 20:02:12.625760   26827 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0918 20:02:12.625817   26827 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 20:02:12.635917   26827 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 20:02:12.645924   26827 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 20:02:12.655813   26827 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0918 20:02:12.666525   26827 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 20:02:12.676621   26827 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 20:02:12.693200   26827 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 20:02:12.703365   26827 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0918 20:02:12.713885   26827 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0918 20:02:12.713948   26827 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0918 20:02:12.728888   26827 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0918 20:02:12.749626   26827 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 20:02:12.881747   26827 ssh_runner.go:195] Run: sudo systemctl restart crio
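	(Aside: taken together, the sed edits logged above amount to roughly the following keys in /etc/crio/crio.conf.d/02-crio.conf before crio is restarted. This is a reconstruction from the logged commands, assuming the stock minikube drop-in layout, not output captured from the VM:
	  pause_image = "registry.k8s.io/pause:3.10"
	  cgroup_manager = "cgroupfs"
	  conmon_cgroup = "pod"
	  default_sysctls = [
	    "net.ipv4.ip_unprivileged_port_start=0",
	  ]
	The crictl endpoint written just before, runtime-endpoint: unix:///var/run/crio/crio.sock, is shown verbatim in the log.)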
	I0918 20:02:12.971475   26827 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0918 20:02:12.971567   26827 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0918 20:02:12.976879   26827 start.go:563] Will wait 60s for crictl version
	I0918 20:02:12.976965   26827 ssh_runner.go:195] Run: which crictl
	I0918 20:02:12.980716   26827 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0918 20:02:13.019156   26827 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0918 20:02:13.019245   26827 ssh_runner.go:195] Run: crio --version
	I0918 20:02:13.046401   26827 ssh_runner.go:195] Run: crio --version
	I0918 20:02:13.075823   26827 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0918 20:02:13.077052   26827 out.go:177]   - env NO_PROXY=192.168.39.215
	I0918 20:02:13.078258   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetIP
	I0918 20:02:13.081042   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:02:13.081379   26827 main.go:141] libmachine: (ha-091565-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:2b:96", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:02:02 +0000 UTC Type:0 Mac:52:54:00:21:2b:96 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-091565-m02 Clientid:01:52:54:00:21:2b:96}
	I0918 20:02:13.081410   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined IP address 192.168.39.92 and MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:02:13.081604   26827 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0918 20:02:13.085957   26827 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0918 20:02:13.098025   26827 mustload.go:65] Loading cluster: ha-091565
	I0918 20:02:13.098236   26827 config.go:182] Loaded profile config "ha-091565": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0918 20:02:13.098500   26827 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 20:02:13.098540   26827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 20:02:13.113020   26827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43137
	I0918 20:02:13.113466   26827 main.go:141] libmachine: () Calling .GetVersion
	I0918 20:02:13.113910   26827 main.go:141] libmachine: Using API Version  1
	I0918 20:02:13.113932   26827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 20:02:13.114242   26827 main.go:141] libmachine: () Calling .GetMachineName
	I0918 20:02:13.114415   26827 main.go:141] libmachine: (ha-091565) Calling .GetState
	I0918 20:02:13.115854   26827 host.go:66] Checking if "ha-091565" exists ...
	I0918 20:02:13.116211   26827 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 20:02:13.116246   26827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 20:02:13.130542   26827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42385
	I0918 20:02:13.130887   26827 main.go:141] libmachine: () Calling .GetVersion
	I0918 20:02:13.131305   26827 main.go:141] libmachine: Using API Version  1
	I0918 20:02:13.131334   26827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 20:02:13.131650   26827 main.go:141] libmachine: () Calling .GetMachineName
	I0918 20:02:13.131812   26827 main.go:141] libmachine: (ha-091565) Calling .DriverName
	I0918 20:02:13.131970   26827 certs.go:68] Setting up /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565 for IP: 192.168.39.92
	I0918 20:02:13.131980   26827 certs.go:194] generating shared ca certs ...
	I0918 20:02:13.131999   26827 certs.go:226] acquiring lock for ca certs: {Name:mkf54e0e71ed884c4372f9d3d4cb308ea4600185 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 20:02:13.132147   26827 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.key
	I0918 20:02:13.132196   26827 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.key
	I0918 20:02:13.132210   26827 certs.go:256] generating profile certs ...
	I0918 20:02:13.132298   26827 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/client.key
	I0918 20:02:13.132328   26827 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.key.7629692a
	I0918 20:02:13.132349   26827 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.crt.7629692a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.215 192.168.39.92 192.168.39.254]
	I0918 20:02:13.381001   26827 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.crt.7629692a ...
	I0918 20:02:13.381032   26827 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.crt.7629692a: {Name:mk24fda3fc7efba8ec26d63c4d1c3262bef6ab2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 20:02:13.381214   26827 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.key.7629692a ...
	I0918 20:02:13.381231   26827 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.key.7629692a: {Name:mk2ca0cef4c9dc7b760b7f2d962b84f60a94bd6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 20:02:13.381333   26827 certs.go:381] copying /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.crt.7629692a -> /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.crt
	I0918 20:02:13.381891   26827 certs.go:385] copying /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.key.7629692a -> /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.key
	I0918 20:02:13.382099   26827 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/proxy-client.key
	I0918 20:02:13.382115   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0918 20:02:13.382140   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0918 20:02:13.382158   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0918 20:02:13.382174   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0918 20:02:13.382188   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0918 20:02:13.382203   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0918 20:02:13.382217   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0918 20:02:13.382242   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0918 20:02:13.382310   26827 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878.pem (1338 bytes)
	W0918 20:02:13.382346   26827 certs.go:480] ignoring /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878_empty.pem, impossibly tiny 0 bytes
	I0918 20:02:13.382356   26827 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem (1675 bytes)
	I0918 20:02:13.382393   26827 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem (1082 bytes)
	I0918 20:02:13.382425   26827 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem (1123 bytes)
	I0918 20:02:13.382456   26827 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem (1679 bytes)
	I0918 20:02:13.382505   26827 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem (1708 bytes)
	I0918 20:02:13.382538   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem -> /usr/share/ca-certificates/148782.pem
	I0918 20:02:13.382565   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0918 20:02:13.382604   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878.pem -> /usr/share/ca-certificates/14878.pem
	I0918 20:02:13.382670   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHHostname
	I0918 20:02:13.385533   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:02:13.385884   26827 main.go:141] libmachine: (ha-091565) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:13:d8", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:01:12 +0000 UTC Type:0 Mac:52:54:00:2a:13:d8 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-091565 Clientid:01:52:54:00:2a:13:d8}
	I0918 20:02:13.385914   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined IP address 192.168.39.215 and MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:02:13.386036   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHPort
	I0918 20:02:13.386204   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHKeyPath
	I0918 20:02:13.386359   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHUsername
	I0918 20:02:13.386456   26827 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565/id_rsa Username:docker}
	I0918 20:02:13.464434   26827 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0918 20:02:13.469316   26827 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0918 20:02:13.479828   26827 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0918 20:02:13.484029   26827 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0918 20:02:13.493840   26827 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0918 20:02:13.497931   26827 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0918 20:02:13.507815   26827 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0918 20:02:13.512123   26827 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0918 20:02:13.522655   26827 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0918 20:02:13.527051   26827 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0918 20:02:13.538403   26827 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0918 20:02:13.542432   26827 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0918 20:02:13.553060   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0918 20:02:13.579635   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0918 20:02:13.603368   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0918 20:02:13.625998   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0918 20:02:13.648303   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0918 20:02:13.671000   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0918 20:02:13.694050   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0918 20:02:13.719216   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0918 20:02:13.742544   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem --> /usr/share/ca-certificates/148782.pem (1708 bytes)
	I0918 20:02:13.765706   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0918 20:02:13.789848   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878.pem --> /usr/share/ca-certificates/14878.pem (1338 bytes)
	I0918 20:02:13.814441   26827 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0918 20:02:13.831542   26827 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0918 20:02:13.848254   26827 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0918 20:02:13.865737   26827 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0918 20:02:13.881778   26827 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0918 20:02:13.898086   26827 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0918 20:02:13.913537   26827 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0918 20:02:13.929503   26827 ssh_runner.go:195] Run: openssl version
	I0918 20:02:13.934878   26827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0918 20:02:13.945006   26827 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0918 20:02:13.949290   26827 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 18 19:39 /usr/share/ca-certificates/minikubeCA.pem
	I0918 20:02:13.949360   26827 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0918 20:02:13.955252   26827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0918 20:02:13.965953   26827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14878.pem && ln -fs /usr/share/ca-certificates/14878.pem /etc/ssl/certs/14878.pem"
	I0918 20:02:13.976794   26827 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14878.pem
	I0918 20:02:13.981192   26827 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 18 19:57 /usr/share/ca-certificates/14878.pem
	I0918 20:02:13.981245   26827 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14878.pem
	I0918 20:02:13.986694   26827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14878.pem /etc/ssl/certs/51391683.0"
	I0918 20:02:13.996869   26827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148782.pem && ln -fs /usr/share/ca-certificates/148782.pem /etc/ssl/certs/148782.pem"
	I0918 20:02:14.006855   26827 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148782.pem
	I0918 20:02:14.010785   26827 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 18 19:57 /usr/share/ca-certificates/148782.pem
	I0918 20:02:14.010831   26827 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148782.pem
	I0918 20:02:14.016603   26827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148782.pem /etc/ssl/certs/3ec20f2e.0"
	I0918 20:02:14.026923   26827 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0918 20:02:14.030483   26827 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0918 20:02:14.030540   26827 kubeadm.go:934] updating node {m02 192.168.39.92 8443 v1.31.1 crio true true} ...
	I0918 20:02:14.030615   26827 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-091565-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.92
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-091565 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0918 20:02:14.030638   26827 kube-vip.go:115] generating kube-vip config ...
	I0918 20:02:14.030669   26827 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0918 20:02:14.046531   26827 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0918 20:02:14.046601   26827 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0918 20:02:14.046656   26827 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0918 20:02:14.056509   26827 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0918 20:02:14.056563   26827 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0918 20:02:14.065775   26827 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0918 20:02:14.065800   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0918 20:02:14.065850   26827 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19667-7671/.minikube/cache/linux/amd64/v1.31.1/kubeadm
	I0918 20:02:14.065881   26827 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19667-7671/.minikube/cache/linux/amd64/v1.31.1/kubelet
	I0918 20:02:14.065857   26827 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0918 20:02:14.069919   26827 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0918 20:02:14.069943   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0918 20:02:15.108841   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0918 20:02:15.108916   26827 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0918 20:02:15.113741   26827 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0918 20:02:15.113786   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0918 20:02:15.268546   26827 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0918 20:02:15.304643   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0918 20:02:15.304757   26827 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0918 20:02:15.316920   26827 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0918 20:02:15.316964   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
	I0918 20:02:15.681051   26827 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0918 20:02:15.690458   26827 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0918 20:02:15.707147   26827 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0918 20:02:15.723671   26827 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0918 20:02:15.740654   26827 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0918 20:02:15.744145   26827 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0918 20:02:15.755908   26827 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 20:02:15.867566   26827 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0918 20:02:15.884693   26827 host.go:66] Checking if "ha-091565" exists ...
	I0918 20:02:15.885015   26827 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 20:02:15.885055   26827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 20:02:15.899922   26827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46063
	I0918 20:02:15.900446   26827 main.go:141] libmachine: () Calling .GetVersion
	I0918 20:02:15.900956   26827 main.go:141] libmachine: Using API Version  1
	I0918 20:02:15.900978   26827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 20:02:15.901391   26827 main.go:141] libmachine: () Calling .GetMachineName
	I0918 20:02:15.901591   26827 main.go:141] libmachine: (ha-091565) Calling .DriverName
	I0918 20:02:15.901775   26827 start.go:317] joinCluster: &{Name:ha-091565 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-091565 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.215 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.92 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 20:02:15.901868   26827 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0918 20:02:15.901882   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHHostname
	I0918 20:02:15.904812   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:02:15.905340   26827 main.go:141] libmachine: (ha-091565) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:13:d8", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:01:12 +0000 UTC Type:0 Mac:52:54:00:2a:13:d8 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-091565 Clientid:01:52:54:00:2a:13:d8}
	I0918 20:02:15.905365   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined IP address 192.168.39.215 and MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:02:15.905530   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHPort
	I0918 20:02:15.905692   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHKeyPath
	I0918 20:02:15.905842   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHUsername
	I0918 20:02:15.905998   26827 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565/id_rsa Username:docker}
	I0918 20:02:16.056145   26827 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.92 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0918 20:02:16.056188   26827 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token c3chy6.pphzks8qg9r6i1q7 --discovery-token-ca-cert-hash sha256:8ad46bf288e8cc2226f5a929a768e6c95cddecc97ffc4016be27ef0987b1e36f --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-091565-m02 --control-plane --apiserver-advertise-address=192.168.39.92 --apiserver-bind-port=8443"
	I0918 20:02:39.534299   26827 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token c3chy6.pphzks8qg9r6i1q7 --discovery-token-ca-cert-hash sha256:8ad46bf288e8cc2226f5a929a768e6c95cddecc97ffc4016be27ef0987b1e36f --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-091565-m02 --control-plane --apiserver-advertise-address=192.168.39.92 --apiserver-bind-port=8443": (23.478085214s)
	I0918 20:02:39.534349   26827 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0918 20:02:40.082157   26827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-091565-m02 minikube.k8s.io/updated_at=2024_09_18T20_02_40_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=85073601a832bd4bbda5d11fa91feafff6ec6b91 minikube.k8s.io/name=ha-091565 minikube.k8s.io/primary=false
	I0918 20:02:40.225760   26827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-091565-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0918 20:02:40.371807   26827 start.go:319] duration metric: took 24.470025441s to joinCluster
	I0918 20:02:40.371885   26827 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.92 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0918 20:02:40.372206   26827 config.go:182] Loaded profile config "ha-091565": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0918 20:02:40.373180   26827 out.go:177] * Verifying Kubernetes components...
	I0918 20:02:40.374584   26827 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 20:02:40.624879   26827 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0918 20:02:40.676856   26827 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19667-7671/kubeconfig
	I0918 20:02:40.677129   26827 kapi.go:59] client config for ha-091565: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/client.crt", KeyFile:"/home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/client.key", CAFile:"/home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0918 20:02:40.677196   26827 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.215:8443
	I0918 20:02:40.677413   26827 node_ready.go:35] waiting up to 6m0s for node "ha-091565-m02" to be "Ready" ...
	I0918 20:02:40.677523   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:02:40.677531   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:40.677538   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:40.677545   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:40.686192   26827 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0918 20:02:41.177691   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:02:41.177719   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:41.177732   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:41.177740   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:41.183226   26827 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0918 20:02:41.678101   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:02:41.678120   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:41.678127   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:41.678130   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:41.692857   26827 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0918 20:02:42.177589   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:02:42.177610   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:42.177621   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:42.177625   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:42.180992   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:02:42.677789   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:02:42.677810   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:42.677818   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:42.677822   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:42.682783   26827 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0918 20:02:42.683426   26827 node_ready.go:53] node "ha-091565-m02" has status "Ready":"False"
	I0918 20:02:43.178132   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:02:43.178152   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:43.178164   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:43.178170   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:43.181084   26827 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 20:02:43.678483   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:02:43.678502   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:43.678510   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:43.678515   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:43.683496   26827 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0918 20:02:44.178547   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:02:44.178567   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:44.178576   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:44.178579   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:44.181977   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:02:44.677784   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:02:44.677816   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:44.677827   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:44.677835   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:44.682556   26827 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0918 20:02:45.177682   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:02:45.177710   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:45.177723   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:45.177731   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:45.181803   26827 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0918 20:02:45.182526   26827 node_ready.go:53] node "ha-091565-m02" has status "Ready":"False"
	I0918 20:02:45.677703   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:02:45.677727   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:45.677735   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:45.677739   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:45.684776   26827 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0918 20:02:46.178417   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:02:46.178441   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:46.178448   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:46.178456   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:46.181952   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:02:46.677961   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:02:46.677985   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:46.677992   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:46.677996   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:46.681910   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:02:47.178442   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:02:47.178466   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:47.178474   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:47.178479   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:47.212429   26827 round_trippers.go:574] Response Status: 200 OK in 33 milliseconds
	I0918 20:02:47.213077   26827 node_ready.go:53] node "ha-091565-m02" has status "Ready":"False"
	I0918 20:02:47.678191   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:02:47.678213   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:47.678221   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:47.678225   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:47.682040   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:02:48.178008   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:02:48.178028   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:48.178038   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:48.178043   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:48.181099   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:02:48.677668   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:02:48.677698   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:48.677711   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:48.677717   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:48.681381   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:02:49.178444   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:02:49.178465   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:49.178472   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:49.178475   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:49.182036   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:02:49.678042   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:02:49.678068   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:49.678080   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:49.678088   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:49.690181   26827 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0918 20:02:49.690997   26827 node_ready.go:53] node "ha-091565-m02" has status "Ready":"False"
	I0918 20:02:50.178273   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:02:50.178297   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:50.178304   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:50.178308   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:50.181653   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:02:50.677625   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:02:50.677648   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:50.677656   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:50.677661   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:50.681751   26827 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0918 20:02:51.178317   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:02:51.178366   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:51.178378   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:51.178384   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:51.181883   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:02:51.678030   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:02:51.678058   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:51.678069   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:51.678074   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:51.681343   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:02:52.178201   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:02:52.178228   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:52.178239   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:52.178246   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:52.181149   26827 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 20:02:52.181830   26827 node_ready.go:53] node "ha-091565-m02" has status "Ready":"False"
	I0918 20:02:52.678195   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:02:52.678219   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:52.678227   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:52.678230   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:52.681789   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:02:53.178242   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:02:53.178268   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:53.178279   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:53.178284   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:53.181682   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:02:53.677884   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:02:53.677907   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:53.677916   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:53.677921   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:53.681477   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:02:54.178412   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:02:54.178438   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:54.178445   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:54.178449   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:54.182375   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:02:54.182956   26827 node_ready.go:53] node "ha-091565-m02" has status "Ready":"False"
	I0918 20:02:54.678270   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:02:54.678294   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:54.678301   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:54.678306   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:54.681439   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:02:55.178343   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:02:55.178364   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:55.178372   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:55.178376   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:55.181349   26827 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 20:02:55.678277   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:02:55.678299   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:55.678307   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:55.678312   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:55.681665   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:02:56.177994   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:02:56.178018   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:56.178025   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:56.178030   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:56.181355   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:02:56.678444   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:02:56.678487   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:56.678502   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:56.678506   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:56.682256   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:02:56.683058   26827 node_ready.go:53] node "ha-091565-m02" has status "Ready":"False"
	I0918 20:02:57.178486   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:02:57.178510   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:57.178517   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:57.178521   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:57.182538   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:02:57.678060   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:02:57.678084   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:57.678091   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:57.678096   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:57.681385   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:02:58.177838   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:02:58.177866   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:58.177876   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:58.177887   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:58.181116   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:02:58.677581   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:02:58.677623   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:58.677631   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:58.677634   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:58.681025   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:02:59.178037   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:02:59.178075   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:59.178083   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:59.178087   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:59.182040   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:02:59.182593   26827 node_ready.go:49] node "ha-091565-m02" has status "Ready":"True"
	I0918 20:02:59.182614   26827 node_ready.go:38] duration metric: took 18.505159093s for node "ha-091565-m02" to be "Ready" ...
	I0918 20:02:59.182625   26827 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0918 20:02:59.182713   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods
	I0918 20:02:59.182724   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:59.182731   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:59.182736   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:59.187930   26827 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0918 20:02:59.193874   26827 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-8zcqk" in "kube-system" namespace to be "Ready" ...
	I0918 20:02:59.193977   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-8zcqk
	I0918 20:02:59.193988   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:59.193999   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:59.194007   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:59.197103   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:02:59.198209   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565
	I0918 20:02:59.198228   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:59.198238   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:59.198256   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:59.201933   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:02:59.202515   26827 pod_ready.go:93] pod "coredns-7c65d6cfc9-8zcqk" in "kube-system" namespace has status "Ready":"True"
	I0918 20:02:59.202532   26827 pod_ready.go:82] duration metric: took 8.636844ms for pod "coredns-7c65d6cfc9-8zcqk" in "kube-system" namespace to be "Ready" ...
	I0918 20:02:59.202541   26827 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-w97kk" in "kube-system" namespace to be "Ready" ...
	I0918 20:02:59.202613   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-w97kk
	I0918 20:02:59.202622   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:59.202631   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:59.202639   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:59.206149   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:02:59.206923   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565
	I0918 20:02:59.206938   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:59.206945   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:59.206948   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:59.210089   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:02:59.211132   26827 pod_ready.go:93] pod "coredns-7c65d6cfc9-w97kk" in "kube-system" namespace has status "Ready":"True"
	I0918 20:02:59.211152   26827 pod_ready.go:82] duration metric: took 8.603074ms for pod "coredns-7c65d6cfc9-w97kk" in "kube-system" namespace to be "Ready" ...
	I0918 20:02:59.211164   26827 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-091565" in "kube-system" namespace to be "Ready" ...
	I0918 20:02:59.211226   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/etcd-ha-091565
	I0918 20:02:59.211237   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:59.211248   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:59.211257   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:59.214280   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:02:59.214888   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565
	I0918 20:02:59.214903   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:59.214912   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:59.214917   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:59.217599   26827 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 20:02:59.218135   26827 pod_ready.go:93] pod "etcd-ha-091565" in "kube-system" namespace has status "Ready":"True"
	I0918 20:02:59.218154   26827 pod_ready.go:82] duration metric: took 6.982451ms for pod "etcd-ha-091565" in "kube-system" namespace to be "Ready" ...
	I0918 20:02:59.218164   26827 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-091565-m02" in "kube-system" namespace to be "Ready" ...
	I0918 20:02:59.218230   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/etcd-ha-091565-m02
	I0918 20:02:59.218241   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:59.218251   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:59.218257   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:59.221067   26827 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 20:02:59.221787   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:02:59.221803   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:59.221813   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:59.221821   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:59.224586   26827 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 20:02:59.225580   26827 pod_ready.go:93] pod "etcd-ha-091565-m02" in "kube-system" namespace has status "Ready":"True"
	I0918 20:02:59.225600   26827 pod_ready.go:82] duration metric: took 7.424608ms for pod "etcd-ha-091565-m02" in "kube-system" namespace to be "Ready" ...
	I0918 20:02:59.225619   26827 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-091565" in "kube-system" namespace to be "Ready" ...
	I0918 20:02:59.379036   26827 request.go:632] Waited for 153.330309ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-091565
	I0918 20:02:59.379109   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-091565
	I0918 20:02:59.379118   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:59.379133   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:59.379139   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:59.384080   26827 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0918 20:02:59.578427   26827 request.go:632] Waited for 193.345723ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/nodes/ha-091565
	I0918 20:02:59.578498   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565
	I0918 20:02:59.578503   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:59.578510   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:59.578515   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:59.581538   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:02:59.581992   26827 pod_ready.go:93] pod "kube-apiserver-ha-091565" in "kube-system" namespace has status "Ready":"True"
	I0918 20:02:59.582010   26827 pod_ready.go:82] duration metric: took 356.380215ms for pod "kube-apiserver-ha-091565" in "kube-system" namespace to be "Ready" ...
	I0918 20:02:59.582019   26827 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-091565-m02" in "kube-system" namespace to be "Ready" ...
	I0918 20:02:59.778110   26827 request.go:632] Waited for 196.027349ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-091565-m02
	I0918 20:02:59.778193   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-091565-m02
	I0918 20:02:59.778199   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:59.778206   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:59.778215   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:59.781615   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:02:59.978660   26827 request.go:632] Waited for 196.397557ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:02:59.978711   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:02:59.978716   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:59.978723   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:59.978730   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:59.982057   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:02:59.982534   26827 pod_ready.go:93] pod "kube-apiserver-ha-091565-m02" in "kube-system" namespace has status "Ready":"True"
	I0918 20:02:59.982552   26827 pod_ready.go:82] duration metric: took 400.527398ms for pod "kube-apiserver-ha-091565-m02" in "kube-system" namespace to be "Ready" ...
	I0918 20:02:59.982561   26827 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-091565" in "kube-system" namespace to be "Ready" ...
	I0918 20:03:00.178731   26827 request.go:632] Waited for 196.108369ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-091565
	I0918 20:03:00.178818   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-091565
	I0918 20:03:00.178826   26827 round_trippers.go:469] Request Headers:
	I0918 20:03:00.178835   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:03:00.178842   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:03:00.182695   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:03:00.378911   26827 request.go:632] Waited for 195.422738ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/nodes/ha-091565
	I0918 20:03:00.378963   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565
	I0918 20:03:00.378972   26827 round_trippers.go:469] Request Headers:
	I0918 20:03:00.378980   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:03:00.378983   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:03:00.382498   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:03:00.383092   26827 pod_ready.go:93] pod "kube-controller-manager-ha-091565" in "kube-system" namespace has status "Ready":"True"
	I0918 20:03:00.383121   26827 pod_ready.go:82] duration metric: took 400.554078ms for pod "kube-controller-manager-ha-091565" in "kube-system" namespace to be "Ready" ...
	I0918 20:03:00.383131   26827 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-091565-m02" in "kube-system" namespace to be "Ready" ...
	I0918 20:03:00.578098   26827 request.go:632] Waited for 194.899438ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-091565-m02
	I0918 20:03:00.578185   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-091565-m02
	I0918 20:03:00.578193   26827 round_trippers.go:469] Request Headers:
	I0918 20:03:00.578204   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:03:00.578210   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:03:00.581985   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:03:00.779051   26827 request.go:632] Waited for 196.416005ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:03:00.779104   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:03:00.779109   26827 round_trippers.go:469] Request Headers:
	I0918 20:03:00.779116   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:03:00.779121   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:03:00.782383   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:03:00.782978   26827 pod_ready.go:93] pod "kube-controller-manager-ha-091565-m02" in "kube-system" namespace has status "Ready":"True"
	I0918 20:03:00.782999   26827 pod_ready.go:82] duration metric: took 399.861964ms for pod "kube-controller-manager-ha-091565-m02" in "kube-system" namespace to be "Ready" ...
	I0918 20:03:00.783008   26827 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4wm6h" in "kube-system" namespace to be "Ready" ...
	I0918 20:03:00.978573   26827 request.go:632] Waited for 195.502032ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4wm6h
	I0918 20:03:00.978651   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4wm6h
	I0918 20:03:00.978672   26827 round_trippers.go:469] Request Headers:
	I0918 20:03:00.978683   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:03:00.978689   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:03:00.982275   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:03:01.178232   26827 request.go:632] Waited for 195.323029ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/nodes/ha-091565
	I0918 20:03:01.178304   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565
	I0918 20:03:01.178310   26827 round_trippers.go:469] Request Headers:
	I0918 20:03:01.178317   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:03:01.178320   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:03:01.181251   26827 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 20:03:01.181856   26827 pod_ready.go:93] pod "kube-proxy-4wm6h" in "kube-system" namespace has status "Ready":"True"
	I0918 20:03:01.181875   26827 pod_ready.go:82] duration metric: took 398.861474ms for pod "kube-proxy-4wm6h" in "kube-system" namespace to be "Ready" ...
	I0918 20:03:01.181884   26827 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-bxblp" in "kube-system" namespace to be "Ready" ...
	I0918 20:03:01.379020   26827 request.go:632] Waited for 197.061195ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bxblp
	I0918 20:03:01.379094   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bxblp
	I0918 20:03:01.379101   26827 round_trippers.go:469] Request Headers:
	I0918 20:03:01.379112   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:03:01.379117   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:03:01.384213   26827 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0918 20:03:01.578259   26827 request.go:632] Waited for 193.306434ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:03:01.578314   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:03:01.578319   26827 round_trippers.go:469] Request Headers:
	I0918 20:03:01.578326   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:03:01.578331   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:03:01.581837   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:03:01.582292   26827 pod_ready.go:93] pod "kube-proxy-bxblp" in "kube-system" namespace has status "Ready":"True"
	I0918 20:03:01.582308   26827 pod_ready.go:82] duration metric: took 400.4182ms for pod "kube-proxy-bxblp" in "kube-system" namespace to be "Ready" ...
	I0918 20:03:01.582315   26827 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-091565" in "kube-system" namespace to be "Ready" ...
	I0918 20:03:01.778453   26827 request.go:632] Waited for 196.055453ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-091565
	I0918 20:03:01.778506   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-091565
	I0918 20:03:01.778511   26827 round_trippers.go:469] Request Headers:
	I0918 20:03:01.778518   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:03:01.778522   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:03:01.782644   26827 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0918 20:03:01.978591   26827 request.go:632] Waited for 195.380537ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/nodes/ha-091565
	I0918 20:03:01.978678   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565
	I0918 20:03:01.978686   26827 round_trippers.go:469] Request Headers:
	I0918 20:03:01.978700   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:03:01.978707   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:03:01.982445   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:03:01.982967   26827 pod_ready.go:93] pod "kube-scheduler-ha-091565" in "kube-system" namespace has status "Ready":"True"
	I0918 20:03:01.982989   26827 pod_ready.go:82] duration metric: took 400.667605ms for pod "kube-scheduler-ha-091565" in "kube-system" namespace to be "Ready" ...
	I0918 20:03:01.982998   26827 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-091565-m02" in "kube-system" namespace to be "Ready" ...
	I0918 20:03:02.179055   26827 request.go:632] Waited for 195.997204ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-091565-m02
	I0918 20:03:02.179125   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-091565-m02
	I0918 20:03:02.179132   26827 round_trippers.go:469] Request Headers:
	I0918 20:03:02.179144   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:03:02.179150   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:03:02.182779   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:03:02.378680   26827 request.go:632] Waited for 195.344249ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:03:02.378732   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:03:02.378737   26827 round_trippers.go:469] Request Headers:
	I0918 20:03:02.378744   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:03:02.378749   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:03:02.387672   26827 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0918 20:03:02.388432   26827 pod_ready.go:93] pod "kube-scheduler-ha-091565-m02" in "kube-system" namespace has status "Ready":"True"
	I0918 20:03:02.388454   26827 pod_ready.go:82] duration metric: took 405.448688ms for pod "kube-scheduler-ha-091565-m02" in "kube-system" namespace to be "Ready" ...
	I0918 20:03:02.388468   26827 pod_ready.go:39] duration metric: took 3.205828816s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0918 20:03:02.388484   26827 api_server.go:52] waiting for apiserver process to appear ...
	I0918 20:03:02.388545   26827 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 20:03:02.403691   26827 api_server.go:72] duration metric: took 22.031762634s to wait for apiserver process to appear ...
	I0918 20:03:02.403716   26827 api_server.go:88] waiting for apiserver healthz status ...
	I0918 20:03:02.403738   26827 api_server.go:253] Checking apiserver healthz at https://192.168.39.215:8443/healthz ...
	I0918 20:03:02.408810   26827 api_server.go:279] https://192.168.39.215:8443/healthz returned 200:
	ok
	I0918 20:03:02.408891   26827 round_trippers.go:463] GET https://192.168.39.215:8443/version
	I0918 20:03:02.408903   26827 round_trippers.go:469] Request Headers:
	I0918 20:03:02.408914   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:03:02.408923   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:03:02.409886   26827 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0918 20:03:02.409963   26827 api_server.go:141] control plane version: v1.31.1
	I0918 20:03:02.409977   26827 api_server.go:131] duration metric: took 6.255647ms to wait for apiserver health ...
	I0918 20:03:02.409986   26827 system_pods.go:43] waiting for kube-system pods to appear ...
	I0918 20:03:02.578323   26827 request.go:632] Waited for 168.279427ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods
	I0918 20:03:02.578410   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods
	I0918 20:03:02.578421   26827 round_trippers.go:469] Request Headers:
	I0918 20:03:02.578429   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:03:02.578435   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:03:02.583311   26827 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0918 20:03:02.589108   26827 system_pods.go:59] 17 kube-system pods found
	I0918 20:03:02.589162   26827 system_pods.go:61] "coredns-7c65d6cfc9-8zcqk" [644e8147-96e9-41a1-99b8-d2de17e4798c] Running
	I0918 20:03:02.589168   26827 system_pods.go:61] "coredns-7c65d6cfc9-w97kk" [70428cd6-0523-44c8-89f3-62837b52ca80] Running
	I0918 20:03:02.589172   26827 system_pods.go:61] "etcd-ha-091565" [c5af3e98-c375-448c-9cac-6a83b115ca71] Running
	I0918 20:03:02.589176   26827 system_pods.go:61] "etcd-ha-091565-m02" [71e1c78e-6a11-4b46-baa2-83c98d666cee] Running
	I0918 20:03:02.589180   26827 system_pods.go:61] "kindnet-7fl5w" [5c3a9d82-3815-4aa1-8d04-14be25394dcf] Running
	I0918 20:03:02.589183   26827 system_pods.go:61] "kindnet-bzsqr" [bf1e6f1b-d3ad-439a-9b9a-882e6e989a56] Running
	I0918 20:03:02.589188   26827 system_pods.go:61] "kube-apiserver-ha-091565" [1c2ddbd9-3f78-415b-af21-1d43925382c5] Running
	I0918 20:03:02.589193   26827 system_pods.go:61] "kube-apiserver-ha-091565-m02" [b482ba6c-e415-4e3c-aa56-c17a4f3bcc13] Running
	I0918 20:03:02.589197   26827 system_pods.go:61] "kube-controller-manager-ha-091565" [3812cd1b-619d-4da8-b039-18f7adb53647] Running
	I0918 20:03:02.589206   26827 system_pods.go:61] "kube-controller-manager-ha-091565-m02" [2c3ecce6-b148-498b-b66e-e5c000f51940] Running
	I0918 20:03:02.589210   26827 system_pods.go:61] "kube-proxy-4wm6h" [d6904231-6f64-4447-9932-0cd5d692978b] Running
	I0918 20:03:02.589213   26827 system_pods.go:61] "kube-proxy-bxblp" [68143b05-afd1-409a-aaa6-8b3c6841bbfb] Running
	I0918 20:03:02.589217   26827 system_pods.go:61] "kube-scheduler-ha-091565" [ad2ac667-3328-4ad3-b736-c8c633140e2d] Running
	I0918 20:03:02.589222   26827 system_pods.go:61] "kube-scheduler-ha-091565-m02" [fd27db83-43f3-40a8-9ca7-5570f78d562b] Running
	I0918 20:03:02.589226   26827 system_pods.go:61] "kube-vip-ha-091565" [b3fccd60-ac88-4dda-b8bb-b1b8c45cbfe5] Running
	I0918 20:03:02.589233   26827 system_pods.go:61] "kube-vip-ha-091565-m02" [12ca12ad-6dee-4b50-9273-c48d8f06acf4] Running
	I0918 20:03:02.589236   26827 system_pods.go:61] "storage-provisioner" [b7dffb85-905b-4166-a680-34c77cf87d09] Running
	I0918 20:03:02.589247   26827 system_pods.go:74] duration metric: took 179.252102ms to wait for pod list to return data ...
	I0918 20:03:02.589258   26827 default_sa.go:34] waiting for default service account to be created ...
	I0918 20:03:02.778073   26827 request.go:632] Waited for 188.733447ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/namespaces/default/serviceaccounts
	I0918 20:03:02.778127   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/namespaces/default/serviceaccounts
	I0918 20:03:02.778132   26827 round_trippers.go:469] Request Headers:
	I0918 20:03:02.778141   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:03:02.778148   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:03:02.781930   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:03:02.782168   26827 default_sa.go:45] found service account: "default"
	I0918 20:03:02.782184   26827 default_sa.go:55] duration metric: took 192.91745ms for default service account to be created ...
	I0918 20:03:02.782192   26827 system_pods.go:116] waiting for k8s-apps to be running ...
	I0918 20:03:02.978682   26827 request.go:632] Waited for 196.414466ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods
	I0918 20:03:02.978755   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods
	I0918 20:03:02.978762   26827 round_trippers.go:469] Request Headers:
	I0918 20:03:02.978771   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:03:02.978775   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:03:02.983628   26827 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0918 20:03:02.989503   26827 system_pods.go:86] 17 kube-system pods found
	I0918 20:03:02.989531   26827 system_pods.go:89] "coredns-7c65d6cfc9-8zcqk" [644e8147-96e9-41a1-99b8-d2de17e4798c] Running
	I0918 20:03:02.989536   26827 system_pods.go:89] "coredns-7c65d6cfc9-w97kk" [70428cd6-0523-44c8-89f3-62837b52ca80] Running
	I0918 20:03:02.989540   26827 system_pods.go:89] "etcd-ha-091565" [c5af3e98-c375-448c-9cac-6a83b115ca71] Running
	I0918 20:03:02.989543   26827 system_pods.go:89] "etcd-ha-091565-m02" [71e1c78e-6a11-4b46-baa2-83c98d666cee] Running
	I0918 20:03:02.989547   26827 system_pods.go:89] "kindnet-7fl5w" [5c3a9d82-3815-4aa1-8d04-14be25394dcf] Running
	I0918 20:03:02.989550   26827 system_pods.go:89] "kindnet-bzsqr" [bf1e6f1b-d3ad-439a-9b9a-882e6e989a56] Running
	I0918 20:03:02.989555   26827 system_pods.go:89] "kube-apiserver-ha-091565" [1c2ddbd9-3f78-415b-af21-1d43925382c5] Running
	I0918 20:03:02.989558   26827 system_pods.go:89] "kube-apiserver-ha-091565-m02" [b482ba6c-e415-4e3c-aa56-c17a4f3bcc13] Running
	I0918 20:03:02.989562   26827 system_pods.go:89] "kube-controller-manager-ha-091565" [3812cd1b-619d-4da8-b039-18f7adb53647] Running
	I0918 20:03:02.989565   26827 system_pods.go:89] "kube-controller-manager-ha-091565-m02" [2c3ecce6-b148-498b-b66e-e5c000f51940] Running
	I0918 20:03:02.989568   26827 system_pods.go:89] "kube-proxy-4wm6h" [d6904231-6f64-4447-9932-0cd5d692978b] Running
	I0918 20:03:02.989571   26827 system_pods.go:89] "kube-proxy-bxblp" [68143b05-afd1-409a-aaa6-8b3c6841bbfb] Running
	I0918 20:03:02.989574   26827 system_pods.go:89] "kube-scheduler-ha-091565" [ad2ac667-3328-4ad3-b736-c8c633140e2d] Running
	I0918 20:03:02.989577   26827 system_pods.go:89] "kube-scheduler-ha-091565-m02" [fd27db83-43f3-40a8-9ca7-5570f78d562b] Running
	I0918 20:03:02.989580   26827 system_pods.go:89] "kube-vip-ha-091565" [b3fccd60-ac88-4dda-b8bb-b1b8c45cbfe5] Running
	I0918 20:03:02.989583   26827 system_pods.go:89] "kube-vip-ha-091565-m02" [12ca12ad-6dee-4b50-9273-c48d8f06acf4] Running
	I0918 20:03:02.989590   26827 system_pods.go:89] "storage-provisioner" [b7dffb85-905b-4166-a680-34c77cf87d09] Running
	I0918 20:03:02.989597   26827 system_pods.go:126] duration metric: took 207.397178ms to wait for k8s-apps to be running ...
	I0918 20:03:02.989610   26827 system_svc.go:44] waiting for kubelet service to be running ....
	I0918 20:03:02.989698   26827 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0918 20:03:03.003927   26827 system_svc.go:56] duration metric: took 14.306514ms WaitForService to wait for kubelet
	I0918 20:03:03.003954   26827 kubeadm.go:582] duration metric: took 22.632027977s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0918 20:03:03.003974   26827 node_conditions.go:102] verifying NodePressure condition ...
	I0918 20:03:03.179047   26827 request.go:632] Waited for 174.972185ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/nodes
	I0918 20:03:03.179141   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes
	I0918 20:03:03.179150   26827 round_trippers.go:469] Request Headers:
	I0918 20:03:03.179161   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:03:03.179171   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:03:03.183675   26827 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0918 20:03:03.184384   26827 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0918 20:03:03.184407   26827 node_conditions.go:123] node cpu capacity is 2
	I0918 20:03:03.184443   26827 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0918 20:03:03.184452   26827 node_conditions.go:123] node cpu capacity is 2
	I0918 20:03:03.184459   26827 node_conditions.go:105] duration metric: took 180.479849ms to run NodePressure ...
	I0918 20:03:03.184475   26827 start.go:241] waiting for startup goroutines ...
	I0918 20:03:03.184509   26827 start.go:255] writing updated cluster config ...
	I0918 20:03:03.186759   26827 out.go:201] 
	I0918 20:03:03.188291   26827 config.go:182] Loaded profile config "ha-091565": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0918 20:03:03.188401   26827 profile.go:143] Saving config to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/config.json ...
	I0918 20:03:03.189951   26827 out.go:177] * Starting "ha-091565-m03" control-plane node in "ha-091565" cluster
	I0918 20:03:03.191020   26827 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0918 20:03:03.191045   26827 cache.go:56] Caching tarball of preloaded images
	I0918 20:03:03.191138   26827 preload.go:172] Found /home/jenkins/minikube-integration/19667-7671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0918 20:03:03.191150   26827 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0918 20:03:03.191241   26827 profile.go:143] Saving config to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/config.json ...
	I0918 20:03:03.191410   26827 start.go:360] acquireMachinesLock for ha-091565-m03: {Name:mk1b0d68a0b7d9d4bc8204d5d81cb9eb77222526 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 20:03:03.191465   26827 start.go:364] duration metric: took 34.695µs to acquireMachinesLock for "ha-091565-m03"
	I0918 20:03:03.191486   26827 start.go:93] Provisioning new machine with config: &{Name:ha-091565 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-091565 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.215 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.92 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0918 20:03:03.191596   26827 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0918 20:03:03.193058   26827 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0918 20:03:03.193149   26827 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 20:03:03.193188   26827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 20:03:03.208171   26827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42195
	I0918 20:03:03.208580   26827 main.go:141] libmachine: () Calling .GetVersion
	I0918 20:03:03.209079   26827 main.go:141] libmachine: Using API Version  1
	I0918 20:03:03.209101   26827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 20:03:03.209382   26827 main.go:141] libmachine: () Calling .GetMachineName
	I0918 20:03:03.209530   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetMachineName
	I0918 20:03:03.209649   26827 main.go:141] libmachine: (ha-091565-m03) Calling .DriverName
	I0918 20:03:03.209778   26827 start.go:159] libmachine.API.Create for "ha-091565" (driver="kvm2")
	I0918 20:03:03.209809   26827 client.go:168] LocalClient.Create starting
	I0918 20:03:03.209839   26827 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem
	I0918 20:03:03.209872   26827 main.go:141] libmachine: Decoding PEM data...
	I0918 20:03:03.209887   26827 main.go:141] libmachine: Parsing certificate...
	I0918 20:03:03.209935   26827 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem
	I0918 20:03:03.209954   26827 main.go:141] libmachine: Decoding PEM data...
	I0918 20:03:03.209965   26827 main.go:141] libmachine: Parsing certificate...
	I0918 20:03:03.209982   26827 main.go:141] libmachine: Running pre-create checks...
	I0918 20:03:03.209989   26827 main.go:141] libmachine: (ha-091565-m03) Calling .PreCreateCheck
	I0918 20:03:03.210137   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetConfigRaw
	I0918 20:03:03.210522   26827 main.go:141] libmachine: Creating machine...
	I0918 20:03:03.210535   26827 main.go:141] libmachine: (ha-091565-m03) Calling .Create
	I0918 20:03:03.210656   26827 main.go:141] libmachine: (ha-091565-m03) Creating KVM machine...
	I0918 20:03:03.211861   26827 main.go:141] libmachine: (ha-091565-m03) DBG | found existing default KVM network
	I0918 20:03:03.212028   26827 main.go:141] libmachine: (ha-091565-m03) DBG | found existing private KVM network mk-ha-091565
	I0918 20:03:03.212185   26827 main.go:141] libmachine: (ha-091565-m03) Setting up store path in /home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565-m03 ...
	I0918 20:03:03.212211   26827 main.go:141] libmachine: (ha-091565-m03) Building disk image from file:///home/jenkins/minikube-integration/19667-7671/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso
	I0918 20:03:03.212251   26827 main.go:141] libmachine: (ha-091565-m03) DBG | I0918 20:03:03.212170   27609 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19667-7671/.minikube
	I0918 20:03:03.212315   26827 main.go:141] libmachine: (ha-091565-m03) Downloading /home/jenkins/minikube-integration/19667-7671/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19667-7671/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso...
	I0918 20:03:03.448950   26827 main.go:141] libmachine: (ha-091565-m03) DBG | I0918 20:03:03.448813   27609 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565-m03/id_rsa...
	I0918 20:03:03.656714   26827 main.go:141] libmachine: (ha-091565-m03) DBG | I0918 20:03:03.656571   27609 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565-m03/ha-091565-m03.rawdisk...
	I0918 20:03:03.656743   26827 main.go:141] libmachine: (ha-091565-m03) DBG | Writing magic tar header
	I0918 20:03:03.656757   26827 main.go:141] libmachine: (ha-091565-m03) DBG | Writing SSH key tar header
	I0918 20:03:03.656767   26827 main.go:141] libmachine: (ha-091565-m03) DBG | I0918 20:03:03.656684   27609 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565-m03 ...
	I0918 20:03:03.656796   26827 main.go:141] libmachine: (ha-091565-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565-m03
	I0918 20:03:03.656816   26827 main.go:141] libmachine: (ha-091565-m03) Setting executable bit set on /home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565-m03 (perms=drwx------)
	I0918 20:03:03.656843   26827 main.go:141] libmachine: (ha-091565-m03) Setting executable bit set on /home/jenkins/minikube-integration/19667-7671/.minikube/machines (perms=drwxr-xr-x)
	I0918 20:03:03.656855   26827 main.go:141] libmachine: (ha-091565-m03) Setting executable bit set on /home/jenkins/minikube-integration/19667-7671/.minikube (perms=drwxr-xr-x)
	I0918 20:03:03.656870   26827 main.go:141] libmachine: (ha-091565-m03) Setting executable bit set on /home/jenkins/minikube-integration/19667-7671 (perms=drwxrwxr-x)
	I0918 20:03:03.656884   26827 main.go:141] libmachine: (ha-091565-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0918 20:03:03.656898   26827 main.go:141] libmachine: (ha-091565-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19667-7671/.minikube/machines
	I0918 20:03:03.656911   26827 main.go:141] libmachine: (ha-091565-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19667-7671/.minikube
	I0918 20:03:03.656924   26827 main.go:141] libmachine: (ha-091565-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0918 20:03:03.656938   26827 main.go:141] libmachine: (ha-091565-m03) Creating domain...
	I0918 20:03:03.656953   26827 main.go:141] libmachine: (ha-091565-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19667-7671
	I0918 20:03:03.656966   26827 main.go:141] libmachine: (ha-091565-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0918 20:03:03.656984   26827 main.go:141] libmachine: (ha-091565-m03) DBG | Checking permissions on dir: /home/jenkins
	I0918 20:03:03.656999   26827 main.go:141] libmachine: (ha-091565-m03) DBG | Checking permissions on dir: /home
	I0918 20:03:03.657013   26827 main.go:141] libmachine: (ha-091565-m03) DBG | Skipping /home - not owner
	I0918 20:03:03.657931   26827 main.go:141] libmachine: (ha-091565-m03) define libvirt domain using xml: 
	I0918 20:03:03.657960   26827 main.go:141] libmachine: (ha-091565-m03) <domain type='kvm'>
	I0918 20:03:03.657971   26827 main.go:141] libmachine: (ha-091565-m03)   <name>ha-091565-m03</name>
	I0918 20:03:03.657985   26827 main.go:141] libmachine: (ha-091565-m03)   <memory unit='MiB'>2200</memory>
	I0918 20:03:03.657993   26827 main.go:141] libmachine: (ha-091565-m03)   <vcpu>2</vcpu>
	I0918 20:03:03.658002   26827 main.go:141] libmachine: (ha-091565-m03)   <features>
	I0918 20:03:03.658008   26827 main.go:141] libmachine: (ha-091565-m03)     <acpi/>
	I0918 20:03:03.658012   26827 main.go:141] libmachine: (ha-091565-m03)     <apic/>
	I0918 20:03:03.658017   26827 main.go:141] libmachine: (ha-091565-m03)     <pae/>
	I0918 20:03:03.658024   26827 main.go:141] libmachine: (ha-091565-m03)     
	I0918 20:03:03.658028   26827 main.go:141] libmachine: (ha-091565-m03)   </features>
	I0918 20:03:03.658035   26827 main.go:141] libmachine: (ha-091565-m03)   <cpu mode='host-passthrough'>
	I0918 20:03:03.658040   26827 main.go:141] libmachine: (ha-091565-m03)   
	I0918 20:03:03.658051   26827 main.go:141] libmachine: (ha-091565-m03)   </cpu>
	I0918 20:03:03.658072   26827 main.go:141] libmachine: (ha-091565-m03)   <os>
	I0918 20:03:03.658091   26827 main.go:141] libmachine: (ha-091565-m03)     <type>hvm</type>
	I0918 20:03:03.658100   26827 main.go:141] libmachine: (ha-091565-m03)     <boot dev='cdrom'/>
	I0918 20:03:03.658104   26827 main.go:141] libmachine: (ha-091565-m03)     <boot dev='hd'/>
	I0918 20:03:03.658112   26827 main.go:141] libmachine: (ha-091565-m03)     <bootmenu enable='no'/>
	I0918 20:03:03.658119   26827 main.go:141] libmachine: (ha-091565-m03)   </os>
	I0918 20:03:03.658127   26827 main.go:141] libmachine: (ha-091565-m03)   <devices>
	I0918 20:03:03.658137   26827 main.go:141] libmachine: (ha-091565-m03)     <disk type='file' device='cdrom'>
	I0918 20:03:03.658153   26827 main.go:141] libmachine: (ha-091565-m03)       <source file='/home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565-m03/boot2docker.iso'/>
	I0918 20:03:03.658166   26827 main.go:141] libmachine: (ha-091565-m03)       <target dev='hdc' bus='scsi'/>
	I0918 20:03:03.658176   26827 main.go:141] libmachine: (ha-091565-m03)       <readonly/>
	I0918 20:03:03.658181   26827 main.go:141] libmachine: (ha-091565-m03)     </disk>
	I0918 20:03:03.658187   26827 main.go:141] libmachine: (ha-091565-m03)     <disk type='file' device='disk'>
	I0918 20:03:03.658196   26827 main.go:141] libmachine: (ha-091565-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0918 20:03:03.658208   26827 main.go:141] libmachine: (ha-091565-m03)       <source file='/home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565-m03/ha-091565-m03.rawdisk'/>
	I0918 20:03:03.658218   26827 main.go:141] libmachine: (ha-091565-m03)       <target dev='hda' bus='virtio'/>
	I0918 20:03:03.658230   26827 main.go:141] libmachine: (ha-091565-m03)     </disk>
	I0918 20:03:03.658240   26827 main.go:141] libmachine: (ha-091565-m03)     <interface type='network'>
	I0918 20:03:03.658251   26827 main.go:141] libmachine: (ha-091565-m03)       <source network='mk-ha-091565'/>
	I0918 20:03:03.658261   26827 main.go:141] libmachine: (ha-091565-m03)       <model type='virtio'/>
	I0918 20:03:03.658268   26827 main.go:141] libmachine: (ha-091565-m03)     </interface>
	I0918 20:03:03.658277   26827 main.go:141] libmachine: (ha-091565-m03)     <interface type='network'>
	I0918 20:03:03.658286   26827 main.go:141] libmachine: (ha-091565-m03)       <source network='default'/>
	I0918 20:03:03.658301   26827 main.go:141] libmachine: (ha-091565-m03)       <model type='virtio'/>
	I0918 20:03:03.658313   26827 main.go:141] libmachine: (ha-091565-m03)     </interface>
	I0918 20:03:03.658320   26827 main.go:141] libmachine: (ha-091565-m03)     <serial type='pty'>
	I0918 20:03:03.658333   26827 main.go:141] libmachine: (ha-091565-m03)       <target port='0'/>
	I0918 20:03:03.658342   26827 main.go:141] libmachine: (ha-091565-m03)     </serial>
	I0918 20:03:03.658350   26827 main.go:141] libmachine: (ha-091565-m03)     <console type='pty'>
	I0918 20:03:03.658360   26827 main.go:141] libmachine: (ha-091565-m03)       <target type='serial' port='0'/>
	I0918 20:03:03.658368   26827 main.go:141] libmachine: (ha-091565-m03)     </console>
	I0918 20:03:03.658381   26827 main.go:141] libmachine: (ha-091565-m03)     <rng model='virtio'>
	I0918 20:03:03.658393   26827 main.go:141] libmachine: (ha-091565-m03)       <backend model='random'>/dev/random</backend>
	I0918 20:03:03.658402   26827 main.go:141] libmachine: (ha-091565-m03)     </rng>
	I0918 20:03:03.658410   26827 main.go:141] libmachine: (ha-091565-m03)     
	I0918 20:03:03.658418   26827 main.go:141] libmachine: (ha-091565-m03)     
	I0918 20:03:03.658425   26827 main.go:141] libmachine: (ha-091565-m03)   </devices>
	I0918 20:03:03.658434   26827 main.go:141] libmachine: (ha-091565-m03) </domain>
	I0918 20:03:03.658445   26827 main.go:141] libmachine: (ha-091565-m03) 
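The XML block above is the full libvirt domain definition the kvm2 driver submits before booting the VM. As a hedged illustration only (not minikube's actual driver code), defining and starting a domain from such an XML file with the libvirt Go bindings could look roughly like this; the file name is hypothetical, and the libvirt.org/go/libvirt package plus a local libvirtd reachable at qemu:///system are assumed:

package main

import (
	"log"
	"os"

	libvirt "libvirt.org/go/libvirt"
)

func main() {
	// Read a domain definition like the one logged above.
	xml, err := os.ReadFile("ha-091565-m03.xml") // hypothetical path
	if err != nil {
		log.Fatal(err)
	}

	// Same URI the driver uses (KVMQemuURI:qemu:///system).
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// Define the persistent domain, then boot it ("Creating domain...").
	dom, err := conn.DomainDefineXML(string(xml))
	if err != nil {
		log.Fatal(err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil {
		log.Fatal(err)
	}
	log.Println("domain defined and started")
}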
	I0918 20:03:03.665123   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined MAC address 52:54:00:28:9c:e9 in network default
	I0918 20:03:03.665651   26827 main.go:141] libmachine: (ha-091565-m03) Ensuring networks are active...
	I0918 20:03:03.665672   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:03.666384   26827 main.go:141] libmachine: (ha-091565-m03) Ensuring network default is active
	I0918 20:03:03.666733   26827 main.go:141] libmachine: (ha-091565-m03) Ensuring network mk-ha-091565 is active
	I0918 20:03:03.667154   26827 main.go:141] libmachine: (ha-091565-m03) Getting domain xml...
	I0918 20:03:03.668052   26827 main.go:141] libmachine: (ha-091565-m03) Creating domain...
	I0918 20:03:04.935268   26827 main.go:141] libmachine: (ha-091565-m03) Waiting to get IP...
	I0918 20:03:04.936028   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:04.936415   26827 main.go:141] libmachine: (ha-091565-m03) DBG | unable to find current IP address of domain ha-091565-m03 in network mk-ha-091565
	I0918 20:03:04.936435   26827 main.go:141] libmachine: (ha-091565-m03) DBG | I0918 20:03:04.936394   27609 retry.go:31] will retry after 190.945774ms: waiting for machine to come up
	I0918 20:03:05.128750   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:05.129236   26827 main.go:141] libmachine: (ha-091565-m03) DBG | unable to find current IP address of domain ha-091565-m03 in network mk-ha-091565
	I0918 20:03:05.129261   26827 main.go:141] libmachine: (ha-091565-m03) DBG | I0918 20:03:05.129196   27609 retry.go:31] will retry after 291.266146ms: waiting for machine to come up
	I0918 20:03:05.422550   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:05.423137   26827 main.go:141] libmachine: (ha-091565-m03) DBG | unable to find current IP address of domain ha-091565-m03 in network mk-ha-091565
	I0918 20:03:05.423170   26827 main.go:141] libmachine: (ha-091565-m03) DBG | I0918 20:03:05.423078   27609 retry.go:31] will retry after 371.409086ms: waiting for machine to come up
	I0918 20:03:05.795700   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:05.796222   26827 main.go:141] libmachine: (ha-091565-m03) DBG | unable to find current IP address of domain ha-091565-m03 in network mk-ha-091565
	I0918 20:03:05.796248   26827 main.go:141] libmachine: (ha-091565-m03) DBG | I0918 20:03:05.796182   27609 retry.go:31] will retry after 527.63812ms: waiting for machine to come up
	I0918 20:03:06.325912   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:06.326349   26827 main.go:141] libmachine: (ha-091565-m03) DBG | unable to find current IP address of domain ha-091565-m03 in network mk-ha-091565
	I0918 20:03:06.326379   26827 main.go:141] libmachine: (ha-091565-m03) DBG | I0918 20:03:06.326307   27609 retry.go:31] will retry after 471.938108ms: waiting for machine to come up
	I0918 20:03:06.799896   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:06.800358   26827 main.go:141] libmachine: (ha-091565-m03) DBG | unable to find current IP address of domain ha-091565-m03 in network mk-ha-091565
	I0918 20:03:06.800384   26827 main.go:141] libmachine: (ha-091565-m03) DBG | I0918 20:03:06.800288   27609 retry.go:31] will retry after 607.364821ms: waiting for machine to come up
	I0918 20:03:07.408959   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:07.409429   26827 main.go:141] libmachine: (ha-091565-m03) DBG | unable to find current IP address of domain ha-091565-m03 in network mk-ha-091565
	I0918 20:03:07.409459   26827 main.go:141] libmachine: (ha-091565-m03) DBG | I0918 20:03:07.409383   27609 retry.go:31] will retry after 864.680144ms: waiting for machine to come up
	I0918 20:03:08.275959   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:08.276377   26827 main.go:141] libmachine: (ha-091565-m03) DBG | unable to find current IP address of domain ha-091565-m03 in network mk-ha-091565
	I0918 20:03:08.276404   26827 main.go:141] libmachine: (ha-091565-m03) DBG | I0918 20:03:08.276319   27609 retry.go:31] will retry after 900.946411ms: waiting for machine to come up
	I0918 20:03:09.178488   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:09.178913   26827 main.go:141] libmachine: (ha-091565-m03) DBG | unable to find current IP address of domain ha-091565-m03 in network mk-ha-091565
	I0918 20:03:09.178936   26827 main.go:141] libmachine: (ha-091565-m03) DBG | I0918 20:03:09.178885   27609 retry.go:31] will retry after 1.803312814s: waiting for machine to come up
	I0918 20:03:10.983480   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:10.983921   26827 main.go:141] libmachine: (ha-091565-m03) DBG | unable to find current IP address of domain ha-091565-m03 in network mk-ha-091565
	I0918 20:03:10.983943   26827 main.go:141] libmachine: (ha-091565-m03) DBG | I0918 20:03:10.983874   27609 retry.go:31] will retry after 2.318003161s: waiting for machine to come up
	I0918 20:03:13.303826   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:13.304364   26827 main.go:141] libmachine: (ha-091565-m03) DBG | unable to find current IP address of domain ha-091565-m03 in network mk-ha-091565
	I0918 20:03:13.304389   26827 main.go:141] libmachine: (ha-091565-m03) DBG | I0918 20:03:13.304319   27609 retry.go:31] will retry after 2.309847279s: waiting for machine to come up
	I0918 20:03:15.615522   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:15.616142   26827 main.go:141] libmachine: (ha-091565-m03) DBG | unable to find current IP address of domain ha-091565-m03 in network mk-ha-091565
	I0918 20:03:15.616170   26827 main.go:141] libmachine: (ha-091565-m03) DBG | I0918 20:03:15.616108   27609 retry.go:31] will retry after 2.559399773s: waiting for machine to come up
	I0918 20:03:18.176689   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:18.177086   26827 main.go:141] libmachine: (ha-091565-m03) DBG | unable to find current IP address of domain ha-091565-m03 in network mk-ha-091565
	I0918 20:03:18.177108   26827 main.go:141] libmachine: (ha-091565-m03) DBG | I0918 20:03:18.177044   27609 retry.go:31] will retry after 4.502260419s: waiting for machine to come up
	I0918 20:03:22.681016   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:22.681368   26827 main.go:141] libmachine: (ha-091565-m03) DBG | unable to find current IP address of domain ha-091565-m03 in network mk-ha-091565
	I0918 20:03:22.681391   26827 main.go:141] libmachine: (ha-091565-m03) DBG | I0918 20:03:22.681330   27609 retry.go:31] will retry after 3.82668599s: waiting for machine to come up
	I0918 20:03:26.510988   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:26.511503   26827 main.go:141] libmachine: (ha-091565-m03) Found IP for machine: 192.168.39.53
	I0918 20:03:26.511523   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has current primary IP address 192.168.39.53 and MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:26.511529   26827 main.go:141] libmachine: (ha-091565-m03) Reserving static IP address...
	I0918 20:03:26.511838   26827 main.go:141] libmachine: (ha-091565-m03) DBG | unable to find host DHCP lease matching {name: "ha-091565-m03", mac: "52:54:00:7c:50:95", ip: "192.168.39.53"} in network mk-ha-091565
	I0918 20:03:26.588090   26827 main.go:141] libmachine: (ha-091565-m03) Reserved static IP address: 192.168.39.53
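The repeated "will retry after …" lines above show the driver polling the network's DHCP leases with a growing delay until the guest reports an address; only then is the lease pinned as a static reservation. A minimal sketch of that polling pattern follows; the lookupIP helper is hypothetical, standing in for the driver's DHCP-lease query:

package main

import (
	"errors"
	"fmt"
	"time"
)

// lookupIP is a hypothetical stand-in for querying the mk-ha-091565
// network's DHCP leases for the domain's MAC address.
func lookupIP(mac string) (string, error) {
	return "", errors.New("no lease yet")
}

func waitForIP(mac string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(mac); err == nil {
			return ip, nil
		}
		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
		time.Sleep(delay)
		if delay < 5*time.Second {
			delay = delay * 3 / 2 // grow the delay, roughly as in the log above
		}
	}
	return "", fmt.Errorf("timed out waiting for IP of %s", mac)
}

func main() {
	if ip, err := waitForIP("52:54:00:7c:50:95", 2*time.Second); err != nil {
		fmt.Println(err)
	} else {
		fmt.Println("found IP:", ip)
	}
}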
	I0918 20:03:26.588125   26827 main.go:141] libmachine: (ha-091565-m03) Waiting for SSH to be available...
	I0918 20:03:26.588134   26827 main.go:141] libmachine: (ha-091565-m03) DBG | Getting to WaitForSSH function...
	I0918 20:03:26.590288   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:26.590706   26827 main.go:141] libmachine: (ha-091565-m03) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:7c:50:95", ip: ""} in network mk-ha-091565
	I0918 20:03:26.590731   26827 main.go:141] libmachine: (ha-091565-m03) DBG | unable to find defined IP address of network mk-ha-091565 interface with MAC address 52:54:00:7c:50:95
	I0918 20:03:26.590858   26827 main.go:141] libmachine: (ha-091565-m03) DBG | Using SSH client type: external
	I0918 20:03:26.590882   26827 main.go:141] libmachine: (ha-091565-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565-m03/id_rsa (-rw-------)
	I0918 20:03:26.590920   26827 main.go:141] libmachine: (ha-091565-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0918 20:03:26.590933   26827 main.go:141] libmachine: (ha-091565-m03) DBG | About to run SSH command:
	I0918 20:03:26.590946   26827 main.go:141] libmachine: (ha-091565-m03) DBG | exit 0
	I0918 20:03:26.594686   26827 main.go:141] libmachine: (ha-091565-m03) DBG | SSH cmd err, output: exit status 255: 
	I0918 20:03:26.594715   26827 main.go:141] libmachine: (ha-091565-m03) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0918 20:03:26.594726   26827 main.go:141] libmachine: (ha-091565-m03) DBG | command : exit 0
	I0918 20:03:26.594733   26827 main.go:141] libmachine: (ha-091565-m03) DBG | err     : exit status 255
	I0918 20:03:26.594744   26827 main.go:141] libmachine: (ha-091565-m03) DBG | output  : 
	I0918 20:03:29.596158   26827 main.go:141] libmachine: (ha-091565-m03) DBG | Getting to WaitForSSH function...
	I0918 20:03:29.598576   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:29.598871   26827 main.go:141] libmachine: (ha-091565-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:50:95", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:03:17 +0000 UTC Type:0 Mac:52:54:00:7c:50:95 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-091565-m03 Clientid:01:52:54:00:7c:50:95}
	I0918 20:03:29.598894   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:29.599022   26827 main.go:141] libmachine: (ha-091565-m03) DBG | Using SSH client type: external
	I0918 20:03:29.599043   26827 main.go:141] libmachine: (ha-091565-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565-m03/id_rsa (-rw-------)
	I0918 20:03:29.599071   26827 main.go:141] libmachine: (ha-091565-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.53 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0918 20:03:29.599088   26827 main.go:141] libmachine: (ha-091565-m03) DBG | About to run SSH command:
	I0918 20:03:29.599104   26827 main.go:141] libmachine: (ha-091565-m03) DBG | exit 0
	I0918 20:03:29.719912   26827 main.go:141] libmachine: (ha-091565-m03) DBG | SSH cmd err, output: <nil>: 
	I0918 20:03:29.720164   26827 main.go:141] libmachine: (ha-091565-m03) KVM machine creation complete!
	I0918 20:03:29.720484   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetConfigRaw
	I0918 20:03:29.720974   26827 main.go:141] libmachine: (ha-091565-m03) Calling .DriverName
	I0918 20:03:29.721178   26827 main.go:141] libmachine: (ha-091565-m03) Calling .DriverName
	I0918 20:03:29.721342   26827 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0918 20:03:29.721355   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetState
	I0918 20:03:29.722748   26827 main.go:141] libmachine: Detecting operating system of created instance...
	I0918 20:03:29.722760   26827 main.go:141] libmachine: Waiting for SSH to be available...
	I0918 20:03:29.722765   26827 main.go:141] libmachine: Getting to WaitForSSH function...
	I0918 20:03:29.722771   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHHostname
	I0918 20:03:29.725146   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:29.725535   26827 main.go:141] libmachine: (ha-091565-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:50:95", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:03:17 +0000 UTC Type:0 Mac:52:54:00:7c:50:95 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-091565-m03 Clientid:01:52:54:00:7c:50:95}
	I0918 20:03:29.725560   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:29.725856   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHPort
	I0918 20:03:29.726005   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHKeyPath
	I0918 20:03:29.726172   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHKeyPath
	I0918 20:03:29.726341   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHUsername
	I0918 20:03:29.726485   26827 main.go:141] libmachine: Using SSH client type: native
	I0918 20:03:29.726681   26827 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I0918 20:03:29.726692   26827 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0918 20:03:29.823579   26827 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0918 20:03:29.823600   26827 main.go:141] libmachine: Detecting the provisioner...
	I0918 20:03:29.823610   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHHostname
	I0918 20:03:29.826127   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:29.826487   26827 main.go:141] libmachine: (ha-091565-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:50:95", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:03:17 +0000 UTC Type:0 Mac:52:54:00:7c:50:95 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-091565-m03 Clientid:01:52:54:00:7c:50:95}
	I0918 20:03:29.826524   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:29.826650   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHPort
	I0918 20:03:29.826822   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHKeyPath
	I0918 20:03:29.826946   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHKeyPath
	I0918 20:03:29.827049   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHUsername
	I0918 20:03:29.827203   26827 main.go:141] libmachine: Using SSH client type: native
	I0918 20:03:29.827417   26827 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I0918 20:03:29.827434   26827 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0918 20:03:29.932519   26827 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0918 20:03:29.932589   26827 main.go:141] libmachine: found compatible host: buildroot
	I0918 20:03:29.932601   26827 main.go:141] libmachine: Provisioning with buildroot...
	I0918 20:03:29.932612   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetMachineName
	I0918 20:03:29.932841   26827 buildroot.go:166] provisioning hostname "ha-091565-m03"
	I0918 20:03:29.932860   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetMachineName
	I0918 20:03:29.933042   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHHostname
	I0918 20:03:29.935764   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:29.936201   26827 main.go:141] libmachine: (ha-091565-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:50:95", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:03:17 +0000 UTC Type:0 Mac:52:54:00:7c:50:95 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-091565-m03 Clientid:01:52:54:00:7c:50:95}
	I0918 20:03:29.936227   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:29.936365   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHPort
	I0918 20:03:29.936539   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHKeyPath
	I0918 20:03:29.936695   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHKeyPath
	I0918 20:03:29.936848   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHUsername
	I0918 20:03:29.937078   26827 main.go:141] libmachine: Using SSH client type: native
	I0918 20:03:29.937287   26827 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I0918 20:03:29.937301   26827 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-091565-m03 && echo "ha-091565-m03" | sudo tee /etc/hostname
	I0918 20:03:30.050382   26827 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-091565-m03
	
	I0918 20:03:30.050410   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHHostname
	I0918 20:03:30.053336   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:30.053858   26827 main.go:141] libmachine: (ha-091565-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:50:95", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:03:17 +0000 UTC Type:0 Mac:52:54:00:7c:50:95 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-091565-m03 Clientid:01:52:54:00:7c:50:95}
	I0918 20:03:30.053888   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:30.054088   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHPort
	I0918 20:03:30.054256   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHKeyPath
	I0918 20:03:30.054372   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHKeyPath
	I0918 20:03:30.054537   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHUsername
	I0918 20:03:30.054678   26827 main.go:141] libmachine: Using SSH client type: native
	I0918 20:03:30.054886   26827 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I0918 20:03:30.054906   26827 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-091565-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-091565-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-091565-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0918 20:03:30.160725   26827 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0918 20:03:30.160756   26827 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19667-7671/.minikube CaCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19667-7671/.minikube}
	I0918 20:03:30.160770   26827 buildroot.go:174] setting up certificates
	I0918 20:03:30.160779   26827 provision.go:84] configureAuth start
	I0918 20:03:30.160787   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetMachineName
	I0918 20:03:30.161095   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetIP
	I0918 20:03:30.164061   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:30.164503   26827 main.go:141] libmachine: (ha-091565-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:50:95", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:03:17 +0000 UTC Type:0 Mac:52:54:00:7c:50:95 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-091565-m03 Clientid:01:52:54:00:7c:50:95}
	I0918 20:03:30.164540   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:30.164704   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHHostname
	I0918 20:03:30.167047   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:30.167370   26827 main.go:141] libmachine: (ha-091565-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:50:95", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:03:17 +0000 UTC Type:0 Mac:52:54:00:7c:50:95 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-091565-m03 Clientid:01:52:54:00:7c:50:95}
	I0918 20:03:30.167392   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:30.167538   26827 provision.go:143] copyHostCerts
	I0918 20:03:30.167573   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem
	I0918 20:03:30.167622   26827 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem, removing ...
	I0918 20:03:30.167633   26827 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem
	I0918 20:03:30.167703   26827 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem (1123 bytes)
	I0918 20:03:30.167779   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem
	I0918 20:03:30.167796   26827 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem, removing ...
	I0918 20:03:30.167812   26827 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem
	I0918 20:03:30.167845   26827 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem (1679 bytes)
	I0918 20:03:30.167891   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem
	I0918 20:03:30.167910   26827 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem, removing ...
	I0918 20:03:30.167916   26827 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem
	I0918 20:03:30.167937   26827 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem (1082 bytes)
	I0918 20:03:30.167986   26827 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem org=jenkins.ha-091565-m03 san=[127.0.0.1 192.168.39.53 ha-091565-m03 localhost minikube]
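The san=[...] list above is what ends up in machines/server.pem: loopback, the node's IP, its hostname, localhost and minikube. A self-contained, hedged sketch of issuing such a server certificate from a CA with Go's crypto/x509 (the CA is generated inline here for brevity; in the real flow it is loaded from certs/ca.pem and certs/ca-key.pem):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"log"
	"math/big"
	"net"
	"time"
)

func main() {
	// Throwaway CA; minikube instead loads its existing minikubeCA material.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, err := x509.ParseCertificate(caDER)
	if err != nil {
		log.Fatal(err)
	}

	// Server certificate carrying the SANs seen in the log line above.
	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-091565-m03"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration:26280h0m0s
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-091565-m03", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.53")},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("issued server cert (%d DER bytes), signed by %s", len(srvDER), caCert.Subject.CommonName)
}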
	I0918 20:03:30.213280   26827 provision.go:177] copyRemoteCerts
	I0918 20:03:30.213334   26827 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0918 20:03:30.213360   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHHostname
	I0918 20:03:30.215750   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:30.216074   26827 main.go:141] libmachine: (ha-091565-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:50:95", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:03:17 +0000 UTC Type:0 Mac:52:54:00:7c:50:95 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-091565-m03 Clientid:01:52:54:00:7c:50:95}
	I0918 20:03:30.216102   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:30.216270   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHPort
	I0918 20:03:30.216448   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHKeyPath
	I0918 20:03:30.216580   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHUsername
	I0918 20:03:30.216699   26827 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565-m03/id_rsa Username:docker}
	I0918 20:03:30.298100   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0918 20:03:30.298182   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0918 20:03:30.322613   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0918 20:03:30.322696   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0918 20:03:30.345951   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0918 20:03:30.346039   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0918 20:03:30.368781   26827 provision.go:87] duration metric: took 207.991221ms to configureAuth
	I0918 20:03:30.368806   26827 buildroot.go:189] setting minikube options for container-runtime
	I0918 20:03:30.369006   26827 config.go:182] Loaded profile config "ha-091565": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0918 20:03:30.369075   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHHostname
	I0918 20:03:30.372054   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:30.372443   26827 main.go:141] libmachine: (ha-091565-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:50:95", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:03:17 +0000 UTC Type:0 Mac:52:54:00:7c:50:95 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-091565-m03 Clientid:01:52:54:00:7c:50:95}
	I0918 20:03:30.372472   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:30.372725   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHPort
	I0918 20:03:30.372907   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHKeyPath
	I0918 20:03:30.373069   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHKeyPath
	I0918 20:03:30.373164   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHUsername
	I0918 20:03:30.373299   26827 main.go:141] libmachine: Using SSH client type: native
	I0918 20:03:30.373493   26827 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I0918 20:03:30.373508   26827 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0918 20:03:30.578858   26827 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0918 20:03:30.578882   26827 main.go:141] libmachine: Checking connection to Docker...
	I0918 20:03:30.578892   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetURL
	I0918 20:03:30.580144   26827 main.go:141] libmachine: (ha-091565-m03) DBG | Using libvirt version 6000000
	I0918 20:03:30.582476   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:30.582791   26827 main.go:141] libmachine: (ha-091565-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:50:95", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:03:17 +0000 UTC Type:0 Mac:52:54:00:7c:50:95 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-091565-m03 Clientid:01:52:54:00:7c:50:95}
	I0918 20:03:30.582820   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:30.582956   26827 main.go:141] libmachine: Docker is up and running!
	I0918 20:03:30.582970   26827 main.go:141] libmachine: Reticulating splines...
	I0918 20:03:30.582978   26827 client.go:171] duration metric: took 27.373159137s to LocalClient.Create
	I0918 20:03:30.583008   26827 start.go:167] duration metric: took 27.373230204s to libmachine.API.Create "ha-091565"
	I0918 20:03:30.583021   26827 start.go:293] postStartSetup for "ha-091565-m03" (driver="kvm2")
	I0918 20:03:30.583039   26827 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0918 20:03:30.583062   26827 main.go:141] libmachine: (ha-091565-m03) Calling .DriverName
	I0918 20:03:30.583373   26827 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0918 20:03:30.583399   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHHostname
	I0918 20:03:30.585622   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:30.585919   26827 main.go:141] libmachine: (ha-091565-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:50:95", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:03:17 +0000 UTC Type:0 Mac:52:54:00:7c:50:95 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-091565-m03 Clientid:01:52:54:00:7c:50:95}
	I0918 20:03:30.585944   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:30.586091   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHPort
	I0918 20:03:30.586267   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHKeyPath
	I0918 20:03:30.586429   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHUsername
	I0918 20:03:30.586561   26827 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565-m03/id_rsa Username:docker}
	I0918 20:03:30.666586   26827 ssh_runner.go:195] Run: cat /etc/os-release
	I0918 20:03:30.670835   26827 info.go:137] Remote host: Buildroot 2023.02.9
	I0918 20:03:30.670865   26827 filesync.go:126] Scanning /home/jenkins/minikube-integration/19667-7671/.minikube/addons for local assets ...
	I0918 20:03:30.670930   26827 filesync.go:126] Scanning /home/jenkins/minikube-integration/19667-7671/.minikube/files for local assets ...
	I0918 20:03:30.671001   26827 filesync.go:149] local asset: /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem -> 148782.pem in /etc/ssl/certs
	I0918 20:03:30.671010   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem -> /etc/ssl/certs/148782.pem
	I0918 20:03:30.671101   26827 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0918 20:03:30.680354   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem --> /etc/ssl/certs/148782.pem (1708 bytes)
	I0918 20:03:30.703833   26827 start.go:296] duration metric: took 120.797692ms for postStartSetup
	I0918 20:03:30.703888   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetConfigRaw
	I0918 20:03:30.704508   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetIP
	I0918 20:03:30.707440   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:30.707936   26827 main.go:141] libmachine: (ha-091565-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:50:95", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:03:17 +0000 UTC Type:0 Mac:52:54:00:7c:50:95 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-091565-m03 Clientid:01:52:54:00:7c:50:95}
	I0918 20:03:30.707965   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:30.708291   26827 profile.go:143] Saving config to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/config.json ...
	I0918 20:03:30.708542   26827 start.go:128] duration metric: took 27.516932332s to createHost
	I0918 20:03:30.708573   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHHostname
	I0918 20:03:30.711228   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:30.711630   26827 main.go:141] libmachine: (ha-091565-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:50:95", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:03:17 +0000 UTC Type:0 Mac:52:54:00:7c:50:95 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-091565-m03 Clientid:01:52:54:00:7c:50:95}
	I0918 20:03:30.711656   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:30.711872   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHPort
	I0918 20:03:30.712061   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHKeyPath
	I0918 20:03:30.712192   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHKeyPath
	I0918 20:03:30.712327   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHUsername
	I0918 20:03:30.712477   26827 main.go:141] libmachine: Using SSH client type: native
	I0918 20:03:30.712684   26827 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I0918 20:03:30.712697   26827 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0918 20:03:30.812539   26827 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726689810.794368232
	
	I0918 20:03:30.812561   26827 fix.go:216] guest clock: 1726689810.794368232
	I0918 20:03:30.812570   26827 fix.go:229] Guest: 2024-09-18 20:03:30.794368232 +0000 UTC Remote: 2024-09-18 20:03:30.708558501 +0000 UTC m=+153.103283397 (delta=85.809731ms)
	I0918 20:03:30.812588   26827 fix.go:200] guest clock delta is within tolerance: 85.809731ms
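The fix.go lines above compare the guest's clock (read over SSH with `date +%s.%N`) against the host's reference time and only resync when the delta exceeds a tolerance. A trivial sketch of that check, using the two timestamps from the log (the one-second tolerance is a hypothetical value, not the real threshold):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Guest: 1726689810.794368232, Remote: 2024-09-18 20:03:30.708558501 UTC (from the log above).
	guest := time.Unix(1726689810, 794368232).UTC()
	remote := time.Date(2024, 9, 18, 20, 3, 30, 708558501, time.UTC)

	delta := guest.Sub(remote)
	if delta < 0 {
		delta = -delta
	}

	const tolerance = time.Second // hypothetical threshold
	if delta <= tolerance {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta) // prints ~85.809731ms here
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, would resync the guest clock\n", delta)
	}
}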
	I0918 20:03:30.812595   26827 start.go:83] releasing machines lock for "ha-091565-m03", held for 27.621119617s
	I0918 20:03:30.812619   26827 main.go:141] libmachine: (ha-091565-m03) Calling .DriverName
	I0918 20:03:30.812898   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetIP
	I0918 20:03:30.815402   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:30.815769   26827 main.go:141] libmachine: (ha-091565-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:50:95", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:03:17 +0000 UTC Type:0 Mac:52:54:00:7c:50:95 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-091565-m03 Clientid:01:52:54:00:7c:50:95}
	I0918 20:03:30.815791   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:30.817414   26827 out.go:177] * Found network options:
	I0918 20:03:30.818426   26827 out.go:177]   - NO_PROXY=192.168.39.215,192.168.39.92
	W0918 20:03:30.819353   26827 proxy.go:119] fail to check proxy env: Error ip not in block
	W0918 20:03:30.819370   26827 proxy.go:119] fail to check proxy env: Error ip not in block
	I0918 20:03:30.819384   26827 main.go:141] libmachine: (ha-091565-m03) Calling .DriverName
	I0918 20:03:30.820044   26827 main.go:141] libmachine: (ha-091565-m03) Calling .DriverName
	I0918 20:03:30.820235   26827 main.go:141] libmachine: (ha-091565-m03) Calling .DriverName
	I0918 20:03:30.820315   26827 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0918 20:03:30.820362   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHHostname
	W0918 20:03:30.820405   26827 proxy.go:119] fail to check proxy env: Error ip not in block
	W0918 20:03:30.820438   26827 proxy.go:119] fail to check proxy env: Error ip not in block
	I0918 20:03:30.820512   26827 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0918 20:03:30.820534   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHHostname
	I0918 20:03:30.823394   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:30.823660   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:30.823821   26827 main.go:141] libmachine: (ha-091565-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:50:95", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:03:17 +0000 UTC Type:0 Mac:52:54:00:7c:50:95 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-091565-m03 Clientid:01:52:54:00:7c:50:95}
	I0918 20:03:30.823857   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:30.824042   26827 main.go:141] libmachine: (ha-091565-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:50:95", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:03:17 +0000 UTC Type:0 Mac:52:54:00:7c:50:95 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-091565-m03 Clientid:01:52:54:00:7c:50:95}
	I0918 20:03:30.824069   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:30.824075   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHPort
	I0918 20:03:30.824246   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHPort
	I0918 20:03:30.824249   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHKeyPath
	I0918 20:03:30.824447   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHKeyPath
	I0918 20:03:30.824451   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHUsername
	I0918 20:03:30.824629   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHUsername
	I0918 20:03:30.824648   26827 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565-m03/id_rsa Username:docker}
	I0918 20:03:30.824774   26827 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565-m03/id_rsa Username:docker}
	I0918 20:03:31.051973   26827 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0918 20:03:31.057939   26827 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0918 20:03:31.058015   26827 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0918 20:03:31.075034   26827 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0918 20:03:31.075060   26827 start.go:495] detecting cgroup driver to use...
	I0918 20:03:31.075137   26827 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0918 20:03:31.091617   26827 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0918 20:03:31.105746   26827 docker.go:217] disabling cri-docker service (if available) ...
	I0918 20:03:31.105817   26827 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0918 20:03:31.120080   26827 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0918 20:03:31.134004   26827 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0918 20:03:31.254184   26827 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0918 20:03:31.414257   26827 docker.go:233] disabling docker service ...
	I0918 20:03:31.414322   26827 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0918 20:03:31.428960   26827 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0918 20:03:31.442338   26827 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0918 20:03:31.584328   26827 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0918 20:03:31.721005   26827 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0918 20:03:31.735675   26827 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0918 20:03:31.753606   26827 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0918 20:03:31.753676   26827 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 20:03:31.764390   26827 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0918 20:03:31.764453   26827 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 20:03:31.775371   26827 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 20:03:31.786080   26827 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 20:03:31.797003   26827 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0918 20:03:31.807848   26827 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 20:03:31.821134   26827 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 20:03:31.840511   26827 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
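	(For orientation: the sed commands above edit the CRI-O drop-in in place. A minimal sketch of what /etc/crio/crio.conf.d/02-crio.conf would contain after these edits, reconstructed only from the sed expressions shown here and not from the actual file on the node, which may carry additional settings:)
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10"
	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]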
	I0918 20:03:31.851912   26827 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0918 20:03:31.861895   26827 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0918 20:03:31.861971   26827 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0918 20:03:31.875783   26827 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
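	(The two commands above load br_netfilter so bridged pod traffic is seen by iptables and enable IPv4 forwarding. A hedged programmatic equivalent of the ip_forward write, not minikube's own code:)
	package main

	import "os"

	func main() {
		// Equivalent of `sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"`:
		// enable IPv4 forwarding so pod traffic can be routed between interfaces.
		if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644); err != nil {
			panic(err)
		}
	}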
	I0918 20:03:31.887581   26827 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 20:03:32.009173   26827 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0918 20:03:32.097676   26827 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0918 20:03:32.097742   26827 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0918 20:03:32.102640   26827 start.go:563] Will wait 60s for crictl version
	I0918 20:03:32.102696   26827 ssh_runner.go:195] Run: which crictl
	I0918 20:03:32.106231   26827 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0918 20:03:32.142182   26827 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0918 20:03:32.142270   26827 ssh_runner.go:195] Run: crio --version
	I0918 20:03:32.169659   26827 ssh_runner.go:195] Run: crio --version
	I0918 20:03:32.199737   26827 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0918 20:03:32.201225   26827 out.go:177]   - env NO_PROXY=192.168.39.215
	I0918 20:03:32.202507   26827 out.go:177]   - env NO_PROXY=192.168.39.215,192.168.39.92
	I0918 20:03:32.203714   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetIP
	I0918 20:03:32.206442   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:32.206810   26827 main.go:141] libmachine: (ha-091565-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:50:95", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:03:17 +0000 UTC Type:0 Mac:52:54:00:7c:50:95 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-091565-m03 Clientid:01:52:54:00:7c:50:95}
	I0918 20:03:32.206850   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:32.207043   26827 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0918 20:03:32.211258   26827 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0918 20:03:32.223734   26827 mustload.go:65] Loading cluster: ha-091565
	I0918 20:03:32.224039   26827 config.go:182] Loaded profile config "ha-091565": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0918 20:03:32.224319   26827 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 20:03:32.224365   26827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 20:03:32.239611   26827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37387
	I0918 20:03:32.240066   26827 main.go:141] libmachine: () Calling .GetVersion
	I0918 20:03:32.240552   26827 main.go:141] libmachine: Using API Version  1
	I0918 20:03:32.240576   26827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 20:03:32.240920   26827 main.go:141] libmachine: () Calling .GetMachineName
	I0918 20:03:32.241082   26827 main.go:141] libmachine: (ha-091565) Calling .GetState
	I0918 20:03:32.242720   26827 host.go:66] Checking if "ha-091565" exists ...
	I0918 20:03:32.243009   26827 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 20:03:32.243043   26827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 20:03:32.258246   26827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40335
	I0918 20:03:32.258705   26827 main.go:141] libmachine: () Calling .GetVersion
	I0918 20:03:32.259124   26827 main.go:141] libmachine: Using API Version  1
	I0918 20:03:32.259146   26827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 20:03:32.259417   26827 main.go:141] libmachine: () Calling .GetMachineName
	I0918 20:03:32.259553   26827 main.go:141] libmachine: (ha-091565) Calling .DriverName
	I0918 20:03:32.259662   26827 certs.go:68] Setting up /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565 for IP: 192.168.39.53
	I0918 20:03:32.259671   26827 certs.go:194] generating shared ca certs ...
	I0918 20:03:32.259683   26827 certs.go:226] acquiring lock for ca certs: {Name:mkf54e0e71ed884c4372f9d3d4cb308ea4600185 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 20:03:32.259810   26827 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.key
	I0918 20:03:32.259850   26827 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.key
	I0918 20:03:32.259860   26827 certs.go:256] generating profile certs ...
	I0918 20:03:32.259928   26827 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/client.key
	I0918 20:03:32.259953   26827 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.key.4abf4119
	I0918 20:03:32.259967   26827 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.crt.4abf4119 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.215 192.168.39.92 192.168.39.53 192.168.39.254]
	I0918 20:03:32.391787   26827 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.crt.4abf4119 ...
	I0918 20:03:32.391818   26827 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.crt.4abf4119: {Name:mkb34973ffb4d10e1c252f20090951c99d9a8a6d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 20:03:32.392002   26827 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.key.4abf4119 ...
	I0918 20:03:32.392039   26827 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.key.4abf4119: {Name:mk8dda3654eb1370812c69b5ca23990ee4bb5898 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 20:03:32.392142   26827 certs.go:381] copying /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.crt.4abf4119 -> /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.crt
	I0918 20:03:32.392302   26827 certs.go:385] copying /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.key.4abf4119 -> /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.key
	I0918 20:03:32.392476   26827 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/proxy-client.key
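	(The apiserver certificate generated above must carry every address a client may dial, including the kube-vip VIP 192.168.39.254. A self-contained sketch of issuing such a certificate with Go's crypto/x509, using the IP SANs listed in the log; it is self-signed here for brevity, whereas minikube signs with its cluster CA and uses its own helpers rather than this snippet:)
	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// IP SANs copied from the log: service VIP, localhost, the three
		// control-plane node IPs, and the kube-vip address.
		sans := []net.IP{
			net.ParseIP("10.96.0.1"),
			net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.215"),
			net.ParseIP("192.168.39.92"),
			net.ParseIP("192.168.39.53"),
			net.ParseIP("192.168.39.254"),
		}

		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}

		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses:  sans,
		}

		// Self-signed for the sketch; a real apiserver cert is signed by the cluster CA.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}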
	I0918 20:03:32.392495   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0918 20:03:32.392514   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0918 20:03:32.392532   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0918 20:03:32.392556   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0918 20:03:32.392573   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0918 20:03:32.392588   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0918 20:03:32.392606   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0918 20:03:32.416080   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0918 20:03:32.416180   26827 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878.pem (1338 bytes)
	W0918 20:03:32.416223   26827 certs.go:480] ignoring /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878_empty.pem, impossibly tiny 0 bytes
	I0918 20:03:32.416236   26827 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem (1675 bytes)
	I0918 20:03:32.416259   26827 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem (1082 bytes)
	I0918 20:03:32.416280   26827 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem (1123 bytes)
	I0918 20:03:32.416312   26827 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem (1679 bytes)
	I0918 20:03:32.416373   26827 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem (1708 bytes)
	I0918 20:03:32.416406   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0918 20:03:32.416423   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878.pem -> /usr/share/ca-certificates/14878.pem
	I0918 20:03:32.416442   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem -> /usr/share/ca-certificates/148782.pem
	I0918 20:03:32.416482   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHHostname
	I0918 20:03:32.419323   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:03:32.419709   26827 main.go:141] libmachine: (ha-091565) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:13:d8", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:01:12 +0000 UTC Type:0 Mac:52:54:00:2a:13:d8 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-091565 Clientid:01:52:54:00:2a:13:d8}
	I0918 20:03:32.419736   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined IP address 192.168.39.215 and MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:03:32.419880   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHPort
	I0918 20:03:32.420098   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHKeyPath
	I0918 20:03:32.420242   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHUsername
	I0918 20:03:32.420374   26827 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565/id_rsa Username:docker}
	I0918 20:03:32.496485   26827 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0918 20:03:32.501230   26827 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0918 20:03:32.512278   26827 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0918 20:03:32.516258   26827 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0918 20:03:32.526925   26827 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0918 20:03:32.530942   26827 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0918 20:03:32.541480   26827 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0918 20:03:32.545232   26827 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0918 20:03:32.555472   26827 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0918 20:03:32.559397   26827 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0918 20:03:32.569567   26827 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0918 20:03:32.573499   26827 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0918 20:03:32.583358   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0918 20:03:32.611524   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0918 20:03:32.636264   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0918 20:03:32.660205   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0918 20:03:32.686819   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0918 20:03:32.710441   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0918 20:03:32.737760   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0918 20:03:32.763299   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0918 20:03:32.788066   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0918 20:03:32.811311   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878.pem --> /usr/share/ca-certificates/14878.pem (1338 bytes)
	I0918 20:03:32.837707   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem --> /usr/share/ca-certificates/148782.pem (1708 bytes)
	I0918 20:03:32.862254   26827 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0918 20:03:32.879051   26827 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0918 20:03:32.895538   26827 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0918 20:03:32.911669   26827 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0918 20:03:32.927230   26827 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0918 20:03:32.943165   26827 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0918 20:03:32.959777   26827 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0918 20:03:32.976941   26827 ssh_runner.go:195] Run: openssl version
	I0918 20:03:32.982956   26827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0918 20:03:32.994065   26827 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0918 20:03:32.998638   26827 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 18 19:39 /usr/share/ca-certificates/minikubeCA.pem
	I0918 20:03:32.998702   26827 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0918 20:03:33.004856   26827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0918 20:03:33.016234   26827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14878.pem && ln -fs /usr/share/ca-certificates/14878.pem /etc/ssl/certs/14878.pem"
	I0918 20:03:33.027625   26827 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14878.pem
	I0918 20:03:33.032333   26827 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 18 19:57 /usr/share/ca-certificates/14878.pem
	I0918 20:03:33.032408   26827 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14878.pem
	I0918 20:03:33.038142   26827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14878.pem /etc/ssl/certs/51391683.0"
	I0918 20:03:33.049048   26827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148782.pem && ln -fs /usr/share/ca-certificates/148782.pem /etc/ssl/certs/148782.pem"
	I0918 20:03:33.060201   26827 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148782.pem
	I0918 20:03:33.064969   26827 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 18 19:57 /usr/share/ca-certificates/148782.pem
	I0918 20:03:33.065039   26827 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148782.pem
	I0918 20:03:33.070737   26827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148782.pem /etc/ssl/certs/3ec20f2e.0"
	I0918 20:03:33.082171   26827 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0918 20:03:33.086441   26827 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0918 20:03:33.086499   26827 kubeadm.go:934] updating node {m03 192.168.39.53 8443 v1.31.1 crio true true} ...
	I0918 20:03:33.086588   26827 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-091565-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.53
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-091565 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0918 20:03:33.086614   26827 kube-vip.go:115] generating kube-vip config ...
	I0918 20:03:33.086658   26827 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0918 20:03:33.104138   26827 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0918 20:03:33.104231   26827 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
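	(The manifest above runs kube-vip as a static pod on each control-plane node; the pods elect a leader via the plndr-cp-lock lease and the leader announces the VIP 192.168.39.254 over ARP, load-balancing port 8443 across API servers. A hedged smoke-test sketch, not part of minikube, that simply checks the VIP accepts TLS on the API server port once a leader holds the address:)
	package main

	import (
		"crypto/tls"
		"fmt"
		"net"
		"time"
	)

	func main() {
		// VIP and port taken from the kube-vip config above.
		conn, err := tls.DialWithDialer(&net.Dialer{Timeout: 3 * time.Second},
			"tcp", "192.168.39.254:8443", &tls.Config{InsecureSkipVerify: true})
		if err != nil {
			fmt.Println("VIP not reachable:", err)
			return
		}
		defer conn.Close()
		fmt.Println("VIP answered:", conn.RemoteAddr())
	}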
	I0918 20:03:33.104297   26827 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0918 20:03:33.114293   26827 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0918 20:03:33.114356   26827 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0918 20:03:33.124170   26827 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I0918 20:03:33.124182   26827 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I0918 20:03:33.124199   26827 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0918 20:03:33.124207   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0918 20:03:33.124216   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0918 20:03:33.124219   26827 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0918 20:03:33.124273   26827 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0918 20:03:33.124275   26827 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0918 20:03:33.141327   26827 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0918 20:03:33.141375   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0918 20:03:33.141401   26827 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0918 20:03:33.141433   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0918 20:03:33.141477   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0918 20:03:33.141555   26827 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0918 20:03:33.173036   26827 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0918 20:03:33.173093   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
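	(The "Not caching binary, using https://dl.k8s.io/...?checksum=file:...sha256" lines above mean each kubeadm/kubectl/kubelet download is verified against its published .sha256 file before being pushed to the node. A minimal sketch of that download-and-verify scheme, assuming the .sha256 file contains only the hex digest; this is an illustration, not minikube's downloader:)
	package main

	import (
		"crypto/sha256"
		"encoding/hex"
		"fmt"
		"io"
		"net/http"
		"os"
		"strings"
	)

	// download fetches url into dest and checks it against url+".sha256".
	func download(url, dest string) error {
		resp, err := http.Get(url)
		if err != nil {
			return err
		}
		defer resp.Body.Close()

		out, err := os.Create(dest)
		if err != nil {
			return err
		}
		defer out.Close()

		h := sha256.New()
		if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
			return err
		}

		sumResp, err := http.Get(url + ".sha256")
		if err != nil {
			return err
		}
		defer sumResp.Body.Close()
		want, err := io.ReadAll(sumResp.Body)
		if err != nil {
			return err
		}

		got := hex.EncodeToString(h.Sum(nil))
		if got != strings.TrimSpace(string(want)) {
			return fmt.Errorf("checksum mismatch for %s: got %s want %s", url, got, want)
		}
		return nil
	}

	func main() {
		url := "https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet"
		if err := download(url, "/tmp/kubelet"); err != nil {
			panic(err)
		}
		fmt.Println("downloaded and verified /tmp/kubelet")
	}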
	I0918 20:03:33.972939   26827 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0918 20:03:33.982247   26827 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0918 20:03:34.000126   26827 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0918 20:03:34.018674   26827 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0918 20:03:34.036270   26827 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0918 20:03:34.040368   26827 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0918 20:03:34.053122   26827 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 20:03:34.171306   26827 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0918 20:03:34.188115   26827 host.go:66] Checking if "ha-091565" exists ...
	I0918 20:03:34.188456   26827 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 20:03:34.188496   26827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 20:03:34.204519   26827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33565
	I0918 20:03:34.205017   26827 main.go:141] libmachine: () Calling .GetVersion
	I0918 20:03:34.205836   26827 main.go:141] libmachine: Using API Version  1
	I0918 20:03:34.205858   26827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 20:03:34.206189   26827 main.go:141] libmachine: () Calling .GetMachineName
	I0918 20:03:34.206366   26827 main.go:141] libmachine: (ha-091565) Calling .DriverName
	I0918 20:03:34.206499   26827 start.go:317] joinCluster: &{Name:ha-091565 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cluster
Name:ha-091565 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.215 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.92 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.53 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false in
spektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOpt
imizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 20:03:34.206634   26827 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0918 20:03:34.206657   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHHostname
	I0918 20:03:34.210032   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:03:34.210517   26827 main.go:141] libmachine: (ha-091565) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:13:d8", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:01:12 +0000 UTC Type:0 Mac:52:54:00:2a:13:d8 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-091565 Clientid:01:52:54:00:2a:13:d8}
	I0918 20:03:34.210550   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined IP address 192.168.39.215 and MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:03:34.210721   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHPort
	I0918 20:03:34.210878   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHKeyPath
	I0918 20:03:34.211058   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHUsername
	I0918 20:03:34.211223   26827 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565/id_rsa Username:docker}
	I0918 20:03:34.497537   26827 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.53 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0918 20:03:34.497597   26827 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token i0u1iv.ilurlcyw4668mpw6 --discovery-token-ca-cert-hash sha256:8ad46bf288e8cc2226f5a929a768e6c95cddecc97ffc4016be27ef0987b1e36f --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-091565-m03 --control-plane --apiserver-advertise-address=192.168.39.53 --apiserver-bind-port=8443"
	I0918 20:03:56.510162   26827 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token i0u1iv.ilurlcyw4668mpw6 --discovery-token-ca-cert-hash sha256:8ad46bf288e8cc2226f5a929a768e6c95cddecc97ffc4016be27ef0987b1e36f --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-091565-m03 --control-plane --apiserver-advertise-address=192.168.39.53 --apiserver-bind-port=8443": (22.012541289s)
	I0918 20:03:56.510194   26827 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0918 20:03:57.007413   26827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-091565-m03 minikube.k8s.io/updated_at=2024_09_18T20_03_57_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=85073601a832bd4bbda5d11fa91feafff6ec6b91 minikube.k8s.io/name=ha-091565 minikube.k8s.io/primary=false
	I0918 20:03:57.136553   26827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-091565-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0918 20:03:57.243081   26827 start.go:319] duration metric: took 23.036576923s to joinCluster
	I0918 20:03:57.243171   26827 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.53 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0918 20:03:57.243516   26827 config.go:182] Loaded profile config "ha-091565": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0918 20:03:57.244463   26827 out.go:177] * Verifying Kubernetes components...
	I0918 20:03:57.245675   26827 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 20:03:57.491302   26827 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0918 20:03:57.553167   26827 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19667-7671/kubeconfig
	I0918 20:03:57.553587   26827 kapi.go:59] client config for ha-091565: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/client.crt", KeyFile:"/home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/client.key", CAFile:"/home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0918 20:03:57.553676   26827 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.215:8443
	I0918 20:03:57.554162   26827 node_ready.go:35] waiting up to 6m0s for node "ha-091565-m03" to be "Ready" ...
	I0918 20:03:57.554529   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:03:57.554540   26827 round_trippers.go:469] Request Headers:
	I0918 20:03:57.554551   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:03:57.554560   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:03:57.558531   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:03:58.055469   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:03:58.055497   26827 round_trippers.go:469] Request Headers:
	I0918 20:03:58.055509   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:03:58.055515   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:03:58.065944   26827 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0918 20:03:58.555709   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:03:58.555741   26827 round_trippers.go:469] Request Headers:
	I0918 20:03:58.555751   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:03:58.555755   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:03:58.559403   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:03:59.055396   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:03:59.055421   26827 round_trippers.go:469] Request Headers:
	I0918 20:03:59.055432   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:03:59.055439   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:03:59.058942   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:03:59.555365   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:03:59.555390   26827 round_trippers.go:469] Request Headers:
	I0918 20:03:59.555400   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:03:59.555406   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:03:59.558786   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:03:59.559242   26827 node_ready.go:53] node "ha-091565-m03" has status "Ready":"False"
	I0918 20:04:00.054633   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:04:00.054659   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:00.054669   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:00.054674   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:00.058075   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:00.555492   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:04:00.555516   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:00.555526   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:00.555529   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:00.559811   26827 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0918 20:04:01.055537   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:04:01.055563   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:01.055575   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:01.055580   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:01.059555   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:01.555672   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:04:01.555697   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:01.555706   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:01.555711   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:01.559137   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:01.559627   26827 node_ready.go:53] node "ha-091565-m03" has status "Ready":"False"
	I0918 20:04:02.054683   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:04:02.054723   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:02.054731   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:02.054745   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:02.058557   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:02.555203   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:04:02.555226   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:02.555234   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:02.555238   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:02.558769   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:03.055525   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:04:03.055564   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:03.055574   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:03.055577   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:03.059340   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:03.554931   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:04:03.554959   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:03.554970   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:03.554979   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:03.558864   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:03.559650   26827 node_ready.go:53] node "ha-091565-m03" has status "Ready":"False"
	I0918 20:04:04.054716   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:04:04.054744   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:04.054755   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:04.054761   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:04.058693   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:04.555064   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:04:04.555088   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:04.555100   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:04.555106   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:04.558892   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:05.054691   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:04:05.054712   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:05.054719   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:05.054741   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:05.059560   26827 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0918 20:04:05.555504   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:04:05.555527   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:05.555534   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:05.555539   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:05.558864   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:06.055334   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:04:06.055377   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:06.055389   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:06.055397   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:06.059156   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:06.059757   26827 node_ready.go:53] node "ha-091565-m03" has status "Ready":"False"
	I0918 20:04:06.555030   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:04:06.555053   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:06.555063   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:06.555069   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:06.558335   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:07.055192   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:04:07.055215   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:07.055224   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:07.055227   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:07.059362   26827 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0918 20:04:07.555236   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:04:07.555261   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:07.555269   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:07.555274   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:07.558863   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:08.055465   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:04:08.055488   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:08.055495   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:08.055498   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:08.059132   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:08.555504   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:04:08.555526   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:08.555535   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:08.555538   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:08.559353   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:08.559819   26827 node_ready.go:53] node "ha-091565-m03" has status "Ready":"False"
	I0918 20:04:09.055283   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:04:09.055306   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:09.055314   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:09.055317   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:09.058873   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:09.555171   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:04:09.555196   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:09.555204   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:09.555208   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:09.559068   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:10.055288   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:04:10.055311   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:10.055320   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:10.055325   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:10.059182   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:10.555106   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:04:10.555128   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:10.555139   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:10.555144   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:10.558578   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:11.054941   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:04:11.054964   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:11.054972   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:11.054975   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:11.059278   26827 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0918 20:04:11.059847   26827 node_ready.go:53] node "ha-091565-m03" has status "Ready":"False"
	I0918 20:04:11.555315   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:04:11.555339   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:11.555347   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:11.555355   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:11.558773   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:12.054728   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:04:12.054751   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:12.054765   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:12.054770   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:12.058180   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:12.554816   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:04:12.554836   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:12.554844   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:12.554849   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:12.558473   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:13.055199   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:04:13.055227   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:13.055245   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:13.055254   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:13.058868   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:13.554700   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:04:13.554723   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:13.554732   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:13.554736   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:13.559302   26827 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0918 20:04:13.560622   26827 node_ready.go:53] node "ha-091565-m03" has status "Ready":"False"
	I0918 20:04:14.054755   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:04:14.054786   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:14.054798   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:14.054803   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:14.058095   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:14.555493   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:04:14.555515   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:14.555524   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:14.555528   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:14.559446   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:15.055291   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:04:15.055323   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:15.055333   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:15.055336   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:15.059042   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:15.555105   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:04:15.555127   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:15.555135   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:15.555138   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:15.558918   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:16.055211   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:04:16.055237   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:16.055246   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:16.055251   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:16.059232   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:16.059819   26827 node_ready.go:49] node "ha-091565-m03" has status "Ready":"True"
	I0918 20:04:16.059841   26827 node_ready.go:38] duration metric: took 18.505389798s for node "ha-091565-m03" to be "Ready" ...
	I0918 20:04:16.059852   26827 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0918 20:04:16.059929   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods
	I0918 20:04:16.059941   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:16.059951   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:16.059957   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:16.065715   26827 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0918 20:04:16.071783   26827 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-8zcqk" in "kube-system" namespace to be "Ready" ...
	I0918 20:04:16.071882   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-8zcqk
	I0918 20:04:16.071891   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:16.071899   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:16.071903   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:16.075405   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:16.075962   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565
	I0918 20:04:16.075978   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:16.075987   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:16.075992   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:16.078716   26827 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 20:04:16.079267   26827 pod_ready.go:93] pod "coredns-7c65d6cfc9-8zcqk" in "kube-system" namespace has status "Ready":"True"
	I0918 20:04:16.079293   26827 pod_ready.go:82] duration metric: took 7.472161ms for pod "coredns-7c65d6cfc9-8zcqk" in "kube-system" namespace to be "Ready" ...
	I0918 20:04:16.079302   26827 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-w97kk" in "kube-system" namespace to be "Ready" ...
	I0918 20:04:16.079361   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-w97kk
	I0918 20:04:16.079369   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:16.079376   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:16.079380   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:16.082131   26827 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 20:04:16.082926   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565
	I0918 20:04:16.082939   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:16.082946   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:16.082949   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:16.085556   26827 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 20:04:16.085896   26827 pod_ready.go:93] pod "coredns-7c65d6cfc9-w97kk" in "kube-system" namespace has status "Ready":"True"
	I0918 20:04:16.085910   26827 pod_ready.go:82] duration metric: took 6.602392ms for pod "coredns-7c65d6cfc9-w97kk" in "kube-system" namespace to be "Ready" ...
	I0918 20:04:16.085919   26827 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-091565" in "kube-system" namespace to be "Ready" ...
	I0918 20:04:16.085972   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/etcd-ha-091565
	I0918 20:04:16.085980   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:16.085986   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:16.085989   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:16.089699   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:16.090300   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565
	I0918 20:04:16.090315   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:16.090322   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:16.090326   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:16.093063   26827 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 20:04:16.093596   26827 pod_ready.go:93] pod "etcd-ha-091565" in "kube-system" namespace has status "Ready":"True"
	I0918 20:04:16.093612   26827 pod_ready.go:82] duration metric: took 7.687899ms for pod "etcd-ha-091565" in "kube-system" namespace to be "Ready" ...
	I0918 20:04:16.093621   26827 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-091565-m02" in "kube-system" namespace to be "Ready" ...
	I0918 20:04:16.093672   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/etcd-ha-091565-m02
	I0918 20:04:16.093679   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:16.093686   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:16.093691   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:16.096387   26827 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 20:04:16.097042   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:04:16.097062   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:16.097072   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:16.097077   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:16.099762   26827 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 20:04:16.100164   26827 pod_ready.go:93] pod "etcd-ha-091565-m02" in "kube-system" namespace has status "Ready":"True"
	I0918 20:04:16.100182   26827 pod_ready.go:82] duration metric: took 6.554191ms for pod "etcd-ha-091565-m02" in "kube-system" namespace to be "Ready" ...
	I0918 20:04:16.100193   26827 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-091565-m03" in "kube-system" namespace to be "Ready" ...
	I0918 20:04:16.255579   26827 request.go:632] Waited for 155.319903ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/etcd-ha-091565-m03
	I0918 20:04:16.255651   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/etcd-ha-091565-m03
	I0918 20:04:16.255659   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:16.255691   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:16.255699   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:16.259105   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:16.456134   26827 request.go:632] Waited for 196.426863ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:04:16.456200   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:04:16.456206   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:16.456215   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:16.456220   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:16.460303   26827 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0918 20:04:16.460816   26827 pod_ready.go:93] pod "etcd-ha-091565-m03" in "kube-system" namespace has status "Ready":"True"
	I0918 20:04:16.460835   26827 pod_ready.go:82] duration metric: took 360.633247ms for pod "etcd-ha-091565-m03" in "kube-system" namespace to be "Ready" ...
	I0918 20:04:16.460857   26827 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-091565" in "kube-system" namespace to be "Ready" ...
	I0918 20:04:16.656076   26827 request.go:632] Waited for 195.151124ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-091565
	I0918 20:04:16.656159   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-091565
	I0918 20:04:16.656167   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:16.656176   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:16.656192   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:16.659916   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:16.856095   26827 request.go:632] Waited for 195.376851ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/nodes/ha-091565
	I0918 20:04:16.856174   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565
	I0918 20:04:16.856181   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:16.856191   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:16.856204   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:16.859780   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:16.860437   26827 pod_ready.go:93] pod "kube-apiserver-ha-091565" in "kube-system" namespace has status "Ready":"True"
	I0918 20:04:16.860458   26827 pod_ready.go:82] duration metric: took 399.594161ms for pod "kube-apiserver-ha-091565" in "kube-system" namespace to be "Ready" ...
	I0918 20:04:16.860467   26827 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-091565-m02" in "kube-system" namespace to be "Ready" ...
	I0918 20:04:17.055619   26827 request.go:632] Waited for 195.084711ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-091565-m02
	I0918 20:04:17.055737   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-091565-m02
	I0918 20:04:17.055750   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:17.055759   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:17.055765   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:17.059273   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:17.255382   26827 request.go:632] Waited for 195.243567ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:04:17.255449   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:04:17.255457   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:17.255464   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:17.255468   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:17.258940   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:17.259557   26827 pod_ready.go:93] pod "kube-apiserver-ha-091565-m02" in "kube-system" namespace has status "Ready":"True"
	I0918 20:04:17.259575   26827 pod_ready.go:82] duration metric: took 399.101471ms for pod "kube-apiserver-ha-091565-m02" in "kube-system" namespace to be "Ready" ...
	I0918 20:04:17.259586   26827 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-091565-m03" in "kube-system" namespace to be "Ready" ...
	I0918 20:04:17.455306   26827 request.go:632] Waited for 195.656133ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-091565-m03
	I0918 20:04:17.455375   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-091565-m03
	I0918 20:04:17.455381   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:17.455391   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:17.455398   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:17.459141   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:17.656266   26827 request.go:632] Waited for 196.147408ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:04:17.656316   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:04:17.656322   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:17.656332   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:17.656341   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:17.659786   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:17.660507   26827 pod_ready.go:93] pod "kube-apiserver-ha-091565-m03" in "kube-system" namespace has status "Ready":"True"
	I0918 20:04:17.660540   26827 pod_ready.go:82] duration metric: took 400.946368ms for pod "kube-apiserver-ha-091565-m03" in "kube-system" namespace to be "Ready" ...
	I0918 20:04:17.660565   26827 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-091565" in "kube-system" namespace to be "Ready" ...
	I0918 20:04:17.855951   26827 request.go:632] Waited for 195.288141ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-091565
	I0918 20:04:17.856066   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-091565
	I0918 20:04:17.856076   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:17.856086   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:17.856095   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:17.859991   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:18.055205   26827 request.go:632] Waited for 194.285561ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/nodes/ha-091565
	I0918 20:04:18.055268   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565
	I0918 20:04:18.055274   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:18.055281   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:18.055284   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:18.058520   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:18.059072   26827 pod_ready.go:93] pod "kube-controller-manager-ha-091565" in "kube-system" namespace has status "Ready":"True"
	I0918 20:04:18.059095   26827 pod_ready.go:82] duration metric: took 398.501653ms for pod "kube-controller-manager-ha-091565" in "kube-system" namespace to be "Ready" ...
	I0918 20:04:18.059105   26827 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-091565-m02" in "kube-system" namespace to be "Ready" ...
	I0918 20:04:18.256047   26827 request.go:632] Waited for 196.849365ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-091565-m02
	I0918 20:04:18.256125   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-091565-m02
	I0918 20:04:18.256133   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:18.256147   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:18.256156   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:18.260076   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:18.455423   26827 request.go:632] Waited for 194.302275ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:04:18.455494   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:04:18.455502   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:18.455513   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:18.455524   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:18.460052   26827 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0918 20:04:18.460616   26827 pod_ready.go:93] pod "kube-controller-manager-ha-091565-m02" in "kube-system" namespace has status "Ready":"True"
	I0918 20:04:18.460634   26827 pod_ready.go:82] duration metric: took 401.521777ms for pod "kube-controller-manager-ha-091565-m02" in "kube-system" namespace to be "Ready" ...
	I0918 20:04:18.460645   26827 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-091565-m03" in "kube-system" namespace to be "Ready" ...
	I0918 20:04:18.655830   26827 request.go:632] Waited for 195.117473ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-091565-m03
	I0918 20:04:18.655906   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-091565-m03
	I0918 20:04:18.655912   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:18.655926   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:18.655934   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:18.661181   26827 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0918 20:04:18.855471   26827 request.go:632] Waited for 193.339141ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:04:18.855546   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:04:18.855553   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:18.855560   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:18.855565   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:18.859369   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:18.860202   26827 pod_ready.go:93] pod "kube-controller-manager-ha-091565-m03" in "kube-system" namespace has status "Ready":"True"
	I0918 20:04:18.860225   26827 pod_ready.go:82] duration metric: took 399.570485ms for pod "kube-controller-manager-ha-091565-m03" in "kube-system" namespace to be "Ready" ...
	I0918 20:04:18.860239   26827 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4p8rj" in "kube-system" namespace to be "Ready" ...
	I0918 20:04:19.055323   26827 request.go:632] Waited for 195.018584ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4p8rj
	I0918 20:04:19.055407   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4p8rj
	I0918 20:04:19.055415   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:19.055425   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:19.055434   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:19.058851   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:19.255631   26827 request.go:632] Waited for 196.124849ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:04:19.255685   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:04:19.255692   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:19.255702   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:19.255710   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:19.260421   26827 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0918 20:04:19.261253   26827 pod_ready.go:93] pod "kube-proxy-4p8rj" in "kube-system" namespace has status "Ready":"True"
	I0918 20:04:19.261276   26827 pod_ready.go:82] duration metric: took 401.027744ms for pod "kube-proxy-4p8rj" in "kube-system" namespace to be "Ready" ...
	I0918 20:04:19.261289   26827 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4wm6h" in "kube-system" namespace to be "Ready" ...
	I0918 20:04:19.455210   26827 request.go:632] Waited for 193.843238ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4wm6h
	I0918 20:04:19.455295   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4wm6h
	I0918 20:04:19.455303   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:19.455314   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:19.455322   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:19.458975   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:19.656036   26827 request.go:632] Waited for 196.360424ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/nodes/ha-091565
	I0918 20:04:19.656109   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565
	I0918 20:04:19.656115   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:19.656122   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:19.656126   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:19.659749   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:19.660473   26827 pod_ready.go:93] pod "kube-proxy-4wm6h" in "kube-system" namespace has status "Ready":"True"
	I0918 20:04:19.660500   26827 pod_ready.go:82] duration metric: took 399.202104ms for pod "kube-proxy-4wm6h" in "kube-system" namespace to be "Ready" ...
	I0918 20:04:19.660513   26827 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-bxblp" in "kube-system" namespace to be "Ready" ...
	I0918 20:04:19.855602   26827 request.go:632] Waited for 195.016629ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bxblp
	I0918 20:04:19.855669   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bxblp
	I0918 20:04:19.855674   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:19.855684   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:19.855688   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:19.859561   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:20.055770   26827 request.go:632] Waited for 195.418705ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:04:20.055846   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:04:20.055852   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:20.055859   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:20.055866   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:20.059482   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:20.060369   26827 pod_ready.go:93] pod "kube-proxy-bxblp" in "kube-system" namespace has status "Ready":"True"
	I0918 20:04:20.060396   26827 pod_ready.go:82] duration metric: took 399.875436ms for pod "kube-proxy-bxblp" in "kube-system" namespace to be "Ready" ...
	I0918 20:04:20.060408   26827 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-091565" in "kube-system" namespace to be "Ready" ...
	I0918 20:04:20.255225   26827 request.go:632] Waited for 194.753676ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-091565
	I0918 20:04:20.255322   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-091565
	I0918 20:04:20.255331   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:20.255341   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:20.255351   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:20.259061   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:20.456103   26827 request.go:632] Waited for 196.430637ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/nodes/ha-091565
	I0918 20:04:20.456163   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565
	I0918 20:04:20.456168   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:20.456175   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:20.456179   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:20.459797   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:20.460332   26827 pod_ready.go:93] pod "kube-scheduler-ha-091565" in "kube-system" namespace has status "Ready":"True"
	I0918 20:04:20.460355   26827 pod_ready.go:82] duration metric: took 399.937556ms for pod "kube-scheduler-ha-091565" in "kube-system" namespace to be "Ready" ...
	I0918 20:04:20.460365   26827 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-091565-m02" in "kube-system" namespace to be "Ready" ...
	I0918 20:04:20.655303   26827 request.go:632] Waited for 194.860443ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-091565-m02
	I0918 20:04:20.655387   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-091565-m02
	I0918 20:04:20.655395   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:20.655405   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:20.655425   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:20.658807   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:20.855714   26827 request.go:632] Waited for 196.369108ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:04:20.855780   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:04:20.855787   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:20.855798   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:20.855804   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:20.859686   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:20.860506   26827 pod_ready.go:93] pod "kube-scheduler-ha-091565-m02" in "kube-system" namespace has status "Ready":"True"
	I0918 20:04:20.860527   26827 pod_ready.go:82] duration metric: took 400.151195ms for pod "kube-scheduler-ha-091565-m02" in "kube-system" namespace to be "Ready" ...
	I0918 20:04:20.860539   26827 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-091565-m03" in "kube-system" namespace to be "Ready" ...
	I0918 20:04:21.056006   26827 request.go:632] Waited for 195.380183ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-091565-m03
	I0918 20:04:21.056089   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-091565-m03
	I0918 20:04:21.056096   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:21.056104   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:21.056108   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:21.059632   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:21.255734   26827 request.go:632] Waited for 195.357475ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:04:21.255796   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:04:21.255801   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:21.255808   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:21.255813   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:21.259440   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:21.260300   26827 pod_ready.go:93] pod "kube-scheduler-ha-091565-m03" in "kube-system" namespace has status "Ready":"True"
	I0918 20:04:21.260322   26827 pod_ready.go:82] duration metric: took 399.775629ms for pod "kube-scheduler-ha-091565-m03" in "kube-system" namespace to be "Ready" ...
	I0918 20:04:21.260332   26827 pod_ready.go:39] duration metric: took 5.200469523s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0918 20:04:21.260346   26827 api_server.go:52] waiting for apiserver process to appear ...
	I0918 20:04:21.260416   26827 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 20:04:21.276372   26827 api_server.go:72] duration metric: took 24.03316608s to wait for apiserver process to appear ...
	I0918 20:04:21.276400   26827 api_server.go:88] waiting for apiserver healthz status ...
	I0918 20:04:21.276422   26827 api_server.go:253] Checking apiserver healthz at https://192.168.39.215:8443/healthz ...
	I0918 20:04:21.282493   26827 api_server.go:279] https://192.168.39.215:8443/healthz returned 200:
	ok
	I0918 20:04:21.282563   26827 round_trippers.go:463] GET https://192.168.39.215:8443/version
	I0918 20:04:21.282571   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:21.282579   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:21.282586   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:21.283373   26827 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0918 20:04:21.283434   26827 api_server.go:141] control plane version: v1.31.1
	I0918 20:04:21.283445   26827 api_server.go:131] duration metric: took 7.03877ms to wait for apiserver health ...
	I0918 20:04:21.283452   26827 system_pods.go:43] waiting for kube-system pods to appear ...
	I0918 20:04:21.455842   26827 request.go:632] Waited for 172.326435ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods
	I0918 20:04:21.455906   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods
	I0918 20:04:21.455913   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:21.455920   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:21.455924   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:21.461721   26827 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0918 20:04:21.469221   26827 system_pods.go:59] 24 kube-system pods found
	I0918 20:04:21.469250   26827 system_pods.go:61] "coredns-7c65d6cfc9-8zcqk" [644e8147-96e9-41a1-99b8-d2de17e4798c] Running
	I0918 20:04:21.469256   26827 system_pods.go:61] "coredns-7c65d6cfc9-w97kk" [70428cd6-0523-44c8-89f3-62837b52ca80] Running
	I0918 20:04:21.469260   26827 system_pods.go:61] "etcd-ha-091565" [c5af3e98-c375-448c-9cac-6a83b115ca71] Running
	I0918 20:04:21.469263   26827 system_pods.go:61] "etcd-ha-091565-m02" [71e1c78e-6a11-4b46-baa2-83c98d666cee] Running
	I0918 20:04:21.469267   26827 system_pods.go:61] "etcd-ha-091565-m03" [9c1e9878-8b36-4e4d-9fc1-b81e4cd49c08] Running
	I0918 20:04:21.469270   26827 system_pods.go:61] "kindnet-5rh2w" [8fbd3b35-4d3a-497f-bbcf-0cc0b04ec495] Running
	I0918 20:04:21.469273   26827 system_pods.go:61] "kindnet-7fl5w" [5c3a9d82-3815-4aa1-8d04-14be25394dcf] Running
	I0918 20:04:21.469278   26827 system_pods.go:61] "kindnet-bzsqr" [bf1e6f1b-d3ad-439a-9b9a-882e6e989a56] Running
	I0918 20:04:21.469282   26827 system_pods.go:61] "kube-apiserver-ha-091565" [1c2ddbd9-3f78-415b-af21-1d43925382c5] Running
	I0918 20:04:21.469285   26827 system_pods.go:61] "kube-apiserver-ha-091565-m02" [b482ba6c-e415-4e3c-aa56-c17a4f3bcc13] Running
	I0918 20:04:21.469288   26827 system_pods.go:61] "kube-apiserver-ha-091565-m03" [597eb4b7-df02-430e-98f9-24de20295e3b] Running
	I0918 20:04:21.469291   26827 system_pods.go:61] "kube-controller-manager-ha-091565" [3812cd1b-619d-4da8-b039-18f7adb53647] Running
	I0918 20:04:21.469295   26827 system_pods.go:61] "kube-controller-manager-ha-091565-m02" [2c3ecce6-b148-498b-b66e-e5c000f51940] Running
	I0918 20:04:21.469298   26827 system_pods.go:61] "kube-controller-manager-ha-091565-m03" [d9871df2-6370-47a6-98d4-fd9acfddd11a] Running
	I0918 20:04:21.469301   26827 system_pods.go:61] "kube-proxy-4p8rj" [ebe65af8-abb1-4ed3-a12f-b822ec09e891] Running
	I0918 20:04:21.469305   26827 system_pods.go:61] "kube-proxy-4wm6h" [d6904231-6f64-4447-9932-0cd5d692978b] Running
	I0918 20:04:21.469310   26827 system_pods.go:61] "kube-proxy-bxblp" [68143b05-afd1-409a-aaa6-8b3c6841bbfb] Running
	I0918 20:04:21.469314   26827 system_pods.go:61] "kube-scheduler-ha-091565" [ad2ac667-3328-4ad3-b736-c8c633140e2d] Running
	I0918 20:04:21.469319   26827 system_pods.go:61] "kube-scheduler-ha-091565-m02" [fd27db83-43f3-40a8-9ca7-5570f78d562b] Running
	I0918 20:04:21.469322   26827 system_pods.go:61] "kube-scheduler-ha-091565-m03" [c8432a2a-548b-4a97-852a-a18f82f406d2] Running
	I0918 20:04:21.469326   26827 system_pods.go:61] "kube-vip-ha-091565" [b3fccd60-ac88-4dda-b8bb-b1b8c45cbfe5] Running
	I0918 20:04:21.469332   26827 system_pods.go:61] "kube-vip-ha-091565-m02" [12ca12ad-6dee-4b50-9273-c48d8f06acf4] Running
	I0918 20:04:21.469336   26827 system_pods.go:61] "kube-vip-ha-091565-m03" [8389ddfd-fca7-4698-a747-4eedf299dc4a] Running
	I0918 20:04:21.469341   26827 system_pods.go:61] "storage-provisioner" [b7dffb85-905b-4166-a680-34c77cf87d09] Running
	I0918 20:04:21.469347   26827 system_pods.go:74] duration metric: took 185.890335ms to wait for pod list to return data ...
	I0918 20:04:21.469357   26827 default_sa.go:34] waiting for default service account to be created ...
	I0918 20:04:21.655850   26827 request.go:632] Waited for 186.415202ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/namespaces/default/serviceaccounts
	I0918 20:04:21.655922   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/namespaces/default/serviceaccounts
	I0918 20:04:21.655931   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:21.655941   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:21.655949   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:21.659629   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:21.659759   26827 default_sa.go:45] found service account: "default"
	I0918 20:04:21.659777   26827 default_sa.go:55] duration metric: took 190.414417ms for default service account to be created ...
	I0918 20:04:21.659788   26827 system_pods.go:116] waiting for k8s-apps to be running ...
	I0918 20:04:21.856111   26827 request.go:632] Waited for 196.255287ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods
	I0918 20:04:21.856170   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods
	I0918 20:04:21.856175   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:21.856182   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:21.856186   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:21.863662   26827 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0918 20:04:21.871644   26827 system_pods.go:86] 24 kube-system pods found
	I0918 20:04:21.871682   26827 system_pods.go:89] "coredns-7c65d6cfc9-8zcqk" [644e8147-96e9-41a1-99b8-d2de17e4798c] Running
	I0918 20:04:21.871691   26827 system_pods.go:89] "coredns-7c65d6cfc9-w97kk" [70428cd6-0523-44c8-89f3-62837b52ca80] Running
	I0918 20:04:21.871696   26827 system_pods.go:89] "etcd-ha-091565" [c5af3e98-c375-448c-9cac-6a83b115ca71] Running
	I0918 20:04:21.871703   26827 system_pods.go:89] "etcd-ha-091565-m02" [71e1c78e-6a11-4b46-baa2-83c98d666cee] Running
	I0918 20:04:21.871708   26827 system_pods.go:89] "etcd-ha-091565-m03" [9c1e9878-8b36-4e4d-9fc1-b81e4cd49c08] Running
	I0918 20:04:21.871713   26827 system_pods.go:89] "kindnet-5rh2w" [8fbd3b35-4d3a-497f-bbcf-0cc0b04ec495] Running
	I0918 20:04:21.871719   26827 system_pods.go:89] "kindnet-7fl5w" [5c3a9d82-3815-4aa1-8d04-14be25394dcf] Running
	I0918 20:04:21.871725   26827 system_pods.go:89] "kindnet-bzsqr" [bf1e6f1b-d3ad-439a-9b9a-882e6e989a56] Running
	I0918 20:04:21.871731   26827 system_pods.go:89] "kube-apiserver-ha-091565" [1c2ddbd9-3f78-415b-af21-1d43925382c5] Running
	I0918 20:04:21.871739   26827 system_pods.go:89] "kube-apiserver-ha-091565-m02" [b482ba6c-e415-4e3c-aa56-c17a4f3bcc13] Running
	I0918 20:04:21.871746   26827 system_pods.go:89] "kube-apiserver-ha-091565-m03" [597eb4b7-df02-430e-98f9-24de20295e3b] Running
	I0918 20:04:21.871756   26827 system_pods.go:89] "kube-controller-manager-ha-091565" [3812cd1b-619d-4da8-b039-18f7adb53647] Running
	I0918 20:04:21.871763   26827 system_pods.go:89] "kube-controller-manager-ha-091565-m02" [2c3ecce6-b148-498b-b66e-e5c000f51940] Running
	I0918 20:04:21.871771   26827 system_pods.go:89] "kube-controller-manager-ha-091565-m03" [d9871df2-6370-47a6-98d4-fd9acfddd11a] Running
	I0918 20:04:21.871778   26827 system_pods.go:89] "kube-proxy-4p8rj" [ebe65af8-abb1-4ed3-a12f-b822ec09e891] Running
	I0918 20:04:21.871786   26827 system_pods.go:89] "kube-proxy-4wm6h" [d6904231-6f64-4447-9932-0cd5d692978b] Running
	I0918 20:04:21.871792   26827 system_pods.go:89] "kube-proxy-bxblp" [68143b05-afd1-409a-aaa6-8b3c6841bbfb] Running
	I0918 20:04:21.871799   26827 system_pods.go:89] "kube-scheduler-ha-091565" [ad2ac667-3328-4ad3-b736-c8c633140e2d] Running
	I0918 20:04:21.871805   26827 system_pods.go:89] "kube-scheduler-ha-091565-m02" [fd27db83-43f3-40a8-9ca7-5570f78d562b] Running
	I0918 20:04:21.871813   26827 system_pods.go:89] "kube-scheduler-ha-091565-m03" [c8432a2a-548b-4a97-852a-a18f82f406d2] Running
	I0918 20:04:21.871819   26827 system_pods.go:89] "kube-vip-ha-091565" [b3fccd60-ac88-4dda-b8bb-b1b8c45cbfe5] Running
	I0918 20:04:21.871827   26827 system_pods.go:89] "kube-vip-ha-091565-m02" [12ca12ad-6dee-4b50-9273-c48d8f06acf4] Running
	I0918 20:04:21.871833   26827 system_pods.go:89] "kube-vip-ha-091565-m03" [8389ddfd-fca7-4698-a747-4eedf299dc4a] Running
	I0918 20:04:21.871838   26827 system_pods.go:89] "storage-provisioner" [b7dffb85-905b-4166-a680-34c77cf87d09] Running
	I0918 20:04:21.871847   26827 system_pods.go:126] duration metric: took 212.052235ms to wait for k8s-apps to be running ...
	I0918 20:04:21.871859   26827 system_svc.go:44] waiting for kubelet service to be running ....
	I0918 20:04:21.871912   26827 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0918 20:04:21.890997   26827 system_svc.go:56] duration metric: took 19.130745ms WaitForService to wait for kubelet
	I0918 20:04:21.891029   26827 kubeadm.go:582] duration metric: took 24.647829851s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0918 20:04:21.891052   26827 node_conditions.go:102] verifying NodePressure condition ...
	I0918 20:04:22.055297   26827 request.go:632] Waited for 164.164035ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/nodes
	I0918 20:04:22.055364   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes
	I0918 20:04:22.055371   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:22.055381   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:22.055387   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:22.060147   26827 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0918 20:04:22.061184   26827 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0918 20:04:22.061208   26827 node_conditions.go:123] node cpu capacity is 2
	I0918 20:04:22.061221   26827 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0918 20:04:22.061227   26827 node_conditions.go:123] node cpu capacity is 2
	I0918 20:04:22.061232   26827 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0918 20:04:22.061235   26827 node_conditions.go:123] node cpu capacity is 2
	I0918 20:04:22.061240   26827 node_conditions.go:105] duration metric: took 170.183013ms to run NodePressure ...
	I0918 20:04:22.061274   26827 start.go:241] waiting for startup goroutines ...
	I0918 20:04:22.061303   26827 start.go:255] writing updated cluster config ...
	I0918 20:04:22.061591   26827 ssh_runner.go:195] Run: rm -f paused
	I0918 20:04:22.113181   26827 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0918 20:04:22.115218   26827 out.go:177] * Done! kubectl is now configured to use "ha-091565" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 18 20:08:01 ha-091565 crio[663]: time="2024-09-18 20:08:01.951738472Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726690081951710512,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ec6571aa-a91a-4372-9185-c632e2572c79 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 20:08:01 ha-091565 crio[663]: time="2024-09-18 20:08:01.952542395Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8793e645-f81c-4524-ae22-358c94ef6cba name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 20:08:01 ha-091565 crio[663]: time="2024-09-18 20:08:01.952600738Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8793e645-f81c-4524-ae22-358c94ef6cba name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 20:08:01 ha-091565 crio[663]: time="2024-09-18 20:08:01.952850054Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7e40397db0622fd3967d20535016d957560de3976f7ca7e977fa2b186ff9f3ec,PodSandboxId:32509037cc4e4898a637b1de6a87eabd6d7504444bcf3f541a5234fb1ed4197a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726689867249474588,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-xhmzx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 16808919-56d0-40cd-b88e-28fb5a40b3a2,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f8cab8eef59352ca937c9b8e7056a419bb426b183a841f9e8a43366ecb1b283,PodSandboxId:16c38fe68d94e679d8aaead9d1be6f411a6984d3a367813f390d2ffc174a7047,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726689721940579425,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8zcqk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 644e8147-96e9-41a1-99b8-d2de17e4798c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26162985f4a281a161b88fc27b5cf0dbedef6117a63f0b64cef28f41346dbd3e,PodSandboxId:12355cb306ab12e321590a927e5f0cd88fa0f505aa7bd470359b1d9c47a7b425,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726689721876307714,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: b7dffb85-905b-4166-a680-34c77cf87d09,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b5c6773eef44d848fd26ebcf13ea82ebc3c1e1bd23618a944bf5f6d6a0e7bb8,PodSandboxId:b0c496c53b4c9e8442d0d8a6bcef8269a3baad2995c3647784331bd1b03a34df,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726689721549289388,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-w97kk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70428cd6-05
23-44c8-89f3-62837b52ca80,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52ae20a53e17beece735496b8e65537bbebfef5da53e8a2e7a74820fee56cd63,PodSandboxId:e5053f7183e292c9c0d41caef5020bdc6b59ce14f38a3c98750b9667d61f9d14,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17266897
09419057741,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7fl5w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c3a9d82-3815-4aa1-8d04-14be25394dcf,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9aa80c6b1f558ba9ef5b7e70df3690995e271e9296b0cc3e71ade739843f53a,PodSandboxId:e7fdb7e540529f305bf2eb00c376fe4bdf92019e975ef0252046f5f4a250d965,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726689709057454293,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4wm6h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6904231-6f64-4447-9932-0cd5d692978b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f40b55a2539761c632fe964601680ae492c74cf2cad4ef1130393c06f576f943,PodSandboxId:db3221d8284579089e2d011bb197abfc5cd3d1fbb8db7264f21262b09bca210e,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726689699933969237,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-091565,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfccef71d5ed662cfd9075de2c23ae11,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c435dbd5b540cdb87d1f79fc2aadca19db9c46aff42e67d78c5d9c8eee1b6de,PodSandboxId:01b7098c9837574b56ca73c576ce61c56e6190d34e8f2c4814bae7a1f5952e5c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726689697296263323,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-091565,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42a715c9fe466585ee83dba1966182d5,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4358e16fe123bb31210776fce1969072fb954b1f7cc3b51c459cd155992c1be5,PodSandboxId:ae412aa32e14f8d37184d672500203218827aa56a04162cf4a4e1bde7a1c9833,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726689697193494552,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-
ha-091565,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed5bb72bdd2d49ac86ea107effb85714,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f141188bda32520e26d392de6e6e49a863d49aacb461b7f3d5649d68557e96d3,PodSandboxId:bfb245c345b6c6bef07d02ba22a1fa12793ef353b2c4551e8ccd42337869e4b3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726689697212123129,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-091565,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: fb44b1ca5e1af25a31cc20be38506f2d,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97b3f8978c2594307cdfef72e8b8aebac842152f9c1893bb34700b6e0932027e,PodSandboxId:0555602e8b34d796a1a62f9ff273a78ebf4b7a7f1459439beaa95a2d3c02577b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726689697128272787,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-091565,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7f1e1feaa8e654300c84052131dd12a,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8793e645-f81c-4524-ae22-358c94ef6cba name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 20:08:01 ha-091565 crio[663]: time="2024-09-18 20:08:01.993758727Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e0da04a5-e968-43ae-a2bc-acbfe3a34c96 name=/runtime.v1.RuntimeService/Version
	Sep 18 20:08:01 ha-091565 crio[663]: time="2024-09-18 20:08:01.993846168Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e0da04a5-e968-43ae-a2bc-acbfe3a34c96 name=/runtime.v1.RuntimeService/Version
	Sep 18 20:08:01 ha-091565 crio[663]: time="2024-09-18 20:08:01.995084665Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cd91e750-052f-446b-8755-f64cb36817a1 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 20:08:01 ha-091565 crio[663]: time="2024-09-18 20:08:01.996030055Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726690081996003644,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cd91e750-052f-446b-8755-f64cb36817a1 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 20:08:01 ha-091565 crio[663]: time="2024-09-18 20:08:01.996610050Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8cb2d673-2094-4802-bb38-050913eda9a0 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 20:08:01 ha-091565 crio[663]: time="2024-09-18 20:08:01.996676094Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8cb2d673-2094-4802-bb38-050913eda9a0 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 20:08:01 ha-091565 crio[663]: time="2024-09-18 20:08:01.997005608Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7e40397db0622fd3967d20535016d957560de3976f7ca7e977fa2b186ff9f3ec,PodSandboxId:32509037cc4e4898a637b1de6a87eabd6d7504444bcf3f541a5234fb1ed4197a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726689867249474588,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-xhmzx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 16808919-56d0-40cd-b88e-28fb5a40b3a2,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f8cab8eef59352ca937c9b8e7056a419bb426b183a841f9e8a43366ecb1b283,PodSandboxId:16c38fe68d94e679d8aaead9d1be6f411a6984d3a367813f390d2ffc174a7047,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726689721940579425,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8zcqk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 644e8147-96e9-41a1-99b8-d2de17e4798c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26162985f4a281a161b88fc27b5cf0dbedef6117a63f0b64cef28f41346dbd3e,PodSandboxId:12355cb306ab12e321590a927e5f0cd88fa0f505aa7bd470359b1d9c47a7b425,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726689721876307714,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: b7dffb85-905b-4166-a680-34c77cf87d09,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b5c6773eef44d848fd26ebcf13ea82ebc3c1e1bd23618a944bf5f6d6a0e7bb8,PodSandboxId:b0c496c53b4c9e8442d0d8a6bcef8269a3baad2995c3647784331bd1b03a34df,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726689721549289388,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-w97kk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70428cd6-05
23-44c8-89f3-62837b52ca80,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52ae20a53e17beece735496b8e65537bbebfef5da53e8a2e7a74820fee56cd63,PodSandboxId:e5053f7183e292c9c0d41caef5020bdc6b59ce14f38a3c98750b9667d61f9d14,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17266897
09419057741,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7fl5w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c3a9d82-3815-4aa1-8d04-14be25394dcf,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9aa80c6b1f558ba9ef5b7e70df3690995e271e9296b0cc3e71ade739843f53a,PodSandboxId:e7fdb7e540529f305bf2eb00c376fe4bdf92019e975ef0252046f5f4a250d965,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726689709057454293,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4wm6h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6904231-6f64-4447-9932-0cd5d692978b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f40b55a2539761c632fe964601680ae492c74cf2cad4ef1130393c06f576f943,PodSandboxId:db3221d8284579089e2d011bb197abfc5cd3d1fbb8db7264f21262b09bca210e,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726689699933969237,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-091565,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfccef71d5ed662cfd9075de2c23ae11,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c435dbd5b540cdb87d1f79fc2aadca19db9c46aff42e67d78c5d9c8eee1b6de,PodSandboxId:01b7098c9837574b56ca73c576ce61c56e6190d34e8f2c4814bae7a1f5952e5c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726689697296263323,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-091565,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42a715c9fe466585ee83dba1966182d5,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4358e16fe123bb31210776fce1969072fb954b1f7cc3b51c459cd155992c1be5,PodSandboxId:ae412aa32e14f8d37184d672500203218827aa56a04162cf4a4e1bde7a1c9833,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726689697193494552,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-
ha-091565,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed5bb72bdd2d49ac86ea107effb85714,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f141188bda32520e26d392de6e6e49a863d49aacb461b7f3d5649d68557e96d3,PodSandboxId:bfb245c345b6c6bef07d02ba22a1fa12793ef353b2c4551e8ccd42337869e4b3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726689697212123129,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-091565,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: fb44b1ca5e1af25a31cc20be38506f2d,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97b3f8978c2594307cdfef72e8b8aebac842152f9c1893bb34700b6e0932027e,PodSandboxId:0555602e8b34d796a1a62f9ff273a78ebf4b7a7f1459439beaa95a2d3c02577b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726689697128272787,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-091565,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7f1e1feaa8e654300c84052131dd12a,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8cb2d673-2094-4802-bb38-050913eda9a0 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 20:08:02 ha-091565 crio[663]: time="2024-09-18 20:08:02.041654485Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4a0ae998-6c2d-4161-8252-35bfadc6574c name=/runtime.v1.RuntimeService/Version
	Sep 18 20:08:02 ha-091565 crio[663]: time="2024-09-18 20:08:02.041729604Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4a0ae998-6c2d-4161-8252-35bfadc6574c name=/runtime.v1.RuntimeService/Version
	Sep 18 20:08:02 ha-091565 crio[663]: time="2024-09-18 20:08:02.043054609Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bca17fe0-f7f0-4791-a700-63b6549ef2d3 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 20:08:02 ha-091565 crio[663]: time="2024-09-18 20:08:02.043482862Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726690082043460423,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bca17fe0-f7f0-4791-a700-63b6549ef2d3 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 20:08:02 ha-091565 crio[663]: time="2024-09-18 20:08:02.044401479Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bc40888c-cd5c-4fe3-a776-bf1d134eb42f name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 20:08:02 ha-091565 crio[663]: time="2024-09-18 20:08:02.044523838Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bc40888c-cd5c-4fe3-a776-bf1d134eb42f name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 20:08:02 ha-091565 crio[663]: time="2024-09-18 20:08:02.046125786Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7e40397db0622fd3967d20535016d957560de3976f7ca7e977fa2b186ff9f3ec,PodSandboxId:32509037cc4e4898a637b1de6a87eabd6d7504444bcf3f541a5234fb1ed4197a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726689867249474588,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-xhmzx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 16808919-56d0-40cd-b88e-28fb5a40b3a2,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f8cab8eef59352ca937c9b8e7056a419bb426b183a841f9e8a43366ecb1b283,PodSandboxId:16c38fe68d94e679d8aaead9d1be6f411a6984d3a367813f390d2ffc174a7047,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726689721940579425,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8zcqk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 644e8147-96e9-41a1-99b8-d2de17e4798c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26162985f4a281a161b88fc27b5cf0dbedef6117a63f0b64cef28f41346dbd3e,PodSandboxId:12355cb306ab12e321590a927e5f0cd88fa0f505aa7bd470359b1d9c47a7b425,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726689721876307714,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: b7dffb85-905b-4166-a680-34c77cf87d09,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b5c6773eef44d848fd26ebcf13ea82ebc3c1e1bd23618a944bf5f6d6a0e7bb8,PodSandboxId:b0c496c53b4c9e8442d0d8a6bcef8269a3baad2995c3647784331bd1b03a34df,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726689721549289388,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-w97kk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70428cd6-05
23-44c8-89f3-62837b52ca80,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52ae20a53e17beece735496b8e65537bbebfef5da53e8a2e7a74820fee56cd63,PodSandboxId:e5053f7183e292c9c0d41caef5020bdc6b59ce14f38a3c98750b9667d61f9d14,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17266897
09419057741,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7fl5w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c3a9d82-3815-4aa1-8d04-14be25394dcf,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9aa80c6b1f558ba9ef5b7e70df3690995e271e9296b0cc3e71ade739843f53a,PodSandboxId:e7fdb7e540529f305bf2eb00c376fe4bdf92019e975ef0252046f5f4a250d965,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726689709057454293,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4wm6h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6904231-6f64-4447-9932-0cd5d692978b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f40b55a2539761c632fe964601680ae492c74cf2cad4ef1130393c06f576f943,PodSandboxId:db3221d8284579089e2d011bb197abfc5cd3d1fbb8db7264f21262b09bca210e,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726689699933969237,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-091565,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfccef71d5ed662cfd9075de2c23ae11,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c435dbd5b540cdb87d1f79fc2aadca19db9c46aff42e67d78c5d9c8eee1b6de,PodSandboxId:01b7098c9837574b56ca73c576ce61c56e6190d34e8f2c4814bae7a1f5952e5c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726689697296263323,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-091565,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42a715c9fe466585ee83dba1966182d5,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4358e16fe123bb31210776fce1969072fb954b1f7cc3b51c459cd155992c1be5,PodSandboxId:ae412aa32e14f8d37184d672500203218827aa56a04162cf4a4e1bde7a1c9833,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726689697193494552,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-
ha-091565,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed5bb72bdd2d49ac86ea107effb85714,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f141188bda32520e26d392de6e6e49a863d49aacb461b7f3d5649d68557e96d3,PodSandboxId:bfb245c345b6c6bef07d02ba22a1fa12793ef353b2c4551e8ccd42337869e4b3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726689697212123129,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-091565,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: fb44b1ca5e1af25a31cc20be38506f2d,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97b3f8978c2594307cdfef72e8b8aebac842152f9c1893bb34700b6e0932027e,PodSandboxId:0555602e8b34d796a1a62f9ff273a78ebf4b7a7f1459439beaa95a2d3c02577b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726689697128272787,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-091565,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7f1e1feaa8e654300c84052131dd12a,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bc40888c-cd5c-4fe3-a776-bf1d134eb42f name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 20:08:02 ha-091565 crio[663]: time="2024-09-18 20:08:02.086445279Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bf01d230-1baf-4baa-972a-38a6c19ea3c4 name=/runtime.v1.RuntimeService/Version
	Sep 18 20:08:02 ha-091565 crio[663]: time="2024-09-18 20:08:02.086523028Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bf01d230-1baf-4baa-972a-38a6c19ea3c4 name=/runtime.v1.RuntimeService/Version
	Sep 18 20:08:02 ha-091565 crio[663]: time="2024-09-18 20:08:02.087502503Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=60e0b495-007a-4d65-af66-542dce87ca8e name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 20:08:02 ha-091565 crio[663]: time="2024-09-18 20:08:02.088021033Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726690082087995098,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=60e0b495-007a-4d65-af66-542dce87ca8e name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 20:08:02 ha-091565 crio[663]: time="2024-09-18 20:08:02.088520515Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e4ffc2f2-8d75-42b0-b898-73514dd6f116 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 20:08:02 ha-091565 crio[663]: time="2024-09-18 20:08:02.088570828Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e4ffc2f2-8d75-42b0-b898-73514dd6f116 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 20:08:02 ha-091565 crio[663]: time="2024-09-18 20:08:02.089114178Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7e40397db0622fd3967d20535016d957560de3976f7ca7e977fa2b186ff9f3ec,PodSandboxId:32509037cc4e4898a637b1de6a87eabd6d7504444bcf3f541a5234fb1ed4197a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726689867249474588,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-xhmzx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 16808919-56d0-40cd-b88e-28fb5a40b3a2,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f8cab8eef59352ca937c9b8e7056a419bb426b183a841f9e8a43366ecb1b283,PodSandboxId:16c38fe68d94e679d8aaead9d1be6f411a6984d3a367813f390d2ffc174a7047,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726689721940579425,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8zcqk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 644e8147-96e9-41a1-99b8-d2de17e4798c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26162985f4a281a161b88fc27b5cf0dbedef6117a63f0b64cef28f41346dbd3e,PodSandboxId:12355cb306ab12e321590a927e5f0cd88fa0f505aa7bd470359b1d9c47a7b425,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726689721876307714,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: b7dffb85-905b-4166-a680-34c77cf87d09,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b5c6773eef44d848fd26ebcf13ea82ebc3c1e1bd23618a944bf5f6d6a0e7bb8,PodSandboxId:b0c496c53b4c9e8442d0d8a6bcef8269a3baad2995c3647784331bd1b03a34df,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726689721549289388,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-w97kk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70428cd6-05
23-44c8-89f3-62837b52ca80,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52ae20a53e17beece735496b8e65537bbebfef5da53e8a2e7a74820fee56cd63,PodSandboxId:e5053f7183e292c9c0d41caef5020bdc6b59ce14f38a3c98750b9667d61f9d14,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17266897
09419057741,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7fl5w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c3a9d82-3815-4aa1-8d04-14be25394dcf,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9aa80c6b1f558ba9ef5b7e70df3690995e271e9296b0cc3e71ade739843f53a,PodSandboxId:e7fdb7e540529f305bf2eb00c376fe4bdf92019e975ef0252046f5f4a250d965,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726689709057454293,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4wm6h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6904231-6f64-4447-9932-0cd5d692978b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f40b55a2539761c632fe964601680ae492c74cf2cad4ef1130393c06f576f943,PodSandboxId:db3221d8284579089e2d011bb197abfc5cd3d1fbb8db7264f21262b09bca210e,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726689699933969237,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-091565,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfccef71d5ed662cfd9075de2c23ae11,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c435dbd5b540cdb87d1f79fc2aadca19db9c46aff42e67d78c5d9c8eee1b6de,PodSandboxId:01b7098c9837574b56ca73c576ce61c56e6190d34e8f2c4814bae7a1f5952e5c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726689697296263323,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-091565,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42a715c9fe466585ee83dba1966182d5,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4358e16fe123bb31210776fce1969072fb954b1f7cc3b51c459cd155992c1be5,PodSandboxId:ae412aa32e14f8d37184d672500203218827aa56a04162cf4a4e1bde7a1c9833,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726689697193494552,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-
ha-091565,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed5bb72bdd2d49ac86ea107effb85714,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f141188bda32520e26d392de6e6e49a863d49aacb461b7f3d5649d68557e96d3,PodSandboxId:bfb245c345b6c6bef07d02ba22a1fa12793ef353b2c4551e8ccd42337869e4b3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726689697212123129,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-091565,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: fb44b1ca5e1af25a31cc20be38506f2d,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97b3f8978c2594307cdfef72e8b8aebac842152f9c1893bb34700b6e0932027e,PodSandboxId:0555602e8b34d796a1a62f9ff273a78ebf4b7a7f1459439beaa95a2d3c02577b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726689697128272787,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-091565,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7f1e1feaa8e654300c84052131dd12a,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e4ffc2f2-8d75-42b0-b898-73514dd6f116 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	7e40397db0622       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   32509037cc4e4       busybox-7dff88458-xhmzx
	4f8cab8eef593       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   16c38fe68d94e       coredns-7c65d6cfc9-8zcqk
	26162985f4a28       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   12355cb306ab1       storage-provisioner
	9b5c6773eef44       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   b0c496c53b4c9       coredns-7c65d6cfc9-w97kk
	52ae20a53e17b       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      6 minutes ago       Running             kindnet-cni               0                   e5053f7183e29       kindnet-7fl5w
	c9aa80c6b1f55       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      6 minutes ago       Running             kube-proxy                0                   e7fdb7e540529       kube-proxy-4wm6h
	f40b55a253976       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     6 minutes ago       Running             kube-vip                  0                   db3221d828457       kube-vip-ha-091565
	8c435dbd5b540       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      6 minutes ago       Running             kube-scheduler            0                   01b7098c98375       kube-scheduler-ha-091565
	f141188bda325       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      6 minutes ago       Running             kube-apiserver            0                   bfb245c345b6c       kube-apiserver-ha-091565
	4358e16fe123b       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   ae412aa32e14f       etcd-ha-091565
	97b3f8978c259       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      6 minutes ago       Running             kube-controller-manager   0                   0555602e8b34d       kube-controller-manager-ha-091565
	
	
	==> coredns [4f8cab8eef59352ca937c9b8e7056a419bb426b183a841f9e8a43366ecb1b283] <==
	[INFO] 10.244.0.4:46368 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000070924s
	[INFO] 10.244.1.2:33610 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000192256s
	[INFO] 10.244.1.2:44224 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.004970814s
	[INFO] 10.244.1.2:38504 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000245166s
	[INFO] 10.244.1.2:33749 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000201604s
	[INFO] 10.244.1.2:44283 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003884102s
	[INFO] 10.244.1.2:32970 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000204769s
	[INFO] 10.244.1.2:52008 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000243831s
	[INFO] 10.244.2.2:50260 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000163913s
	[INFO] 10.244.2.2:55732 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001811166s
	[INFO] 10.244.2.2:39226 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00012772s
	[INFO] 10.244.2.2:53709 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0000925s
	[INFO] 10.244.2.2:41092 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000125187s
	[INFO] 10.244.0.4:40054 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000124612s
	[INFO] 10.244.0.4:38790 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001299276s
	[INFO] 10.244.0.4:59253 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000062856s
	[INFO] 10.244.0.4:38256 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000094015s
	[INFO] 10.244.1.2:44940 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000153669s
	[INFO] 10.244.1.2:48450 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000097947s
	[INFO] 10.244.0.4:38580 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000117553s
	[INFO] 10.244.2.2:59546 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000170402s
	[INFO] 10.244.2.2:49026 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000189642s
	[INFO] 10.244.2.2:45658 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000151371s
	[INFO] 10.244.0.4:51397 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000169114s
	[INFO] 10.244.0.4:47813 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000155527s
	
	
	==> coredns [9b5c6773eef44d848fd26ebcf13ea82ebc3c1e1bd23618a944bf5f6d6a0e7bb8] <==
	[INFO] 10.244.0.4:40496 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001977875s
	[INFO] 10.244.1.2:55891 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000166003s
	[INFO] 10.244.2.2:51576 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001523061s
	[INFO] 10.244.2.2:45932 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000147698s
	[INFO] 10.244.2.2:48639 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000087315s
	[INFO] 10.244.0.4:52361 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001834081s
	[INFO] 10.244.0.4:55907 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000221265s
	[INFO] 10.244.0.4:58409 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000117627s
	[INFO] 10.244.0.4:50242 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000115347s
	[INFO] 10.244.1.2:47046 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000136453s
	[INFO] 10.244.1.2:43799 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000196628s
	[INFO] 10.244.2.2:55965 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000123662s
	[INFO] 10.244.2.2:50433 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000098915s
	[INFO] 10.244.2.2:53589 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000068105s
	[INFO] 10.244.2.2:34234 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000074s
	[INFO] 10.244.0.4:45468 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000084304s
	[INFO] 10.244.0.4:51889 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000073683s
	[INFO] 10.244.0.4:50414 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000047051s
	[INFO] 10.244.1.2:45104 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000139109s
	[INFO] 10.244.1.2:42703 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00019857s
	[INFO] 10.244.1.2:45604 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000184516s
	[INFO] 10.244.1.2:54679 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00010429s
	[INFO] 10.244.2.2:37265 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000089491s
	[INFO] 10.244.0.4:58464 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000108633s
	[INFO] 10.244.0.4:60733 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0000682s
	
	
	==> describe nodes <==
	Name:               ha-091565
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-091565
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=85073601a832bd4bbda5d11fa91feafff6ec6b91
	                    minikube.k8s.io/name=ha-091565
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_18T20_01_44_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 18 Sep 2024 20:01:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-091565
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 18 Sep 2024 20:08:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 18 Sep 2024 20:04:47 +0000   Wed, 18 Sep 2024 20:01:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 18 Sep 2024 20:04:47 +0000   Wed, 18 Sep 2024 20:01:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 18 Sep 2024 20:04:47 +0000   Wed, 18 Sep 2024 20:01:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 18 Sep 2024 20:04:47 +0000   Wed, 18 Sep 2024 20:02:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.215
	  Hostname:    ha-091565
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a62ed2f9eda04eb9bbdd5bc2c8925018
	  System UUID:                a62ed2f9-eda0-4eb9-bbdd-5bc2c8925018
	  Boot ID:                    e0c4d56b-81dc-4d69-9fe6-35f1341e336d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-xhmzx              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m40s
	  kube-system                 coredns-7c65d6cfc9-8zcqk             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m15s
	  kube-system                 coredns-7c65d6cfc9-w97kk             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m15s
	  kube-system                 etcd-ha-091565                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m19s
	  kube-system                 kindnet-7fl5w                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m15s
	  kube-system                 kube-apiserver-ha-091565             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m19s
	  kube-system                 kube-controller-manager-ha-091565    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m20s
	  kube-system                 kube-proxy-4wm6h                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m15s
	  kube-system                 kube-scheduler-ha-091565             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m19s
	  kube-system                 kube-vip-ha-091565                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m21s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m14s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m12s  kube-proxy       
	  Normal  Starting                 6m19s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m19s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m19s  kubelet          Node ha-091565 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m19s  kubelet          Node ha-091565 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m19s  kubelet          Node ha-091565 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m16s  node-controller  Node ha-091565 event: Registered Node ha-091565 in Controller
	  Normal  NodeReady                6m1s   kubelet          Node ha-091565 status is now: NodeReady
	  Normal  RegisteredNode           5m17s  node-controller  Node ha-091565 event: Registered Node ha-091565 in Controller
	  Normal  RegisteredNode           4m     node-controller  Node ha-091565 event: Registered Node ha-091565 in Controller
	
	
	Name:               ha-091565-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-091565-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=85073601a832bd4bbda5d11fa91feafff6ec6b91
	                    minikube.k8s.io/name=ha-091565
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_18T20_02_40_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 18 Sep 2024 20:02:37 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-091565-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 18 Sep 2024 20:05:41 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 18 Sep 2024 20:04:41 +0000   Wed, 18 Sep 2024 20:06:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 18 Sep 2024 20:04:41 +0000   Wed, 18 Sep 2024 20:06:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 18 Sep 2024 20:04:41 +0000   Wed, 18 Sep 2024 20:06:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 18 Sep 2024 20:04:41 +0000   Wed, 18 Sep 2024 20:06:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.92
	  Hostname:    ha-091565-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 725aeac5e21d42d69ce571d302d9f7bc
	  System UUID:                725aeac5-e21d-42d6-9ce5-71d302d9f7bc
	  Boot ID:                    e1d66727-ad6e-4cce-aca1-07f5fd60d891
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-45phf                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m40s
	  kube-system                 etcd-ha-091565-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m23s
	  kube-system                 kindnet-bzsqr                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m25s
	  kube-system                 kube-apiserver-ha-091565-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m24s
	  kube-system                 kube-controller-manager-ha-091565-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m24s
	  kube-system                 kube-proxy-bxblp                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m25s
	  kube-system                 kube-scheduler-ha-091565-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m24s
	  kube-system                 kube-vip-ha-091565-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m20s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m20s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m25s (x8 over 5m25s)  kubelet          Node ha-091565-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m25s (x8 over 5m25s)  kubelet          Node ha-091565-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m25s (x7 over 5m25s)  kubelet          Node ha-091565-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m25s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m21s                  node-controller  Node ha-091565-m02 event: Registered Node ha-091565-m02 in Controller
	  Normal  RegisteredNode           5m17s                  node-controller  Node ha-091565-m02 event: Registered Node ha-091565-m02 in Controller
	  Normal  RegisteredNode           4m                     node-controller  Node ha-091565-m02 event: Registered Node ha-091565-m02 in Controller
	  Normal  NodeNotReady             101s                   node-controller  Node ha-091565-m02 status is now: NodeNotReady
	
	
	Name:               ha-091565-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-091565-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=85073601a832bd4bbda5d11fa91feafff6ec6b91
	                    minikube.k8s.io/name=ha-091565
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_18T20_03_57_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 18 Sep 2024 20:03:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-091565-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 18 Sep 2024 20:08:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 18 Sep 2024 20:04:55 +0000   Wed, 18 Sep 2024 20:03:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 18 Sep 2024 20:04:55 +0000   Wed, 18 Sep 2024 20:03:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 18 Sep 2024 20:04:55 +0000   Wed, 18 Sep 2024 20:03:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 18 Sep 2024 20:04:55 +0000   Wed, 18 Sep 2024 20:04:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.53
	  Hostname:    ha-091565-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d7cb71d27a4f4e8b92a5e72c1afd8865
	  System UUID:                d7cb71d2-7a4f-4e8b-92a5-e72c1afd8865
	  Boot ID:                    df33972c-453a-48d6-99c0-49951abc69d7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-jjr2n                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m40s
	  kube-system                 etcd-ha-091565-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m7s
	  kube-system                 kindnet-5rh2w                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m8s
	  kube-system                 kube-apiserver-ha-091565-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m7s
	  kube-system                 kube-controller-manager-ha-091565-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m7s
	  kube-system                 kube-proxy-4p8rj                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m8s
	  kube-system                 kube-scheduler-ha-091565-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m7s
	  kube-system                 kube-vip-ha-091565-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 4m3s                 kube-proxy       
	  Normal  NodeAllocatableEnforced  4m9s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m8s (x8 over 4m9s)  kubelet          Node ha-091565-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m8s (x8 over 4m9s)  kubelet          Node ha-091565-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m8s (x7 over 4m9s)  kubelet          Node ha-091565-m03 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m7s                 node-controller  Node ha-091565-m03 event: Registered Node ha-091565-m03 in Controller
	  Normal  RegisteredNode           4m6s                 node-controller  Node ha-091565-m03 event: Registered Node ha-091565-m03 in Controller
	  Normal  RegisteredNode           4m                   node-controller  Node ha-091565-m03 event: Registered Node ha-091565-m03 in Controller
	
	
	Name:               ha-091565-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-091565-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=85073601a832bd4bbda5d11fa91feafff6ec6b91
	                    minikube.k8s.io/name=ha-091565
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_18T20_05_02_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 18 Sep 2024 20:05:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-091565-m04
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 18 Sep 2024 20:07:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 18 Sep 2024 20:05:31 +0000   Wed, 18 Sep 2024 20:05:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 18 Sep 2024 20:05:31 +0000   Wed, 18 Sep 2024 20:05:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 18 Sep 2024 20:05:31 +0000   Wed, 18 Sep 2024 20:05:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 18 Sep 2024 20:05:31 +0000   Wed, 18 Sep 2024 20:05:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.9
	  Hostname:    ha-091565-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 cb0096492d0c441d8778e11eb51e77d3
	  System UUID:                cb009649-2d0c-441d-8778-e11eb51e77d3
	  Boot ID:                    c3da5972-b725-4116-9206-7ac2fefa29cb
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-4xtjm       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m1s
	  kube-system                 kube-proxy-8qkpk    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 2m56s                kube-proxy       
	  Normal  NodeAllocatableEnforced  3m2s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m1s                 node-controller  Node ha-091565-m04 event: Registered Node ha-091565-m04 in Controller
	  Normal  NodeHasSufficientMemory  3m1s (x2 over 3m2s)  kubelet          Node ha-091565-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m1s (x2 over 3m2s)  kubelet          Node ha-091565-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m1s (x2 over 3m2s)  kubelet          Node ha-091565-m04 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3m                   node-controller  Node ha-091565-m04 event: Registered Node ha-091565-m04 in Controller
	  Normal  RegisteredNode           2m57s                node-controller  Node ha-091565-m04 event: Registered Node ha-091565-m04 in Controller
	  Normal  NodeReady                2m41s                kubelet          Node ha-091565-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Sep18 20:01] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051316] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039151] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.792349] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.893273] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.904226] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +13.896131] systemd-fstab-generator[585]: Ignoring "noauto" option for root device
	[  +0.067482] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.062052] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.180384] systemd-fstab-generator[611]: Ignoring "noauto" option for root device
	[  +0.116835] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.268512] systemd-fstab-generator[653]: Ignoring "noauto" option for root device
	[  +3.829963] systemd-fstab-generator[747]: Ignoring "noauto" option for root device
	[  +4.147936] systemd-fstab-generator[884]: Ignoring "noauto" option for root device
	[  +0.060572] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.397640] kauditd_printk_skb: 79 callbacks suppressed
	[  +0.774401] systemd-fstab-generator[1308]: Ignoring "noauto" option for root device
	[  +5.898362] kauditd_printk_skb: 15 callbacks suppressed
	[Sep18 20:02] kauditd_printk_skb: 38 callbacks suppressed
	[ +41.961999] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [4358e16fe123bb31210776fce1969072fb954b1f7cc3b51c459cd155992c1be5] <==
	{"level":"warn","ts":"2024-09-18T20:08:02.318724Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce9e8f286885b37e","from":"ce9e8f286885b37e","remote-peer-id":"7208e3715ec3d11b","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-18T20:08:02.358625Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce9e8f286885b37e","from":"ce9e8f286885b37e","remote-peer-id":"7208e3715ec3d11b","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-18T20:08:02.368819Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce9e8f286885b37e","from":"ce9e8f286885b37e","remote-peer-id":"7208e3715ec3d11b","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-18T20:08:02.373217Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce9e8f286885b37e","from":"ce9e8f286885b37e","remote-peer-id":"7208e3715ec3d11b","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-18T20:08:02.388232Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce9e8f286885b37e","from":"ce9e8f286885b37e","remote-peer-id":"7208e3715ec3d11b","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-18T20:08:02.397234Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce9e8f286885b37e","from":"ce9e8f286885b37e","remote-peer-id":"7208e3715ec3d11b","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-18T20:08:02.404477Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce9e8f286885b37e","from":"ce9e8f286885b37e","remote-peer-id":"7208e3715ec3d11b","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-18T20:08:02.409081Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce9e8f286885b37e","from":"ce9e8f286885b37e","remote-peer-id":"7208e3715ec3d11b","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-18T20:08:02.412124Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce9e8f286885b37e","from":"ce9e8f286885b37e","remote-peer-id":"7208e3715ec3d11b","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-18T20:08:02.412984Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce9e8f286885b37e","from":"ce9e8f286885b37e","remote-peer-id":"7208e3715ec3d11b","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-18T20:08:02.420181Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce9e8f286885b37e","from":"ce9e8f286885b37e","remote-peer-id":"7208e3715ec3d11b","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-18T20:08:02.422143Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce9e8f286885b37e","from":"ce9e8f286885b37e","remote-peer-id":"7208e3715ec3d11b","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-18T20:08:02.426631Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce9e8f286885b37e","from":"ce9e8f286885b37e","remote-peer-id":"7208e3715ec3d11b","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-18T20:08:02.433911Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce9e8f286885b37e","from":"ce9e8f286885b37e","remote-peer-id":"7208e3715ec3d11b","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-18T20:08:02.438146Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce9e8f286885b37e","from":"ce9e8f286885b37e","remote-peer-id":"7208e3715ec3d11b","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-18T20:08:02.441421Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce9e8f286885b37e","from":"ce9e8f286885b37e","remote-peer-id":"7208e3715ec3d11b","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-18T20:08:02.449831Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce9e8f286885b37e","from":"ce9e8f286885b37e","remote-peer-id":"7208e3715ec3d11b","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-18T20:08:02.456915Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce9e8f286885b37e","from":"ce9e8f286885b37e","remote-peer-id":"7208e3715ec3d11b","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-18T20:08:02.464110Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce9e8f286885b37e","from":"ce9e8f286885b37e","remote-peer-id":"7208e3715ec3d11b","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-18T20:08:02.468419Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce9e8f286885b37e","from":"ce9e8f286885b37e","remote-peer-id":"7208e3715ec3d11b","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-18T20:08:02.472203Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce9e8f286885b37e","from":"ce9e8f286885b37e","remote-peer-id":"7208e3715ec3d11b","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-18T20:08:02.476093Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce9e8f286885b37e","from":"ce9e8f286885b37e","remote-peer-id":"7208e3715ec3d11b","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-18T20:08:02.483579Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce9e8f286885b37e","from":"ce9e8f286885b37e","remote-peer-id":"7208e3715ec3d11b","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-18T20:08:02.490363Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce9e8f286885b37e","from":"ce9e8f286885b37e","remote-peer-id":"7208e3715ec3d11b","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-18T20:08:02.513030Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce9e8f286885b37e","from":"ce9e8f286885b37e","remote-peer-id":"7208e3715ec3d11b","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 20:08:02 up 7 min,  0 users,  load average: 0.34, 0.25, 0.12
	Linux ha-091565 5.10.207 #1 SMP Mon Sep 16 15:00:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [52ae20a53e17beece735496b8e65537bbebfef5da53e8a2e7a74820fee56cd63] <==
	I0918 20:07:30.557919       1 main.go:322] Node ha-091565-m04 has CIDR [10.244.3.0/24] 
	I0918 20:07:40.563803       1 main.go:295] Handling node with IPs: map[192.168.39.53:{}]
	I0918 20:07:40.564003       1 main.go:322] Node ha-091565-m03 has CIDR [10.244.2.0/24] 
	I0918 20:07:40.564181       1 main.go:295] Handling node with IPs: map[192.168.39.9:{}]
	I0918 20:07:40.564217       1 main.go:322] Node ha-091565-m04 has CIDR [10.244.3.0/24] 
	I0918 20:07:40.564314       1 main.go:295] Handling node with IPs: map[192.168.39.215:{}]
	I0918 20:07:40.564335       1 main.go:299] handling current node
	I0918 20:07:40.564375       1 main.go:295] Handling node with IPs: map[192.168.39.92:{}]
	I0918 20:07:40.564407       1 main.go:322] Node ha-091565-m02 has CIDR [10.244.1.0/24] 
	I0918 20:07:50.558115       1 main.go:295] Handling node with IPs: map[192.168.39.215:{}]
	I0918 20:07:50.558147       1 main.go:299] handling current node
	I0918 20:07:50.558160       1 main.go:295] Handling node with IPs: map[192.168.39.92:{}]
	I0918 20:07:50.558164       1 main.go:322] Node ha-091565-m02 has CIDR [10.244.1.0/24] 
	I0918 20:07:50.558360       1 main.go:295] Handling node with IPs: map[192.168.39.53:{}]
	I0918 20:07:50.558384       1 main.go:322] Node ha-091565-m03 has CIDR [10.244.2.0/24] 
	I0918 20:07:50.558429       1 main.go:295] Handling node with IPs: map[192.168.39.9:{}]
	I0918 20:07:50.558435       1 main.go:322] Node ha-091565-m04 has CIDR [10.244.3.0/24] 
	I0918 20:08:00.565020       1 main.go:295] Handling node with IPs: map[192.168.39.215:{}]
	I0918 20:08:00.565144       1 main.go:299] handling current node
	I0918 20:08:00.565175       1 main.go:295] Handling node with IPs: map[192.168.39.92:{}]
	I0918 20:08:00.565192       1 main.go:322] Node ha-091565-m02 has CIDR [10.244.1.0/24] 
	I0918 20:08:00.565349       1 main.go:295] Handling node with IPs: map[192.168.39.53:{}]
	I0918 20:08:00.565373       1 main.go:322] Node ha-091565-m03 has CIDR [10.244.2.0/24] 
	I0918 20:08:00.565427       1 main.go:295] Handling node with IPs: map[192.168.39.9:{}]
	I0918 20:08:00.565444       1 main.go:322] Node ha-091565-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [f141188bda32520e26d392de6e6e49a863d49aacb461b7f3d5649d68557e96d3] <==
	I0918 20:01:41.805351       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0918 20:01:41.812255       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.215]
	I0918 20:01:41.813303       1 controller.go:615] quota admission added evaluator for: endpoints
	I0918 20:01:41.817812       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0918 20:01:41.927112       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0918 20:01:43.444505       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0918 20:01:43.474356       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0918 20:01:43.499285       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0918 20:01:47.177380       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0918 20:01:47.677666       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0918 20:04:28.622821       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:38922: use of closed network connection
	E0918 20:04:28.826011       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:38948: use of closed network connection
	E0918 20:04:29.020534       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:38954: use of closed network connection
	E0918 20:04:29.215686       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:38960: use of closed network connection
	E0918 20:04:29.393565       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:38968: use of closed network connection
	E0918 20:04:29.590605       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:38998: use of closed network connection
	E0918 20:04:29.776838       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:39018: use of closed network connection
	E0918 20:04:29.951140       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:39034: use of closed network connection
	E0918 20:04:30.119473       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:39042: use of closed network connection
	E0918 20:04:30.426734       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:39086: use of closed network connection
	E0918 20:04:30.592391       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:39108: use of closed network connection
	E0918 20:04:30.769818       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:39130: use of closed network connection
	E0918 20:04:30.943725       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:39150: use of closed network connection
	E0918 20:04:31.126781       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:39162: use of closed network connection
	E0918 20:04:31.297785       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:39182: use of closed network connection
	
	
	==> kube-controller-manager [97b3f8978c2594307cdfef72e8b8aebac842152f9c1893bb34700b6e0932027e] <==
	I0918 20:05:01.138017       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-091565-m04" podCIDRs=["10.244.3.0/24"]
	I0918 20:05:01.138080       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-091565-m04"
	I0918 20:05:01.138115       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-091565-m04"
	I0918 20:05:01.151572       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-091565-m04"
	I0918 20:05:01.738364       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-091565-m04"
	I0918 20:05:01.838841       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-091565-m04"
	I0918 20:05:01.852257       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-091565-m04"
	I0918 20:05:02.344310       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-091565-m04"
	I0918 20:05:03.003621       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-091565-m04"
	I0918 20:05:03.051402       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-091565-m04"
	I0918 20:05:05.442431       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-091565-m04"
	I0918 20:05:05.579185       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-091565-m04"
	I0918 20:05:11.327273       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-091565-m04"
	I0918 20:05:21.548407       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-091565-m04"
	I0918 20:05:21.548588       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-091565-m04"
	I0918 20:05:21.567996       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-091565-m04"
	I0918 20:05:21.857696       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-091565-m04"
	I0918 20:05:31.710527       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-091565-m04"
	I0918 20:06:21.883753       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-091565-m04"
	I0918 20:06:21.884037       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-091565-m02"
	I0918 20:06:21.905558       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-091565-m02"
	I0918 20:06:21.987284       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="62.125575ms"
	I0918 20:06:21.987469       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="76.464µs"
	I0918 20:06:23.082191       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-091565-m02"
	I0918 20:06:27.127364       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-091565-m02"
	
	
	==> kube-proxy [c9aa80c6b1f558ba9ef5b7e70df3690995e271e9296b0cc3e71ade739843f53a] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0918 20:01:49.308011       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0918 20:01:49.335379       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.215"]
	E0918 20:01:49.335598       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0918 20:01:49.418096       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0918 20:01:49.418149       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0918 20:01:49.418183       1 server_linux.go:169] "Using iptables Proxier"
	I0918 20:01:49.424497       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0918 20:01:49.425362       1 server.go:483] "Version info" version="v1.31.1"
	I0918 20:01:49.425380       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0918 20:01:49.427370       1 config.go:199] "Starting service config controller"
	I0918 20:01:49.427801       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0918 20:01:49.427983       1 config.go:105] "Starting endpoint slice config controller"
	I0918 20:01:49.427991       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0918 20:01:49.431014       1 config.go:328] "Starting node config controller"
	I0918 20:01:49.431036       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0918 20:01:49.528624       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0918 20:01:49.528643       1 shared_informer.go:320] Caches are synced for service config
	I0918 20:01:49.531423       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [8c435dbd5b540cdb87d1f79fc2aadca19db9c46aff42e67d78c5d9c8eee1b6de] <==
	E0918 20:03:54.130068       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod d1fea214-55d3-4291-bc7b-cfa3d01a8ead(kube-system/kube-proxy-j766p) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-j766p"
	E0918 20:03:54.131984       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-j766p\": pod kube-proxy-j766p is already assigned to node \"ha-091565-m03\"" pod="kube-system/kube-proxy-j766p"
	I0918 20:03:54.132134       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-j766p" node="ha-091565-m03"
	E0918 20:03:54.204764       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-zdpnz\": pod kindnet-zdpnz is already assigned to node \"ha-091565-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-zdpnz" node="ha-091565-m03"
	E0918 20:03:54.204930       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod bf784ea9-bf66-4fa3-bb04-e893d228713d(kube-system/kindnet-zdpnz) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-zdpnz"
	E0918 20:03:54.205020       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-zdpnz\": pod kindnet-zdpnz is already assigned to node \"ha-091565-m03\"" pod="kube-system/kindnet-zdpnz"
	I0918 20:03:54.205131       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-zdpnz" node="ha-091565-m03"
	E0918 20:04:22.999076       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-45phf\": pod busybox-7dff88458-45phf is already assigned to node \"ha-091565-m02\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-45phf" node="ha-091565-m02"
	E0918 20:04:23.000005       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 8c26f72c-f562-47cb-bb92-9cc60a901f36(default/busybox-7dff88458-45phf) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-45phf"
	E0918 20:04:23.000126       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-45phf\": pod busybox-7dff88458-45phf is already assigned to node \"ha-091565-m02\"" pod="default/busybox-7dff88458-45phf"
	I0918 20:04:23.000204       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-45phf" node="ha-091565-m02"
	E0918 20:05:01.199076       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-4xtjm\": pod kindnet-4xtjm is already assigned to node \"ha-091565-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-4xtjm" node="ha-091565-m04"
	E0918 20:05:01.199468       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 74b52b58-c5d1-4de5-8a71-97a1e9263ee6(kube-system/kindnet-4xtjm) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-4xtjm"
	E0918 20:05:01.199594       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-4xtjm\": pod kindnet-4xtjm is already assigned to node \"ha-091565-m04\"" pod="kube-system/kindnet-4xtjm"
	I0918 20:05:01.199786       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-4xtjm" node="ha-091565-m04"
	E0918 20:05:01.220390       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-8qkpk\": pod kube-proxy-8qkpk is already assigned to node \"ha-091565-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-8qkpk" node="ha-091565-m04"
	E0918 20:05:01.223994       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 819d89b8-2f9d-4a41-ad66-7bfa5e99e840(kube-system/kube-proxy-8qkpk) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-8qkpk"
	E0918 20:05:01.224205       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-8qkpk\": pod kube-proxy-8qkpk is already assigned to node \"ha-091565-m04\"" pod="kube-system/kube-proxy-8qkpk"
	I0918 20:05:01.224300       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-8qkpk" node="ha-091565-m04"
	E0918 20:05:01.248133       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-zmf96\": pod kindnet-zmf96 is already assigned to node \"ha-091565-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-zmf96" node="ha-091565-m04"
	E0918 20:05:01.248459       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-zmf96\": pod kindnet-zmf96 is already assigned to node \"ha-091565-m04\"" pod="kube-system/kindnet-zmf96"
	I0918 20:05:01.248547       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-zmf96" node="ha-091565-m04"
	E0918 20:05:01.248362       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-t72tx\": pod kube-proxy-t72tx is already assigned to node \"ha-091565-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-t72tx" node="ha-091565-m04"
	E0918 20:05:01.249494       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-t72tx\": pod kube-proxy-t72tx is already assigned to node \"ha-091565-m04\"" pod="kube-system/kube-proxy-t72tx"
	I0918 20:05:01.249666       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-t72tx" node="ha-091565-m04"
	
	
	==> kubelet <==
	Sep 18 20:06:43 ha-091565 kubelet[1316]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 18 20:06:43 ha-091565 kubelet[1316]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 18 20:06:43 ha-091565 kubelet[1316]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 18 20:06:43 ha-091565 kubelet[1316]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 18 20:06:43 ha-091565 kubelet[1316]: E0918 20:06:43.476171    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726690003475792506,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 20:06:43 ha-091565 kubelet[1316]: E0918 20:06:43.476227    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726690003475792506,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 20:06:53 ha-091565 kubelet[1316]: E0918 20:06:53.477743    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726690013477221732,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 20:06:53 ha-091565 kubelet[1316]: E0918 20:06:53.477786    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726690013477221732,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 20:07:03 ha-091565 kubelet[1316]: E0918 20:07:03.479043    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726690023478737206,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 20:07:03 ha-091565 kubelet[1316]: E0918 20:07:03.479081    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726690023478737206,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 20:07:13 ha-091565 kubelet[1316]: E0918 20:07:13.481181    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726690033480916901,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 20:07:13 ha-091565 kubelet[1316]: E0918 20:07:13.481262    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726690033480916901,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 20:07:23 ha-091565 kubelet[1316]: E0918 20:07:23.483563    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726690043483012211,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 20:07:23 ha-091565 kubelet[1316]: E0918 20:07:23.483953    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726690043483012211,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 20:07:33 ha-091565 kubelet[1316]: E0918 20:07:33.488007    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726690053486820309,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 20:07:33 ha-091565 kubelet[1316]: E0918 20:07:33.488449    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726690053486820309,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 20:07:43 ha-091565 kubelet[1316]: E0918 20:07:43.398570    1316 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 18 20:07:43 ha-091565 kubelet[1316]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 18 20:07:43 ha-091565 kubelet[1316]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 18 20:07:43 ha-091565 kubelet[1316]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 18 20:07:43 ha-091565 kubelet[1316]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 18 20:07:43 ha-091565 kubelet[1316]: E0918 20:07:43.490989    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726690063490690150,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 20:07:43 ha-091565 kubelet[1316]: E0918 20:07:43.491031    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726690063490690150,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 20:07:53 ha-091565 kubelet[1316]: E0918 20:07:53.492968    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726690073492462129,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 20:07:53 ha-091565 kubelet[1316]: E0918 20:07:53.493287    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726690073492462129,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-091565 -n ha-091565
helpers_test.go:261: (dbg) Run:  kubectl --context ha-091565 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (141.44s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (5.65s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.415347658s)
ha_test.go:413: expected profile "ha-091565" in json of 'profile list' to have "Degraded" status but have "Unknown" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-091565\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"ha-091565\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"kvm2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":
1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-091565\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.39.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.39.215\",\"Port\":8443,\"Kube
rnetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.39.92\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.39.53\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.39.9\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"
logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/home/jenkins:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\"
:\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-091565 -n ha-091565
helpers_test.go:244: <<< TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-091565 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-091565 logs -n 25: (1.363288306s)
helpers_test.go:252: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-091565 cp ha-091565-m03:/home/docker/cp-test.txt                              | ha-091565 | jenkins | v1.34.0 | 18 Sep 24 20:05 UTC | 18 Sep 24 20:05 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3131576438/001/cp-test_ha-091565-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-091565 ssh -n                                                                 | ha-091565 | jenkins | v1.34.0 | 18 Sep 24 20:05 UTC | 18 Sep 24 20:05 UTC |
	|         | ha-091565-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-091565 cp ha-091565-m03:/home/docker/cp-test.txt                              | ha-091565 | jenkins | v1.34.0 | 18 Sep 24 20:05 UTC | 18 Sep 24 20:05 UTC |
	|         | ha-091565:/home/docker/cp-test_ha-091565-m03_ha-091565.txt                       |           |         |         |                     |                     |
	| ssh     | ha-091565 ssh -n                                                                 | ha-091565 | jenkins | v1.34.0 | 18 Sep 24 20:05 UTC | 18 Sep 24 20:05 UTC |
	|         | ha-091565-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-091565 ssh -n ha-091565 sudo cat                                              | ha-091565 | jenkins | v1.34.0 | 18 Sep 24 20:05 UTC | 18 Sep 24 20:05 UTC |
	|         | /home/docker/cp-test_ha-091565-m03_ha-091565.txt                                 |           |         |         |                     |                     |
	| cp      | ha-091565 cp ha-091565-m03:/home/docker/cp-test.txt                              | ha-091565 | jenkins | v1.34.0 | 18 Sep 24 20:05 UTC | 18 Sep 24 20:05 UTC |
	|         | ha-091565-m02:/home/docker/cp-test_ha-091565-m03_ha-091565-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-091565 ssh -n                                                                 | ha-091565 | jenkins | v1.34.0 | 18 Sep 24 20:05 UTC | 18 Sep 24 20:05 UTC |
	|         | ha-091565-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-091565 ssh -n ha-091565-m02 sudo cat                                          | ha-091565 | jenkins | v1.34.0 | 18 Sep 24 20:05 UTC | 18 Sep 24 20:05 UTC |
	|         | /home/docker/cp-test_ha-091565-m03_ha-091565-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-091565 cp ha-091565-m03:/home/docker/cp-test.txt                              | ha-091565 | jenkins | v1.34.0 | 18 Sep 24 20:05 UTC | 18 Sep 24 20:05 UTC |
	|         | ha-091565-m04:/home/docker/cp-test_ha-091565-m03_ha-091565-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-091565 ssh -n                                                                 | ha-091565 | jenkins | v1.34.0 | 18 Sep 24 20:05 UTC | 18 Sep 24 20:05 UTC |
	|         | ha-091565-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-091565 ssh -n ha-091565-m04 sudo cat                                          | ha-091565 | jenkins | v1.34.0 | 18 Sep 24 20:05 UTC | 18 Sep 24 20:05 UTC |
	|         | /home/docker/cp-test_ha-091565-m03_ha-091565-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-091565 cp testdata/cp-test.txt                                                | ha-091565 | jenkins | v1.34.0 | 18 Sep 24 20:05 UTC | 18 Sep 24 20:05 UTC |
	|         | ha-091565-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-091565 ssh -n                                                                 | ha-091565 | jenkins | v1.34.0 | 18 Sep 24 20:05 UTC | 18 Sep 24 20:05 UTC |
	|         | ha-091565-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-091565 cp ha-091565-m04:/home/docker/cp-test.txt                              | ha-091565 | jenkins | v1.34.0 | 18 Sep 24 20:05 UTC | 18 Sep 24 20:05 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3131576438/001/cp-test_ha-091565-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-091565 ssh -n                                                                 | ha-091565 | jenkins | v1.34.0 | 18 Sep 24 20:05 UTC | 18 Sep 24 20:05 UTC |
	|         | ha-091565-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-091565 cp ha-091565-m04:/home/docker/cp-test.txt                              | ha-091565 | jenkins | v1.34.0 | 18 Sep 24 20:05 UTC | 18 Sep 24 20:05 UTC |
	|         | ha-091565:/home/docker/cp-test_ha-091565-m04_ha-091565.txt                       |           |         |         |                     |                     |
	| ssh     | ha-091565 ssh -n                                                                 | ha-091565 | jenkins | v1.34.0 | 18 Sep 24 20:05 UTC | 18 Sep 24 20:05 UTC |
	|         | ha-091565-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-091565 ssh -n ha-091565 sudo cat                                              | ha-091565 | jenkins | v1.34.0 | 18 Sep 24 20:05 UTC | 18 Sep 24 20:05 UTC |
	|         | /home/docker/cp-test_ha-091565-m04_ha-091565.txt                                 |           |         |         |                     |                     |
	| cp      | ha-091565 cp ha-091565-m04:/home/docker/cp-test.txt                              | ha-091565 | jenkins | v1.34.0 | 18 Sep 24 20:05 UTC | 18 Sep 24 20:05 UTC |
	|         | ha-091565-m02:/home/docker/cp-test_ha-091565-m04_ha-091565-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-091565 ssh -n                                                                 | ha-091565 | jenkins | v1.34.0 | 18 Sep 24 20:05 UTC | 18 Sep 24 20:05 UTC |
	|         | ha-091565-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-091565 ssh -n ha-091565-m02 sudo cat                                          | ha-091565 | jenkins | v1.34.0 | 18 Sep 24 20:05 UTC | 18 Sep 24 20:05 UTC |
	|         | /home/docker/cp-test_ha-091565-m04_ha-091565-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-091565 cp ha-091565-m04:/home/docker/cp-test.txt                              | ha-091565 | jenkins | v1.34.0 | 18 Sep 24 20:05 UTC | 18 Sep 24 20:05 UTC |
	|         | ha-091565-m03:/home/docker/cp-test_ha-091565-m04_ha-091565-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-091565 ssh -n                                                                 | ha-091565 | jenkins | v1.34.0 | 18 Sep 24 20:05 UTC | 18 Sep 24 20:05 UTC |
	|         | ha-091565-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-091565 ssh -n ha-091565-m03 sudo cat                                          | ha-091565 | jenkins | v1.34.0 | 18 Sep 24 20:05 UTC | 18 Sep 24 20:05 UTC |
	|         | /home/docker/cp-test_ha-091565-m04_ha-091565-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-091565 node stop m02 -v=7                                                     | ha-091565 | jenkins | v1.34.0 | 18 Sep 24 20:05 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/18 20:00:57
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0918 20:00:57.640467   26827 out.go:345] Setting OutFile to fd 1 ...
	I0918 20:00:57.640561   26827 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 20:00:57.640569   26827 out.go:358] Setting ErrFile to fd 2...
	I0918 20:00:57.640573   26827 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 20:00:57.640761   26827 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19667-7671/.minikube/bin
	I0918 20:00:57.641318   26827 out.go:352] Setting JSON to false
	I0918 20:00:57.642141   26827 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2602,"bootTime":1726687056,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0918 20:00:57.642239   26827 start.go:139] virtualization: kvm guest
	I0918 20:00:57.644428   26827 out.go:177] * [ha-091565] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0918 20:00:57.645728   26827 notify.go:220] Checking for updates...
	I0918 20:00:57.645758   26827 out.go:177]   - MINIKUBE_LOCATION=19667
	I0918 20:00:57.647179   26827 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 20:00:57.648500   26827 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19667-7671/kubeconfig
	I0918 20:00:57.649839   26827 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19667-7671/.minikube
	I0918 20:00:57.651097   26827 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0918 20:00:57.652502   26827 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0918 20:00:57.653976   26827 driver.go:394] Setting default libvirt URI to qemu:///system
	I0918 20:00:57.687513   26827 out.go:177] * Using the kvm2 driver based on user configuration
	I0918 20:00:57.688577   26827 start.go:297] selected driver: kvm2
	I0918 20:00:57.688601   26827 start.go:901] validating driver "kvm2" against <nil>
	I0918 20:00:57.688623   26827 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 20:00:57.689634   26827 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 20:00:57.689741   26827 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19667-7671/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0918 20:00:57.704974   26827 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0918 20:00:57.705031   26827 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0918 20:00:57.705320   26827 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0918 20:00:57.705370   26827 cni.go:84] Creating CNI manager for ""
	I0918 20:00:57.705425   26827 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0918 20:00:57.705440   26827 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0918 20:00:57.705520   26827 start.go:340] cluster config:
	{Name:ha-091565 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-091565 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRIS
ocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0
GPUs: AutoPauseInterval:1m0s}
	I0918 20:00:57.705651   26827 iso.go:125] acquiring lock: {Name:mkbcf4d84f3091567a39802cd5e773cb10b8c545 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 20:00:57.707426   26827 out.go:177] * Starting "ha-091565" primary control-plane node in "ha-091565" cluster
	I0918 20:00:57.708558   26827 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0918 20:00:57.708602   26827 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0918 20:00:57.708622   26827 cache.go:56] Caching tarball of preloaded images
	I0918 20:00:57.708700   26827 preload.go:172] Found /home/jenkins/minikube-integration/19667-7671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0918 20:00:57.708710   26827 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0918 20:00:57.708999   26827 profile.go:143] Saving config to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/config.json ...
	I0918 20:00:57.709019   26827 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/config.json: {Name:mk6751feb5fedaf9ba97f9b527df45d961607c7d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 20:00:57.709176   26827 start.go:360] acquireMachinesLock for ha-091565: {Name:mk1b0d68a0b7d9d4bc8204d5d81cb9eb77222526 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 20:00:57.709206   26827 start.go:364] duration metric: took 18.41µs to acquireMachinesLock for "ha-091565"
	I0918 20:00:57.709221   26827 start.go:93] Provisioning new machine with config: &{Name:ha-091565 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.1 ClusterName:ha-091565 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0918 20:00:57.709299   26827 start.go:125] createHost starting for "" (driver="kvm2")
	I0918 20:00:57.710894   26827 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0918 20:00:57.711003   26827 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 20:00:57.711035   26827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 20:00:57.725443   26827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33447
	I0918 20:00:57.725903   26827 main.go:141] libmachine: () Calling .GetVersion
	I0918 20:00:57.726425   26827 main.go:141] libmachine: Using API Version  1
	I0918 20:00:57.726445   26827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 20:00:57.726722   26827 main.go:141] libmachine: () Calling .GetMachineName
	I0918 20:00:57.726883   26827 main.go:141] libmachine: (ha-091565) Calling .GetMachineName
	I0918 20:00:57.727025   26827 main.go:141] libmachine: (ha-091565) Calling .DriverName
	I0918 20:00:57.727181   26827 start.go:159] libmachine.API.Create for "ha-091565" (driver="kvm2")
	I0918 20:00:57.727222   26827 client.go:168] LocalClient.Create starting
	I0918 20:00:57.727261   26827 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem
	I0918 20:00:57.727293   26827 main.go:141] libmachine: Decoding PEM data...
	I0918 20:00:57.727312   26827 main.go:141] libmachine: Parsing certificate...
	I0918 20:00:57.727377   26827 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem
	I0918 20:00:57.727407   26827 main.go:141] libmachine: Decoding PEM data...
	I0918 20:00:57.727427   26827 main.go:141] libmachine: Parsing certificate...
	I0918 20:00:57.727451   26827 main.go:141] libmachine: Running pre-create checks...
	I0918 20:00:57.727462   26827 main.go:141] libmachine: (ha-091565) Calling .PreCreateCheck
	I0918 20:00:57.727741   26827 main.go:141] libmachine: (ha-091565) Calling .GetConfigRaw
	I0918 20:00:57.728143   26827 main.go:141] libmachine: Creating machine...
	I0918 20:00:57.728157   26827 main.go:141] libmachine: (ha-091565) Calling .Create
	I0918 20:00:57.728286   26827 main.go:141] libmachine: (ha-091565) Creating KVM machine...
	I0918 20:00:57.729703   26827 main.go:141] libmachine: (ha-091565) DBG | found existing default KVM network
	I0918 20:00:57.730516   26827 main.go:141] libmachine: (ha-091565) DBG | I0918 20:00:57.730387   26850 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002211e0}
	I0918 20:00:57.730578   26827 main.go:141] libmachine: (ha-091565) DBG | created network xml: 
	I0918 20:00:57.730605   26827 main.go:141] libmachine: (ha-091565) DBG | <network>
	I0918 20:00:57.730618   26827 main.go:141] libmachine: (ha-091565) DBG |   <name>mk-ha-091565</name>
	I0918 20:00:57.730631   26827 main.go:141] libmachine: (ha-091565) DBG |   <dns enable='no'/>
	I0918 20:00:57.730660   26827 main.go:141] libmachine: (ha-091565) DBG |   
	I0918 20:00:57.730680   26827 main.go:141] libmachine: (ha-091565) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0918 20:00:57.730693   26827 main.go:141] libmachine: (ha-091565) DBG |     <dhcp>
	I0918 20:00:57.730703   26827 main.go:141] libmachine: (ha-091565) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0918 20:00:57.730715   26827 main.go:141] libmachine: (ha-091565) DBG |     </dhcp>
	I0918 20:00:57.730736   26827 main.go:141] libmachine: (ha-091565) DBG |   </ip>
	I0918 20:00:57.730748   26827 main.go:141] libmachine: (ha-091565) DBG |   
	I0918 20:00:57.730757   26827 main.go:141] libmachine: (ha-091565) DBG | </network>
	I0918 20:00:57.730768   26827 main.go:141] libmachine: (ha-091565) DBG | 
	I0918 20:00:57.735618   26827 main.go:141] libmachine: (ha-091565) DBG | trying to create private KVM network mk-ha-091565 192.168.39.0/24...
	I0918 20:00:57.800998   26827 main.go:141] libmachine: (ha-091565) DBG | private KVM network mk-ha-091565 192.168.39.0/24 created
	I0918 20:00:57.801029   26827 main.go:141] libmachine: (ha-091565) Setting up store path in /home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565 ...
	I0918 20:00:57.801041   26827 main.go:141] libmachine: (ha-091565) DBG | I0918 20:00:57.800989   26850 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19667-7671/.minikube
	I0918 20:00:57.801133   26827 main.go:141] libmachine: (ha-091565) Building disk image from file:///home/jenkins/minikube-integration/19667-7671/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso
	I0918 20:00:57.801206   26827 main.go:141] libmachine: (ha-091565) Downloading /home/jenkins/minikube-integration/19667-7671/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19667-7671/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso...
	I0918 20:00:58.046606   26827 main.go:141] libmachine: (ha-091565) DBG | I0918 20:00:58.046472   26850 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565/id_rsa...
	I0918 20:00:58.328818   26827 main.go:141] libmachine: (ha-091565) DBG | I0918 20:00:58.328673   26850 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565/ha-091565.rawdisk...
	I0918 20:00:58.328844   26827 main.go:141] libmachine: (ha-091565) DBG | Writing magic tar header
	I0918 20:00:58.328853   26827 main.go:141] libmachine: (ha-091565) DBG | Writing SSH key tar header
	I0918 20:00:58.328860   26827 main.go:141] libmachine: (ha-091565) DBG | I0918 20:00:58.328794   26850 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565 ...
	I0918 20:00:58.328961   26827 main.go:141] libmachine: (ha-091565) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565
	I0918 20:00:58.328984   26827 main.go:141] libmachine: (ha-091565) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19667-7671/.minikube/machines
	I0918 20:00:58.328999   26827 main.go:141] libmachine: (ha-091565) Setting executable bit set on /home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565 (perms=drwx------)
	I0918 20:00:58.329013   26827 main.go:141] libmachine: (ha-091565) Setting executable bit set on /home/jenkins/minikube-integration/19667-7671/.minikube/machines (perms=drwxr-xr-x)
	I0918 20:00:58.329024   26827 main.go:141] libmachine: (ha-091565) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19667-7671/.minikube
	I0918 20:00:58.329034   26827 main.go:141] libmachine: (ha-091565) Setting executable bit set on /home/jenkins/minikube-integration/19667-7671/.minikube (perms=drwxr-xr-x)
	I0918 20:00:58.329045   26827 main.go:141] libmachine: (ha-091565) Setting executable bit set on /home/jenkins/minikube-integration/19667-7671 (perms=drwxrwxr-x)
	I0918 20:00:58.329050   26827 main.go:141] libmachine: (ha-091565) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0918 20:00:58.329063   26827 main.go:141] libmachine: (ha-091565) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0918 20:00:58.329069   26827 main.go:141] libmachine: (ha-091565) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19667-7671
	I0918 20:00:58.329081   26827 main.go:141] libmachine: (ha-091565) Creating domain...
	I0918 20:00:58.329099   26827 main.go:141] libmachine: (ha-091565) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0918 20:00:58.329114   26827 main.go:141] libmachine: (ha-091565) DBG | Checking permissions on dir: /home/jenkins
	I0918 20:00:58.329136   26827 main.go:141] libmachine: (ha-091565) DBG | Checking permissions on dir: /home
	I0918 20:00:58.329143   26827 main.go:141] libmachine: (ha-091565) DBG | Skipping /home - not owner
	I0918 20:00:58.330265   26827 main.go:141] libmachine: (ha-091565) define libvirt domain using xml: 
	I0918 20:00:58.330282   26827 main.go:141] libmachine: (ha-091565) <domain type='kvm'>
	I0918 20:00:58.330289   26827 main.go:141] libmachine: (ha-091565)   <name>ha-091565</name>
	I0918 20:00:58.330298   26827 main.go:141] libmachine: (ha-091565)   <memory unit='MiB'>2200</memory>
	I0918 20:00:58.330305   26827 main.go:141] libmachine: (ha-091565)   <vcpu>2</vcpu>
	I0918 20:00:58.330311   26827 main.go:141] libmachine: (ha-091565)   <features>
	I0918 20:00:58.330318   26827 main.go:141] libmachine: (ha-091565)     <acpi/>
	I0918 20:00:58.330326   26827 main.go:141] libmachine: (ha-091565)     <apic/>
	I0918 20:00:58.330334   26827 main.go:141] libmachine: (ha-091565)     <pae/>
	I0918 20:00:58.330345   26827 main.go:141] libmachine: (ha-091565)     
	I0918 20:00:58.330353   26827 main.go:141] libmachine: (ha-091565)   </features>
	I0918 20:00:58.330358   26827 main.go:141] libmachine: (ha-091565)   <cpu mode='host-passthrough'>
	I0918 20:00:58.330364   26827 main.go:141] libmachine: (ha-091565)   
	I0918 20:00:58.330372   26827 main.go:141] libmachine: (ha-091565)   </cpu>
	I0918 20:00:58.330400   26827 main.go:141] libmachine: (ha-091565)   <os>
	I0918 20:00:58.330421   26827 main.go:141] libmachine: (ha-091565)     <type>hvm</type>
	I0918 20:00:58.330446   26827 main.go:141] libmachine: (ha-091565)     <boot dev='cdrom'/>
	I0918 20:00:58.330464   26827 main.go:141] libmachine: (ha-091565)     <boot dev='hd'/>
	I0918 20:00:58.330471   26827 main.go:141] libmachine: (ha-091565)     <bootmenu enable='no'/>
	I0918 20:00:58.330481   26827 main.go:141] libmachine: (ha-091565)   </os>
	I0918 20:00:58.330492   26827 main.go:141] libmachine: (ha-091565)   <devices>
	I0918 20:00:58.330501   26827 main.go:141] libmachine: (ha-091565)     <disk type='file' device='cdrom'>
	I0918 20:00:58.330523   26827 main.go:141] libmachine: (ha-091565)       <source file='/home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565/boot2docker.iso'/>
	I0918 20:00:58.330530   26827 main.go:141] libmachine: (ha-091565)       <target dev='hdc' bus='scsi'/>
	I0918 20:00:58.330535   26827 main.go:141] libmachine: (ha-091565)       <readonly/>
	I0918 20:00:58.330541   26827 main.go:141] libmachine: (ha-091565)     </disk>
	I0918 20:00:58.330546   26827 main.go:141] libmachine: (ha-091565)     <disk type='file' device='disk'>
	I0918 20:00:58.330551   26827 main.go:141] libmachine: (ha-091565)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0918 20:00:58.330560   26827 main.go:141] libmachine: (ha-091565)       <source file='/home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565/ha-091565.rawdisk'/>
	I0918 20:00:58.330569   26827 main.go:141] libmachine: (ha-091565)       <target dev='hda' bus='virtio'/>
	I0918 20:00:58.330586   26827 main.go:141] libmachine: (ha-091565)     </disk>
	I0918 20:00:58.330591   26827 main.go:141] libmachine: (ha-091565)     <interface type='network'>
	I0918 20:00:58.330601   26827 main.go:141] libmachine: (ha-091565)       <source network='mk-ha-091565'/>
	I0918 20:00:58.330608   26827 main.go:141] libmachine: (ha-091565)       <model type='virtio'/>
	I0918 20:00:58.330612   26827 main.go:141] libmachine: (ha-091565)     </interface>
	I0918 20:00:58.330618   26827 main.go:141] libmachine: (ha-091565)     <interface type='network'>
	I0918 20:00:58.330625   26827 main.go:141] libmachine: (ha-091565)       <source network='default'/>
	I0918 20:00:58.330635   26827 main.go:141] libmachine: (ha-091565)       <model type='virtio'/>
	I0918 20:00:58.330641   26827 main.go:141] libmachine: (ha-091565)     </interface>
	I0918 20:00:58.330646   26827 main.go:141] libmachine: (ha-091565)     <serial type='pty'>
	I0918 20:00:58.330652   26827 main.go:141] libmachine: (ha-091565)       <target port='0'/>
	I0918 20:00:58.330656   26827 main.go:141] libmachine: (ha-091565)     </serial>
	I0918 20:00:58.330664   26827 main.go:141] libmachine: (ha-091565)     <console type='pty'>
	I0918 20:00:58.330671   26827 main.go:141] libmachine: (ha-091565)       <target type='serial' port='0'/>
	I0918 20:00:58.330684   26827 main.go:141] libmachine: (ha-091565)     </console>
	I0918 20:00:58.330693   26827 main.go:141] libmachine: (ha-091565)     <rng model='virtio'>
	I0918 20:00:58.330702   26827 main.go:141] libmachine: (ha-091565)       <backend model='random'>/dev/random</backend>
	I0918 20:00:58.330710   26827 main.go:141] libmachine: (ha-091565)     </rng>
	I0918 20:00:58.330716   26827 main.go:141] libmachine: (ha-091565)     
	I0918 20:00:58.330722   26827 main.go:141] libmachine: (ha-091565)     
	I0918 20:00:58.330726   26827 main.go:141] libmachine: (ha-091565)   </devices>
	I0918 20:00:58.330730   26827 main.go:141] libmachine: (ha-091565) </domain>
	I0918 20:00:58.330736   26827 main.go:141] libmachine: (ha-091565) 
	I0918 20:00:58.335391   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:62:68:64 in network default
	I0918 20:00:58.335905   26827 main.go:141] libmachine: (ha-091565) Ensuring networks are active...
	I0918 20:00:58.335918   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:00:58.336784   26827 main.go:141] libmachine: (ha-091565) Ensuring network default is active
	I0918 20:00:58.337204   26827 main.go:141] libmachine: (ha-091565) Ensuring network mk-ha-091565 is active
	I0918 20:00:58.337781   26827 main.go:141] libmachine: (ha-091565) Getting domain xml...
	I0918 20:00:58.338545   26827 main.go:141] libmachine: (ha-091565) Creating domain...
	I0918 20:00:59.533947   26827 main.go:141] libmachine: (ha-091565) Waiting to get IP...
	I0918 20:00:59.534657   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:00:59.535035   26827 main.go:141] libmachine: (ha-091565) DBG | unable to find current IP address of domain ha-091565 in network mk-ha-091565
	I0918 20:00:59.535072   26827 main.go:141] libmachine: (ha-091565) DBG | I0918 20:00:59.535025   26850 retry.go:31] will retry after 237.916234ms: waiting for machine to come up
	I0918 20:00:59.774780   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:00:59.775260   26827 main.go:141] libmachine: (ha-091565) DBG | unable to find current IP address of domain ha-091565 in network mk-ha-091565
	I0918 20:00:59.775295   26827 main.go:141] libmachine: (ha-091565) DBG | I0918 20:00:59.775205   26850 retry.go:31] will retry after 262.842806ms: waiting for machine to come up
	I0918 20:01:00.039656   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:00.040069   26827 main.go:141] libmachine: (ha-091565) DBG | unable to find current IP address of domain ha-091565 in network mk-ha-091565
	I0918 20:01:00.040093   26827 main.go:141] libmachine: (ha-091565) DBG | I0918 20:01:00.040046   26850 retry.go:31] will retry after 393.798982ms: waiting for machine to come up
	I0918 20:01:00.435673   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:00.436127   26827 main.go:141] libmachine: (ha-091565) DBG | unable to find current IP address of domain ha-091565 in network mk-ha-091565
	I0918 20:01:00.436161   26827 main.go:141] libmachine: (ha-091565) DBG | I0918 20:01:00.436100   26850 retry.go:31] will retry after 446.519452ms: waiting for machine to come up
	I0918 20:01:00.883844   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:00.884367   26827 main.go:141] libmachine: (ha-091565) DBG | unable to find current IP address of domain ha-091565 in network mk-ha-091565
	I0918 20:01:00.884396   26827 main.go:141] libmachine: (ha-091565) DBG | I0918 20:01:00.884301   26850 retry.go:31] will retry after 528.125995ms: waiting for machine to come up
	I0918 20:01:01.414131   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:01.414641   26827 main.go:141] libmachine: (ha-091565) DBG | unable to find current IP address of domain ha-091565 in network mk-ha-091565
	I0918 20:01:01.414662   26827 main.go:141] libmachine: (ha-091565) DBG | I0918 20:01:01.414600   26850 retry.go:31] will retry after 935.867422ms: waiting for machine to come up
	I0918 20:01:02.352501   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:02.353101   26827 main.go:141] libmachine: (ha-091565) DBG | unable to find current IP address of domain ha-091565 in network mk-ha-091565
	I0918 20:01:02.353136   26827 main.go:141] libmachine: (ha-091565) DBG | I0918 20:01:02.353036   26850 retry.go:31] will retry after 916.470629ms: waiting for machine to come up
	I0918 20:01:03.270901   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:03.271592   26827 main.go:141] libmachine: (ha-091565) DBG | unable to find current IP address of domain ha-091565 in network mk-ha-091565
	I0918 20:01:03.271617   26827 main.go:141] libmachine: (ha-091565) DBG | I0918 20:01:03.271544   26850 retry.go:31] will retry after 1.230905631s: waiting for machine to come up
	I0918 20:01:04.504061   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:04.504573   26827 main.go:141] libmachine: (ha-091565) DBG | unable to find current IP address of domain ha-091565 in network mk-ha-091565
	I0918 20:01:04.504600   26827 main.go:141] libmachine: (ha-091565) DBG | I0918 20:01:04.504501   26850 retry.go:31] will retry after 1.334656049s: waiting for machine to come up
	I0918 20:01:05.841225   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:05.841603   26827 main.go:141] libmachine: (ha-091565) DBG | unable to find current IP address of domain ha-091565 in network mk-ha-091565
	I0918 20:01:05.841627   26827 main.go:141] libmachine: (ha-091565) DBG | I0918 20:01:05.841542   26850 retry.go:31] will retry after 1.509327207s: waiting for machine to come up
	I0918 20:01:07.353477   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:07.353907   26827 main.go:141] libmachine: (ha-091565) DBG | unable to find current IP address of domain ha-091565 in network mk-ha-091565
	I0918 20:01:07.353958   26827 main.go:141] libmachine: (ha-091565) DBG | I0918 20:01:07.353878   26850 retry.go:31] will retry after 2.403908861s: waiting for machine to come up
	I0918 20:01:09.760703   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:09.761285   26827 main.go:141] libmachine: (ha-091565) DBG | unable to find current IP address of domain ha-091565 in network mk-ha-091565
	I0918 20:01:09.761311   26827 main.go:141] libmachine: (ha-091565) DBG | I0918 20:01:09.761245   26850 retry.go:31] will retry after 3.18859433s: waiting for machine to come up
	I0918 20:01:12.951021   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:12.951436   26827 main.go:141] libmachine: (ha-091565) DBG | unable to find current IP address of domain ha-091565 in network mk-ha-091565
	I0918 20:01:12.951466   26827 main.go:141] libmachine: (ha-091565) DBG | I0918 20:01:12.951387   26850 retry.go:31] will retry after 4.080420969s: waiting for machine to come up
	I0918 20:01:17.036664   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:17.037090   26827 main.go:141] libmachine: (ha-091565) DBG | unable to find current IP address of domain ha-091565 in network mk-ha-091565
	I0918 20:01:17.037112   26827 main.go:141] libmachine: (ha-091565) DBG | I0918 20:01:17.037044   26850 retry.go:31] will retry after 5.244932355s: waiting for machine to come up
	I0918 20:01:22.287118   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:22.287574   26827 main.go:141] libmachine: (ha-091565) Found IP for machine: 192.168.39.215
	I0918 20:01:22.287594   26827 main.go:141] libmachine: (ha-091565) Reserving static IP address...
	I0918 20:01:22.287606   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has current primary IP address 192.168.39.215 and MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:22.287959   26827 main.go:141] libmachine: (ha-091565) DBG | unable to find host DHCP lease matching {name: "ha-091565", mac: "52:54:00:2a:13:d8", ip: "192.168.39.215"} in network mk-ha-091565
	I0918 20:01:22.360495   26827 main.go:141] libmachine: (ha-091565) DBG | Getting to WaitForSSH function...
	I0918 20:01:22.360523   26827 main.go:141] libmachine: (ha-091565) Reserved static IP address: 192.168.39.215
	I0918 20:01:22.360535   26827 main.go:141] libmachine: (ha-091565) Waiting for SSH to be available...
	I0918 20:01:22.362885   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:22.363193   26827 main.go:141] libmachine: (ha-091565) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:2a:13:d8", ip: ""} in network mk-ha-091565
	I0918 20:01:22.363217   26827 main.go:141] libmachine: (ha-091565) DBG | unable to find defined IP address of network mk-ha-091565 interface with MAC address 52:54:00:2a:13:d8
	I0918 20:01:22.363387   26827 main.go:141] libmachine: (ha-091565) DBG | Using SSH client type: external
	I0918 20:01:22.363410   26827 main.go:141] libmachine: (ha-091565) DBG | Using SSH private key: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565/id_rsa (-rw-------)
	I0918 20:01:22.363445   26827 main.go:141] libmachine: (ha-091565) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0918 20:01:22.363470   26827 main.go:141] libmachine: (ha-091565) DBG | About to run SSH command:
	I0918 20:01:22.363487   26827 main.go:141] libmachine: (ha-091565) DBG | exit 0
	I0918 20:01:22.367035   26827 main.go:141] libmachine: (ha-091565) DBG | SSH cmd err, output: exit status 255: 
	I0918 20:01:22.367062   26827 main.go:141] libmachine: (ha-091565) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0918 20:01:22.367069   26827 main.go:141] libmachine: (ha-091565) DBG | command : exit 0
	I0918 20:01:22.367074   26827 main.go:141] libmachine: (ha-091565) DBG | err     : exit status 255
	I0918 20:01:22.367081   26827 main.go:141] libmachine: (ha-091565) DBG | output  : 
	I0918 20:01:25.368924   26827 main.go:141] libmachine: (ha-091565) DBG | Getting to WaitForSSH function...
	I0918 20:01:25.371732   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:25.372247   26827 main.go:141] libmachine: (ha-091565) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:13:d8", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:01:12 +0000 UTC Type:0 Mac:52:54:00:2a:13:d8 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-091565 Clientid:01:52:54:00:2a:13:d8}
	I0918 20:01:25.372276   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined IP address 192.168.39.215 and MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:25.372360   26827 main.go:141] libmachine: (ha-091565) DBG | Using SSH client type: external
	I0918 20:01:25.372393   26827 main.go:141] libmachine: (ha-091565) DBG | Using SSH private key: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565/id_rsa (-rw-------)
	I0918 20:01:25.372430   26827 main.go:141] libmachine: (ha-091565) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.215 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0918 20:01:25.372447   26827 main.go:141] libmachine: (ha-091565) DBG | About to run SSH command:
	I0918 20:01:25.372458   26827 main.go:141] libmachine: (ha-091565) DBG | exit 0
	I0918 20:01:25.500108   26827 main.go:141] libmachine: (ha-091565) DBG | SSH cmd err, output: <nil>: 
	I0918 20:01:25.500382   26827 main.go:141] libmachine: (ha-091565) KVM machine creation complete!
	I0918 20:01:25.500836   26827 main.go:141] libmachine: (ha-091565) Calling .GetConfigRaw
	I0918 20:01:25.501392   26827 main.go:141] libmachine: (ha-091565) Calling .DriverName
	I0918 20:01:25.501585   26827 main.go:141] libmachine: (ha-091565) Calling .DriverName
	I0918 20:01:25.501791   26827 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0918 20:01:25.501803   26827 main.go:141] libmachine: (ha-091565) Calling .GetState
	I0918 20:01:25.503113   26827 main.go:141] libmachine: Detecting operating system of created instance...
	I0918 20:01:25.503144   26827 main.go:141] libmachine: Waiting for SSH to be available...
	I0918 20:01:25.503151   26827 main.go:141] libmachine: Getting to WaitForSSH function...
	I0918 20:01:25.503163   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHHostname
	I0918 20:01:25.505584   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:25.505981   26827 main.go:141] libmachine: (ha-091565) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:13:d8", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:01:12 +0000 UTC Type:0 Mac:52:54:00:2a:13:d8 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-091565 Clientid:01:52:54:00:2a:13:d8}
	I0918 20:01:25.506016   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined IP address 192.168.39.215 and MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:25.506132   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHPort
	I0918 20:01:25.506286   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHKeyPath
	I0918 20:01:25.506450   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHKeyPath
	I0918 20:01:25.506567   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHUsername
	I0918 20:01:25.506705   26827 main.go:141] libmachine: Using SSH client type: native
	I0918 20:01:25.506964   26827 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I0918 20:01:25.506980   26827 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0918 20:01:25.615489   26827 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0918 20:01:25.615512   26827 main.go:141] libmachine: Detecting the provisioner...
	I0918 20:01:25.615519   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHHostname
	I0918 20:01:25.618058   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:25.618343   26827 main.go:141] libmachine: (ha-091565) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:13:d8", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:01:12 +0000 UTC Type:0 Mac:52:54:00:2a:13:d8 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-091565 Clientid:01:52:54:00:2a:13:d8}
	I0918 20:01:25.618365   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined IP address 192.168.39.215 and MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:25.618476   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHPort
	I0918 20:01:25.618650   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHKeyPath
	I0918 20:01:25.618786   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHKeyPath
	I0918 20:01:25.618935   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHUsername
	I0918 20:01:25.619044   26827 main.go:141] libmachine: Using SSH client type: native
	I0918 20:01:25.619200   26827 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I0918 20:01:25.619210   26827 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0918 20:01:25.732502   26827 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0918 20:01:25.732589   26827 main.go:141] libmachine: found compatible host: buildroot
	I0918 20:01:25.732599   26827 main.go:141] libmachine: Provisioning with buildroot...
	I0918 20:01:25.732606   26827 main.go:141] libmachine: (ha-091565) Calling .GetMachineName
	I0918 20:01:25.732852   26827 buildroot.go:166] provisioning hostname "ha-091565"
	I0918 20:01:25.732880   26827 main.go:141] libmachine: (ha-091565) Calling .GetMachineName
	I0918 20:01:25.733067   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHHostname
	I0918 20:01:25.735789   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:25.736134   26827 main.go:141] libmachine: (ha-091565) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:13:d8", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:01:12 +0000 UTC Type:0 Mac:52:54:00:2a:13:d8 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-091565 Clientid:01:52:54:00:2a:13:d8}
	I0918 20:01:25.736170   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined IP address 192.168.39.215 and MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:25.736303   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHPort
	I0918 20:01:25.736498   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHKeyPath
	I0918 20:01:25.736664   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHKeyPath
	I0918 20:01:25.736815   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHUsername
	I0918 20:01:25.736962   26827 main.go:141] libmachine: Using SSH client type: native
	I0918 20:01:25.737181   26827 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I0918 20:01:25.737194   26827 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-091565 && echo "ha-091565" | sudo tee /etc/hostname
	I0918 20:01:25.862508   26827 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-091565
	
	I0918 20:01:25.862540   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHHostname
	I0918 20:01:25.866613   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:25.867074   26827 main.go:141] libmachine: (ha-091565) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:13:d8", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:01:12 +0000 UTC Type:0 Mac:52:54:00:2a:13:d8 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-091565 Clientid:01:52:54:00:2a:13:d8}
	I0918 20:01:25.867104   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined IP address 192.168.39.215 and MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:25.867538   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHPort
	I0918 20:01:25.867789   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHKeyPath
	I0918 20:01:25.867962   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHKeyPath
	I0918 20:01:25.868230   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHUsername
	I0918 20:01:25.868389   26827 main.go:141] libmachine: Using SSH client type: native
	I0918 20:01:25.868588   26827 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I0918 20:01:25.868607   26827 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-091565' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-091565/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-091565' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0918 20:01:25.988748   26827 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0918 20:01:25.988798   26827 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19667-7671/.minikube CaCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19667-7671/.minikube}
	I0918 20:01:25.988838   26827 buildroot.go:174] setting up certificates
	I0918 20:01:25.988848   26827 provision.go:84] configureAuth start
	I0918 20:01:25.988857   26827 main.go:141] libmachine: (ha-091565) Calling .GetMachineName
	I0918 20:01:25.989144   26827 main.go:141] libmachine: (ha-091565) Calling .GetIP
	I0918 20:01:25.991863   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:25.992270   26827 main.go:141] libmachine: (ha-091565) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:13:d8", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:01:12 +0000 UTC Type:0 Mac:52:54:00:2a:13:d8 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-091565 Clientid:01:52:54:00:2a:13:d8}
	I0918 20:01:25.992315   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined IP address 192.168.39.215 and MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:25.992456   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHHostname
	I0918 20:01:25.994511   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:25.994809   26827 main.go:141] libmachine: (ha-091565) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:13:d8", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:01:12 +0000 UTC Type:0 Mac:52:54:00:2a:13:d8 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-091565 Clientid:01:52:54:00:2a:13:d8}
	I0918 20:01:25.994834   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined IP address 192.168.39.215 and MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:25.994954   26827 provision.go:143] copyHostCerts
	I0918 20:01:25.994981   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem
	I0918 20:01:25.995025   26827 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem, removing ...
	I0918 20:01:25.995039   26827 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem
	I0918 20:01:25.995103   26827 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem (1082 bytes)
	I0918 20:01:25.995191   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem
	I0918 20:01:25.995209   26827 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem, removing ...
	I0918 20:01:25.995213   26827 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem
	I0918 20:01:25.995242   26827 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem (1123 bytes)
	I0918 20:01:25.995301   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem
	I0918 20:01:25.995316   26827 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem, removing ...
	I0918 20:01:25.995322   26827 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem
	I0918 20:01:25.995343   26827 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem (1679 bytes)
	I0918 20:01:25.995405   26827 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem org=jenkins.ha-091565 san=[127.0.0.1 192.168.39.215 ha-091565 localhost minikube]
	I0918 20:01:26.117902   26827 provision.go:177] copyRemoteCerts
	I0918 20:01:26.117954   26827 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0918 20:01:26.117977   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHHostname
	I0918 20:01:26.120733   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:26.121075   26827 main.go:141] libmachine: (ha-091565) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:13:d8", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:01:12 +0000 UTC Type:0 Mac:52:54:00:2a:13:d8 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-091565 Clientid:01:52:54:00:2a:13:d8}
	I0918 20:01:26.121091   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined IP address 192.168.39.215 and MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:26.121297   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHPort
	I0918 20:01:26.121502   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHKeyPath
	I0918 20:01:26.121666   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHUsername
	I0918 20:01:26.121786   26827 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565/id_rsa Username:docker}
	I0918 20:01:26.205619   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0918 20:01:26.205705   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0918 20:01:26.228613   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0918 20:01:26.228682   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0918 20:01:26.252879   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0918 20:01:26.252953   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0918 20:01:26.277029   26827 provision.go:87] duration metric: took 288.170096ms to configureAuth
	I0918 20:01:26.277056   26827 buildroot.go:189] setting minikube options for container-runtime
	I0918 20:01:26.277264   26827 config.go:182] Loaded profile config "ha-091565": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0918 20:01:26.277380   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHHostname
	I0918 20:01:26.279749   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:26.280128   26827 main.go:141] libmachine: (ha-091565) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:13:d8", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:01:12 +0000 UTC Type:0 Mac:52:54:00:2a:13:d8 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-091565 Clientid:01:52:54:00:2a:13:d8}
	I0918 20:01:26.280154   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined IP address 192.168.39.215 and MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:26.280280   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHPort
	I0918 20:01:26.280444   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHKeyPath
	I0918 20:01:26.280617   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHKeyPath
	I0918 20:01:26.280788   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHUsername
	I0918 20:01:26.280946   26827 main.go:141] libmachine: Using SSH client type: native
	I0918 20:01:26.281114   26827 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I0918 20:01:26.281127   26827 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0918 20:01:26.505775   26827 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0918 20:01:26.505808   26827 main.go:141] libmachine: Checking connection to Docker...
	I0918 20:01:26.505817   26827 main.go:141] libmachine: (ha-091565) Calling .GetURL
	I0918 20:01:26.507070   26827 main.go:141] libmachine: (ha-091565) DBG | Using libvirt version 6000000
	I0918 20:01:26.509239   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:26.509623   26827 main.go:141] libmachine: (ha-091565) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:13:d8", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:01:12 +0000 UTC Type:0 Mac:52:54:00:2a:13:d8 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-091565 Clientid:01:52:54:00:2a:13:d8}
	I0918 20:01:26.509653   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined IP address 192.168.39.215 and MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:26.509837   26827 main.go:141] libmachine: Docker is up and running!
	I0918 20:01:26.509859   26827 main.go:141] libmachine: Reticulating splines...
	I0918 20:01:26.509874   26827 client.go:171] duration metric: took 28.782642826s to LocalClient.Create
	I0918 20:01:26.509892   26827 start.go:167] duration metric: took 28.782711953s to libmachine.API.Create "ha-091565"
	I0918 20:01:26.509901   26827 start.go:293] postStartSetup for "ha-091565" (driver="kvm2")
	I0918 20:01:26.509909   26827 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0918 20:01:26.509925   26827 main.go:141] libmachine: (ha-091565) Calling .DriverName
	I0918 20:01:26.510174   26827 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0918 20:01:26.510198   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHHostname
	I0918 20:01:26.512537   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:26.512896   26827 main.go:141] libmachine: (ha-091565) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:13:d8", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:01:12 +0000 UTC Type:0 Mac:52:54:00:2a:13:d8 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-091565 Clientid:01:52:54:00:2a:13:d8}
	I0918 20:01:26.512927   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined IP address 192.168.39.215 and MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:26.513099   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHPort
	I0918 20:01:26.513302   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHKeyPath
	I0918 20:01:26.513485   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHUsername
	I0918 20:01:26.513627   26827 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565/id_rsa Username:docker}
	I0918 20:01:26.598408   26827 ssh_runner.go:195] Run: cat /etc/os-release
	I0918 20:01:26.602627   26827 info.go:137] Remote host: Buildroot 2023.02.9
	I0918 20:01:26.602663   26827 filesync.go:126] Scanning /home/jenkins/minikube-integration/19667-7671/.minikube/addons for local assets ...
	I0918 20:01:26.602726   26827 filesync.go:126] Scanning /home/jenkins/minikube-integration/19667-7671/.minikube/files for local assets ...
	I0918 20:01:26.602800   26827 filesync.go:149] local asset: /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem -> 148782.pem in /etc/ssl/certs
	I0918 20:01:26.602810   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem -> /etc/ssl/certs/148782.pem
	I0918 20:01:26.602901   26827 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0918 20:01:26.612359   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem --> /etc/ssl/certs/148782.pem (1708 bytes)
	I0918 20:01:26.635555   26827 start.go:296] duration metric: took 125.639833ms for postStartSetup
	I0918 20:01:26.635626   26827 main.go:141] libmachine: (ha-091565) Calling .GetConfigRaw
	I0918 20:01:26.636227   26827 main.go:141] libmachine: (ha-091565) Calling .GetIP
	I0918 20:01:26.638938   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:26.639246   26827 main.go:141] libmachine: (ha-091565) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:13:d8", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:01:12 +0000 UTC Type:0 Mac:52:54:00:2a:13:d8 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-091565 Clientid:01:52:54:00:2a:13:d8}
	I0918 20:01:26.639274   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined IP address 192.168.39.215 and MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:26.639496   26827 profile.go:143] Saving config to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/config.json ...
	I0918 20:01:26.639737   26827 start.go:128] duration metric: took 28.930427667s to createHost
	I0918 20:01:26.639765   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHHostname
	I0918 20:01:26.642131   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:26.642460   26827 main.go:141] libmachine: (ha-091565) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:13:d8", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:01:12 +0000 UTC Type:0 Mac:52:54:00:2a:13:d8 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-091565 Clientid:01:52:54:00:2a:13:d8}
	I0918 20:01:26.642482   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined IP address 192.168.39.215 and MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:26.642675   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHPort
	I0918 20:01:26.642866   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHKeyPath
	I0918 20:01:26.643104   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHKeyPath
	I0918 20:01:26.643258   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHUsername
	I0918 20:01:26.643412   26827 main.go:141] libmachine: Using SSH client type: native
	I0918 20:01:26.643644   26827 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I0918 20:01:26.643661   26827 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0918 20:01:26.756537   26827 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726689686.738518611
	
	I0918 20:01:26.756561   26827 fix.go:216] guest clock: 1726689686.738518611
	I0918 20:01:26.756568   26827 fix.go:229] Guest: 2024-09-18 20:01:26.738518611 +0000 UTC Remote: 2024-09-18 20:01:26.639754618 +0000 UTC m=+29.034479506 (delta=98.763993ms)
	I0918 20:01:26.756587   26827 fix.go:200] guest clock delta is within tolerance: 98.763993ms
	I0918 20:01:26.756592   26827 start.go:83] releasing machines lock for "ha-091565", held for 29.047378188s
	I0918 20:01:26.756612   26827 main.go:141] libmachine: (ha-091565) Calling .DriverName
	I0918 20:01:26.756891   26827 main.go:141] libmachine: (ha-091565) Calling .GetIP
	I0918 20:01:26.759638   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:26.759950   26827 main.go:141] libmachine: (ha-091565) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:13:d8", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:01:12 +0000 UTC Type:0 Mac:52:54:00:2a:13:d8 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-091565 Clientid:01:52:54:00:2a:13:d8}
	I0918 20:01:26.759972   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined IP address 192.168.39.215 and MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:26.760128   26827 main.go:141] libmachine: (ha-091565) Calling .DriverName
	I0918 20:01:26.760656   26827 main.go:141] libmachine: (ha-091565) Calling .DriverName
	I0918 20:01:26.760816   26827 main.go:141] libmachine: (ha-091565) Calling .DriverName
	I0918 20:01:26.760919   26827 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0918 20:01:26.760970   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHHostname
	I0918 20:01:26.761017   26827 ssh_runner.go:195] Run: cat /version.json
	I0918 20:01:26.761043   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHHostname
	I0918 20:01:26.763588   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:26.763617   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:26.763927   26827 main.go:141] libmachine: (ha-091565) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:13:d8", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:01:12 +0000 UTC Type:0 Mac:52:54:00:2a:13:d8 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-091565 Clientid:01:52:54:00:2a:13:d8}
	I0918 20:01:26.763960   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined IP address 192.168.39.215 and MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:26.763986   26827 main.go:141] libmachine: (ha-091565) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:13:d8", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:01:12 +0000 UTC Type:0 Mac:52:54:00:2a:13:d8 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-091565 Clientid:01:52:54:00:2a:13:d8}
	I0918 20:01:26.764000   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined IP address 192.168.39.215 and MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:26.764093   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHPort
	I0918 20:01:26.764219   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHPort
	I0918 20:01:26.764334   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHKeyPath
	I0918 20:01:26.764352   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHKeyPath
	I0918 20:01:26.764485   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHUsername
	I0918 20:01:26.764503   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHUsername
	I0918 20:01:26.764654   26827 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565/id_rsa Username:docker}
	I0918 20:01:26.764655   26827 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565/id_rsa Username:docker}
	I0918 20:01:26.887790   26827 ssh_runner.go:195] Run: systemctl --version
	I0918 20:01:26.893767   26827 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0918 20:01:27.057963   26827 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0918 20:01:27.064172   26827 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0918 20:01:27.064252   26827 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0918 20:01:27.080537   26827 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0918 20:01:27.080566   26827 start.go:495] detecting cgroup driver to use...
	I0918 20:01:27.080726   26827 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0918 20:01:27.098904   26827 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0918 20:01:27.113999   26827 docker.go:217] disabling cri-docker service (if available) ...
	I0918 20:01:27.114063   26827 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0918 20:01:27.127448   26827 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0918 20:01:27.140971   26827 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0918 20:01:27.277092   26827 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0918 20:01:27.438944   26827 docker.go:233] disabling docker service ...
	I0918 20:01:27.439019   26827 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0918 20:01:27.452578   26827 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0918 20:01:27.465616   26827 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0918 20:01:27.576240   26827 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0918 20:01:27.692187   26827 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0918 20:01:27.706450   26827 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0918 20:01:27.724470   26827 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0918 20:01:27.724548   26827 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 20:01:27.734691   26827 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0918 20:01:27.734759   26827 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 20:01:27.744841   26827 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 20:01:27.754941   26827 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 20:01:27.765749   26827 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0918 20:01:27.776994   26827 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 20:01:27.787772   26827 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 20:01:27.805476   26827 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 20:01:27.815577   26827 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0918 20:01:27.824923   26827 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0918 20:01:27.825000   26827 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0918 20:01:27.837394   26827 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0918 20:01:27.847278   26827 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 20:01:27.957450   26827 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0918 20:01:28.049268   26827 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0918 20:01:28.049347   26827 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0918 20:01:28.053609   26827 start.go:563] Will wait 60s for crictl version
	I0918 20:01:28.053664   26827 ssh_runner.go:195] Run: which crictl
	I0918 20:01:28.057561   26827 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0918 20:01:28.095781   26827 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0918 20:01:28.095855   26827 ssh_runner.go:195] Run: crio --version
	I0918 20:01:28.122990   26827 ssh_runner.go:195] Run: crio --version
	I0918 20:01:28.151689   26827 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0918 20:01:28.153185   26827 main.go:141] libmachine: (ha-091565) Calling .GetIP
	I0918 20:01:28.155727   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:28.156071   26827 main.go:141] libmachine: (ha-091565) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:13:d8", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:01:12 +0000 UTC Type:0 Mac:52:54:00:2a:13:d8 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-091565 Clientid:01:52:54:00:2a:13:d8}
	I0918 20:01:28.156102   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined IP address 192.168.39.215 and MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:28.156291   26827 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0918 20:01:28.160094   26827 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0918 20:01:28.172348   26827 kubeadm.go:883] updating cluster {Name:ha-091565 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-091565 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.215 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0918 20:01:28.172455   26827 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0918 20:01:28.172495   26827 ssh_runner.go:195] Run: sudo crictl images --output json
	I0918 20:01:28.202903   26827 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0918 20:01:28.202968   26827 ssh_runner.go:195] Run: which lz4
	I0918 20:01:28.206524   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0918 20:01:28.206640   26827 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0918 20:01:28.210309   26827 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0918 20:01:28.210346   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0918 20:01:29.428932   26827 crio.go:462] duration metric: took 1.222324485s to copy over tarball
	I0918 20:01:29.428998   26827 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0918 20:01:31.427670   26827 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.998650683s)
	I0918 20:01:31.427701   26827 crio.go:469] duration metric: took 1.998743987s to extract the tarball
	I0918 20:01:31.427710   26827 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0918 20:01:31.465115   26827 ssh_runner.go:195] Run: sudo crictl images --output json
	I0918 20:01:31.512315   26827 crio.go:514] all images are preloaded for cri-o runtime.
	I0918 20:01:31.512340   26827 cache_images.go:84] Images are preloaded, skipping loading
	I0918 20:01:31.512349   26827 kubeadm.go:934] updating node { 192.168.39.215 8443 v1.31.1 crio true true} ...
	I0918 20:01:31.512489   26827 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-091565 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.215
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-091565 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0918 20:01:31.512625   26827 ssh_runner.go:195] Run: crio config
	I0918 20:01:31.557297   26827 cni.go:84] Creating CNI manager for ""
	I0918 20:01:31.557325   26827 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0918 20:01:31.557342   26827 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0918 20:01:31.557362   26827 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.215 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-091565 NodeName:ha-091565 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.215"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.215 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0918 20:01:31.557481   26827 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.215
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-091565"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.215
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.215"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0918 20:01:31.557515   26827 kube-vip.go:115] generating kube-vip config ...
	I0918 20:01:31.557571   26827 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0918 20:01:31.573497   26827 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0918 20:01:31.573622   26827 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0918 20:01:31.573693   26827 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0918 20:01:31.583548   26827 binaries.go:44] Found k8s binaries, skipping transfer
	I0918 20:01:31.583630   26827 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0918 20:01:31.592787   26827 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0918 20:01:31.608721   26827 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0918 20:01:31.624827   26827 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0918 20:01:31.640691   26827 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0918 20:01:31.656477   26827 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0918 20:01:31.660115   26827 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0918 20:01:31.671977   26827 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 20:01:31.797641   26827 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0918 20:01:31.815122   26827 certs.go:68] Setting up /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565 for IP: 192.168.39.215
	I0918 20:01:31.815151   26827 certs.go:194] generating shared ca certs ...
	I0918 20:01:31.815173   26827 certs.go:226] acquiring lock for ca certs: {Name:mkf54e0e71ed884c4372f9d3d4cb308ea4600185 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 20:01:31.815382   26827 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.key
	I0918 20:01:31.815442   26827 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.key
	I0918 20:01:31.815465   26827 certs.go:256] generating profile certs ...
	I0918 20:01:31.815537   26827 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/client.key
	I0918 20:01:31.815566   26827 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/client.crt with IP's: []
	I0918 20:01:31.882711   26827 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/client.crt ...
	I0918 20:01:31.882735   26827 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/client.crt: {Name:mk22393d10a62db8be4ee96423eb8999dca92051 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 20:01:31.882908   26827 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/client.key ...
	I0918 20:01:31.882923   26827 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/client.key: {Name:mk40398d3c215962d47b7b1ac3b33466404e1ae5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 20:01:31.883062   26827 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.key.286a620e
	I0918 20:01:31.883085   26827 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.crt.286a620e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.215 192.168.39.254]
	I0918 20:01:32.176911   26827 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.crt.286a620e ...
	I0918 20:01:32.176938   26827 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.crt.286a620e: {Name:mk6e12e8d7297caa8349fc6fe030d9b3d69c43ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 20:01:32.177087   26827 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.key.286a620e ...
	I0918 20:01:32.177099   26827 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.key.286a620e: {Name:mkbac5b4ddde2084fa4364c4dee4c3ed0d321a5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 20:01:32.177161   26827 certs.go:381] copying /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.crt.286a620e -> /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.crt
	I0918 20:01:32.177247   26827 certs.go:385] copying /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.key.286a620e -> /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.key
	I0918 20:01:32.177297   26827 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/proxy-client.key
	I0918 20:01:32.177310   26827 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/proxy-client.crt with IP's: []
	I0918 20:01:32.272727   26827 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/proxy-client.crt ...
	I0918 20:01:32.272755   26827 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/proxy-client.crt: {Name:mk83a2402d1ff78c6dd742b96bf8c90e2537b4cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 20:01:32.272892   26827 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/proxy-client.key ...
	I0918 20:01:32.272902   26827 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/proxy-client.key: {Name:mk377a0949cdb8c08e373abce1488149f3aaff34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 20:01:32.272968   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0918 20:01:32.272985   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0918 20:01:32.272998   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0918 20:01:32.273010   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0918 20:01:32.273031   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0918 20:01:32.273043   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0918 20:01:32.273055   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0918 20:01:32.273066   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0918 20:01:32.273127   26827 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878.pem (1338 bytes)
	W0918 20:01:32.273161   26827 certs.go:480] ignoring /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878_empty.pem, impossibly tiny 0 bytes
	I0918 20:01:32.273170   26827 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem (1675 bytes)
	I0918 20:01:32.273195   26827 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem (1082 bytes)
	I0918 20:01:32.273219   26827 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem (1123 bytes)
	I0918 20:01:32.273239   26827 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem (1679 bytes)
	I0918 20:01:32.273274   26827 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem (1708 bytes)
	I0918 20:01:32.273302   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0918 20:01:32.273315   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878.pem -> /usr/share/ca-certificates/14878.pem
	I0918 20:01:32.273327   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem -> /usr/share/ca-certificates/148782.pem
	I0918 20:01:32.273874   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0918 20:01:32.300229   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0918 20:01:32.325896   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0918 20:01:32.351512   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0918 20:01:32.377318   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0918 20:01:32.402367   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0918 20:01:32.427668   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0918 20:01:32.452847   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0918 20:01:32.478252   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0918 20:01:32.502486   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878.pem --> /usr/share/ca-certificates/14878.pem (1338 bytes)
	I0918 20:01:32.525747   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem --> /usr/share/ca-certificates/148782.pem (1708 bytes)
	I0918 20:01:32.548776   26827 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0918 20:01:32.568576   26827 ssh_runner.go:195] Run: openssl version
	I0918 20:01:32.574892   26827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14878.pem && ln -fs /usr/share/ca-certificates/14878.pem /etc/ssl/certs/14878.pem"
	I0918 20:01:32.589112   26827 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14878.pem
	I0918 20:01:32.594154   26827 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 18 19:57 /usr/share/ca-certificates/14878.pem
	I0918 20:01:32.594216   26827 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14878.pem
	I0918 20:01:32.601293   26827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14878.pem /etc/ssl/certs/51391683.0"
	I0918 20:01:32.612847   26827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148782.pem && ln -fs /usr/share/ca-certificates/148782.pem /etc/ssl/certs/148782.pem"
	I0918 20:01:32.626745   26827 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148782.pem
	I0918 20:01:32.631036   26827 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 18 19:57 /usr/share/ca-certificates/148782.pem
	I0918 20:01:32.631097   26827 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148782.pem
	I0918 20:01:32.636840   26827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148782.pem /etc/ssl/certs/3ec20f2e.0"
	I0918 20:01:32.647396   26827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0918 20:01:32.658543   26827 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0918 20:01:32.663199   26827 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 18 19:39 /usr/share/ca-certificates/minikubeCA.pem
	I0918 20:01:32.663269   26827 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0918 20:01:32.669178   26827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0918 20:01:32.680536   26827 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0918 20:01:32.684596   26827 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0918 20:01:32.684652   26827 kubeadm.go:392] StartCluster: {Name:ha-091565 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-091565 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.215 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 20:01:32.684723   26827 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0918 20:01:32.684781   26827 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0918 20:01:32.725657   26827 cri.go:89] found id: ""
	I0918 20:01:32.725738   26827 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0918 20:01:32.736032   26827 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0918 20:01:32.745809   26827 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0918 20:01:32.755660   26827 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0918 20:01:32.755683   26827 kubeadm.go:157] found existing configuration files:
	
	I0918 20:01:32.755734   26827 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0918 20:01:32.765360   26827 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0918 20:01:32.765422   26827 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0918 20:01:32.774977   26827 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0918 20:01:32.784236   26827 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0918 20:01:32.784323   26827 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0918 20:01:32.794385   26827 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0918 20:01:32.803877   26827 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0918 20:01:32.803962   26827 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0918 20:01:32.813974   26827 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0918 20:01:32.824307   26827 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0918 20:01:32.824372   26827 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
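	The four grep/rm pairs above amount to a stale-kubeconfig sweep; a sketch of the equivalent loop (illustrative only, file list and URL taken from the log):

	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
	        || sudo rm -f "/etc/kubernetes/$f"
	    done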
	I0918 20:01:32.833810   26827 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0918 20:01:32.930760   26827 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0918 20:01:32.930831   26827 kubeadm.go:310] [preflight] Running pre-flight checks
	I0918 20:01:33.036305   26827 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0918 20:01:33.036446   26827 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0918 20:01:33.036572   26827 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0918 20:01:33.048889   26827 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0918 20:01:33.216902   26827 out.go:235]   - Generating certificates and keys ...
	I0918 20:01:33.217021   26827 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0918 20:01:33.217118   26827 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0918 20:01:33.410022   26827 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0918 20:01:33.571042   26827 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0918 20:01:34.285080   26827 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0918 20:01:34.386506   26827 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0918 20:01:34.560257   26827 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0918 20:01:34.560457   26827 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-091565 localhost] and IPs [192.168.39.215 127.0.0.1 ::1]
	I0918 20:01:34.830386   26827 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0918 20:01:34.830530   26827 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-091565 localhost] and IPs [192.168.39.215 127.0.0.1 ::1]
	I0918 20:01:34.951453   26827 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0918 20:01:35.138903   26827 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0918 20:01:35.238989   26827 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0918 20:01:35.239055   26827 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0918 20:01:35.347180   26827 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0918 20:01:35.486849   26827 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0918 20:01:35.625355   26827 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0918 20:01:35.747961   26827 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0918 20:01:35.790004   26827 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0918 20:01:35.790529   26827 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0918 20:01:35.794055   26827 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0918 20:01:35.796153   26827 out.go:235]   - Booting up control plane ...
	I0918 20:01:35.796260   26827 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0918 20:01:35.796362   26827 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0918 20:01:35.796717   26827 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0918 20:01:35.811747   26827 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0918 20:01:35.820566   26827 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0918 20:01:35.820644   26827 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0918 20:01:35.959348   26827 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0918 20:01:35.959478   26827 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0918 20:01:36.960132   26827 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.00167882s
	I0918 20:01:36.960220   26827 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0918 20:01:42.633375   26827 kubeadm.go:310] [api-check] The API server is healthy after 5.675608776s
	I0918 20:01:42.646137   26827 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0918 20:01:42.670455   26827 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0918 20:01:42.705148   26827 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0918 20:01:42.705327   26827 kubeadm.go:310] [mark-control-plane] Marking the node ha-091565 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0918 20:01:42.722155   26827 kubeadm.go:310] [bootstrap-token] Using token: 1ejtyk.26hc6xxbyyyx578s
	I0918 20:01:42.723458   26827 out.go:235]   - Configuring RBAC rules ...
	I0918 20:01:42.723598   26827 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0918 20:01:42.732040   26827 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0918 20:01:42.744976   26827 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0918 20:01:42.750140   26827 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0918 20:01:42.755732   26827 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0918 20:01:42.762953   26827 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0918 20:01:43.043394   26827 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0918 20:01:43.485553   26827 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0918 20:01:44.041202   26827 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0918 20:01:44.041225   26827 kubeadm.go:310] 
	I0918 20:01:44.041318   26827 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0918 20:01:44.041338   26827 kubeadm.go:310] 
	I0918 20:01:44.041443   26827 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0918 20:01:44.041471   26827 kubeadm.go:310] 
	I0918 20:01:44.041497   26827 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0918 20:01:44.041547   26827 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0918 20:01:44.041640   26827 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0918 20:01:44.041659   26827 kubeadm.go:310] 
	I0918 20:01:44.041751   26827 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0918 20:01:44.041778   26827 kubeadm.go:310] 
	I0918 20:01:44.041846   26827 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0918 20:01:44.041857   26827 kubeadm.go:310] 
	I0918 20:01:44.041977   26827 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0918 20:01:44.042082   26827 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0918 20:01:44.042182   26827 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0918 20:01:44.042190   26827 kubeadm.go:310] 
	I0918 20:01:44.042302   26827 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0918 20:01:44.042416   26827 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0918 20:01:44.042425   26827 kubeadm.go:310] 
	I0918 20:01:44.042517   26827 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 1ejtyk.26hc6xxbyyyx578s \
	I0918 20:01:44.042666   26827 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:8ad46bf288e8cc2226f5a929a768e6c95cddecc97ffc4016be27ef0987b1e36f \
	I0918 20:01:44.042690   26827 kubeadm.go:310] 	--control-plane 
	I0918 20:01:44.042694   26827 kubeadm.go:310] 
	I0918 20:01:44.042795   26827 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0918 20:01:44.042811   26827 kubeadm.go:310] 
	I0918 20:01:44.042929   26827 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 1ejtyk.26hc6xxbyyyx578s \
	I0918 20:01:44.043079   26827 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:8ad46bf288e8cc2226f5a929a768e6c95cddecc97ffc4016be27ef0987b1e36f 
	I0918 20:01:44.043428   26827 kubeadm.go:310] W0918 20:01:32.914360     832 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0918 20:01:44.043697   26827 kubeadm.go:310] W0918 20:01:32.915480     832 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0918 20:01:44.043826   26827 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
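	If needed, the sha256 value passed to --discovery-token-ca-cert-hash in the join commands above can be recomputed from the cluster CA using the standard kubeadm recipe; the certificate path here is assumed from the certificateDir logged earlier (/var/lib/minikube/certs):

	    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	      | openssl rsa -pubin -outform der 2>/dev/null \
	      | openssl dgst -sha256 -hex | sed 's/^.* //'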
	I0918 20:01:44.043856   26827 cni.go:84] Creating CNI manager for ""
	I0918 20:01:44.043867   26827 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0918 20:01:44.045606   26827 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0918 20:01:44.046719   26827 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0918 20:01:44.052565   26827 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0918 20:01:44.052591   26827 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0918 20:01:44.074207   26827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0918 20:01:44.422814   26827 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0918 20:01:44.422902   26827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 20:01:44.422924   26827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-091565 minikube.k8s.io/updated_at=2024_09_18T20_01_44_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=85073601a832bd4bbda5d11fa91feafff6ec6b91 minikube.k8s.io/name=ha-091565 minikube.k8s.io/primary=true
	I0918 20:01:44.659852   26827 ops.go:34] apiserver oom_adj: -16
	I0918 20:01:44.660163   26827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 20:01:45.160146   26827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 20:01:45.660152   26827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 20:01:46.161013   26827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 20:01:46.660936   26827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 20:01:47.160166   26827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 20:01:47.266634   26827 kubeadm.go:1113] duration metric: took 2.843807989s to wait for elevateKubeSystemPrivileges
	I0918 20:01:47.266673   26827 kubeadm.go:394] duration metric: took 14.582024612s to StartCluster
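	A minimal sketch of the elevateKubeSystemPrivileges wait visible in the repeated "kubectl get sa default" calls above (polling interval assumed, binary and kubeconfig paths taken from the log):

	    until sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default \
	          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	      sleep 0.5
	    done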
	I0918 20:01:47.266695   26827 settings.go:142] acquiring lock: {Name:mk6ae95a69dcbe00c28846409e9f46945a88de2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 20:01:47.266765   26827 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19667-7671/kubeconfig
	I0918 20:01:47.267982   26827 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/kubeconfig: {Name:mk3a3eecadb0164dfdd5d3c4392b5b473a2a9bb0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 20:01:47.268278   26827 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.215 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0918 20:01:47.268306   26827 start.go:241] waiting for startup goroutines ...
	I0918 20:01:47.268323   26827 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0918 20:01:47.268480   26827 addons.go:69] Setting storage-provisioner=true in profile "ha-091565"
	I0918 20:01:47.268500   26827 addons.go:234] Setting addon storage-provisioner=true in "ha-091565"
	I0918 20:01:47.268535   26827 host.go:66] Checking if "ha-091565" exists ...
	I0918 20:01:47.268594   26827 addons.go:69] Setting default-storageclass=true in profile "ha-091565"
	I0918 20:01:47.268631   26827 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-091565"
	I0918 20:01:47.268658   26827 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0918 20:01:47.268843   26827 config.go:182] Loaded profile config "ha-091565": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0918 20:01:47.269530   26827 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 20:01:47.269576   26827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 20:01:47.269584   26827 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 20:01:47.269740   26827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 20:01:47.284536   26827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39723
	I0918 20:01:47.284536   26827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40865
	I0918 20:01:47.285102   26827 main.go:141] libmachine: () Calling .GetVersion
	I0918 20:01:47.285215   26827 main.go:141] libmachine: () Calling .GetVersion
	I0918 20:01:47.285649   26827 main.go:141] libmachine: Using API Version  1
	I0918 20:01:47.285665   26827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 20:01:47.285788   26827 main.go:141] libmachine: Using API Version  1
	I0918 20:01:47.285813   26827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 20:01:47.286000   26827 main.go:141] libmachine: () Calling .GetMachineName
	I0918 20:01:47.286165   26827 main.go:141] libmachine: () Calling .GetMachineName
	I0918 20:01:47.286188   26827 main.go:141] libmachine: (ha-091565) Calling .GetState
	I0918 20:01:47.286733   26827 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 20:01:47.286779   26827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 20:01:47.288227   26827 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19667-7671/kubeconfig
	I0918 20:01:47.288530   26827 kapi.go:59] client config for ha-091565: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/client.crt", KeyFile:"/home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/client.key", CAFile:"/home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0918 20:01:47.289088   26827 cert_rotation.go:140] Starting client certificate rotation controller
	I0918 20:01:47.289302   26827 addons.go:234] Setting addon default-storageclass=true in "ha-091565"
	I0918 20:01:47.289329   26827 host.go:66] Checking if "ha-091565" exists ...
	I0918 20:01:47.289569   26827 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 20:01:47.289600   26827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 20:01:47.302279   26827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33947
	I0918 20:01:47.302845   26827 main.go:141] libmachine: () Calling .GetVersion
	I0918 20:01:47.303361   26827 main.go:141] libmachine: Using API Version  1
	I0918 20:01:47.303390   26827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 20:01:47.303730   26827 main.go:141] libmachine: () Calling .GetMachineName
	I0918 20:01:47.303943   26827 main.go:141] libmachine: (ha-091565) Calling .GetState
	I0918 20:01:47.304502   26827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33887
	I0918 20:01:47.304796   26827 main.go:141] libmachine: () Calling .GetVersion
	I0918 20:01:47.305341   26827 main.go:141] libmachine: Using API Version  1
	I0918 20:01:47.305367   26827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 20:01:47.305641   26827 main.go:141] libmachine: () Calling .GetMachineName
	I0918 20:01:47.305684   26827 main.go:141] libmachine: (ha-091565) Calling .DriverName
	I0918 20:01:47.306081   26827 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 20:01:47.306112   26827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 20:01:47.307722   26827 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 20:01:47.309002   26827 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0918 20:01:47.309023   26827 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0918 20:01:47.309041   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHHostname
	I0918 20:01:47.311945   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:47.312427   26827 main.go:141] libmachine: (ha-091565) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:13:d8", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:01:12 +0000 UTC Type:0 Mac:52:54:00:2a:13:d8 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-091565 Clientid:01:52:54:00:2a:13:d8}
	I0918 20:01:47.312448   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined IP address 192.168.39.215 and MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:47.312599   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHPort
	I0918 20:01:47.312781   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHKeyPath
	I0918 20:01:47.312931   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHUsername
	I0918 20:01:47.313072   26827 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565/id_rsa Username:docker}
	I0918 20:01:47.321291   26827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43097
	I0918 20:01:47.321760   26827 main.go:141] libmachine: () Calling .GetVersion
	I0918 20:01:47.322322   26827 main.go:141] libmachine: Using API Version  1
	I0918 20:01:47.322343   26827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 20:01:47.322630   26827 main.go:141] libmachine: () Calling .GetMachineName
	I0918 20:01:47.322807   26827 main.go:141] libmachine: (ha-091565) Calling .GetState
	I0918 20:01:47.324450   26827 main.go:141] libmachine: (ha-091565) Calling .DriverName
	I0918 20:01:47.324624   26827 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0918 20:01:47.324639   26827 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0918 20:01:47.324656   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHHostname
	I0918 20:01:47.327553   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:47.328031   26827 main.go:141] libmachine: (ha-091565) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:13:d8", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:01:12 +0000 UTC Type:0 Mac:52:54:00:2a:13:d8 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-091565 Clientid:01:52:54:00:2a:13:d8}
	I0918 20:01:47.328103   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined IP address 192.168.39.215 and MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:47.328319   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHPort
	I0918 20:01:47.328490   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHKeyPath
	I0918 20:01:47.328627   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHUsername
	I0918 20:01:47.328755   26827 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565/id_rsa Username:docker}
	I0918 20:01:47.399915   26827 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0918 20:01:47.490020   26827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0918 20:01:47.507383   26827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0918 20:01:47.769102   26827 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
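	The sed pipeline above rewrites the CoreDNS ConfigMap so the Corefile gains a hosts block for host.minikube.internal and a log directive; the resulting fragment looks roughly like this (surrounding directives elided):

	    .:53 {
	        log
	        errors
	        ...
	        hosts {
	           192.168.39.1 host.minikube.internal
	           fallthrough
	        }
	        forward . /etc/resolv.conf
	        ...
	    }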
	I0918 20:01:48.124518   26827 main.go:141] libmachine: Making call to close driver server
	I0918 20:01:48.124546   26827 main.go:141] libmachine: (ha-091565) Calling .Close
	I0918 20:01:48.124566   26827 main.go:141] libmachine: Making call to close driver server
	I0918 20:01:48.124582   26827 main.go:141] libmachine: (ha-091565) Calling .Close
	I0918 20:01:48.124826   26827 main.go:141] libmachine: Successfully made call to close driver server
	I0918 20:01:48.124838   26827 main.go:141] libmachine: Successfully made call to close driver server
	I0918 20:01:48.124842   26827 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 20:01:48.124851   26827 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 20:01:48.124852   26827 main.go:141] libmachine: Making call to close driver server
	I0918 20:01:48.124854   26827 main.go:141] libmachine: (ha-091565) DBG | Closing plugin on server side
	I0918 20:01:48.124854   26827 main.go:141] libmachine: (ha-091565) DBG | Closing plugin on server side
	I0918 20:01:48.124860   26827 main.go:141] libmachine: Making call to close driver server
	I0918 20:01:48.124891   26827 main.go:141] libmachine: (ha-091565) Calling .Close
	I0918 20:01:48.124906   26827 main.go:141] libmachine: (ha-091565) Calling .Close
	I0918 20:01:48.125117   26827 main.go:141] libmachine: (ha-091565) DBG | Closing plugin on server side
	I0918 20:01:48.125151   26827 main.go:141] libmachine: Successfully made call to close driver server
	I0918 20:01:48.125160   26827 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 20:01:48.125197   26827 main.go:141] libmachine: Successfully made call to close driver server
	I0918 20:01:48.125206   26827 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 20:01:48.125293   26827 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0918 20:01:48.125321   26827 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0918 20:01:48.125410   26827 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0918 20:01:48.125420   26827 round_trippers.go:469] Request Headers:
	I0918 20:01:48.125433   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:01:48.125438   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:01:48.140920   26827 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0918 20:01:48.141439   26827 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0918 20:01:48.141452   26827 round_trippers.go:469] Request Headers:
	I0918 20:01:48.141459   26827 round_trippers.go:473]     Content-Type: application/json
	I0918 20:01:48.141463   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:01:48.141466   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:01:48.144763   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:01:48.144914   26827 main.go:141] libmachine: Making call to close driver server
	I0918 20:01:48.144928   26827 main.go:141] libmachine: (ha-091565) Calling .Close
	I0918 20:01:48.145191   26827 main.go:141] libmachine: Successfully made call to close driver server
	I0918 20:01:48.145213   26827 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 20:01:48.145197   26827 main.go:141] libmachine: (ha-091565) DBG | Closing plugin on server side
	I0918 20:01:48.146835   26827 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0918 20:01:48.148231   26827 addons.go:510] duration metric: took 879.91145ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0918 20:01:48.148269   26827 start.go:246] waiting for cluster config update ...
	I0918 20:01:48.148286   26827 start.go:255] writing updated cluster config ...
	I0918 20:01:48.150246   26827 out.go:201] 
	I0918 20:01:48.151820   26827 config.go:182] Loaded profile config "ha-091565": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0918 20:01:48.151905   26827 profile.go:143] Saving config to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/config.json ...
	I0918 20:01:48.153514   26827 out.go:177] * Starting "ha-091565-m02" control-plane node in "ha-091565" cluster
	I0918 20:01:48.154560   26827 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0918 20:01:48.154580   26827 cache.go:56] Caching tarball of preloaded images
	I0918 20:01:48.154669   26827 preload.go:172] Found /home/jenkins/minikube-integration/19667-7671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0918 20:01:48.154681   26827 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0918 20:01:48.154748   26827 profile.go:143] Saving config to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/config.json ...
	I0918 20:01:48.154916   26827 start.go:360] acquireMachinesLock for ha-091565-m02: {Name:mk1b0d68a0b7d9d4bc8204d5d81cb9eb77222526 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 20:01:48.154979   26827 start.go:364] duration metric: took 35.44µs to acquireMachinesLock for "ha-091565-m02"
	I0918 20:01:48.155003   26827 start.go:93] Provisioning new machine with config: &{Name:ha-091565 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.1 ClusterName:ha-091565 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.215 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 Cer
tExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0918 20:01:48.155077   26827 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0918 20:01:48.156472   26827 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0918 20:01:48.156553   26827 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 20:01:48.156597   26827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 20:01:48.171048   26827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41535
	I0918 20:01:48.171579   26827 main.go:141] libmachine: () Calling .GetVersion
	I0918 20:01:48.172102   26827 main.go:141] libmachine: Using API Version  1
	I0918 20:01:48.172121   26827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 20:01:48.172468   26827 main.go:141] libmachine: () Calling .GetMachineName
	I0918 20:01:48.172651   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetMachineName
	I0918 20:01:48.172786   26827 main.go:141] libmachine: (ha-091565-m02) Calling .DriverName
	I0918 20:01:48.172987   26827 start.go:159] libmachine.API.Create for "ha-091565" (driver="kvm2")
	I0918 20:01:48.173015   26827 client.go:168] LocalClient.Create starting
	I0918 20:01:48.173044   26827 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem
	I0918 20:01:48.173085   26827 main.go:141] libmachine: Decoding PEM data...
	I0918 20:01:48.173100   26827 main.go:141] libmachine: Parsing certificate...
	I0918 20:01:48.173147   26827 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem
	I0918 20:01:48.173164   26827 main.go:141] libmachine: Decoding PEM data...
	I0918 20:01:48.173174   26827 main.go:141] libmachine: Parsing certificate...
	I0918 20:01:48.173189   26827 main.go:141] libmachine: Running pre-create checks...
	I0918 20:01:48.173197   26827 main.go:141] libmachine: (ha-091565-m02) Calling .PreCreateCheck
	I0918 20:01:48.173330   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetConfigRaw
	I0918 20:01:48.173685   26827 main.go:141] libmachine: Creating machine...
	I0918 20:01:48.173707   26827 main.go:141] libmachine: (ha-091565-m02) Calling .Create
	I0918 20:01:48.173849   26827 main.go:141] libmachine: (ha-091565-m02) Creating KVM machine...
	I0918 20:01:48.175160   26827 main.go:141] libmachine: (ha-091565-m02) DBG | found existing default KVM network
	I0918 20:01:48.175336   26827 main.go:141] libmachine: (ha-091565-m02) DBG | found existing private KVM network mk-ha-091565
	I0918 20:01:48.175456   26827 main.go:141] libmachine: (ha-091565-m02) Setting up store path in /home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565-m02 ...
	I0918 20:01:48.175493   26827 main.go:141] libmachine: (ha-091565-m02) Building disk image from file:///home/jenkins/minikube-integration/19667-7671/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso
	I0918 20:01:48.175585   26827 main.go:141] libmachine: (ha-091565-m02) DBG | I0918 20:01:48.175471   27201 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19667-7671/.minikube
	I0918 20:01:48.175662   26827 main.go:141] libmachine: (ha-091565-m02) Downloading /home/jenkins/minikube-integration/19667-7671/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19667-7671/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso...
	I0918 20:01:48.401510   26827 main.go:141] libmachine: (ha-091565-m02) DBG | I0918 20:01:48.401363   27201 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565-m02/id_rsa...
	I0918 20:01:48.608450   26827 main.go:141] libmachine: (ha-091565-m02) DBG | I0918 20:01:48.608312   27201 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565-m02/ha-091565-m02.rawdisk...
	I0918 20:01:48.608478   26827 main.go:141] libmachine: (ha-091565-m02) DBG | Writing magic tar header
	I0918 20:01:48.608491   26827 main.go:141] libmachine: (ha-091565-m02) DBG | Writing SSH key tar header
	I0918 20:01:48.608498   26827 main.go:141] libmachine: (ha-091565-m02) DBG | I0918 20:01:48.608419   27201 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565-m02 ...
	I0918 20:01:48.608508   26827 main.go:141] libmachine: (ha-091565-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565-m02
	I0918 20:01:48.608550   26827 main.go:141] libmachine: (ha-091565-m02) Setting executable bit set on /home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565-m02 (perms=drwx------)
	I0918 20:01:48.608571   26827 main.go:141] libmachine: (ha-091565-m02) Setting executable bit set on /home/jenkins/minikube-integration/19667-7671/.minikube/machines (perms=drwxr-xr-x)
	I0918 20:01:48.608596   26827 main.go:141] libmachine: (ha-091565-m02) Setting executable bit set on /home/jenkins/minikube-integration/19667-7671/.minikube (perms=drwxr-xr-x)
	I0918 20:01:48.608618   26827 main.go:141] libmachine: (ha-091565-m02) Setting executable bit set on /home/jenkins/minikube-integration/19667-7671 (perms=drwxrwxr-x)
	I0918 20:01:48.608631   26827 main.go:141] libmachine: (ha-091565-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19667-7671/.minikube/machines
	I0918 20:01:48.608650   26827 main.go:141] libmachine: (ha-091565-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19667-7671/.minikube
	I0918 20:01:48.608662   26827 main.go:141] libmachine: (ha-091565-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19667-7671
	I0918 20:01:48.608675   26827 main.go:141] libmachine: (ha-091565-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0918 20:01:48.608686   26827 main.go:141] libmachine: (ha-091565-m02) DBG | Checking permissions on dir: /home/jenkins
	I0918 20:01:48.608698   26827 main.go:141] libmachine: (ha-091565-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0918 20:01:48.608710   26827 main.go:141] libmachine: (ha-091565-m02) DBG | Checking permissions on dir: /home
	I0918 20:01:48.608728   26827 main.go:141] libmachine: (ha-091565-m02) DBG | Skipping /home - not owner
	I0918 20:01:48.608744   26827 main.go:141] libmachine: (ha-091565-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0918 20:01:48.608754   26827 main.go:141] libmachine: (ha-091565-m02) Creating domain...
	I0918 20:01:48.609781   26827 main.go:141] libmachine: (ha-091565-m02) define libvirt domain using xml: 
	I0918 20:01:48.609802   26827 main.go:141] libmachine: (ha-091565-m02) <domain type='kvm'>
	I0918 20:01:48.609813   26827 main.go:141] libmachine: (ha-091565-m02)   <name>ha-091565-m02</name>
	I0918 20:01:48.609825   26827 main.go:141] libmachine: (ha-091565-m02)   <memory unit='MiB'>2200</memory>
	I0918 20:01:48.609846   26827 main.go:141] libmachine: (ha-091565-m02)   <vcpu>2</vcpu>
	I0918 20:01:48.609855   26827 main.go:141] libmachine: (ha-091565-m02)   <features>
	I0918 20:01:48.609866   26827 main.go:141] libmachine: (ha-091565-m02)     <acpi/>
	I0918 20:01:48.609874   26827 main.go:141] libmachine: (ha-091565-m02)     <apic/>
	I0918 20:01:48.609884   26827 main.go:141] libmachine: (ha-091565-m02)     <pae/>
	I0918 20:01:48.609891   26827 main.go:141] libmachine: (ha-091565-m02)     
	I0918 20:01:48.609898   26827 main.go:141] libmachine: (ha-091565-m02)   </features>
	I0918 20:01:48.609911   26827 main.go:141] libmachine: (ha-091565-m02)   <cpu mode='host-passthrough'>
	I0918 20:01:48.609932   26827 main.go:141] libmachine: (ha-091565-m02)   
	I0918 20:01:48.609948   26827 main.go:141] libmachine: (ha-091565-m02)   </cpu>
	I0918 20:01:48.609957   26827 main.go:141] libmachine: (ha-091565-m02)   <os>
	I0918 20:01:48.609972   26827 main.go:141] libmachine: (ha-091565-m02)     <type>hvm</type>
	I0918 20:01:48.609984   26827 main.go:141] libmachine: (ha-091565-m02)     <boot dev='cdrom'/>
	I0918 20:01:48.609994   26827 main.go:141] libmachine: (ha-091565-m02)     <boot dev='hd'/>
	I0918 20:01:48.610006   26827 main.go:141] libmachine: (ha-091565-m02)     <bootmenu enable='no'/>
	I0918 20:01:48.610016   26827 main.go:141] libmachine: (ha-091565-m02)   </os>
	I0918 20:01:48.610031   26827 main.go:141] libmachine: (ha-091565-m02)   <devices>
	I0918 20:01:48.610042   26827 main.go:141] libmachine: (ha-091565-m02)     <disk type='file' device='cdrom'>
	I0918 20:01:48.610058   26827 main.go:141] libmachine: (ha-091565-m02)       <source file='/home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565-m02/boot2docker.iso'/>
	I0918 20:01:48.610074   26827 main.go:141] libmachine: (ha-091565-m02)       <target dev='hdc' bus='scsi'/>
	I0918 20:01:48.610086   26827 main.go:141] libmachine: (ha-091565-m02)       <readonly/>
	I0918 20:01:48.610096   26827 main.go:141] libmachine: (ha-091565-m02)     </disk>
	I0918 20:01:48.610106   26827 main.go:141] libmachine: (ha-091565-m02)     <disk type='file' device='disk'>
	I0918 20:01:48.610120   26827 main.go:141] libmachine: (ha-091565-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0918 20:01:48.610136   26827 main.go:141] libmachine: (ha-091565-m02)       <source file='/home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565-m02/ha-091565-m02.rawdisk'/>
	I0918 20:01:48.610147   26827 main.go:141] libmachine: (ha-091565-m02)       <target dev='hda' bus='virtio'/>
	I0918 20:01:48.610170   26827 main.go:141] libmachine: (ha-091565-m02)     </disk>
	I0918 20:01:48.610187   26827 main.go:141] libmachine: (ha-091565-m02)     <interface type='network'>
	I0918 20:01:48.610207   26827 main.go:141] libmachine: (ha-091565-m02)       <source network='mk-ha-091565'/>
	I0918 20:01:48.610225   26827 main.go:141] libmachine: (ha-091565-m02)       <model type='virtio'/>
	I0918 20:01:48.610237   26827 main.go:141] libmachine: (ha-091565-m02)     </interface>
	I0918 20:01:48.610247   26827 main.go:141] libmachine: (ha-091565-m02)     <interface type='network'>
	I0918 20:01:48.610255   26827 main.go:141] libmachine: (ha-091565-m02)       <source network='default'/>
	I0918 20:01:48.610265   26827 main.go:141] libmachine: (ha-091565-m02)       <model type='virtio'/>
	I0918 20:01:48.610275   26827 main.go:141] libmachine: (ha-091565-m02)     </interface>
	I0918 20:01:48.610285   26827 main.go:141] libmachine: (ha-091565-m02)     <serial type='pty'>
	I0918 20:01:48.610296   26827 main.go:141] libmachine: (ha-091565-m02)       <target port='0'/>
	I0918 20:01:48.610310   26827 main.go:141] libmachine: (ha-091565-m02)     </serial>
	I0918 20:01:48.610325   26827 main.go:141] libmachine: (ha-091565-m02)     <console type='pty'>
	I0918 20:01:48.610342   26827 main.go:141] libmachine: (ha-091565-m02)       <target type='serial' port='0'/>
	I0918 20:01:48.610353   26827 main.go:141] libmachine: (ha-091565-m02)     </console>
	I0918 20:01:48.610360   26827 main.go:141] libmachine: (ha-091565-m02)     <rng model='virtio'>
	I0918 20:01:48.610371   26827 main.go:141] libmachine: (ha-091565-m02)       <backend model='random'>/dev/random</backend>
	I0918 20:01:48.610380   26827 main.go:141] libmachine: (ha-091565-m02)     </rng>
	I0918 20:01:48.610390   26827 main.go:141] libmachine: (ha-091565-m02)     
	I0918 20:01:48.610396   26827 main.go:141] libmachine: (ha-091565-m02)     
	I0918 20:01:48.610409   26827 main.go:141] libmachine: (ha-091565-m02)   </devices>
	I0918 20:01:48.610423   26827 main.go:141] libmachine: (ha-091565-m02) </domain>
	I0918 20:01:48.610436   26827 main.go:141] libmachine: (ha-091565-m02) 
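	The domain XML printed above is handed to libvirt by the kvm2 driver; roughly the same effect can be reproduced with the libvirt CLI (file name hypothetical, for illustration only):

	    virsh define ha-091565-m02.xml   # register the domain from the XML above
	    virsh start ha-091565-m02        # boot the VM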
	I0918 20:01:48.617221   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined MAC address 52:54:00:15:ec:ae in network default
	I0918 20:01:48.617722   26827 main.go:141] libmachine: (ha-091565-m02) Ensuring networks are active...
	I0918 20:01:48.617752   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:01:48.618492   26827 main.go:141] libmachine: (ha-091565-m02) Ensuring network default is active
	I0918 20:01:48.618796   26827 main.go:141] libmachine: (ha-091565-m02) Ensuring network mk-ha-091565 is active
	I0918 20:01:48.619157   26827 main.go:141] libmachine: (ha-091565-m02) Getting domain xml...
	I0918 20:01:48.619865   26827 main.go:141] libmachine: (ha-091565-m02) Creating domain...
	I0918 20:01:49.853791   26827 main.go:141] libmachine: (ha-091565-m02) Waiting to get IP...
	I0918 20:01:49.854650   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:01:49.855084   26827 main.go:141] libmachine: (ha-091565-m02) DBG | unable to find current IP address of domain ha-091565-m02 in network mk-ha-091565
	I0918 20:01:49.855112   26827 main.go:141] libmachine: (ha-091565-m02) DBG | I0918 20:01:49.855067   27201 retry.go:31] will retry after 283.999691ms: waiting for machine to come up
	I0918 20:01:50.140266   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:01:50.140696   26827 main.go:141] libmachine: (ha-091565-m02) DBG | unable to find current IP address of domain ha-091565-m02 in network mk-ha-091565
	I0918 20:01:50.140718   26827 main.go:141] libmachine: (ha-091565-m02) DBG | I0918 20:01:50.140668   27201 retry.go:31] will retry after 243.982504ms: waiting for machine to come up
	I0918 20:01:50.386066   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:01:50.386487   26827 main.go:141] libmachine: (ha-091565-m02) DBG | unable to find current IP address of domain ha-091565-m02 in network mk-ha-091565
	I0918 20:01:50.386515   26827 main.go:141] libmachine: (ha-091565-m02) DBG | I0918 20:01:50.386440   27201 retry.go:31] will retry after 384.970289ms: waiting for machine to come up
	I0918 20:01:50.773049   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:01:50.773463   26827 main.go:141] libmachine: (ha-091565-m02) DBG | unable to find current IP address of domain ha-091565-m02 in network mk-ha-091565
	I0918 20:01:50.773490   26827 main.go:141] libmachine: (ha-091565-m02) DBG | I0918 20:01:50.773419   27201 retry.go:31] will retry after 383.687698ms: waiting for machine to come up
	I0918 20:01:51.158968   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:01:51.159478   26827 main.go:141] libmachine: (ha-091565-m02) DBG | unable to find current IP address of domain ha-091565-m02 in network mk-ha-091565
	I0918 20:01:51.159506   26827 main.go:141] libmachine: (ha-091565-m02) DBG | I0918 20:01:51.159430   27201 retry.go:31] will retry after 708.286443ms: waiting for machine to come up
	I0918 20:01:51.869406   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:01:51.869911   26827 main.go:141] libmachine: (ha-091565-m02) DBG | unable to find current IP address of domain ha-091565-m02 in network mk-ha-091565
	I0918 20:01:51.869932   26827 main.go:141] libmachine: (ha-091565-m02) DBG | I0918 20:01:51.869871   27201 retry.go:31] will retry after 693.038682ms: waiting for machine to come up
	I0918 20:01:52.564866   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:01:52.565352   26827 main.go:141] libmachine: (ha-091565-m02) DBG | unable to find current IP address of domain ha-091565-m02 in network mk-ha-091565
	I0918 20:01:52.565380   26827 main.go:141] libmachine: (ha-091565-m02) DBG | I0918 20:01:52.565257   27201 retry.go:31] will retry after 736.537004ms: waiting for machine to come up
	I0918 20:01:53.303205   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:01:53.303598   26827 main.go:141] libmachine: (ha-091565-m02) DBG | unable to find current IP address of domain ha-091565-m02 in network mk-ha-091565
	I0918 20:01:53.303630   26827 main.go:141] libmachine: (ha-091565-m02) DBG | I0918 20:01:53.303562   27201 retry.go:31] will retry after 1.042865785s: waiting for machine to come up
	I0918 20:01:54.347669   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:01:54.348067   26827 main.go:141] libmachine: (ha-091565-m02) DBG | unable to find current IP address of domain ha-091565-m02 in network mk-ha-091565
	I0918 20:01:54.348094   26827 main.go:141] libmachine: (ha-091565-m02) DBG | I0918 20:01:54.348054   27201 retry.go:31] will retry after 1.167725142s: waiting for machine to come up
	I0918 20:01:55.517065   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:01:55.517432   26827 main.go:141] libmachine: (ha-091565-m02) DBG | unable to find current IP address of domain ha-091565-m02 in network mk-ha-091565
	I0918 20:01:55.517468   26827 main.go:141] libmachine: (ha-091565-m02) DBG | I0918 20:01:55.517401   27201 retry.go:31] will retry after 1.527504069s: waiting for machine to come up
	I0918 20:01:57.046257   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:01:57.046707   26827 main.go:141] libmachine: (ha-091565-m02) DBG | unable to find current IP address of domain ha-091565-m02 in network mk-ha-091565
	I0918 20:01:57.046734   26827 main.go:141] libmachine: (ha-091565-m02) DBG | I0918 20:01:57.046662   27201 retry.go:31] will retry after 2.687348908s: waiting for machine to come up
	I0918 20:01:59.735480   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:01:59.736079   26827 main.go:141] libmachine: (ha-091565-m02) DBG | unable to find current IP address of domain ha-091565-m02 in network mk-ha-091565
	I0918 20:01:59.736176   26827 main.go:141] libmachine: (ha-091565-m02) DBG | I0918 20:01:59.736024   27201 retry.go:31] will retry after 2.655283124s: waiting for machine to come up
	I0918 20:02:02.393219   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:02:02.393704   26827 main.go:141] libmachine: (ha-091565-m02) DBG | unable to find current IP address of domain ha-091565-m02 in network mk-ha-091565
	I0918 20:02:02.393725   26827 main.go:141] libmachine: (ha-091565-m02) DBG | I0918 20:02:02.393678   27201 retry.go:31] will retry after 3.65154054s: waiting for machine to come up
	I0918 20:02:06.048480   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:02:06.048911   26827 main.go:141] libmachine: (ha-091565-m02) DBG | unable to find current IP address of domain ha-091565-m02 in network mk-ha-091565
	I0918 20:02:06.048932   26827 main.go:141] libmachine: (ha-091565-m02) DBG | I0918 20:02:06.048885   27201 retry.go:31] will retry after 4.061870544s: waiting for machine to come up
	I0918 20:02:10.113660   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:02:10.114089   26827 main.go:141] libmachine: (ha-091565-m02) Found IP for machine: 192.168.39.92
	I0918 20:02:10.114110   26827 main.go:141] libmachine: (ha-091565-m02) Reserving static IP address...
	I0918 20:02:10.114118   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has current primary IP address 192.168.39.92 and MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:02:10.114476   26827 main.go:141] libmachine: (ha-091565-m02) DBG | unable to find host DHCP lease matching {name: "ha-091565-m02", mac: "52:54:00:21:2b:96", ip: "192.168.39.92"} in network mk-ha-091565
	I0918 20:02:10.190986   26827 main.go:141] libmachine: (ha-091565-m02) DBG | Getting to WaitForSSH function...
	I0918 20:02:10.191024   26827 main.go:141] libmachine: (ha-091565-m02) Reserved static IP address: 192.168.39.92
	I0918 20:02:10.191040   26827 main.go:141] libmachine: (ha-091565-m02) Waiting for SSH to be available...
	I0918 20:02:10.193580   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:02:10.194009   26827 main.go:141] libmachine: (ha-091565-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:2b:96", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:02:02 +0000 UTC Type:0 Mac:52:54:00:21:2b:96 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:minikube Clientid:01:52:54:00:21:2b:96}
	I0918 20:02:10.194037   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined IP address 192.168.39.92 and MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:02:10.194132   26827 main.go:141] libmachine: (ha-091565-m02) DBG | Using SSH client type: external
	I0918 20:02:10.194161   26827 main.go:141] libmachine: (ha-091565-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565-m02/id_rsa (-rw-------)
	I0918 20:02:10.194197   26827 main.go:141] libmachine: (ha-091565-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.92 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0918 20:02:10.194215   26827 main.go:141] libmachine: (ha-091565-m02) DBG | About to run SSH command:
	I0918 20:02:10.194223   26827 main.go:141] libmachine: (ha-091565-m02) DBG | exit 0
	I0918 20:02:10.323932   26827 main.go:141] libmachine: (ha-091565-m02) DBG | SSH cmd err, output: <nil>: 
	I0918 20:02:10.324269   26827 main.go:141] libmachine: (ha-091565-m02) KVM machine creation complete!
	I0918 20:02:10.324574   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetConfigRaw
	I0918 20:02:10.325151   26827 main.go:141] libmachine: (ha-091565-m02) Calling .DriverName
	I0918 20:02:10.325341   26827 main.go:141] libmachine: (ha-091565-m02) Calling .DriverName
	I0918 20:02:10.325477   26827 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0918 20:02:10.325492   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetState
	I0918 20:02:10.326893   26827 main.go:141] libmachine: Detecting operating system of created instance...
	I0918 20:02:10.326917   26827 main.go:141] libmachine: Waiting for SSH to be available...
	I0918 20:02:10.326923   26827 main.go:141] libmachine: Getting to WaitForSSH function...
	I0918 20:02:10.326931   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHHostname
	I0918 20:02:10.329564   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:02:10.330006   26827 main.go:141] libmachine: (ha-091565-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:2b:96", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:02:02 +0000 UTC Type:0 Mac:52:54:00:21:2b:96 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-091565-m02 Clientid:01:52:54:00:21:2b:96}
	I0918 20:02:10.330033   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined IP address 192.168.39.92 and MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:02:10.330172   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHPort
	I0918 20:02:10.330344   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHKeyPath
	I0918 20:02:10.330500   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHKeyPath
	I0918 20:02:10.330636   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHUsername
	I0918 20:02:10.330796   26827 main.go:141] libmachine: Using SSH client type: native
	I0918 20:02:10.331010   26827 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I0918 20:02:10.331023   26827 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0918 20:02:10.443345   26827 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0918 20:02:10.443373   26827 main.go:141] libmachine: Detecting the provisioner...
	I0918 20:02:10.443397   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHHostname
	I0918 20:02:10.446214   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:02:10.446561   26827 main.go:141] libmachine: (ha-091565-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:2b:96", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:02:02 +0000 UTC Type:0 Mac:52:54:00:21:2b:96 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-091565-m02 Clientid:01:52:54:00:21:2b:96}
	I0918 20:02:10.446609   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined IP address 192.168.39.92 and MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:02:10.446805   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHPort
	I0918 20:02:10.447003   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHKeyPath
	I0918 20:02:10.447152   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHKeyPath
	I0918 20:02:10.447299   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHUsername
	I0918 20:02:10.447466   26827 main.go:141] libmachine: Using SSH client type: native
	I0918 20:02:10.447651   26827 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I0918 20:02:10.447661   26827 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0918 20:02:10.560498   26827 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0918 20:02:10.560569   26827 main.go:141] libmachine: found compatible host: buildroot
	I0918 20:02:10.560579   26827 main.go:141] libmachine: Provisioning with buildroot...
	I0918 20:02:10.560587   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetMachineName
	I0918 20:02:10.560807   26827 buildroot.go:166] provisioning hostname "ha-091565-m02"
	I0918 20:02:10.560829   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetMachineName
	I0918 20:02:10.561019   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHHostname
	I0918 20:02:10.563200   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:02:10.563504   26827 main.go:141] libmachine: (ha-091565-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:2b:96", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:02:02 +0000 UTC Type:0 Mac:52:54:00:21:2b:96 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-091565-m02 Clientid:01:52:54:00:21:2b:96}
	I0918 20:02:10.563529   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined IP address 192.168.39.92 and MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:02:10.563719   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHPort
	I0918 20:02:10.563862   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHKeyPath
	I0918 20:02:10.564010   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHKeyPath
	I0918 20:02:10.564147   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHUsername
	I0918 20:02:10.564297   26827 main.go:141] libmachine: Using SSH client type: native
	I0918 20:02:10.564453   26827 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I0918 20:02:10.564464   26827 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-091565-m02 && echo "ha-091565-m02" | sudo tee /etc/hostname
	I0918 20:02:10.691295   26827 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-091565-m02
	
	I0918 20:02:10.691325   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHHostname
	I0918 20:02:10.693996   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:02:10.694327   26827 main.go:141] libmachine: (ha-091565-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:2b:96", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:02:02 +0000 UTC Type:0 Mac:52:54:00:21:2b:96 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-091565-m02 Clientid:01:52:54:00:21:2b:96}
	I0918 20:02:10.694365   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined IP address 192.168.39.92 and MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:02:10.694501   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHPort
	I0918 20:02:10.694688   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHKeyPath
	I0918 20:02:10.694846   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHKeyPath
	I0918 20:02:10.694979   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHUsername
	I0918 20:02:10.695122   26827 main.go:141] libmachine: Using SSH client type: native
	I0918 20:02:10.695275   26827 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I0918 20:02:10.695290   26827 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-091565-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-091565-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-091565-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0918 20:02:10.816522   26827 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0918 20:02:10.816548   26827 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19667-7671/.minikube CaCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19667-7671/.minikube}
	I0918 20:02:10.816563   26827 buildroot.go:174] setting up certificates
	I0918 20:02:10.816571   26827 provision.go:84] configureAuth start
	I0918 20:02:10.816581   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetMachineName
	I0918 20:02:10.816839   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetIP
	I0918 20:02:10.819595   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:02:10.819999   26827 main.go:141] libmachine: (ha-091565-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:2b:96", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:02:02 +0000 UTC Type:0 Mac:52:54:00:21:2b:96 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-091565-m02 Clientid:01:52:54:00:21:2b:96}
	I0918 20:02:10.820045   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined IP address 192.168.39.92 and MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:02:10.820197   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHHostname
	I0918 20:02:10.822853   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:02:10.823229   26827 main.go:141] libmachine: (ha-091565-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:2b:96", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:02:02 +0000 UTC Type:0 Mac:52:54:00:21:2b:96 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-091565-m02 Clientid:01:52:54:00:21:2b:96}
	I0918 20:02:10.823283   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined IP address 192.168.39.92 and MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:02:10.823418   26827 provision.go:143] copyHostCerts
	I0918 20:02:10.823446   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem
	I0918 20:02:10.823472   26827 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem, removing ...
	I0918 20:02:10.823482   26827 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem
	I0918 20:02:10.823549   26827 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem (1679 bytes)
	I0918 20:02:10.823626   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem
	I0918 20:02:10.823644   26827 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem, removing ...
	I0918 20:02:10.823651   26827 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem
	I0918 20:02:10.823674   26827 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem (1082 bytes)
	I0918 20:02:10.823715   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem
	I0918 20:02:10.823731   26827 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem, removing ...
	I0918 20:02:10.823737   26827 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem
	I0918 20:02:10.823757   26827 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem (1123 bytes)
	I0918 20:02:10.823804   26827 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem org=jenkins.ha-091565-m02 san=[127.0.0.1 192.168.39.92 ha-091565-m02 localhost minikube]
	I0918 20:02:11.057033   26827 provision.go:177] copyRemoteCerts
	I0918 20:02:11.057095   26827 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0918 20:02:11.057117   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHHostname
	I0918 20:02:11.059721   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:02:11.060054   26827 main.go:141] libmachine: (ha-091565-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:2b:96", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:02:02 +0000 UTC Type:0 Mac:52:54:00:21:2b:96 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-091565-m02 Clientid:01:52:54:00:21:2b:96}
	I0918 20:02:11.060083   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined IP address 192.168.39.92 and MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:02:11.060241   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHPort
	I0918 20:02:11.060442   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHKeyPath
	I0918 20:02:11.060560   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHUsername
	I0918 20:02:11.060670   26827 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565-m02/id_rsa Username:docker}
	I0918 20:02:11.145946   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0918 20:02:11.146020   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0918 20:02:11.169808   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0918 20:02:11.169883   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0918 20:02:11.192067   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0918 20:02:11.192133   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
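	The server certificate copied to /etc/docker/server.pem above was generated for the SANs listed earlier in this step (127.0.0.1, 192.168.39.92, ha-091565-m02, localhost, minikube). A quick way to confirm those SANs on the node — an illustrative command, not executed by the recorded run — is:
	# Illustrative check (not from the run): list the SANs in the server cert just copied to ha-091565-m02
	openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'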
	I0918 20:02:11.213945   26827 provision.go:87] duration metric: took 397.362437ms to configureAuth
	I0918 20:02:11.213974   26827 buildroot.go:189] setting minikube options for container-runtime
	I0918 20:02:11.214161   26827 config.go:182] Loaded profile config "ha-091565": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0918 20:02:11.214232   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHHostname
	I0918 20:02:11.216594   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:02:11.216996   26827 main.go:141] libmachine: (ha-091565-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:2b:96", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:02:02 +0000 UTC Type:0 Mac:52:54:00:21:2b:96 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-091565-m02 Clientid:01:52:54:00:21:2b:96}
	I0918 20:02:11.217027   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined IP address 192.168.39.92 and MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:02:11.217192   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHPort
	I0918 20:02:11.217382   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHKeyPath
	I0918 20:02:11.217568   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHKeyPath
	I0918 20:02:11.217782   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHUsername
	I0918 20:02:11.217991   26827 main.go:141] libmachine: Using SSH client type: native
	I0918 20:02:11.218183   26827 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I0918 20:02:11.218201   26827 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0918 20:02:11.450199   26827 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0918 20:02:11.450222   26827 main.go:141] libmachine: Checking connection to Docker...
	I0918 20:02:11.450231   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetURL
	I0918 20:02:11.451440   26827 main.go:141] libmachine: (ha-091565-m02) DBG | Using libvirt version 6000000
	I0918 20:02:11.453501   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:02:11.453892   26827 main.go:141] libmachine: (ha-091565-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:2b:96", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:02:02 +0000 UTC Type:0 Mac:52:54:00:21:2b:96 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-091565-m02 Clientid:01:52:54:00:21:2b:96}
	I0918 20:02:11.453920   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined IP address 192.168.39.92 and MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:02:11.454034   26827 main.go:141] libmachine: Docker is up and running!
	I0918 20:02:11.454051   26827 main.go:141] libmachine: Reticulating splines...
	I0918 20:02:11.454059   26827 client.go:171] duration metric: took 23.281034632s to LocalClient.Create
	I0918 20:02:11.454083   26827 start.go:167] duration metric: took 23.281096503s to libmachine.API.Create "ha-091565"
	I0918 20:02:11.454095   26827 start.go:293] postStartSetup for "ha-091565-m02" (driver="kvm2")
	I0918 20:02:11.454108   26827 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0918 20:02:11.454129   26827 main.go:141] libmachine: (ha-091565-m02) Calling .DriverName
	I0918 20:02:11.454363   26827 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0918 20:02:11.454391   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHHostname
	I0918 20:02:11.456695   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:02:11.457025   26827 main.go:141] libmachine: (ha-091565-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:2b:96", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:02:02 +0000 UTC Type:0 Mac:52:54:00:21:2b:96 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-091565-m02 Clientid:01:52:54:00:21:2b:96}
	I0918 20:02:11.457053   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined IP address 192.168.39.92 and MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:02:11.457216   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHPort
	I0918 20:02:11.457393   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHKeyPath
	I0918 20:02:11.457548   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHUsername
	I0918 20:02:11.457664   26827 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565-m02/id_rsa Username:docker}
	I0918 20:02:11.543806   26827 ssh_runner.go:195] Run: cat /etc/os-release
	I0918 20:02:11.548176   26827 info.go:137] Remote host: Buildroot 2023.02.9
	I0918 20:02:11.548212   26827 filesync.go:126] Scanning /home/jenkins/minikube-integration/19667-7671/.minikube/addons for local assets ...
	I0918 20:02:11.548288   26827 filesync.go:126] Scanning /home/jenkins/minikube-integration/19667-7671/.minikube/files for local assets ...
	I0918 20:02:11.548387   26827 filesync.go:149] local asset: /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem -> 148782.pem in /etc/ssl/certs
	I0918 20:02:11.548401   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem -> /etc/ssl/certs/148782.pem
	I0918 20:02:11.548509   26827 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0918 20:02:11.557991   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem --> /etc/ssl/certs/148782.pem (1708 bytes)
	I0918 20:02:11.580809   26827 start.go:296] duration metric: took 126.700515ms for postStartSetup
	I0918 20:02:11.580869   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetConfigRaw
	I0918 20:02:11.581461   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetIP
	I0918 20:02:11.583798   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:02:11.584145   26827 main.go:141] libmachine: (ha-091565-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:2b:96", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:02:02 +0000 UTC Type:0 Mac:52:54:00:21:2b:96 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-091565-m02 Clientid:01:52:54:00:21:2b:96}
	I0918 20:02:11.584166   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined IP address 192.168.39.92 and MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:02:11.584397   26827 profile.go:143] Saving config to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/config.json ...
	I0918 20:02:11.584590   26827 start.go:128] duration metric: took 23.429501872s to createHost
	I0918 20:02:11.584610   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHHostname
	I0918 20:02:11.586789   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:02:11.587088   26827 main.go:141] libmachine: (ha-091565-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:2b:96", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:02:02 +0000 UTC Type:0 Mac:52:54:00:21:2b:96 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-091565-m02 Clientid:01:52:54:00:21:2b:96}
	I0918 20:02:11.587104   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined IP address 192.168.39.92 and MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:02:11.587289   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHPort
	I0918 20:02:11.587470   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHKeyPath
	I0918 20:02:11.587595   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHKeyPath
	I0918 20:02:11.587738   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHUsername
	I0918 20:02:11.587870   26827 main.go:141] libmachine: Using SSH client type: native
	I0918 20:02:11.588036   26827 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I0918 20:02:11.588047   26827 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0918 20:02:11.700738   26827 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726689731.662490371
	
	I0918 20:02:11.700765   26827 fix.go:216] guest clock: 1726689731.662490371
	I0918 20:02:11.700775   26827 fix.go:229] Guest: 2024-09-18 20:02:11.662490371 +0000 UTC Remote: 2024-09-18 20:02:11.584601507 +0000 UTC m=+73.979326396 (delta=77.888864ms)
	I0918 20:02:11.700793   26827 fix.go:200] guest clock delta is within tolerance: 77.888864ms
	I0918 20:02:11.700797   26827 start.go:83] releasing machines lock for "ha-091565-m02", held for 23.545807984s
	I0918 20:02:11.700814   26827 main.go:141] libmachine: (ha-091565-m02) Calling .DriverName
	I0918 20:02:11.701084   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetIP
	I0918 20:02:11.703834   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:02:11.704301   26827 main.go:141] libmachine: (ha-091565-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:2b:96", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:02:02 +0000 UTC Type:0 Mac:52:54:00:21:2b:96 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-091565-m02 Clientid:01:52:54:00:21:2b:96}
	I0918 20:02:11.704332   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined IP address 192.168.39.92 and MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:02:11.706825   26827 out.go:177] * Found network options:
	I0918 20:02:11.708191   26827 out.go:177]   - NO_PROXY=192.168.39.215
	W0918 20:02:11.709336   26827 proxy.go:119] fail to check proxy env: Error ip not in block
	I0918 20:02:11.709382   26827 main.go:141] libmachine: (ha-091565-m02) Calling .DriverName
	I0918 20:02:11.710083   26827 main.go:141] libmachine: (ha-091565-m02) Calling .DriverName
	I0918 20:02:11.710311   26827 main.go:141] libmachine: (ha-091565-m02) Calling .DriverName
	I0918 20:02:11.710420   26827 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0918 20:02:11.710463   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHHostname
	W0918 20:02:11.710532   26827 proxy.go:119] fail to check proxy env: Error ip not in block
	I0918 20:02:11.710615   26827 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0918 20:02:11.710636   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHHostname
	I0918 20:02:11.714007   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:02:11.714090   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:02:11.714449   26827 main.go:141] libmachine: (ha-091565-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:2b:96", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:02:02 +0000 UTC Type:0 Mac:52:54:00:21:2b:96 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-091565-m02 Clientid:01:52:54:00:21:2b:96}
	I0918 20:02:11.714474   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined IP address 192.168.39.92 and MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:02:11.714500   26827 main.go:141] libmachine: (ha-091565-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:2b:96", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:02:02 +0000 UTC Type:0 Mac:52:54:00:21:2b:96 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-091565-m02 Clientid:01:52:54:00:21:2b:96}
	I0918 20:02:11.714515   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined IP address 192.168.39.92 and MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:02:11.714602   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHPort
	I0918 20:02:11.714757   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHPort
	I0918 20:02:11.714809   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHKeyPath
	I0918 20:02:11.714897   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHKeyPath
	I0918 20:02:11.714955   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHUsername
	I0918 20:02:11.715014   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHUsername
	I0918 20:02:11.715075   26827 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565-m02/id_rsa Username:docker}
	I0918 20:02:11.715103   26827 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565-m02/id_rsa Username:docker}
	I0918 20:02:11.951540   26827 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0918 20:02:11.958397   26827 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0918 20:02:11.958472   26827 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0918 20:02:11.975402   26827 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0918 20:02:11.975429   26827 start.go:495] detecting cgroup driver to use...
	I0918 20:02:11.975517   26827 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0918 20:02:11.992284   26827 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0918 20:02:12.006780   26827 docker.go:217] disabling cri-docker service (if available) ...
	I0918 20:02:12.006835   26827 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0918 20:02:12.021223   26827 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0918 20:02:12.035137   26827 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0918 20:02:12.152314   26827 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0918 20:02:12.308984   26827 docker.go:233] disabling docker service ...
	I0918 20:02:12.309056   26827 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0918 20:02:12.322897   26827 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0918 20:02:12.336617   26827 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0918 20:02:12.473473   26827 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0918 20:02:12.584374   26827 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0918 20:02:12.597923   26827 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0918 20:02:12.615683   26827 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0918 20:02:12.615759   26827 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 20:02:12.625760   26827 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0918 20:02:12.625817   26827 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 20:02:12.635917   26827 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 20:02:12.645924   26827 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 20:02:12.655813   26827 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0918 20:02:12.666525   26827 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 20:02:12.676621   26827 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 20:02:12.693200   26827 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
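	The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf in place; a sanity check of the values they are meant to leave behind — an illustrative command, not part of the recorded run — would be:
	# Expected results of the CRI-O drop-in edits logged above (illustrative, not from the run)
	grep -E '^(pause_image|cgroup_manager|conmon_cgroup|default_sysctls)' /etc/crio/crio.conf.d/02-crio.conf
	#   pause_image = "registry.k8s.io/pause:3.10"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	#   default_sysctls = [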
	I0918 20:02:12.703365   26827 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0918 20:02:12.713885   26827 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0918 20:02:12.713948   26827 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0918 20:02:12.728888   26827 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0918 20:02:12.749626   26827 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 20:02:12.881747   26827 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0918 20:02:12.971475   26827 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0918 20:02:12.971567   26827 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0918 20:02:12.976879   26827 start.go:563] Will wait 60s for crictl version
	I0918 20:02:12.976965   26827 ssh_runner.go:195] Run: which crictl
	I0918 20:02:12.980716   26827 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0918 20:02:13.019156   26827 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0918 20:02:13.019245   26827 ssh_runner.go:195] Run: crio --version
	I0918 20:02:13.046401   26827 ssh_runner.go:195] Run: crio --version
	I0918 20:02:13.075823   26827 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0918 20:02:13.077052   26827 out.go:177]   - env NO_PROXY=192.168.39.215
	I0918 20:02:13.078258   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetIP
	I0918 20:02:13.081042   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:02:13.081379   26827 main.go:141] libmachine: (ha-091565-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:2b:96", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:02:02 +0000 UTC Type:0 Mac:52:54:00:21:2b:96 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-091565-m02 Clientid:01:52:54:00:21:2b:96}
	I0918 20:02:13.081410   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined IP address 192.168.39.92 and MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:02:13.081604   26827 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0918 20:02:13.085957   26827 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0918 20:02:13.098025   26827 mustload.go:65] Loading cluster: ha-091565
	I0918 20:02:13.098236   26827 config.go:182] Loaded profile config "ha-091565": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0918 20:02:13.098500   26827 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 20:02:13.098540   26827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 20:02:13.113020   26827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43137
	I0918 20:02:13.113466   26827 main.go:141] libmachine: () Calling .GetVersion
	I0918 20:02:13.113910   26827 main.go:141] libmachine: Using API Version  1
	I0918 20:02:13.113932   26827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 20:02:13.114242   26827 main.go:141] libmachine: () Calling .GetMachineName
	I0918 20:02:13.114415   26827 main.go:141] libmachine: (ha-091565) Calling .GetState
	I0918 20:02:13.115854   26827 host.go:66] Checking if "ha-091565" exists ...
	I0918 20:02:13.116211   26827 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 20:02:13.116246   26827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 20:02:13.130542   26827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42385
	I0918 20:02:13.130887   26827 main.go:141] libmachine: () Calling .GetVersion
	I0918 20:02:13.131305   26827 main.go:141] libmachine: Using API Version  1
	I0918 20:02:13.131334   26827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 20:02:13.131650   26827 main.go:141] libmachine: () Calling .GetMachineName
	I0918 20:02:13.131812   26827 main.go:141] libmachine: (ha-091565) Calling .DriverName
	I0918 20:02:13.131970   26827 certs.go:68] Setting up /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565 for IP: 192.168.39.92
	I0918 20:02:13.131980   26827 certs.go:194] generating shared ca certs ...
	I0918 20:02:13.131999   26827 certs.go:226] acquiring lock for ca certs: {Name:mkf54e0e71ed884c4372f9d3d4cb308ea4600185 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 20:02:13.132147   26827 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.key
	I0918 20:02:13.132196   26827 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.key
	I0918 20:02:13.132210   26827 certs.go:256] generating profile certs ...
	I0918 20:02:13.132298   26827 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/client.key
	I0918 20:02:13.132328   26827 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.key.7629692a
	I0918 20:02:13.132349   26827 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.crt.7629692a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.215 192.168.39.92 192.168.39.254]
	I0918 20:02:13.381001   26827 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.crt.7629692a ...
	I0918 20:02:13.381032   26827 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.crt.7629692a: {Name:mk24fda3fc7efba8ec26d63c4d1c3262bef6ab2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 20:02:13.381214   26827 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.key.7629692a ...
	I0918 20:02:13.381231   26827 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.key.7629692a: {Name:mk2ca0cef4c9dc7b760b7f2d962b84f60a94bd6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 20:02:13.381333   26827 certs.go:381] copying /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.crt.7629692a -> /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.crt
	I0918 20:02:13.381891   26827 certs.go:385] copying /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.key.7629692a -> /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.key
	I0918 20:02:13.382099   26827 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/proxy-client.key
	I0918 20:02:13.382115   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0918 20:02:13.382140   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0918 20:02:13.382158   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0918 20:02:13.382174   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0918 20:02:13.382188   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0918 20:02:13.382203   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0918 20:02:13.382217   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0918 20:02:13.382242   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0918 20:02:13.382310   26827 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878.pem (1338 bytes)
	W0918 20:02:13.382346   26827 certs.go:480] ignoring /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878_empty.pem, impossibly tiny 0 bytes
	I0918 20:02:13.382356   26827 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem (1675 bytes)
	I0918 20:02:13.382393   26827 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem (1082 bytes)
	I0918 20:02:13.382425   26827 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem (1123 bytes)
	I0918 20:02:13.382456   26827 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem (1679 bytes)
	I0918 20:02:13.382505   26827 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem (1708 bytes)
	I0918 20:02:13.382538   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem -> /usr/share/ca-certificates/148782.pem
	I0918 20:02:13.382565   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0918 20:02:13.382604   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878.pem -> /usr/share/ca-certificates/14878.pem
	I0918 20:02:13.382670   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHHostname
	I0918 20:02:13.385533   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:02:13.385884   26827 main.go:141] libmachine: (ha-091565) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:13:d8", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:01:12 +0000 UTC Type:0 Mac:52:54:00:2a:13:d8 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-091565 Clientid:01:52:54:00:2a:13:d8}
	I0918 20:02:13.385914   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined IP address 192.168.39.215 and MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:02:13.386036   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHPort
	I0918 20:02:13.386204   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHKeyPath
	I0918 20:02:13.386359   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHUsername
	I0918 20:02:13.386456   26827 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565/id_rsa Username:docker}
	I0918 20:02:13.464434   26827 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0918 20:02:13.469316   26827 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0918 20:02:13.479828   26827 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0918 20:02:13.484029   26827 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0918 20:02:13.493840   26827 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0918 20:02:13.497931   26827 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0918 20:02:13.507815   26827 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0918 20:02:13.512123   26827 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0918 20:02:13.522655   26827 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0918 20:02:13.527051   26827 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0918 20:02:13.538403   26827 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0918 20:02:13.542432   26827 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0918 20:02:13.553060   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0918 20:02:13.579635   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0918 20:02:13.603368   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0918 20:02:13.625998   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0918 20:02:13.648303   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0918 20:02:13.671000   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0918 20:02:13.694050   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0918 20:02:13.719216   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0918 20:02:13.742544   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem --> /usr/share/ca-certificates/148782.pem (1708 bytes)
	I0918 20:02:13.765706   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0918 20:02:13.789848   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878.pem --> /usr/share/ca-certificates/14878.pem (1338 bytes)
	I0918 20:02:13.814441   26827 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0918 20:02:13.831542   26827 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0918 20:02:13.848254   26827 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0918 20:02:13.865737   26827 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0918 20:02:13.881778   26827 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0918 20:02:13.898086   26827 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0918 20:02:13.913537   26827 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0918 20:02:13.929503   26827 ssh_runner.go:195] Run: openssl version
	I0918 20:02:13.934878   26827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0918 20:02:13.945006   26827 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0918 20:02:13.949290   26827 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 18 19:39 /usr/share/ca-certificates/minikubeCA.pem
	I0918 20:02:13.949360   26827 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0918 20:02:13.955252   26827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0918 20:02:13.965953   26827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14878.pem && ln -fs /usr/share/ca-certificates/14878.pem /etc/ssl/certs/14878.pem"
	I0918 20:02:13.976794   26827 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14878.pem
	I0918 20:02:13.981192   26827 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 18 19:57 /usr/share/ca-certificates/14878.pem
	I0918 20:02:13.981245   26827 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14878.pem
	I0918 20:02:13.986694   26827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14878.pem /etc/ssl/certs/51391683.0"
	I0918 20:02:13.996869   26827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148782.pem && ln -fs /usr/share/ca-certificates/148782.pem /etc/ssl/certs/148782.pem"
	I0918 20:02:14.006855   26827 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148782.pem
	I0918 20:02:14.010785   26827 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 18 19:57 /usr/share/ca-certificates/148782.pem
	I0918 20:02:14.010831   26827 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148782.pem
	I0918 20:02:14.016603   26827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148782.pem /etc/ssl/certs/3ec20f2e.0"
	I0918 20:02:14.026923   26827 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0918 20:02:14.030483   26827 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0918 20:02:14.030540   26827 kubeadm.go:934] updating node {m02 192.168.39.92 8443 v1.31.1 crio true true} ...
	I0918 20:02:14.030615   26827 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-091565-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.92
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-091565 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0918 20:02:14.030638   26827 kube-vip.go:115] generating kube-vip config ...
	I0918 20:02:14.030669   26827 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0918 20:02:14.046531   26827 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0918 20:02:14.046601   26827 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
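The kube-vip manifest above is generated per control-plane node and written as a static pod; only a handful of values (the VIP address, the interface, the API port, load-balancing) vary between clusters. The following is a rough sketch, assuming a text/template approach rather than minikube's actual kube-vip.go, of rendering such a manifest with those values as parameters.

// Illustrative rendering of a trimmed-down kube-vip static pod manifest.
package main

import (
	"os"
	"text/template"
)

// Only the fields that vary per cluster are templated; everything else mirrors
// the manifest shown in the log.
const kubeVipTmpl = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - name: kube-vip
    image: ghcr.io/kube-vip/kube-vip:v0.8.0
    args: ["manager"]
    env:
    - name: port
      value: "{{ .Port }}"
    - name: vip_interface
      value: {{ .Interface }}
    - name: address
      value: {{ .VIP }}
  hostNetwork: true
`

func main() {
	t := template.Must(template.New("kube-vip").Parse(kubeVipTmpl))
	// Values taken from the log: VIP 192.168.39.254 on eth0, API port 8443.
	_ = t.Execute(os.Stdout, struct {
		VIP, Interface string
		Port           int
	}{VIP: "192.168.39.254", Interface: "eth0", Port: 8443})
}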
	I0918 20:02:14.046656   26827 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0918 20:02:14.056509   26827 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0918 20:02:14.056563   26827 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0918 20:02:14.065775   26827 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0918 20:02:14.065800   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0918 20:02:14.065850   26827 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19667-7671/.minikube/cache/linux/amd64/v1.31.1/kubeadm
	I0918 20:02:14.065881   26827 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19667-7671/.minikube/cache/linux/amd64/v1.31.1/kubelet
	I0918 20:02:14.065857   26827 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0918 20:02:14.069919   26827 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0918 20:02:14.069943   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0918 20:02:15.108841   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0918 20:02:15.108916   26827 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0918 20:02:15.113741   26827 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0918 20:02:15.113786   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0918 20:02:15.268546   26827 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0918 20:02:15.304643   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0918 20:02:15.304757   26827 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0918 20:02:15.316920   26827 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0918 20:02:15.316964   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
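Because /var/lib/minikube/binaries/v1.31.1 did not exist on the new node, kubectl, kubeadm and kubelet are downloaded from dl.k8s.io, checked against their published .sha256 files, and copied over. Below is a hedged sketch of that download-and-verify step; the URL is the one from the log, but the helper itself is illustrative rather than minikube's download package.

// Fetch a release binary and verify it against its published sha256 digest.
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	return io.ReadAll(resp.Body)
}

func main() {
	base := "https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm"

	bin, err := fetch(base)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	sum, err := fetch(base + ".sha256")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}

	got := sha256.Sum256(bin)
	want := strings.Fields(string(sum))[0] // the file holds the hex digest
	if hex.EncodeToString(got[:]) != want {
		fmt.Fprintln(os.Stderr, "checksum mismatch, refusing to install")
		return
	}
	// At this point the binary would be copied to /var/lib/minikube/binaries/... on the node.
	fmt.Println("kubeadm checksum OK,", len(bin), "bytes")
}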
	I0918 20:02:15.681051   26827 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0918 20:02:15.690458   26827 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0918 20:02:15.707147   26827 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0918 20:02:15.723671   26827 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0918 20:02:15.740654   26827 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0918 20:02:15.744145   26827 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
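The bash one-liner above rewrites /etc/hosts so that control-plane.minikube.internal resolves to the HA VIP (192.168.39.254): any stale mapping is filtered out and the current one appended. A small Go sketch of the same edit, assuming direct local file access instead of the ssh round-trip used by the test:

// Replace the control-plane.minikube.internal entry in /etc/hosts.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const hostsPath = "/etc/hosts"
	const entry = "192.168.39.254\tcontrol-plane.minikube.internal"

	data, err := os.ReadFile(hostsPath)
	if err != nil {
		panic(err)
	}

	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
			continue // drop the old mapping, like the grep -v in the log
		}
		kept = append(kept, line)
	}
	kept = append(kept, entry)

	if err := os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		panic(err)
	}
	fmt.Println("control-plane.minikube.internal -> 192.168.39.254")
}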
	I0918 20:02:15.755908   26827 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 20:02:15.867566   26827 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0918 20:02:15.884693   26827 host.go:66] Checking if "ha-091565" exists ...
	I0918 20:02:15.885015   26827 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 20:02:15.885055   26827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 20:02:15.899922   26827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46063
	I0918 20:02:15.900446   26827 main.go:141] libmachine: () Calling .GetVersion
	I0918 20:02:15.900956   26827 main.go:141] libmachine: Using API Version  1
	I0918 20:02:15.900978   26827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 20:02:15.901391   26827 main.go:141] libmachine: () Calling .GetMachineName
	I0918 20:02:15.901591   26827 main.go:141] libmachine: (ha-091565) Calling .DriverName
	I0918 20:02:15.901775   26827 start.go:317] joinCluster: &{Name:ha-091565 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-091565 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.215 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.92 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 20:02:15.901868   26827 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0918 20:02:15.901882   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHHostname
	I0918 20:02:15.904812   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:02:15.905340   26827 main.go:141] libmachine: (ha-091565) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:13:d8", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:01:12 +0000 UTC Type:0 Mac:52:54:00:2a:13:d8 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-091565 Clientid:01:52:54:00:2a:13:d8}
	I0918 20:02:15.905365   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined IP address 192.168.39.215 and MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:02:15.905530   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHPort
	I0918 20:02:15.905692   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHKeyPath
	I0918 20:02:15.905842   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHUsername
	I0918 20:02:15.905998   26827 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565/id_rsa Username:docker}
	I0918 20:02:16.056145   26827 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.92 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0918 20:02:16.056188   26827 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token c3chy6.pphzks8qg9r6i1q7 --discovery-token-ca-cert-hash sha256:8ad46bf288e8cc2226f5a929a768e6c95cddecc97ffc4016be27ef0987b1e36f --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-091565-m02 --control-plane --apiserver-advertise-address=192.168.39.92 --apiserver-bind-port=8443"
	I0918 20:02:39.534299   26827 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token c3chy6.pphzks8qg9r6i1q7 --discovery-token-ca-cert-hash sha256:8ad46bf288e8cc2226f5a929a768e6c95cddecc97ffc4016be27ef0987b1e36f --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-091565-m02 --control-plane --apiserver-advertise-address=192.168.39.92 --apiserver-bind-port=8443": (23.478085214s)
	I0918 20:02:39.534349   26827 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0918 20:02:40.082157   26827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-091565-m02 minikube.k8s.io/updated_at=2024_09_18T20_02_40_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=85073601a832bd4bbda5d11fa91feafff6ec6b91 minikube.k8s.io/name=ha-091565 minikube.k8s.io/primary=false
	I0918 20:02:40.225760   26827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-091565-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0918 20:02:40.371807   26827 start.go:319] duration metric: took 24.470025441s to joinCluster
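Joining the second control-plane node is a two-step exchange: a join command is minted on the primary with kubeadm token create --print-join-command, then executed on m02 with the extra control-plane flags shown in the log (CRI socket, node name, advertise address, bind port), followed by the label/taint bookkeeping. A minimal sketch, assuming both commands run locally via os/exec rather than over ssh as in the test:

// Mint a join command on the primary, extend it, and run it on the new node.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Step 1: create a token and print the matching join command.
	out, err := exec.Command("kubeadm", "token", "create", "--print-join-command", "--ttl=0").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "token create:", err)
		return
	}
	join := strings.TrimSpace(string(out))

	// Step 2: append the control-plane flags from the log and run the join.
	join += " --ignore-preflight-errors=all" +
		" --cri-socket unix:///var/run/crio/crio.sock" +
		" --node-name=ha-091565-m02" +
		" --control-plane" +
		" --apiserver-advertise-address=192.168.39.92" +
		" --apiserver-bind-port=8443"

	cmd := exec.Command("/bin/bash", "-c", "sudo "+join)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "kubeadm join:", err)
	}
}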
	I0918 20:02:40.371885   26827 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.92 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0918 20:02:40.372206   26827 config.go:182] Loaded profile config "ha-091565": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0918 20:02:40.373180   26827 out.go:177] * Verifying Kubernetes components...
	I0918 20:02:40.374584   26827 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 20:02:40.624879   26827 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0918 20:02:40.676856   26827 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19667-7671/kubeconfig
	I0918 20:02:40.677129   26827 kapi.go:59] client config for ha-091565: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/client.crt", KeyFile:"/home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/client.key", CAFile:"/home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0918 20:02:40.677196   26827 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.215:8443
	I0918 20:02:40.677413   26827 node_ready.go:35] waiting up to 6m0s for node "ha-091565-m02" to be "Ready" ...
	I0918 20:02:40.677523   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:02:40.677531   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:40.677538   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:40.677545   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:40.686192   26827 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0918 20:02:41.177691   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:02:41.177719   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:41.177732   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:41.177740   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:41.183226   26827 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0918 20:02:41.678101   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:02:41.678120   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:41.678127   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:41.678130   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:41.692857   26827 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0918 20:02:42.177589   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:02:42.177610   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:42.177621   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:42.177625   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:42.180992   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:02:42.677789   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:02:42.677810   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:42.677818   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:42.677822   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:42.682783   26827 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0918 20:02:42.683426   26827 node_ready.go:53] node "ha-091565-m02" has status "Ready":"False"
	I0918 20:02:43.178132   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:02:43.178152   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:43.178164   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:43.178170   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:43.181084   26827 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 20:02:43.678483   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:02:43.678502   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:43.678510   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:43.678515   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:43.683496   26827 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0918 20:02:44.178547   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:02:44.178567   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:44.178576   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:44.178579   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:44.181977   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:02:44.677784   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:02:44.677816   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:44.677827   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:44.677835   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:44.682556   26827 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0918 20:02:45.177682   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:02:45.177710   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:45.177723   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:45.177731   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:45.181803   26827 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0918 20:02:45.182526   26827 node_ready.go:53] node "ha-091565-m02" has status "Ready":"False"
	I0918 20:02:45.677703   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:02:45.677727   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:45.677735   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:45.677739   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:45.684776   26827 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0918 20:02:46.178417   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:02:46.178441   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:46.178448   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:46.178456   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:46.181952   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:02:46.677961   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:02:46.677985   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:46.677992   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:46.677996   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:46.681910   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:02:47.178442   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:02:47.178466   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:47.178474   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:47.178479   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:47.212429   26827 round_trippers.go:574] Response Status: 200 OK in 33 milliseconds
	I0918 20:02:47.213077   26827 node_ready.go:53] node "ha-091565-m02" has status "Ready":"False"
	I0918 20:02:47.678191   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:02:47.678213   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:47.678221   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:47.678225   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:47.682040   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:02:48.178008   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:02:48.178028   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:48.178038   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:48.178043   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:48.181099   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:02:48.677668   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:02:48.677698   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:48.677711   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:48.677717   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:48.681381   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:02:49.178444   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:02:49.178465   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:49.178472   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:49.178475   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:49.182036   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:02:49.678042   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:02:49.678068   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:49.678080   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:49.678088   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:49.690181   26827 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0918 20:02:49.690997   26827 node_ready.go:53] node "ha-091565-m02" has status "Ready":"False"
	I0918 20:02:50.178273   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:02:50.178297   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:50.178304   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:50.178308   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:50.181653   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:02:50.677625   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:02:50.677648   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:50.677656   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:50.677661   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:50.681751   26827 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0918 20:02:51.178317   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:02:51.178366   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:51.178378   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:51.178384   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:51.181883   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:02:51.678030   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:02:51.678058   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:51.678069   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:51.678074   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:51.681343   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:02:52.178201   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:02:52.178228   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:52.178239   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:52.178246   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:52.181149   26827 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 20:02:52.181830   26827 node_ready.go:53] node "ha-091565-m02" has status "Ready":"False"
	I0918 20:02:52.678195   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:02:52.678219   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:52.678227   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:52.678230   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:52.681789   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:02:53.178242   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:02:53.178268   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:53.178279   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:53.178284   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:53.181682   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:02:53.677884   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:02:53.677907   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:53.677916   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:53.677921   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:53.681477   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:02:54.178412   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:02:54.178438   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:54.178445   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:54.178449   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:54.182375   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:02:54.182956   26827 node_ready.go:53] node "ha-091565-m02" has status "Ready":"False"
	I0918 20:02:54.678270   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:02:54.678294   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:54.678301   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:54.678306   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:54.681439   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:02:55.178343   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:02:55.178364   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:55.178372   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:55.178376   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:55.181349   26827 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 20:02:55.678277   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:02:55.678299   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:55.678307   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:55.678312   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:55.681665   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:02:56.177994   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:02:56.178018   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:56.178025   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:56.178030   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:56.181355   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:02:56.678444   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:02:56.678487   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:56.678502   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:56.678506   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:56.682256   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:02:56.683058   26827 node_ready.go:53] node "ha-091565-m02" has status "Ready":"False"
	I0918 20:02:57.178486   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:02:57.178510   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:57.178517   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:57.178521   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:57.182538   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:02:57.678060   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:02:57.678084   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:57.678091   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:57.678096   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:57.681385   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:02:58.177838   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:02:58.177866   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:58.177876   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:58.177887   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:58.181116   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:02:58.677581   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:02:58.677623   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:58.677631   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:58.677634   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:58.681025   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:02:59.178037   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:02:59.178075   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:59.178083   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:59.178087   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:59.182040   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:02:59.182593   26827 node_ready.go:49] node "ha-091565-m02" has status "Ready":"True"
	I0918 20:02:59.182614   26827 node_ready.go:38] duration metric: took 18.505159093s for node "ha-091565-m02" to be "Ready" ...
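The repeated GET /api/v1/nodes/ha-091565-m02 requests above are a simple poll: the node object is fetched every ~500ms until its NodeReady condition reports True, within an overall 6m0s budget. A rough client-go equivalent follows; the kubeconfig path and node name are taken from the log, but the loop itself is illustrative rather than minikube's node_ready.go.

// Poll a node until its NodeReady condition is True or the budget expires.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func nodeReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19667-7671/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()

	for {
		n, err := cs.CoreV1().Nodes().Get(ctx, "ha-091565-m02", metav1.GetOptions{})
		if err == nil && nodeReady(n) {
			fmt.Println("node is Ready")
			return
		}
		select {
		case <-ctx.Done():
			fmt.Println("timed out waiting for node to be Ready")
			return
		case <-time.After(500 * time.Millisecond):
		}
	}
}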
	I0918 20:02:59.182625   26827 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0918 20:02:59.182713   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods
	I0918 20:02:59.182724   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:59.182731   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:59.182736   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:59.187930   26827 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0918 20:02:59.193874   26827 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-8zcqk" in "kube-system" namespace to be "Ready" ...
	I0918 20:02:59.193977   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-8zcqk
	I0918 20:02:59.193988   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:59.193999   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:59.194007   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:59.197103   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:02:59.198209   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565
	I0918 20:02:59.198228   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:59.198238   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:59.198256   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:59.201933   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:02:59.202515   26827 pod_ready.go:93] pod "coredns-7c65d6cfc9-8zcqk" in "kube-system" namespace has status "Ready":"True"
	I0918 20:02:59.202532   26827 pod_ready.go:82] duration metric: took 8.636844ms for pod "coredns-7c65d6cfc9-8zcqk" in "kube-system" namespace to be "Ready" ...
	I0918 20:02:59.202541   26827 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-w97kk" in "kube-system" namespace to be "Ready" ...
	I0918 20:02:59.202613   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-w97kk
	I0918 20:02:59.202622   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:59.202631   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:59.202639   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:59.206149   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:02:59.206923   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565
	I0918 20:02:59.206938   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:59.206945   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:59.206948   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:59.210089   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:02:59.211132   26827 pod_ready.go:93] pod "coredns-7c65d6cfc9-w97kk" in "kube-system" namespace has status "Ready":"True"
	I0918 20:02:59.211152   26827 pod_ready.go:82] duration metric: took 8.603074ms for pod "coredns-7c65d6cfc9-w97kk" in "kube-system" namespace to be "Ready" ...
	I0918 20:02:59.211164   26827 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-091565" in "kube-system" namespace to be "Ready" ...
	I0918 20:02:59.211226   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/etcd-ha-091565
	I0918 20:02:59.211237   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:59.211248   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:59.211257   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:59.214280   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:02:59.214888   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565
	I0918 20:02:59.214903   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:59.214912   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:59.214917   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:59.217599   26827 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 20:02:59.218135   26827 pod_ready.go:93] pod "etcd-ha-091565" in "kube-system" namespace has status "Ready":"True"
	I0918 20:02:59.218154   26827 pod_ready.go:82] duration metric: took 6.982451ms for pod "etcd-ha-091565" in "kube-system" namespace to be "Ready" ...
	I0918 20:02:59.218164   26827 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-091565-m02" in "kube-system" namespace to be "Ready" ...
	I0918 20:02:59.218230   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/etcd-ha-091565-m02
	I0918 20:02:59.218241   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:59.218251   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:59.218257   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:59.221067   26827 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 20:02:59.221787   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:02:59.221803   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:59.221813   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:59.221821   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:59.224586   26827 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 20:02:59.225580   26827 pod_ready.go:93] pod "etcd-ha-091565-m02" in "kube-system" namespace has status "Ready":"True"
	I0918 20:02:59.225600   26827 pod_ready.go:82] duration metric: took 7.424608ms for pod "etcd-ha-091565-m02" in "kube-system" namespace to be "Ready" ...
	I0918 20:02:59.225619   26827 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-091565" in "kube-system" namespace to be "Ready" ...
	I0918 20:02:59.379036   26827 request.go:632] Waited for 153.330309ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-091565
	I0918 20:02:59.379109   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-091565
	I0918 20:02:59.379118   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:59.379133   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:59.379139   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:59.384080   26827 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0918 20:02:59.578427   26827 request.go:632] Waited for 193.345723ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/nodes/ha-091565
	I0918 20:02:59.578498   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565
	I0918 20:02:59.578503   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:59.578510   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:59.578515   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:59.581538   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:02:59.581992   26827 pod_ready.go:93] pod "kube-apiserver-ha-091565" in "kube-system" namespace has status "Ready":"True"
	I0918 20:02:59.582010   26827 pod_ready.go:82] duration metric: took 356.380215ms for pod "kube-apiserver-ha-091565" in "kube-system" namespace to be "Ready" ...
	I0918 20:02:59.582019   26827 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-091565-m02" in "kube-system" namespace to be "Ready" ...
	I0918 20:02:59.778110   26827 request.go:632] Waited for 196.027349ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-091565-m02
	I0918 20:02:59.778193   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-091565-m02
	I0918 20:02:59.778199   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:59.778206   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:59.778215   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:59.781615   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:02:59.978660   26827 request.go:632] Waited for 196.397557ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:02:59.978711   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:02:59.978716   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:59.978723   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:59.978730   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:59.982057   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:02:59.982534   26827 pod_ready.go:93] pod "kube-apiserver-ha-091565-m02" in "kube-system" namespace has status "Ready":"True"
	I0918 20:02:59.982552   26827 pod_ready.go:82] duration metric: took 400.527398ms for pod "kube-apiserver-ha-091565-m02" in "kube-system" namespace to be "Ready" ...
	I0918 20:02:59.982561   26827 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-091565" in "kube-system" namespace to be "Ready" ...
	I0918 20:03:00.178731   26827 request.go:632] Waited for 196.108369ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-091565
	I0918 20:03:00.178818   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-091565
	I0918 20:03:00.178826   26827 round_trippers.go:469] Request Headers:
	I0918 20:03:00.178835   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:03:00.178842   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:03:00.182695   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:03:00.378911   26827 request.go:632] Waited for 195.422738ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/nodes/ha-091565
	I0918 20:03:00.378963   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565
	I0918 20:03:00.378972   26827 round_trippers.go:469] Request Headers:
	I0918 20:03:00.378980   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:03:00.378983   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:03:00.382498   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:03:00.383092   26827 pod_ready.go:93] pod "kube-controller-manager-ha-091565" in "kube-system" namespace has status "Ready":"True"
	I0918 20:03:00.383121   26827 pod_ready.go:82] duration metric: took 400.554078ms for pod "kube-controller-manager-ha-091565" in "kube-system" namespace to be "Ready" ...
	I0918 20:03:00.383131   26827 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-091565-m02" in "kube-system" namespace to be "Ready" ...
	I0918 20:03:00.578098   26827 request.go:632] Waited for 194.899438ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-091565-m02
	I0918 20:03:00.578185   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-091565-m02
	I0918 20:03:00.578193   26827 round_trippers.go:469] Request Headers:
	I0918 20:03:00.578204   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:03:00.578210   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:03:00.581985   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:03:00.779051   26827 request.go:632] Waited for 196.416005ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:03:00.779104   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:03:00.779109   26827 round_trippers.go:469] Request Headers:
	I0918 20:03:00.779116   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:03:00.779121   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:03:00.782383   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:03:00.782978   26827 pod_ready.go:93] pod "kube-controller-manager-ha-091565-m02" in "kube-system" namespace has status "Ready":"True"
	I0918 20:03:00.782999   26827 pod_ready.go:82] duration metric: took 399.861964ms for pod "kube-controller-manager-ha-091565-m02" in "kube-system" namespace to be "Ready" ...
	I0918 20:03:00.783008   26827 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4wm6h" in "kube-system" namespace to be "Ready" ...
	I0918 20:03:00.978573   26827 request.go:632] Waited for 195.502032ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4wm6h
	I0918 20:03:00.978651   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4wm6h
	I0918 20:03:00.978672   26827 round_trippers.go:469] Request Headers:
	I0918 20:03:00.978683   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:03:00.978689   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:03:00.982275   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:03:01.178232   26827 request.go:632] Waited for 195.323029ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/nodes/ha-091565
	I0918 20:03:01.178304   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565
	I0918 20:03:01.178310   26827 round_trippers.go:469] Request Headers:
	I0918 20:03:01.178317   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:03:01.178320   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:03:01.181251   26827 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 20:03:01.181856   26827 pod_ready.go:93] pod "kube-proxy-4wm6h" in "kube-system" namespace has status "Ready":"True"
	I0918 20:03:01.181875   26827 pod_ready.go:82] duration metric: took 398.861474ms for pod "kube-proxy-4wm6h" in "kube-system" namespace to be "Ready" ...
	I0918 20:03:01.181884   26827 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-bxblp" in "kube-system" namespace to be "Ready" ...
	I0918 20:03:01.379020   26827 request.go:632] Waited for 197.061195ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bxblp
	I0918 20:03:01.379094   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bxblp
	I0918 20:03:01.379101   26827 round_trippers.go:469] Request Headers:
	I0918 20:03:01.379112   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:03:01.379117   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:03:01.384213   26827 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0918 20:03:01.578259   26827 request.go:632] Waited for 193.306434ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:03:01.578314   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:03:01.578319   26827 round_trippers.go:469] Request Headers:
	I0918 20:03:01.578326   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:03:01.578331   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:03:01.581837   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:03:01.582292   26827 pod_ready.go:93] pod "kube-proxy-bxblp" in "kube-system" namespace has status "Ready":"True"
	I0918 20:03:01.582308   26827 pod_ready.go:82] duration metric: took 400.4182ms for pod "kube-proxy-bxblp" in "kube-system" namespace to be "Ready" ...
	I0918 20:03:01.582315   26827 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-091565" in "kube-system" namespace to be "Ready" ...
	I0918 20:03:01.778453   26827 request.go:632] Waited for 196.055453ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-091565
	I0918 20:03:01.778506   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-091565
	I0918 20:03:01.778511   26827 round_trippers.go:469] Request Headers:
	I0918 20:03:01.778518   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:03:01.778522   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:03:01.782644   26827 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0918 20:03:01.978591   26827 request.go:632] Waited for 195.380537ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/nodes/ha-091565
	I0918 20:03:01.978678   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565
	I0918 20:03:01.978686   26827 round_trippers.go:469] Request Headers:
	I0918 20:03:01.978700   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:03:01.978707   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:03:01.982445   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:03:01.982967   26827 pod_ready.go:93] pod "kube-scheduler-ha-091565" in "kube-system" namespace has status "Ready":"True"
	I0918 20:03:01.982989   26827 pod_ready.go:82] duration metric: took 400.667605ms for pod "kube-scheduler-ha-091565" in "kube-system" namespace to be "Ready" ...
	I0918 20:03:01.982998   26827 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-091565-m02" in "kube-system" namespace to be "Ready" ...
	I0918 20:03:02.179055   26827 request.go:632] Waited for 195.997204ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-091565-m02
	I0918 20:03:02.179125   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-091565-m02
	I0918 20:03:02.179132   26827 round_trippers.go:469] Request Headers:
	I0918 20:03:02.179144   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:03:02.179150   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:03:02.182779   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:03:02.378680   26827 request.go:632] Waited for 195.344249ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:03:02.378732   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:03:02.378737   26827 round_trippers.go:469] Request Headers:
	I0918 20:03:02.378744   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:03:02.378749   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:03:02.387672   26827 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0918 20:03:02.388432   26827 pod_ready.go:93] pod "kube-scheduler-ha-091565-m02" in "kube-system" namespace has status "Ready":"True"
	I0918 20:03:02.388454   26827 pod_ready.go:82] duration metric: took 405.448688ms for pod "kube-scheduler-ha-091565-m02" in "kube-system" namespace to be "Ready" ...
	I0918 20:03:02.388468   26827 pod_ready.go:39] duration metric: took 3.205828816s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
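(The readiness loop logged above by pod_ready.go boils down to fetching each system pod and checking its Ready condition until it reports True. A minimal client-go sketch of that check follows, for reference only; the kubeconfig path and pod name are illustrative assumptions, not minikube's own helpers.)

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the pod has condition Ready=True.
    func isPodReady(p *corev1.Pod) bool {
        for _, c := range p.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // assumed path
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-proxy-bxblp", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Println("Ready:", isPodReady(pod))
    }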
	I0918 20:03:02.388484   26827 api_server.go:52] waiting for apiserver process to appear ...
	I0918 20:03:02.388545   26827 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 20:03:02.403691   26827 api_server.go:72] duration metric: took 22.031762634s to wait for apiserver process to appear ...
	I0918 20:03:02.403716   26827 api_server.go:88] waiting for apiserver healthz status ...
	I0918 20:03:02.403738   26827 api_server.go:253] Checking apiserver healthz at https://192.168.39.215:8443/healthz ...
	I0918 20:03:02.408810   26827 api_server.go:279] https://192.168.39.215:8443/healthz returned 200:
	ok
	I0918 20:03:02.408891   26827 round_trippers.go:463] GET https://192.168.39.215:8443/version
	I0918 20:03:02.408903   26827 round_trippers.go:469] Request Headers:
	I0918 20:03:02.408914   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:03:02.408923   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:03:02.409886   26827 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0918 20:03:02.409963   26827 api_server.go:141] control plane version: v1.31.1
	I0918 20:03:02.409977   26827 api_server.go:131] duration metric: took 6.255647ms to wait for apiserver health ...
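(The healthz and version probes above amount to two GETs against the API server, which client-go exposes through the discovery client. A minimal sketch of the same two calls; the kubeconfig path is an assumption.)

    package main

    import (
        "context"
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // assumed path
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // GET /healthz: the body is literally "ok" when the control plane is healthy.
        body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.TODO())
        if err != nil {
            panic(err)
        }
        fmt.Printf("healthz: %s\n", body)
        // GET /version: reports the control-plane version (v1.31.1 in this run).
        v, err := cs.Discovery().ServerVersion()
        if err != nil {
            panic(err)
        }
        fmt.Println("control plane version:", v.GitVersion)
    }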
	I0918 20:03:02.409986   26827 system_pods.go:43] waiting for kube-system pods to appear ...
	I0918 20:03:02.578323   26827 request.go:632] Waited for 168.279427ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods
	I0918 20:03:02.578410   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods
	I0918 20:03:02.578421   26827 round_trippers.go:469] Request Headers:
	I0918 20:03:02.578429   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:03:02.578435   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:03:02.583311   26827 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0918 20:03:02.589108   26827 system_pods.go:59] 17 kube-system pods found
	I0918 20:03:02.589162   26827 system_pods.go:61] "coredns-7c65d6cfc9-8zcqk" [644e8147-96e9-41a1-99b8-d2de17e4798c] Running
	I0918 20:03:02.589168   26827 system_pods.go:61] "coredns-7c65d6cfc9-w97kk" [70428cd6-0523-44c8-89f3-62837b52ca80] Running
	I0918 20:03:02.589172   26827 system_pods.go:61] "etcd-ha-091565" [c5af3e98-c375-448c-9cac-6a83b115ca71] Running
	I0918 20:03:02.589176   26827 system_pods.go:61] "etcd-ha-091565-m02" [71e1c78e-6a11-4b46-baa2-83c98d666cee] Running
	I0918 20:03:02.589180   26827 system_pods.go:61] "kindnet-7fl5w" [5c3a9d82-3815-4aa1-8d04-14be25394dcf] Running
	I0918 20:03:02.589183   26827 system_pods.go:61] "kindnet-bzsqr" [bf1e6f1b-d3ad-439a-9b9a-882e6e989a56] Running
	I0918 20:03:02.589188   26827 system_pods.go:61] "kube-apiserver-ha-091565" [1c2ddbd9-3f78-415b-af21-1d43925382c5] Running
	I0918 20:03:02.589193   26827 system_pods.go:61] "kube-apiserver-ha-091565-m02" [b482ba6c-e415-4e3c-aa56-c17a4f3bcc13] Running
	I0918 20:03:02.589197   26827 system_pods.go:61] "kube-controller-manager-ha-091565" [3812cd1b-619d-4da8-b039-18f7adb53647] Running
	I0918 20:03:02.589206   26827 system_pods.go:61] "kube-controller-manager-ha-091565-m02" [2c3ecce6-b148-498b-b66e-e5c000f51940] Running
	I0918 20:03:02.589210   26827 system_pods.go:61] "kube-proxy-4wm6h" [d6904231-6f64-4447-9932-0cd5d692978b] Running
	I0918 20:03:02.589213   26827 system_pods.go:61] "kube-proxy-bxblp" [68143b05-afd1-409a-aaa6-8b3c6841bbfb] Running
	I0918 20:03:02.589217   26827 system_pods.go:61] "kube-scheduler-ha-091565" [ad2ac667-3328-4ad3-b736-c8c633140e2d] Running
	I0918 20:03:02.589222   26827 system_pods.go:61] "kube-scheduler-ha-091565-m02" [fd27db83-43f3-40a8-9ca7-5570f78d562b] Running
	I0918 20:03:02.589226   26827 system_pods.go:61] "kube-vip-ha-091565" [b3fccd60-ac88-4dda-b8bb-b1b8c45cbfe5] Running
	I0918 20:03:02.589233   26827 system_pods.go:61] "kube-vip-ha-091565-m02" [12ca12ad-6dee-4b50-9273-c48d8f06acf4] Running
	I0918 20:03:02.589236   26827 system_pods.go:61] "storage-provisioner" [b7dffb85-905b-4166-a680-34c77cf87d09] Running
	I0918 20:03:02.589247   26827 system_pods.go:74] duration metric: took 179.252102ms to wait for pod list to return data ...
	I0918 20:03:02.589258   26827 default_sa.go:34] waiting for default service account to be created ...
	I0918 20:03:02.778073   26827 request.go:632] Waited for 188.733447ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/namespaces/default/serviceaccounts
	I0918 20:03:02.778127   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/namespaces/default/serviceaccounts
	I0918 20:03:02.778132   26827 round_trippers.go:469] Request Headers:
	I0918 20:03:02.778141   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:03:02.778148   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:03:02.781930   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:03:02.782168   26827 default_sa.go:45] found service account: "default"
	I0918 20:03:02.782184   26827 default_sa.go:55] duration metric: took 192.91745ms for default service account to be created ...
	I0918 20:03:02.782192   26827 system_pods.go:116] waiting for k8s-apps to be running ...
	I0918 20:03:02.978682   26827 request.go:632] Waited for 196.414466ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods
	I0918 20:03:02.978755   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods
	I0918 20:03:02.978762   26827 round_trippers.go:469] Request Headers:
	I0918 20:03:02.978771   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:03:02.978775   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:03:02.983628   26827 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0918 20:03:02.989503   26827 system_pods.go:86] 17 kube-system pods found
	I0918 20:03:02.989531   26827 system_pods.go:89] "coredns-7c65d6cfc9-8zcqk" [644e8147-96e9-41a1-99b8-d2de17e4798c] Running
	I0918 20:03:02.989536   26827 system_pods.go:89] "coredns-7c65d6cfc9-w97kk" [70428cd6-0523-44c8-89f3-62837b52ca80] Running
	I0918 20:03:02.989540   26827 system_pods.go:89] "etcd-ha-091565" [c5af3e98-c375-448c-9cac-6a83b115ca71] Running
	I0918 20:03:02.989543   26827 system_pods.go:89] "etcd-ha-091565-m02" [71e1c78e-6a11-4b46-baa2-83c98d666cee] Running
	I0918 20:03:02.989547   26827 system_pods.go:89] "kindnet-7fl5w" [5c3a9d82-3815-4aa1-8d04-14be25394dcf] Running
	I0918 20:03:02.989550   26827 system_pods.go:89] "kindnet-bzsqr" [bf1e6f1b-d3ad-439a-9b9a-882e6e989a56] Running
	I0918 20:03:02.989555   26827 system_pods.go:89] "kube-apiserver-ha-091565" [1c2ddbd9-3f78-415b-af21-1d43925382c5] Running
	I0918 20:03:02.989558   26827 system_pods.go:89] "kube-apiserver-ha-091565-m02" [b482ba6c-e415-4e3c-aa56-c17a4f3bcc13] Running
	I0918 20:03:02.989562   26827 system_pods.go:89] "kube-controller-manager-ha-091565" [3812cd1b-619d-4da8-b039-18f7adb53647] Running
	I0918 20:03:02.989565   26827 system_pods.go:89] "kube-controller-manager-ha-091565-m02" [2c3ecce6-b148-498b-b66e-e5c000f51940] Running
	I0918 20:03:02.989568   26827 system_pods.go:89] "kube-proxy-4wm6h" [d6904231-6f64-4447-9932-0cd5d692978b] Running
	I0918 20:03:02.989571   26827 system_pods.go:89] "kube-proxy-bxblp" [68143b05-afd1-409a-aaa6-8b3c6841bbfb] Running
	I0918 20:03:02.989574   26827 system_pods.go:89] "kube-scheduler-ha-091565" [ad2ac667-3328-4ad3-b736-c8c633140e2d] Running
	I0918 20:03:02.989577   26827 system_pods.go:89] "kube-scheduler-ha-091565-m02" [fd27db83-43f3-40a8-9ca7-5570f78d562b] Running
	I0918 20:03:02.989580   26827 system_pods.go:89] "kube-vip-ha-091565" [b3fccd60-ac88-4dda-b8bb-b1b8c45cbfe5] Running
	I0918 20:03:02.989583   26827 system_pods.go:89] "kube-vip-ha-091565-m02" [12ca12ad-6dee-4b50-9273-c48d8f06acf4] Running
	I0918 20:03:02.989590   26827 system_pods.go:89] "storage-provisioner" [b7dffb85-905b-4166-a680-34c77cf87d09] Running
	I0918 20:03:02.989597   26827 system_pods.go:126] duration metric: took 207.397178ms to wait for k8s-apps to be running ...
	I0918 20:03:02.989610   26827 system_svc.go:44] waiting for kubelet service to be running ....
	I0918 20:03:02.989698   26827 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0918 20:03:03.003927   26827 system_svc.go:56] duration metric: took 14.306514ms WaitForService to wait for kubelet
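(The kubelet check above is a single systemctl invocation over SSH; the exit status is the whole answer, with 0 meaning the unit is active. A local sketch of the same idea, run directly rather than through minikube's ssh_runner, with the command simplified.)

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Exit status 0 means the unit is active; anything else means it is not.
        cmd := exec.Command("systemctl", "is-active", "--quiet", "kubelet")
        if err := cmd.Run(); err != nil {
            fmt.Println("kubelet is not active:", err)
            return
        }
        fmt.Println("kubelet is active")
    }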
	I0918 20:03:03.003954   26827 kubeadm.go:582] duration metric: took 22.632027977s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0918 20:03:03.003974   26827 node_conditions.go:102] verifying NodePressure condition ...
	I0918 20:03:03.179047   26827 request.go:632] Waited for 174.972185ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/nodes
	I0918 20:03:03.179141   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes
	I0918 20:03:03.179150   26827 round_trippers.go:469] Request Headers:
	I0918 20:03:03.179161   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:03:03.179171   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:03:03.183675   26827 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0918 20:03:03.184384   26827 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0918 20:03:03.184407   26827 node_conditions.go:123] node cpu capacity is 2
	I0918 20:03:03.184443   26827 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0918 20:03:03.184452   26827 node_conditions.go:123] node cpu capacity is 2
	I0918 20:03:03.184459   26827 node_conditions.go:105] duration metric: took 180.479849ms to run NodePressure ...
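(The NodePressure step above lists the nodes and reads each node's ephemeral-storage and CPU capacity from its status, which is where the 17734596Ki / 2-cpu figures come from. A minimal client-go sketch of reading those two fields; the kubeconfig path is an assumption.)

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // assumed path
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            fmt.Printf("%s: ephemeral storage %s, cpu %s\n", n.Name, storage.String(), cpu.String())
        }
    }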
	I0918 20:03:03.184475   26827 start.go:241] waiting for startup goroutines ...
	I0918 20:03:03.184509   26827 start.go:255] writing updated cluster config ...
	I0918 20:03:03.186759   26827 out.go:201] 
	I0918 20:03:03.188291   26827 config.go:182] Loaded profile config "ha-091565": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0918 20:03:03.188401   26827 profile.go:143] Saving config to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/config.json ...
	I0918 20:03:03.189951   26827 out.go:177] * Starting "ha-091565-m03" control-plane node in "ha-091565" cluster
	I0918 20:03:03.191020   26827 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0918 20:03:03.191045   26827 cache.go:56] Caching tarball of preloaded images
	I0918 20:03:03.191138   26827 preload.go:172] Found /home/jenkins/minikube-integration/19667-7671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0918 20:03:03.191150   26827 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0918 20:03:03.191241   26827 profile.go:143] Saving config to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/config.json ...
	I0918 20:03:03.191410   26827 start.go:360] acquireMachinesLock for ha-091565-m03: {Name:mk1b0d68a0b7d9d4bc8204d5d81cb9eb77222526 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 20:03:03.191465   26827 start.go:364] duration metric: took 34.695µs to acquireMachinesLock for "ha-091565-m03"
	I0918 20:03:03.191486   26827 start.go:93] Provisioning new machine with config: &{Name:ha-091565 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-091565 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.215 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.92 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0918 20:03:03.191596   26827 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0918 20:03:03.193058   26827 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0918 20:03:03.193149   26827 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 20:03:03.193188   26827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 20:03:03.208171   26827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42195
	I0918 20:03:03.208580   26827 main.go:141] libmachine: () Calling .GetVersion
	I0918 20:03:03.209079   26827 main.go:141] libmachine: Using API Version  1
	I0918 20:03:03.209101   26827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 20:03:03.209382   26827 main.go:141] libmachine: () Calling .GetMachineName
	I0918 20:03:03.209530   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetMachineName
	I0918 20:03:03.209649   26827 main.go:141] libmachine: (ha-091565-m03) Calling .DriverName
	I0918 20:03:03.209778   26827 start.go:159] libmachine.API.Create for "ha-091565" (driver="kvm2")
	I0918 20:03:03.209809   26827 client.go:168] LocalClient.Create starting
	I0918 20:03:03.209839   26827 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem
	I0918 20:03:03.209872   26827 main.go:141] libmachine: Decoding PEM data...
	I0918 20:03:03.209887   26827 main.go:141] libmachine: Parsing certificate...
	I0918 20:03:03.209935   26827 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem
	I0918 20:03:03.209954   26827 main.go:141] libmachine: Decoding PEM data...
	I0918 20:03:03.209965   26827 main.go:141] libmachine: Parsing certificate...
	I0918 20:03:03.209982   26827 main.go:141] libmachine: Running pre-create checks...
	I0918 20:03:03.209989   26827 main.go:141] libmachine: (ha-091565-m03) Calling .PreCreateCheck
	I0918 20:03:03.210137   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetConfigRaw
	I0918 20:03:03.210522   26827 main.go:141] libmachine: Creating machine...
	I0918 20:03:03.210535   26827 main.go:141] libmachine: (ha-091565-m03) Calling .Create
	I0918 20:03:03.210656   26827 main.go:141] libmachine: (ha-091565-m03) Creating KVM machine...
	I0918 20:03:03.211861   26827 main.go:141] libmachine: (ha-091565-m03) DBG | found existing default KVM network
	I0918 20:03:03.212028   26827 main.go:141] libmachine: (ha-091565-m03) DBG | found existing private KVM network mk-ha-091565
	I0918 20:03:03.212185   26827 main.go:141] libmachine: (ha-091565-m03) Setting up store path in /home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565-m03 ...
	I0918 20:03:03.212211   26827 main.go:141] libmachine: (ha-091565-m03) Building disk image from file:///home/jenkins/minikube-integration/19667-7671/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso
	I0918 20:03:03.212251   26827 main.go:141] libmachine: (ha-091565-m03) DBG | I0918 20:03:03.212170   27609 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19667-7671/.minikube
	I0918 20:03:03.212315   26827 main.go:141] libmachine: (ha-091565-m03) Downloading /home/jenkins/minikube-integration/19667-7671/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19667-7671/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso...
	I0918 20:03:03.448950   26827 main.go:141] libmachine: (ha-091565-m03) DBG | I0918 20:03:03.448813   27609 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565-m03/id_rsa...
	I0918 20:03:03.656714   26827 main.go:141] libmachine: (ha-091565-m03) DBG | I0918 20:03:03.656571   27609 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565-m03/ha-091565-m03.rawdisk...
	I0918 20:03:03.656743   26827 main.go:141] libmachine: (ha-091565-m03) DBG | Writing magic tar header
	I0918 20:03:03.656757   26827 main.go:141] libmachine: (ha-091565-m03) DBG | Writing SSH key tar header
	I0918 20:03:03.656767   26827 main.go:141] libmachine: (ha-091565-m03) DBG | I0918 20:03:03.656684   27609 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565-m03 ...
	I0918 20:03:03.656796   26827 main.go:141] libmachine: (ha-091565-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565-m03
	I0918 20:03:03.656816   26827 main.go:141] libmachine: (ha-091565-m03) Setting executable bit set on /home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565-m03 (perms=drwx------)
	I0918 20:03:03.656843   26827 main.go:141] libmachine: (ha-091565-m03) Setting executable bit set on /home/jenkins/minikube-integration/19667-7671/.minikube/machines (perms=drwxr-xr-x)
	I0918 20:03:03.656855   26827 main.go:141] libmachine: (ha-091565-m03) Setting executable bit set on /home/jenkins/minikube-integration/19667-7671/.minikube (perms=drwxr-xr-x)
	I0918 20:03:03.656870   26827 main.go:141] libmachine: (ha-091565-m03) Setting executable bit set on /home/jenkins/minikube-integration/19667-7671 (perms=drwxrwxr-x)
	I0918 20:03:03.656884   26827 main.go:141] libmachine: (ha-091565-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0918 20:03:03.656898   26827 main.go:141] libmachine: (ha-091565-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19667-7671/.minikube/machines
	I0918 20:03:03.656911   26827 main.go:141] libmachine: (ha-091565-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19667-7671/.minikube
	I0918 20:03:03.656924   26827 main.go:141] libmachine: (ha-091565-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0918 20:03:03.656938   26827 main.go:141] libmachine: (ha-091565-m03) Creating domain...
	I0918 20:03:03.656953   26827 main.go:141] libmachine: (ha-091565-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19667-7671
	I0918 20:03:03.656966   26827 main.go:141] libmachine: (ha-091565-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0918 20:03:03.656984   26827 main.go:141] libmachine: (ha-091565-m03) DBG | Checking permissions on dir: /home/jenkins
	I0918 20:03:03.656999   26827 main.go:141] libmachine: (ha-091565-m03) DBG | Checking permissions on dir: /home
	I0918 20:03:03.657013   26827 main.go:141] libmachine: (ha-091565-m03) DBG | Skipping /home - not owner
	I0918 20:03:03.657931   26827 main.go:141] libmachine: (ha-091565-m03) define libvirt domain using xml: 
	I0918 20:03:03.657960   26827 main.go:141] libmachine: (ha-091565-m03) <domain type='kvm'>
	I0918 20:03:03.657971   26827 main.go:141] libmachine: (ha-091565-m03)   <name>ha-091565-m03</name>
	I0918 20:03:03.657985   26827 main.go:141] libmachine: (ha-091565-m03)   <memory unit='MiB'>2200</memory>
	I0918 20:03:03.657993   26827 main.go:141] libmachine: (ha-091565-m03)   <vcpu>2</vcpu>
	I0918 20:03:03.658002   26827 main.go:141] libmachine: (ha-091565-m03)   <features>
	I0918 20:03:03.658008   26827 main.go:141] libmachine: (ha-091565-m03)     <acpi/>
	I0918 20:03:03.658012   26827 main.go:141] libmachine: (ha-091565-m03)     <apic/>
	I0918 20:03:03.658017   26827 main.go:141] libmachine: (ha-091565-m03)     <pae/>
	I0918 20:03:03.658024   26827 main.go:141] libmachine: (ha-091565-m03)     
	I0918 20:03:03.658028   26827 main.go:141] libmachine: (ha-091565-m03)   </features>
	I0918 20:03:03.658035   26827 main.go:141] libmachine: (ha-091565-m03)   <cpu mode='host-passthrough'>
	I0918 20:03:03.658040   26827 main.go:141] libmachine: (ha-091565-m03)   
	I0918 20:03:03.658051   26827 main.go:141] libmachine: (ha-091565-m03)   </cpu>
	I0918 20:03:03.658072   26827 main.go:141] libmachine: (ha-091565-m03)   <os>
	I0918 20:03:03.658091   26827 main.go:141] libmachine: (ha-091565-m03)     <type>hvm</type>
	I0918 20:03:03.658100   26827 main.go:141] libmachine: (ha-091565-m03)     <boot dev='cdrom'/>
	I0918 20:03:03.658104   26827 main.go:141] libmachine: (ha-091565-m03)     <boot dev='hd'/>
	I0918 20:03:03.658112   26827 main.go:141] libmachine: (ha-091565-m03)     <bootmenu enable='no'/>
	I0918 20:03:03.658119   26827 main.go:141] libmachine: (ha-091565-m03)   </os>
	I0918 20:03:03.658127   26827 main.go:141] libmachine: (ha-091565-m03)   <devices>
	I0918 20:03:03.658137   26827 main.go:141] libmachine: (ha-091565-m03)     <disk type='file' device='cdrom'>
	I0918 20:03:03.658153   26827 main.go:141] libmachine: (ha-091565-m03)       <source file='/home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565-m03/boot2docker.iso'/>
	I0918 20:03:03.658166   26827 main.go:141] libmachine: (ha-091565-m03)       <target dev='hdc' bus='scsi'/>
	I0918 20:03:03.658176   26827 main.go:141] libmachine: (ha-091565-m03)       <readonly/>
	I0918 20:03:03.658181   26827 main.go:141] libmachine: (ha-091565-m03)     </disk>
	I0918 20:03:03.658187   26827 main.go:141] libmachine: (ha-091565-m03)     <disk type='file' device='disk'>
	I0918 20:03:03.658196   26827 main.go:141] libmachine: (ha-091565-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0918 20:03:03.658208   26827 main.go:141] libmachine: (ha-091565-m03)       <source file='/home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565-m03/ha-091565-m03.rawdisk'/>
	I0918 20:03:03.658218   26827 main.go:141] libmachine: (ha-091565-m03)       <target dev='hda' bus='virtio'/>
	I0918 20:03:03.658230   26827 main.go:141] libmachine: (ha-091565-m03)     </disk>
	I0918 20:03:03.658240   26827 main.go:141] libmachine: (ha-091565-m03)     <interface type='network'>
	I0918 20:03:03.658251   26827 main.go:141] libmachine: (ha-091565-m03)       <source network='mk-ha-091565'/>
	I0918 20:03:03.658261   26827 main.go:141] libmachine: (ha-091565-m03)       <model type='virtio'/>
	I0918 20:03:03.658268   26827 main.go:141] libmachine: (ha-091565-m03)     </interface>
	I0918 20:03:03.658277   26827 main.go:141] libmachine: (ha-091565-m03)     <interface type='network'>
	I0918 20:03:03.658286   26827 main.go:141] libmachine: (ha-091565-m03)       <source network='default'/>
	I0918 20:03:03.658301   26827 main.go:141] libmachine: (ha-091565-m03)       <model type='virtio'/>
	I0918 20:03:03.658313   26827 main.go:141] libmachine: (ha-091565-m03)     </interface>
	I0918 20:03:03.658320   26827 main.go:141] libmachine: (ha-091565-m03)     <serial type='pty'>
	I0918 20:03:03.658333   26827 main.go:141] libmachine: (ha-091565-m03)       <target port='0'/>
	I0918 20:03:03.658342   26827 main.go:141] libmachine: (ha-091565-m03)     </serial>
	I0918 20:03:03.658350   26827 main.go:141] libmachine: (ha-091565-m03)     <console type='pty'>
	I0918 20:03:03.658360   26827 main.go:141] libmachine: (ha-091565-m03)       <target type='serial' port='0'/>
	I0918 20:03:03.658368   26827 main.go:141] libmachine: (ha-091565-m03)     </console>
	I0918 20:03:03.658381   26827 main.go:141] libmachine: (ha-091565-m03)     <rng model='virtio'>
	I0918 20:03:03.658393   26827 main.go:141] libmachine: (ha-091565-m03)       <backend model='random'>/dev/random</backend>
	I0918 20:03:03.658402   26827 main.go:141] libmachine: (ha-091565-m03)     </rng>
	I0918 20:03:03.658410   26827 main.go:141] libmachine: (ha-091565-m03)     
	I0918 20:03:03.658418   26827 main.go:141] libmachine: (ha-091565-m03)     
	I0918 20:03:03.658425   26827 main.go:141] libmachine: (ha-091565-m03)   </devices>
	I0918 20:03:03.658434   26827 main.go:141] libmachine: (ha-091565-m03) </domain>
	I0918 20:03:03.658445   26827 main.go:141] libmachine: (ha-091565-m03) 
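(The XML printed above is the libvirt domain definition for the new node. Defining and starting such a domain through the Go libvirt bindings looks roughly like the sketch below, using the libvirt.org/go/libvirt package with a trimmed-down XML string; minikube does the equivalent inside its kvm2 driver, so this is an illustration, not its actual code.)

    package main

    import (
        "fmt"

        libvirt "libvirt.org/go/libvirt"
    )

    func main() {
        conn, err := libvirt.NewConnect("qemu:///system")
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        // Trimmed-down version of the XML printed above; the real definition also
        // carries the disks, network interfaces, serial console and RNG devices.
        domainXML := `<domain type='kvm'>
      <name>ha-091565-m03</name>
      <memory unit='MiB'>2200</memory>
      <vcpu>2</vcpu>
      <os><type>hvm</type><boot dev='cdrom'/><boot dev='hd'/></os>
    </domain>`

        dom, err := conn.DomainDefineXML(domainXML)
        if err != nil {
            panic(err)
        }
        defer dom.Free()

        // "Creating domain..." in the log corresponds to starting the defined domain.
        if err := dom.Create(); err != nil {
            panic(err)
        }
        fmt.Println("domain started")
    }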
	I0918 20:03:03.665123   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined MAC address 52:54:00:28:9c:e9 in network default
	I0918 20:03:03.665651   26827 main.go:141] libmachine: (ha-091565-m03) Ensuring networks are active...
	I0918 20:03:03.665672   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:03.666384   26827 main.go:141] libmachine: (ha-091565-m03) Ensuring network default is active
	I0918 20:03:03.666733   26827 main.go:141] libmachine: (ha-091565-m03) Ensuring network mk-ha-091565 is active
	I0918 20:03:03.667154   26827 main.go:141] libmachine: (ha-091565-m03) Getting domain xml...
	I0918 20:03:03.668052   26827 main.go:141] libmachine: (ha-091565-m03) Creating domain...
	I0918 20:03:04.935268   26827 main.go:141] libmachine: (ha-091565-m03) Waiting to get IP...
	I0918 20:03:04.936028   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:04.936415   26827 main.go:141] libmachine: (ha-091565-m03) DBG | unable to find current IP address of domain ha-091565-m03 in network mk-ha-091565
	I0918 20:03:04.936435   26827 main.go:141] libmachine: (ha-091565-m03) DBG | I0918 20:03:04.936394   27609 retry.go:31] will retry after 190.945774ms: waiting for machine to come up
	I0918 20:03:05.128750   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:05.129236   26827 main.go:141] libmachine: (ha-091565-m03) DBG | unable to find current IP address of domain ha-091565-m03 in network mk-ha-091565
	I0918 20:03:05.129261   26827 main.go:141] libmachine: (ha-091565-m03) DBG | I0918 20:03:05.129196   27609 retry.go:31] will retry after 291.266146ms: waiting for machine to come up
	I0918 20:03:05.422550   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:05.423137   26827 main.go:141] libmachine: (ha-091565-m03) DBG | unable to find current IP address of domain ha-091565-m03 in network mk-ha-091565
	I0918 20:03:05.423170   26827 main.go:141] libmachine: (ha-091565-m03) DBG | I0918 20:03:05.423078   27609 retry.go:31] will retry after 371.409086ms: waiting for machine to come up
	I0918 20:03:05.795700   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:05.796222   26827 main.go:141] libmachine: (ha-091565-m03) DBG | unable to find current IP address of domain ha-091565-m03 in network mk-ha-091565
	I0918 20:03:05.796248   26827 main.go:141] libmachine: (ha-091565-m03) DBG | I0918 20:03:05.796182   27609 retry.go:31] will retry after 527.63812ms: waiting for machine to come up
	I0918 20:03:06.325912   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:06.326349   26827 main.go:141] libmachine: (ha-091565-m03) DBG | unable to find current IP address of domain ha-091565-m03 in network mk-ha-091565
	I0918 20:03:06.326379   26827 main.go:141] libmachine: (ha-091565-m03) DBG | I0918 20:03:06.326307   27609 retry.go:31] will retry after 471.938108ms: waiting for machine to come up
	I0918 20:03:06.799896   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:06.800358   26827 main.go:141] libmachine: (ha-091565-m03) DBG | unable to find current IP address of domain ha-091565-m03 in network mk-ha-091565
	I0918 20:03:06.800384   26827 main.go:141] libmachine: (ha-091565-m03) DBG | I0918 20:03:06.800288   27609 retry.go:31] will retry after 607.364821ms: waiting for machine to come up
	I0918 20:03:07.408959   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:07.409429   26827 main.go:141] libmachine: (ha-091565-m03) DBG | unable to find current IP address of domain ha-091565-m03 in network mk-ha-091565
	I0918 20:03:07.409459   26827 main.go:141] libmachine: (ha-091565-m03) DBG | I0918 20:03:07.409383   27609 retry.go:31] will retry after 864.680144ms: waiting for machine to come up
	I0918 20:03:08.275959   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:08.276377   26827 main.go:141] libmachine: (ha-091565-m03) DBG | unable to find current IP address of domain ha-091565-m03 in network mk-ha-091565
	I0918 20:03:08.276404   26827 main.go:141] libmachine: (ha-091565-m03) DBG | I0918 20:03:08.276319   27609 retry.go:31] will retry after 900.946411ms: waiting for machine to come up
	I0918 20:03:09.178488   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:09.178913   26827 main.go:141] libmachine: (ha-091565-m03) DBG | unable to find current IP address of domain ha-091565-m03 in network mk-ha-091565
	I0918 20:03:09.178936   26827 main.go:141] libmachine: (ha-091565-m03) DBG | I0918 20:03:09.178885   27609 retry.go:31] will retry after 1.803312814s: waiting for machine to come up
	I0918 20:03:10.983480   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:10.983921   26827 main.go:141] libmachine: (ha-091565-m03) DBG | unable to find current IP address of domain ha-091565-m03 in network mk-ha-091565
	I0918 20:03:10.983943   26827 main.go:141] libmachine: (ha-091565-m03) DBG | I0918 20:03:10.983874   27609 retry.go:31] will retry after 2.318003161s: waiting for machine to come up
	I0918 20:03:13.303826   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:13.304364   26827 main.go:141] libmachine: (ha-091565-m03) DBG | unable to find current IP address of domain ha-091565-m03 in network mk-ha-091565
	I0918 20:03:13.304389   26827 main.go:141] libmachine: (ha-091565-m03) DBG | I0918 20:03:13.304319   27609 retry.go:31] will retry after 2.309847279s: waiting for machine to come up
	I0918 20:03:15.615522   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:15.616142   26827 main.go:141] libmachine: (ha-091565-m03) DBG | unable to find current IP address of domain ha-091565-m03 in network mk-ha-091565
	I0918 20:03:15.616170   26827 main.go:141] libmachine: (ha-091565-m03) DBG | I0918 20:03:15.616108   27609 retry.go:31] will retry after 2.559399773s: waiting for machine to come up
	I0918 20:03:18.176689   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:18.177086   26827 main.go:141] libmachine: (ha-091565-m03) DBG | unable to find current IP address of domain ha-091565-m03 in network mk-ha-091565
	I0918 20:03:18.177108   26827 main.go:141] libmachine: (ha-091565-m03) DBG | I0918 20:03:18.177044   27609 retry.go:31] will retry after 4.502260419s: waiting for machine to come up
	I0918 20:03:22.681016   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:22.681368   26827 main.go:141] libmachine: (ha-091565-m03) DBG | unable to find current IP address of domain ha-091565-m03 in network mk-ha-091565
	I0918 20:03:22.681391   26827 main.go:141] libmachine: (ha-091565-m03) DBG | I0918 20:03:22.681330   27609 retry.go:31] will retry after 3.82668599s: waiting for machine to come up
	I0918 20:03:26.510988   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:26.511503   26827 main.go:141] libmachine: (ha-091565-m03) Found IP for machine: 192.168.39.53
	I0918 20:03:26.511523   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has current primary IP address 192.168.39.53 and MAC address 52:54:00:7c:50:95 in network mk-ha-091565
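(The "will retry after ..." lines above are minikube's retry helper polling the DHCP leases until the new domain picks up an address. The shape of that loop is a bounded wait with growing, jittered delays; a generic sketch follows, where the delays and the lookupIP stub are assumptions rather than the driver's actual values.)

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // lookupIP stands in for querying the libvirt DHCP leases; it is a stub here.
    func lookupIP() (string, error) {
        return "", errors.New("unable to find current IP address")
    }

    func waitForIP(timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        delay := 200 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, err := lookupIP(); err == nil {
                return ip, nil
            }
            // Grow the delay and add jitter, roughly like the intervals in the log.
            sleep := delay + time.Duration(rand.Int63n(int64(delay)))
            fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
            time.Sleep(sleep)
            if delay < 5*time.Second {
                delay *= 2
            }
        }
        return "", fmt.Errorf("timed out after %v waiting for an IP", timeout)
    }

    func main() {
        if ip, err := waitForIP(3 * time.Second); err != nil {
            fmt.Println(err)
        } else {
            fmt.Println("found IP:", ip)
        }
    }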
	I0918 20:03:26.511529   26827 main.go:141] libmachine: (ha-091565-m03) Reserving static IP address...
	I0918 20:03:26.511838   26827 main.go:141] libmachine: (ha-091565-m03) DBG | unable to find host DHCP lease matching {name: "ha-091565-m03", mac: "52:54:00:7c:50:95", ip: "192.168.39.53"} in network mk-ha-091565
	I0918 20:03:26.588090   26827 main.go:141] libmachine: (ha-091565-m03) Reserved static IP address: 192.168.39.53
	I0918 20:03:26.588125   26827 main.go:141] libmachine: (ha-091565-m03) Waiting for SSH to be available...
	I0918 20:03:26.588134   26827 main.go:141] libmachine: (ha-091565-m03) DBG | Getting to WaitForSSH function...
	I0918 20:03:26.590288   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:26.590706   26827 main.go:141] libmachine: (ha-091565-m03) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:7c:50:95", ip: ""} in network mk-ha-091565
	I0918 20:03:26.590731   26827 main.go:141] libmachine: (ha-091565-m03) DBG | unable to find defined IP address of network mk-ha-091565 interface with MAC address 52:54:00:7c:50:95
	I0918 20:03:26.590858   26827 main.go:141] libmachine: (ha-091565-m03) DBG | Using SSH client type: external
	I0918 20:03:26.590882   26827 main.go:141] libmachine: (ha-091565-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565-m03/id_rsa (-rw-------)
	I0918 20:03:26.590920   26827 main.go:141] libmachine: (ha-091565-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0918 20:03:26.590933   26827 main.go:141] libmachine: (ha-091565-m03) DBG | About to run SSH command:
	I0918 20:03:26.590946   26827 main.go:141] libmachine: (ha-091565-m03) DBG | exit 0
	I0918 20:03:26.594686   26827 main.go:141] libmachine: (ha-091565-m03) DBG | SSH cmd err, output: exit status 255: 
	I0918 20:03:26.594715   26827 main.go:141] libmachine: (ha-091565-m03) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0918 20:03:26.594726   26827 main.go:141] libmachine: (ha-091565-m03) DBG | command : exit 0
	I0918 20:03:26.594733   26827 main.go:141] libmachine: (ha-091565-m03) DBG | err     : exit status 255
	I0918 20:03:26.594744   26827 main.go:141] libmachine: (ha-091565-m03) DBG | output  : 
	I0918 20:03:29.596158   26827 main.go:141] libmachine: (ha-091565-m03) DBG | Getting to WaitForSSH function...
	I0918 20:03:29.598576   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:29.598871   26827 main.go:141] libmachine: (ha-091565-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:50:95", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:03:17 +0000 UTC Type:0 Mac:52:54:00:7c:50:95 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-091565-m03 Clientid:01:52:54:00:7c:50:95}
	I0918 20:03:29.598894   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:29.599022   26827 main.go:141] libmachine: (ha-091565-m03) DBG | Using SSH client type: external
	I0918 20:03:29.599043   26827 main.go:141] libmachine: (ha-091565-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565-m03/id_rsa (-rw-------)
	I0918 20:03:29.599071   26827 main.go:141] libmachine: (ha-091565-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.53 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0918 20:03:29.599088   26827 main.go:141] libmachine: (ha-091565-m03) DBG | About to run SSH command:
	I0918 20:03:29.599104   26827 main.go:141] libmachine: (ha-091565-m03) DBG | exit 0
	I0918 20:03:29.719912   26827 main.go:141] libmachine: (ha-091565-m03) DBG | SSH cmd err, output: <nil>: 
	I0918 20:03:29.720164   26827 main.go:141] libmachine: (ha-091565-m03) KVM machine creation complete!
	I0918 20:03:29.720484   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetConfigRaw
	I0918 20:03:29.720974   26827 main.go:141] libmachine: (ha-091565-m03) Calling .DriverName
	I0918 20:03:29.721178   26827 main.go:141] libmachine: (ha-091565-m03) Calling .DriverName
	I0918 20:03:29.721342   26827 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0918 20:03:29.721355   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetState
	I0918 20:03:29.722748   26827 main.go:141] libmachine: Detecting operating system of created instance...
	I0918 20:03:29.722760   26827 main.go:141] libmachine: Waiting for SSH to be available...
	I0918 20:03:29.722765   26827 main.go:141] libmachine: Getting to WaitForSSH function...
	I0918 20:03:29.722771   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHHostname
	I0918 20:03:29.725146   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:29.725535   26827 main.go:141] libmachine: (ha-091565-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:50:95", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:03:17 +0000 UTC Type:0 Mac:52:54:00:7c:50:95 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-091565-m03 Clientid:01:52:54:00:7c:50:95}
	I0918 20:03:29.725560   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:29.725856   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHPort
	I0918 20:03:29.726005   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHKeyPath
	I0918 20:03:29.726172   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHKeyPath
	I0918 20:03:29.726341   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHUsername
	I0918 20:03:29.726485   26827 main.go:141] libmachine: Using SSH client type: native
	I0918 20:03:29.726681   26827 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I0918 20:03:29.726692   26827 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0918 20:03:29.823579   26827 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0918 20:03:29.823600   26827 main.go:141] libmachine: Detecting the provisioner...
	I0918 20:03:29.823610   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHHostname
	I0918 20:03:29.826127   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:29.826487   26827 main.go:141] libmachine: (ha-091565-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:50:95", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:03:17 +0000 UTC Type:0 Mac:52:54:00:7c:50:95 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-091565-m03 Clientid:01:52:54:00:7c:50:95}
	I0918 20:03:29.826524   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:29.826650   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHPort
	I0918 20:03:29.826822   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHKeyPath
	I0918 20:03:29.826946   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHKeyPath
	I0918 20:03:29.827049   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHUsername
	I0918 20:03:29.827203   26827 main.go:141] libmachine: Using SSH client type: native
	I0918 20:03:29.827417   26827 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I0918 20:03:29.827434   26827 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0918 20:03:29.932519   26827 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0918 20:03:29.932589   26827 main.go:141] libmachine: found compatible host: buildroot
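(Provisioner detection above is just `cat /etc/os-release` over SSH and matching the ID/NAME fields, which is how the host is recognized as Buildroot. A small sketch of parsing that output in Go; the parse helper is illustrative, not minikube's provision code.)

    package main

    import (
        "bufio"
        "fmt"
        "strings"
    )

    // parseOSRelease turns KEY=value lines into a map, stripping surrounding quotes.
    func parseOSRelease(out string) map[string]string {
        fields := map[string]string{}
        sc := bufio.NewScanner(strings.NewReader(out))
        for sc.Scan() {
            k, v, ok := strings.Cut(sc.Text(), "=")
            if !ok {
                continue
            }
            fields[k] = strings.Trim(v, `"`)
        }
        return fields
    }

    func main() {
        // Output captured from the SSH command in the log above.
        out := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
        f := parseOSRelease(out)
        fmt.Println("ID:", f["ID"], "PRETTY_NAME:", f["PRETTY_NAME"])
    }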
	I0918 20:03:29.932601   26827 main.go:141] libmachine: Provisioning with buildroot...
	I0918 20:03:29.932612   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetMachineName
	I0918 20:03:29.932841   26827 buildroot.go:166] provisioning hostname "ha-091565-m03"
	I0918 20:03:29.932860   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetMachineName
	I0918 20:03:29.933042   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHHostname
	I0918 20:03:29.935764   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:29.936201   26827 main.go:141] libmachine: (ha-091565-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:50:95", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:03:17 +0000 UTC Type:0 Mac:52:54:00:7c:50:95 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-091565-m03 Clientid:01:52:54:00:7c:50:95}
	I0918 20:03:29.936227   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:29.936365   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHPort
	I0918 20:03:29.936539   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHKeyPath
	I0918 20:03:29.936695   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHKeyPath
	I0918 20:03:29.936848   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHUsername
	I0918 20:03:29.937078   26827 main.go:141] libmachine: Using SSH client type: native
	I0918 20:03:29.937287   26827 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I0918 20:03:29.937301   26827 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-091565-m03 && echo "ha-091565-m03" | sudo tee /etc/hostname
	I0918 20:03:30.050382   26827 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-091565-m03
	
	I0918 20:03:30.050410   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHHostname
	I0918 20:03:30.053336   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:30.053858   26827 main.go:141] libmachine: (ha-091565-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:50:95", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:03:17 +0000 UTC Type:0 Mac:52:54:00:7c:50:95 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-091565-m03 Clientid:01:52:54:00:7c:50:95}
	I0918 20:03:30.053888   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:30.054088   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHPort
	I0918 20:03:30.054256   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHKeyPath
	I0918 20:03:30.054372   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHKeyPath
	I0918 20:03:30.054537   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHUsername
	I0918 20:03:30.054678   26827 main.go:141] libmachine: Using SSH client type: native
	I0918 20:03:30.054886   26827 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I0918 20:03:30.054906   26827 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-091565-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-091565-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-091565-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0918 20:03:30.160725   26827 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0918 20:03:30.160756   26827 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19667-7671/.minikube CaCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19667-7671/.minikube}
	I0918 20:03:30.160770   26827 buildroot.go:174] setting up certificates
	I0918 20:03:30.160779   26827 provision.go:84] configureAuth start
	I0918 20:03:30.160787   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetMachineName
	I0918 20:03:30.161095   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetIP
	I0918 20:03:30.164061   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:30.164503   26827 main.go:141] libmachine: (ha-091565-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:50:95", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:03:17 +0000 UTC Type:0 Mac:52:54:00:7c:50:95 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-091565-m03 Clientid:01:52:54:00:7c:50:95}
	I0918 20:03:30.164540   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:30.164704   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHHostname
	I0918 20:03:30.167047   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:30.167370   26827 main.go:141] libmachine: (ha-091565-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:50:95", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:03:17 +0000 UTC Type:0 Mac:52:54:00:7c:50:95 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-091565-m03 Clientid:01:52:54:00:7c:50:95}
	I0918 20:03:30.167392   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:30.167538   26827 provision.go:143] copyHostCerts
	I0918 20:03:30.167573   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem
	I0918 20:03:30.167622   26827 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem, removing ...
	I0918 20:03:30.167633   26827 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem
	I0918 20:03:30.167703   26827 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem (1123 bytes)
	I0918 20:03:30.167779   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem
	I0918 20:03:30.167796   26827 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem, removing ...
	I0918 20:03:30.167812   26827 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem
	I0918 20:03:30.167845   26827 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem (1679 bytes)
	I0918 20:03:30.167891   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem
	I0918 20:03:30.167910   26827 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem, removing ...
	I0918 20:03:30.167916   26827 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem
	I0918 20:03:30.167937   26827 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem (1082 bytes)
	I0918 20:03:30.167986   26827 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem org=jenkins.ha-091565-m03 san=[127.0.0.1 192.168.39.53 ha-091565-m03 localhost minikube]
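(The server certificate generated above carries the SANs [127.0.0.1 192.168.39.53 ha-091565-m03 localhost minikube] and is signed by the profile CA. A condensed crypto/x509 sketch of issuing such a cert follows; the throwaway CA, key size, serial numbers and output handling are simplifying assumptions, since minikube loads its existing ca.pem/ca-key.pem and writes server.pem to the machine directory instead.)

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Error handling elided for brevity; a throwaway CA stands in for ca.pem/ca-key.pem.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Server certificate with the SANs listed in the log line above.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-091565-m03"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"ha-091565-m03", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.53")},
        }
        srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
    }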
	I0918 20:03:30.213280   26827 provision.go:177] copyRemoteCerts
	I0918 20:03:30.213334   26827 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0918 20:03:30.213360   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHHostname
	I0918 20:03:30.215750   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:30.216074   26827 main.go:141] libmachine: (ha-091565-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:50:95", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:03:17 +0000 UTC Type:0 Mac:52:54:00:7c:50:95 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-091565-m03 Clientid:01:52:54:00:7c:50:95}
	I0918 20:03:30.216102   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:30.216270   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHPort
	I0918 20:03:30.216448   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHKeyPath
	I0918 20:03:30.216580   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHUsername
	I0918 20:03:30.216699   26827 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565-m03/id_rsa Username:docker}
	I0918 20:03:30.298100   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0918 20:03:30.298182   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0918 20:03:30.322613   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0918 20:03:30.322696   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0918 20:03:30.345951   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0918 20:03:30.346039   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0918 20:03:30.368781   26827 provision.go:87] duration metric: took 207.991221ms to configureAuth
	I0918 20:03:30.368806   26827 buildroot.go:189] setting minikube options for container-runtime
	I0918 20:03:30.369006   26827 config.go:182] Loaded profile config "ha-091565": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0918 20:03:30.369075   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHHostname
	I0918 20:03:30.372054   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:30.372443   26827 main.go:141] libmachine: (ha-091565-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:50:95", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:03:17 +0000 UTC Type:0 Mac:52:54:00:7c:50:95 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-091565-m03 Clientid:01:52:54:00:7c:50:95}
	I0918 20:03:30.372472   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:30.372725   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHPort
	I0918 20:03:30.372907   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHKeyPath
	I0918 20:03:30.373069   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHKeyPath
	I0918 20:03:30.373164   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHUsername
	I0918 20:03:30.373299   26827 main.go:141] libmachine: Using SSH client type: native
	I0918 20:03:30.373493   26827 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I0918 20:03:30.373508   26827 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0918 20:03:30.578858   26827 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0918 20:03:30.578882   26827 main.go:141] libmachine: Checking connection to Docker...
	I0918 20:03:30.578892   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetURL
	I0918 20:03:30.580144   26827 main.go:141] libmachine: (ha-091565-m03) DBG | Using libvirt version 6000000
	I0918 20:03:30.582476   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:30.582791   26827 main.go:141] libmachine: (ha-091565-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:50:95", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:03:17 +0000 UTC Type:0 Mac:52:54:00:7c:50:95 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-091565-m03 Clientid:01:52:54:00:7c:50:95}
	I0918 20:03:30.582820   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:30.582956   26827 main.go:141] libmachine: Docker is up and running!
	I0918 20:03:30.582970   26827 main.go:141] libmachine: Reticulating splines...
	I0918 20:03:30.582978   26827 client.go:171] duration metric: took 27.373159137s to LocalClient.Create
	I0918 20:03:30.583008   26827 start.go:167] duration metric: took 27.373230204s to libmachine.API.Create "ha-091565"
	I0918 20:03:30.583021   26827 start.go:293] postStartSetup for "ha-091565-m03" (driver="kvm2")
	I0918 20:03:30.583039   26827 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0918 20:03:30.583062   26827 main.go:141] libmachine: (ha-091565-m03) Calling .DriverName
	I0918 20:03:30.583373   26827 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0918 20:03:30.583399   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHHostname
	I0918 20:03:30.585622   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:30.585919   26827 main.go:141] libmachine: (ha-091565-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:50:95", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:03:17 +0000 UTC Type:0 Mac:52:54:00:7c:50:95 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-091565-m03 Clientid:01:52:54:00:7c:50:95}
	I0918 20:03:30.585944   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:30.586091   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHPort
	I0918 20:03:30.586267   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHKeyPath
	I0918 20:03:30.586429   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHUsername
	I0918 20:03:30.586561   26827 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565-m03/id_rsa Username:docker}
	I0918 20:03:30.666586   26827 ssh_runner.go:195] Run: cat /etc/os-release
	I0918 20:03:30.670835   26827 info.go:137] Remote host: Buildroot 2023.02.9
	I0918 20:03:30.670865   26827 filesync.go:126] Scanning /home/jenkins/minikube-integration/19667-7671/.minikube/addons for local assets ...
	I0918 20:03:30.670930   26827 filesync.go:126] Scanning /home/jenkins/minikube-integration/19667-7671/.minikube/files for local assets ...
	I0918 20:03:30.671001   26827 filesync.go:149] local asset: /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem -> 148782.pem in /etc/ssl/certs
	I0918 20:03:30.671010   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem -> /etc/ssl/certs/148782.pem
	I0918 20:03:30.671101   26827 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0918 20:03:30.680354   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem --> /etc/ssl/certs/148782.pem (1708 bytes)
	I0918 20:03:30.703833   26827 start.go:296] duration metric: took 120.797692ms for postStartSetup
	I0918 20:03:30.703888   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetConfigRaw
	I0918 20:03:30.704508   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetIP
	I0918 20:03:30.707440   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:30.707936   26827 main.go:141] libmachine: (ha-091565-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:50:95", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:03:17 +0000 UTC Type:0 Mac:52:54:00:7c:50:95 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-091565-m03 Clientid:01:52:54:00:7c:50:95}
	I0918 20:03:30.707965   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:30.708291   26827 profile.go:143] Saving config to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/config.json ...
	I0918 20:03:30.708542   26827 start.go:128] duration metric: took 27.516932332s to createHost
	I0918 20:03:30.708573   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHHostname
	I0918 20:03:30.711228   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:30.711630   26827 main.go:141] libmachine: (ha-091565-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:50:95", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:03:17 +0000 UTC Type:0 Mac:52:54:00:7c:50:95 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-091565-m03 Clientid:01:52:54:00:7c:50:95}
	I0918 20:03:30.711656   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:30.711872   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHPort
	I0918 20:03:30.712061   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHKeyPath
	I0918 20:03:30.712192   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHKeyPath
	I0918 20:03:30.712327   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHUsername
	I0918 20:03:30.712477   26827 main.go:141] libmachine: Using SSH client type: native
	I0918 20:03:30.712684   26827 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I0918 20:03:30.712697   26827 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0918 20:03:30.812539   26827 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726689810.794368232
	
	I0918 20:03:30.812561   26827 fix.go:216] guest clock: 1726689810.794368232
	I0918 20:03:30.812570   26827 fix.go:229] Guest: 2024-09-18 20:03:30.794368232 +0000 UTC Remote: 2024-09-18 20:03:30.708558501 +0000 UTC m=+153.103283397 (delta=85.809731ms)
	I0918 20:03:30.812588   26827 fix.go:200] guest clock delta is within tolerance: 85.809731ms
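
The clock check runs date +%s.%N on the guest and compares it with the host timestamp taken just before the SSH round trip, accepting the node because the 85.809731ms delta is inside tolerance. A small Go sketch of that comparison, using the two timestamps from the log; the tolerance value is illustrative, not minikube's constant:

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    func main() {
        // Guest side: output of `date +%s.%N` captured over SSH (value from the log above).
        guestOut := "1726689810.794368232"
        parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
        sec, _ := strconv.ParseInt(parts[0], 10, 64)
        nsec, _ := strconv.ParseInt(parts[1], 10, 64)
        guest := time.Unix(sec, nsec).UTC()

        // Host side: the "Remote" timestamp recorded just before the SSH round trip.
        host := time.Date(2024, time.September, 18, 20, 3, 30, 708558501, time.UTC)

        delta := guest.Sub(host)
        if delta < 0 {
            delta = -delta
        }
        tolerance := 2 * time.Second // illustrative threshold
        fmt.Printf("guest clock delta %v, within tolerance: %v\n", delta, delta <= tolerance)
    }
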
	I0918 20:03:30.812595   26827 start.go:83] releasing machines lock for "ha-091565-m03", held for 27.621119617s
	I0918 20:03:30.812619   26827 main.go:141] libmachine: (ha-091565-m03) Calling .DriverName
	I0918 20:03:30.812898   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetIP
	I0918 20:03:30.815402   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:30.815769   26827 main.go:141] libmachine: (ha-091565-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:50:95", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:03:17 +0000 UTC Type:0 Mac:52:54:00:7c:50:95 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-091565-m03 Clientid:01:52:54:00:7c:50:95}
	I0918 20:03:30.815791   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:30.817414   26827 out.go:177] * Found network options:
	I0918 20:03:30.818426   26827 out.go:177]   - NO_PROXY=192.168.39.215,192.168.39.92
	W0918 20:03:30.819353   26827 proxy.go:119] fail to check proxy env: Error ip not in block
	W0918 20:03:30.819370   26827 proxy.go:119] fail to check proxy env: Error ip not in block
	I0918 20:03:30.819384   26827 main.go:141] libmachine: (ha-091565-m03) Calling .DriverName
	I0918 20:03:30.820044   26827 main.go:141] libmachine: (ha-091565-m03) Calling .DriverName
	I0918 20:03:30.820235   26827 main.go:141] libmachine: (ha-091565-m03) Calling .DriverName
	I0918 20:03:30.820315   26827 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0918 20:03:30.820362   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHHostname
	W0918 20:03:30.820405   26827 proxy.go:119] fail to check proxy env: Error ip not in block
	W0918 20:03:30.820438   26827 proxy.go:119] fail to check proxy env: Error ip not in block
	I0918 20:03:30.820512   26827 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0918 20:03:30.820534   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHHostname
	I0918 20:03:30.823394   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:30.823660   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:30.823821   26827 main.go:141] libmachine: (ha-091565-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:50:95", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:03:17 +0000 UTC Type:0 Mac:52:54:00:7c:50:95 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-091565-m03 Clientid:01:52:54:00:7c:50:95}
	I0918 20:03:30.823857   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:30.824042   26827 main.go:141] libmachine: (ha-091565-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:50:95", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:03:17 +0000 UTC Type:0 Mac:52:54:00:7c:50:95 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-091565-m03 Clientid:01:52:54:00:7c:50:95}
	I0918 20:03:30.824069   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:30.824075   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHPort
	I0918 20:03:30.824246   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHPort
	I0918 20:03:30.824249   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHKeyPath
	I0918 20:03:30.824447   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHKeyPath
	I0918 20:03:30.824451   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHUsername
	I0918 20:03:30.824629   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHUsername
	I0918 20:03:30.824648   26827 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565-m03/id_rsa Username:docker}
	I0918 20:03:30.824774   26827 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565-m03/id_rsa Username:docker}
	I0918 20:03:31.051973   26827 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0918 20:03:31.057939   26827 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0918 20:03:31.058015   26827 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0918 20:03:31.075034   26827 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0918 20:03:31.075060   26827 start.go:495] detecting cgroup driver to use...
	I0918 20:03:31.075137   26827 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0918 20:03:31.091617   26827 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0918 20:03:31.105746   26827 docker.go:217] disabling cri-docker service (if available) ...
	I0918 20:03:31.105817   26827 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0918 20:03:31.120080   26827 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0918 20:03:31.134004   26827 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0918 20:03:31.254184   26827 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0918 20:03:31.414257   26827 docker.go:233] disabling docker service ...
	I0918 20:03:31.414322   26827 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0918 20:03:31.428960   26827 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0918 20:03:31.442338   26827 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0918 20:03:31.584328   26827 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0918 20:03:31.721005   26827 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0918 20:03:31.735675   26827 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0918 20:03:31.753606   26827 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0918 20:03:31.753676   26827 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 20:03:31.764390   26827 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0918 20:03:31.764453   26827 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 20:03:31.775371   26827 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 20:03:31.786080   26827 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 20:03:31.797003   26827 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0918 20:03:31.807848   26827 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 20:03:31.821134   26827 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 20:03:31.840511   26827 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
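
The sed pipeline above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pause image, cgroup manager, conmon cgroup and the unprivileged-port sysctl. A rough Go sketch of the same kind of line rewrite with regexp, applied to an in-memory string rather than the real file:

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        conf := `[crio.image]
    pause_image = "registry.k8s.io/pause:3.9"
    [crio.runtime]
    cgroup_manager = "systemd"
    `
        // Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|'
        conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
        // Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
        conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
        fmt.Print(conf)
    }
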
	I0918 20:03:31.851912   26827 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0918 20:03:31.861895   26827 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0918 20:03:31.861971   26827 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0918 20:03:31.875783   26827 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0918 20:03:31.887581   26827 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 20:03:32.009173   26827 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0918 20:03:32.097676   26827 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0918 20:03:32.097742   26827 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0918 20:03:32.102640   26827 start.go:563] Will wait 60s for crictl version
	I0918 20:03:32.102696   26827 ssh_runner.go:195] Run: which crictl
	I0918 20:03:32.106231   26827 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0918 20:03:32.142182   26827 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0918 20:03:32.142270   26827 ssh_runner.go:195] Run: crio --version
	I0918 20:03:32.169659   26827 ssh_runner.go:195] Run: crio --version
	I0918 20:03:32.199737   26827 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0918 20:03:32.201225   26827 out.go:177]   - env NO_PROXY=192.168.39.215
	I0918 20:03:32.202507   26827 out.go:177]   - env NO_PROXY=192.168.39.215,192.168.39.92
	I0918 20:03:32.203714   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetIP
	I0918 20:03:32.206442   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:32.206810   26827 main.go:141] libmachine: (ha-091565-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:50:95", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:03:17 +0000 UTC Type:0 Mac:52:54:00:7c:50:95 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-091565-m03 Clientid:01:52:54:00:7c:50:95}
	I0918 20:03:32.206850   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:32.207043   26827 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0918 20:03:32.211258   26827 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0918 20:03:32.223734   26827 mustload.go:65] Loading cluster: ha-091565
	I0918 20:03:32.224039   26827 config.go:182] Loaded profile config "ha-091565": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0918 20:03:32.224319   26827 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 20:03:32.224365   26827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 20:03:32.239611   26827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37387
	I0918 20:03:32.240066   26827 main.go:141] libmachine: () Calling .GetVersion
	I0918 20:03:32.240552   26827 main.go:141] libmachine: Using API Version  1
	I0918 20:03:32.240576   26827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 20:03:32.240920   26827 main.go:141] libmachine: () Calling .GetMachineName
	I0918 20:03:32.241082   26827 main.go:141] libmachine: (ha-091565) Calling .GetState
	I0918 20:03:32.242720   26827 host.go:66] Checking if "ha-091565" exists ...
	I0918 20:03:32.243009   26827 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 20:03:32.243043   26827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 20:03:32.258246   26827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40335
	I0918 20:03:32.258705   26827 main.go:141] libmachine: () Calling .GetVersion
	I0918 20:03:32.259124   26827 main.go:141] libmachine: Using API Version  1
	I0918 20:03:32.259146   26827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 20:03:32.259417   26827 main.go:141] libmachine: () Calling .GetMachineName
	I0918 20:03:32.259553   26827 main.go:141] libmachine: (ha-091565) Calling .DriverName
	I0918 20:03:32.259662   26827 certs.go:68] Setting up /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565 for IP: 192.168.39.53
	I0918 20:03:32.259671   26827 certs.go:194] generating shared ca certs ...
	I0918 20:03:32.259683   26827 certs.go:226] acquiring lock for ca certs: {Name:mkf54e0e71ed884c4372f9d3d4cb308ea4600185 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 20:03:32.259810   26827 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.key
	I0918 20:03:32.259850   26827 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.key
	I0918 20:03:32.259860   26827 certs.go:256] generating profile certs ...
	I0918 20:03:32.259928   26827 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/client.key
	I0918 20:03:32.259953   26827 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.key.4abf4119
	I0918 20:03:32.259967   26827 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.crt.4abf4119 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.215 192.168.39.92 192.168.39.53 192.168.39.254]
	I0918 20:03:32.391787   26827 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.crt.4abf4119 ...
	I0918 20:03:32.391818   26827 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.crt.4abf4119: {Name:mkb34973ffb4d10e1c252f20090951c99d9a8a6d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 20:03:32.392002   26827 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.key.4abf4119 ...
	I0918 20:03:32.392039   26827 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.key.4abf4119: {Name:mk8dda3654eb1370812c69b5ca23990ee4bb5898 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 20:03:32.392142   26827 certs.go:381] copying /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.crt.4abf4119 -> /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.crt
	I0918 20:03:32.392302   26827 certs.go:385] copying /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.key.4abf4119 -> /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.key
	I0918 20:03:32.392476   26827 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/proxy-client.key
	I0918 20:03:32.392495   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0918 20:03:32.392514   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0918 20:03:32.392532   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0918 20:03:32.392556   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0918 20:03:32.392573   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0918 20:03:32.392588   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0918 20:03:32.392606   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0918 20:03:32.416080   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0918 20:03:32.416180   26827 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878.pem (1338 bytes)
	W0918 20:03:32.416223   26827 certs.go:480] ignoring /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878_empty.pem, impossibly tiny 0 bytes
	I0918 20:03:32.416236   26827 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem (1675 bytes)
	I0918 20:03:32.416259   26827 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem (1082 bytes)
	I0918 20:03:32.416280   26827 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem (1123 bytes)
	I0918 20:03:32.416312   26827 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem (1679 bytes)
	I0918 20:03:32.416373   26827 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem (1708 bytes)
	I0918 20:03:32.416406   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0918 20:03:32.416423   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878.pem -> /usr/share/ca-certificates/14878.pem
	I0918 20:03:32.416442   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem -> /usr/share/ca-certificates/148782.pem
	I0918 20:03:32.416482   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHHostname
	I0918 20:03:32.419323   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:03:32.419709   26827 main.go:141] libmachine: (ha-091565) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:13:d8", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:01:12 +0000 UTC Type:0 Mac:52:54:00:2a:13:d8 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-091565 Clientid:01:52:54:00:2a:13:d8}
	I0918 20:03:32.419736   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined IP address 192.168.39.215 and MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:03:32.419880   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHPort
	I0918 20:03:32.420098   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHKeyPath
	I0918 20:03:32.420242   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHUsername
	I0918 20:03:32.420374   26827 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565/id_rsa Username:docker}
	I0918 20:03:32.496485   26827 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0918 20:03:32.501230   26827 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0918 20:03:32.512278   26827 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0918 20:03:32.516258   26827 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0918 20:03:32.526925   26827 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0918 20:03:32.530942   26827 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0918 20:03:32.541480   26827 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0918 20:03:32.545232   26827 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0918 20:03:32.555472   26827 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0918 20:03:32.559397   26827 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0918 20:03:32.569567   26827 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0918 20:03:32.573499   26827 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0918 20:03:32.583358   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0918 20:03:32.611524   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0918 20:03:32.636264   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0918 20:03:32.660205   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0918 20:03:32.686819   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0918 20:03:32.710441   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0918 20:03:32.737760   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0918 20:03:32.763299   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0918 20:03:32.788066   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0918 20:03:32.811311   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878.pem --> /usr/share/ca-certificates/14878.pem (1338 bytes)
	I0918 20:03:32.837707   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem --> /usr/share/ca-certificates/148782.pem (1708 bytes)
	I0918 20:03:32.862254   26827 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0918 20:03:32.879051   26827 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0918 20:03:32.895538   26827 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0918 20:03:32.911669   26827 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0918 20:03:32.927230   26827 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0918 20:03:32.943165   26827 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0918 20:03:32.959777   26827 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0918 20:03:32.976941   26827 ssh_runner.go:195] Run: openssl version
	I0918 20:03:32.982956   26827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0918 20:03:32.994065   26827 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0918 20:03:32.998638   26827 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 18 19:39 /usr/share/ca-certificates/minikubeCA.pem
	I0918 20:03:32.998702   26827 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0918 20:03:33.004856   26827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0918 20:03:33.016234   26827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14878.pem && ln -fs /usr/share/ca-certificates/14878.pem /etc/ssl/certs/14878.pem"
	I0918 20:03:33.027625   26827 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14878.pem
	I0918 20:03:33.032333   26827 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 18 19:57 /usr/share/ca-certificates/14878.pem
	I0918 20:03:33.032408   26827 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14878.pem
	I0918 20:03:33.038142   26827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14878.pem /etc/ssl/certs/51391683.0"
	I0918 20:03:33.049048   26827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148782.pem && ln -fs /usr/share/ca-certificates/148782.pem /etc/ssl/certs/148782.pem"
	I0918 20:03:33.060201   26827 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148782.pem
	I0918 20:03:33.064969   26827 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 18 19:57 /usr/share/ca-certificates/148782.pem
	I0918 20:03:33.065039   26827 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148782.pem
	I0918 20:03:33.070737   26827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148782.pem /etc/ssl/certs/3ec20f2e.0"
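
The openssl/ln sequence above links each CA file under its OpenSSL subject-hash name in /etc/ssl/certs (b5213941.0, 51391683.0, 3ec20f2e.0), which is how OpenSSL locates trusted certificates. A small Go sketch of those two steps using exec.Command and os.Symlink; the path is taken from the log and error handling is trimmed:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        certPath := "/usr/share/ca-certificates/minikubeCA.pem"

        // openssl x509 -hash -noout -in <cert> prints the subject hash, e.g. "b5213941".
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            fmt.Println("openssl failed:", err)
            return
        }
        hash := strings.TrimSpace(string(out))

        // ln -fs <cert> /etc/ssl/certs/<hash>.0
        link := "/etc/ssl/certs/" + hash + ".0"
        _ = os.Remove(link) // -f: replace any existing link
        if err := os.Symlink(certPath, link); err != nil {
            fmt.Println("symlink failed:", err)
            return
        }
        fmt.Println("linked", certPath, "->", link)
    }
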
	I0918 20:03:33.082171   26827 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0918 20:03:33.086441   26827 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0918 20:03:33.086499   26827 kubeadm.go:934] updating node {m03 192.168.39.53 8443 v1.31.1 crio true true} ...
	I0918 20:03:33.086588   26827 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-091565-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.53
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-091565 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0918 20:03:33.086614   26827 kube-vip.go:115] generating kube-vip config ...
	I0918 20:03:33.086658   26827 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0918 20:03:33.104138   26827 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0918 20:03:33.104231   26827 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
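
The generated kube-vip static pod above pins the control-plane VIP 192.168.39.254 on port 8443 and enables leader election plus load balancing. A hedged text/template sketch of how such a manifest fragment could be filled in from the VIP and API server port; the template and field names here are illustrative, not minikube's actual kube-vip template:

    package main

    import (
        "os"
        "text/template"
    )

    const fragment = `    - name: port
          value: "{{ .Port }}"
        - name: address
          value: {{ .VIP }}
        - name: lb_port
          value: "{{ .Port }}"
    `

    func main() {
        t := template.Must(template.New("kube-vip").Parse(fragment))
        // Values taken from the generated config above.
        _ = t.Execute(os.Stdout, struct {
            VIP  string
            Port int
        }{VIP: "192.168.39.254", Port: 8443})
    }
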
	I0918 20:03:33.104297   26827 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0918 20:03:33.114293   26827 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0918 20:03:33.114356   26827 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0918 20:03:33.124170   26827 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I0918 20:03:33.124182   26827 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I0918 20:03:33.124199   26827 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0918 20:03:33.124207   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0918 20:03:33.124216   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0918 20:03:33.124219   26827 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0918 20:03:33.124273   26827 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0918 20:03:33.124275   26827 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0918 20:03:33.141327   26827 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0918 20:03:33.141375   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0918 20:03:33.141401   26827 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0918 20:03:33.141433   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0918 20:03:33.141477   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0918 20:03:33.141555   26827 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0918 20:03:33.173036   26827 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0918 20:03:33.173093   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
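
Each of the three binaries for v1.31.1 is fetched with a paired .sha256 checksum URL before being copied into /var/lib/minikube/binaries. A minimal Go sketch of the SHA-256 comparison such a checksum implies; the file name and expected digest are placeholders:

    package main

    import (
        "crypto/sha256"
        "encoding/hex"
        "fmt"
        "io"
        "os"
    )

    // fileSHA256 returns the hex-encoded SHA-256 digest of a file on disk.
    func fileSHA256(path string) (string, error) {
        f, err := os.Open(path)
        if err != nil {
            return "", err
        }
        defer f.Close()
        h := sha256.New()
        if _, err := io.Copy(h, f); err != nil {
            return "", err
        }
        return hex.EncodeToString(h.Sum(nil)), nil
    }

    func main() {
        // Placeholder values; the real check compares against kubelet.sha256 from dl.k8s.io.
        got, err := fileSHA256("kubelet")
        if err != nil {
            fmt.Println("hash error:", err)
            return
        }
        want := "expected-digest-from-kubelet.sha256"
        fmt.Println("checksum ok:", got == want)
    }
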
	I0918 20:03:33.972939   26827 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0918 20:03:33.982247   26827 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0918 20:03:34.000126   26827 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0918 20:03:34.018674   26827 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0918 20:03:34.036270   26827 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0918 20:03:34.040368   26827 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0918 20:03:34.053122   26827 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 20:03:34.171306   26827 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0918 20:03:34.188115   26827 host.go:66] Checking if "ha-091565" exists ...
	I0918 20:03:34.188456   26827 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 20:03:34.188496   26827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 20:03:34.204519   26827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33565
	I0918 20:03:34.205017   26827 main.go:141] libmachine: () Calling .GetVersion
	I0918 20:03:34.205836   26827 main.go:141] libmachine: Using API Version  1
	I0918 20:03:34.205858   26827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 20:03:34.206189   26827 main.go:141] libmachine: () Calling .GetMachineName
	I0918 20:03:34.206366   26827 main.go:141] libmachine: (ha-091565) Calling .DriverName
	I0918 20:03:34.206499   26827 start.go:317] joinCluster: &{Name:ha-091565 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-091565 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.215 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.92 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.53 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 20:03:34.206634   26827 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0918 20:03:34.206657   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHHostname
	I0918 20:03:34.210032   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:03:34.210517   26827 main.go:141] libmachine: (ha-091565) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:13:d8", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:01:12 +0000 UTC Type:0 Mac:52:54:00:2a:13:d8 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-091565 Clientid:01:52:54:00:2a:13:d8}
	I0918 20:03:34.210550   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined IP address 192.168.39.215 and MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:03:34.210721   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHPort
	I0918 20:03:34.210878   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHKeyPath
	I0918 20:03:34.211058   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHUsername
	I0918 20:03:34.211223   26827 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565/id_rsa Username:docker}
	I0918 20:03:34.497537   26827 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.53 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0918 20:03:34.497597   26827 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token i0u1iv.ilurlcyw4668mpw6 --discovery-token-ca-cert-hash sha256:8ad46bf288e8cc2226f5a929a768e6c95cddecc97ffc4016be27ef0987b1e36f --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-091565-m03 --control-plane --apiserver-advertise-address=192.168.39.53 --apiserver-bind-port=8443"
	I0918 20:03:56.510162   26827 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token i0u1iv.ilurlcyw4668mpw6 --discovery-token-ca-cert-hash sha256:8ad46bf288e8cc2226f5a929a768e6c95cddecc97ffc4016be27ef0987b1e36f --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-091565-m03 --control-plane --apiserver-advertise-address=192.168.39.53 --apiserver-bind-port=8443": (22.012541289s)
	I0918 20:03:56.510194   26827 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0918 20:03:57.007413   26827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-091565-m03 minikube.k8s.io/updated_at=2024_09_18T20_03_57_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=85073601a832bd4bbda5d11fa91feafff6ec6b91 minikube.k8s.io/name=ha-091565 minikube.k8s.io/primary=false
	I0918 20:03:57.136553   26827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-091565-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0918 20:03:57.243081   26827 start.go:319] duration metric: took 23.036576923s to joinCluster
	I0918 20:03:57.243171   26827 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.53 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0918 20:03:57.243516   26827 config.go:182] Loaded profile config "ha-091565": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0918 20:03:57.244463   26827 out.go:177] * Verifying Kubernetes components...
	I0918 20:03:57.245675   26827 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 20:03:57.491302   26827 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0918 20:03:57.553167   26827 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19667-7671/kubeconfig
	I0918 20:03:57.553587   26827 kapi.go:59] client config for ha-091565: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/client.crt", KeyFile:"/home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/client.key", CAFile:"/home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0918 20:03:57.553676   26827 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.215:8443
	I0918 20:03:57.554162   26827 node_ready.go:35] waiting up to 6m0s for node "ha-091565-m03" to be "Ready" ...
	I0918 20:03:57.554529   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:03:57.554540   26827 round_trippers.go:469] Request Headers:
	I0918 20:03:57.554551   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:03:57.554560   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:03:57.558531   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:03:58.055469   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:03:58.055497   26827 round_trippers.go:469] Request Headers:
	I0918 20:03:58.055509   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:03:58.055515   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:03:58.065944   26827 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0918 20:03:58.555709   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:03:58.555741   26827 round_trippers.go:469] Request Headers:
	I0918 20:03:58.555751   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:03:58.555755   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:03:58.559403   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:03:59.055396   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:03:59.055421   26827 round_trippers.go:469] Request Headers:
	I0918 20:03:59.055432   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:03:59.055439   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:03:59.058942   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:03:59.555365   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:03:59.555390   26827 round_trippers.go:469] Request Headers:
	I0918 20:03:59.555400   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:03:59.555406   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:03:59.558786   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:03:59.559242   26827 node_ready.go:53] node "ha-091565-m03" has status "Ready":"False"
	I0918 20:04:00.054633   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:04:00.054659   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:00.054669   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:00.054674   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:00.058075   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:00.555492   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:04:00.555516   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:00.555526   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:00.555529   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:00.559811   26827 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0918 20:04:01.055537   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:04:01.055563   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:01.055575   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:01.055580   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:01.059555   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:01.555672   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:04:01.555697   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:01.555706   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:01.555711   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:01.559137   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:01.559627   26827 node_ready.go:53] node "ha-091565-m03" has status "Ready":"False"
	I0918 20:04:02.054683   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:04:02.054723   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:02.054731   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:02.054745   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:02.058557   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:02.555203   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:04:02.555226   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:02.555234   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:02.555238   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:02.558769   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:03.055525   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:04:03.055564   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:03.055574   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:03.055577   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:03.059340   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:03.554931   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:04:03.554959   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:03.554970   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:03.554979   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:03.558864   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:03.559650   26827 node_ready.go:53] node "ha-091565-m03" has status "Ready":"False"
	I0918 20:04:04.054716   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:04:04.054744   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:04.054755   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:04.054761   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:04.058693   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:04.555064   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:04:04.555088   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:04.555100   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:04.555106   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:04.558892   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:05.054691   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:04:05.054712   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:05.054719   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:05.054741   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:05.059560   26827 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0918 20:04:05.555504   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:04:05.555527   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:05.555534   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:05.555539   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:05.558864   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:06.055334   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:04:06.055377   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:06.055389   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:06.055397   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:06.059156   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:06.059757   26827 node_ready.go:53] node "ha-091565-m03" has status "Ready":"False"
	I0918 20:04:06.555030   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:04:06.555053   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:06.555063   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:06.555069   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:06.558335   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:07.055192   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:04:07.055215   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:07.055224   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:07.055227   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:07.059362   26827 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0918 20:04:07.555236   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:04:07.555261   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:07.555269   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:07.555274   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:07.558863   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:08.055465   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:04:08.055488   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:08.055495   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:08.055498   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:08.059132   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:08.555504   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:04:08.555526   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:08.555535   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:08.555538   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:08.559353   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:08.559819   26827 node_ready.go:53] node "ha-091565-m03" has status "Ready":"False"
	I0918 20:04:09.055283   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:04:09.055306   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:09.055314   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:09.055317   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:09.058873   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:09.555171   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:04:09.555196   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:09.555204   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:09.555208   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:09.559068   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:10.055288   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:04:10.055311   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:10.055320   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:10.055325   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:10.059182   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:10.555106   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:04:10.555128   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:10.555139   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:10.555144   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:10.558578   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:11.054941   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:04:11.054964   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:11.054972   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:11.054975   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:11.059278   26827 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0918 20:04:11.059847   26827 node_ready.go:53] node "ha-091565-m03" has status "Ready":"False"
	I0918 20:04:11.555315   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:04:11.555339   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:11.555347   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:11.555355   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:11.558773   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:12.054728   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:04:12.054751   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:12.054765   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:12.054770   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:12.058180   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:12.554816   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:04:12.554836   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:12.554844   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:12.554849   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:12.558473   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:13.055199   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:04:13.055227   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:13.055245   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:13.055254   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:13.058868   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:13.554700   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:04:13.554723   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:13.554732   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:13.554736   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:13.559302   26827 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0918 20:04:13.560622   26827 node_ready.go:53] node "ha-091565-m03" has status "Ready":"False"
	I0918 20:04:14.054755   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:04:14.054786   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:14.054798   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:14.054803   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:14.058095   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:14.555493   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:04:14.555515   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:14.555524   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:14.555528   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:14.559446   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:15.055291   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:04:15.055323   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:15.055333   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:15.055336   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:15.059042   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:15.555105   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:04:15.555127   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:15.555135   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:15.555138   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:15.558918   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:16.055211   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:04:16.055237   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:16.055246   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:16.055251   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:16.059232   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:16.059819   26827 node_ready.go:49] node "ha-091565-m03" has status "Ready":"True"
	I0918 20:04:16.059841   26827 node_ready.go:38] duration metric: took 18.505389798s for node "ha-091565-m03" to be "Ready" ...
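
The loop above is minikube polling the node object roughly every 500ms until its Ready condition turns True (about 18.5s here). A minimal, hedged sketch of the same check with client-go follows; the kubeconfig path is a placeholder and this is not minikube's actual implementation.

    // Sketch only: poll a node until its Ready condition is True or a deadline passes.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Placeholder kubeconfig path; the log loads its own profile's kubeconfig.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        deadline := time.Now().Add(6 * time.Minute) // mirrors the 6m0s budget in the log
        for time.Now().Before(deadline) {
            node, err := client.CoreV1().Nodes().Get(context.TODO(), "ha-091565-m03", metav1.GetOptions{})
            if err == nil {
                for _, cond := range node.Status.Conditions {
                    if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
                        fmt.Println("node is Ready")
                        return
                    }
                }
            }
            time.Sleep(500 * time.Millisecond) // the log polls roughly twice per second
        }
        fmt.Println("timed out waiting for node to be Ready")
    }
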
	I0918 20:04:16.059852   26827 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0918 20:04:16.059929   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods
	I0918 20:04:16.059941   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:16.059951   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:16.059957   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:16.065715   26827 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0918 20:04:16.071783   26827 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-8zcqk" in "kube-system" namespace to be "Ready" ...
	I0918 20:04:16.071882   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-8zcqk
	I0918 20:04:16.071891   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:16.071899   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:16.071903   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:16.075405   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:16.075962   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565
	I0918 20:04:16.075978   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:16.075987   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:16.075992   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:16.078716   26827 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 20:04:16.079267   26827 pod_ready.go:93] pod "coredns-7c65d6cfc9-8zcqk" in "kube-system" namespace has status "Ready":"True"
	I0918 20:04:16.079293   26827 pod_ready.go:82] duration metric: took 7.472161ms for pod "coredns-7c65d6cfc9-8zcqk" in "kube-system" namespace to be "Ready" ...
	I0918 20:04:16.079302   26827 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-w97kk" in "kube-system" namespace to be "Ready" ...
	I0918 20:04:16.079361   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-w97kk
	I0918 20:04:16.079369   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:16.079376   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:16.079380   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:16.082131   26827 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 20:04:16.082926   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565
	I0918 20:04:16.082939   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:16.082946   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:16.082949   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:16.085556   26827 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 20:04:16.085896   26827 pod_ready.go:93] pod "coredns-7c65d6cfc9-w97kk" in "kube-system" namespace has status "Ready":"True"
	I0918 20:04:16.085910   26827 pod_ready.go:82] duration metric: took 6.602392ms for pod "coredns-7c65d6cfc9-w97kk" in "kube-system" namespace to be "Ready" ...
	I0918 20:04:16.085919   26827 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-091565" in "kube-system" namespace to be "Ready" ...
	I0918 20:04:16.085972   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/etcd-ha-091565
	I0918 20:04:16.085980   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:16.085986   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:16.085989   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:16.089699   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:16.090300   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565
	I0918 20:04:16.090315   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:16.090322   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:16.090326   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:16.093063   26827 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 20:04:16.093596   26827 pod_ready.go:93] pod "etcd-ha-091565" in "kube-system" namespace has status "Ready":"True"
	I0918 20:04:16.093612   26827 pod_ready.go:82] duration metric: took 7.687899ms for pod "etcd-ha-091565" in "kube-system" namespace to be "Ready" ...
	I0918 20:04:16.093621   26827 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-091565-m02" in "kube-system" namespace to be "Ready" ...
	I0918 20:04:16.093672   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/etcd-ha-091565-m02
	I0918 20:04:16.093679   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:16.093686   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:16.093691   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:16.096387   26827 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 20:04:16.097042   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:04:16.097062   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:16.097072   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:16.097077   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:16.099762   26827 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 20:04:16.100164   26827 pod_ready.go:93] pod "etcd-ha-091565-m02" in "kube-system" namespace has status "Ready":"True"
	I0918 20:04:16.100182   26827 pod_ready.go:82] duration metric: took 6.554191ms for pod "etcd-ha-091565-m02" in "kube-system" namespace to be "Ready" ...
	I0918 20:04:16.100193   26827 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-091565-m03" in "kube-system" namespace to be "Ready" ...
	I0918 20:04:16.255579   26827 request.go:632] Waited for 155.319903ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/etcd-ha-091565-m03
	I0918 20:04:16.255651   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/etcd-ha-091565-m03
	I0918 20:04:16.255659   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:16.255691   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:16.255699   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:16.259105   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:16.456134   26827 request.go:632] Waited for 196.426863ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:04:16.456200   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:04:16.456206   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:16.456215   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:16.456220   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:16.460303   26827 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0918 20:04:16.460816   26827 pod_ready.go:93] pod "etcd-ha-091565-m03" in "kube-system" namespace has status "Ready":"True"
	I0918 20:04:16.460835   26827 pod_ready.go:82] duration metric: took 360.633247ms for pod "etcd-ha-091565-m03" in "kube-system" namespace to be "Ready" ...
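
Several request pairs in this phase carry "Waited for ... due to client-side throttling, not priority and fairness" messages. That wait comes from client-go's client-side rate limiter: the rest.Config dumped earlier has QPS:0 and Burst:0, so client-go falls back to its documented defaults (5 QPS, burst 10) and spaces out back-to-back GETs. The sketch below is purely illustrative of how those limits could be raised on a rest.Config; the values and kubeconfig path are made up.

    package main

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Placeholder kubeconfig path.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
        if err != nil {
            panic(err)
        }
        // With QPS/Burst left at 0, client-go uses 5 QPS / burst 10, which is
        // what produces the client-side throttling waits seen in this log.
        cfg.QPS = 50
        cfg.Burst = 100
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        _ = client // use the faster client as usual
    }
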
	I0918 20:04:16.460857   26827 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-091565" in "kube-system" namespace to be "Ready" ...
	I0918 20:04:16.656076   26827 request.go:632] Waited for 195.151124ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-091565
	I0918 20:04:16.656159   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-091565
	I0918 20:04:16.656167   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:16.656176   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:16.656192   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:16.659916   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:16.856095   26827 request.go:632] Waited for 195.376851ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/nodes/ha-091565
	I0918 20:04:16.856174   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565
	I0918 20:04:16.856181   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:16.856191   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:16.856204   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:16.859780   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:16.860437   26827 pod_ready.go:93] pod "kube-apiserver-ha-091565" in "kube-system" namespace has status "Ready":"True"
	I0918 20:04:16.860458   26827 pod_ready.go:82] duration metric: took 399.594161ms for pod "kube-apiserver-ha-091565" in "kube-system" namespace to be "Ready" ...
	I0918 20:04:16.860467   26827 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-091565-m02" in "kube-system" namespace to be "Ready" ...
	I0918 20:04:17.055619   26827 request.go:632] Waited for 195.084711ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-091565-m02
	I0918 20:04:17.055737   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-091565-m02
	I0918 20:04:17.055750   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:17.055759   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:17.055765   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:17.059273   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:17.255382   26827 request.go:632] Waited for 195.243567ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:04:17.255449   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:04:17.255457   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:17.255464   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:17.255468   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:17.258940   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:17.259557   26827 pod_ready.go:93] pod "kube-apiserver-ha-091565-m02" in "kube-system" namespace has status "Ready":"True"
	I0918 20:04:17.259575   26827 pod_ready.go:82] duration metric: took 399.101471ms for pod "kube-apiserver-ha-091565-m02" in "kube-system" namespace to be "Ready" ...
	I0918 20:04:17.259586   26827 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-091565-m03" in "kube-system" namespace to be "Ready" ...
	I0918 20:04:17.455306   26827 request.go:632] Waited for 195.656133ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-091565-m03
	I0918 20:04:17.455375   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-091565-m03
	I0918 20:04:17.455381   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:17.455391   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:17.455398   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:17.459141   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:17.656266   26827 request.go:632] Waited for 196.147408ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:04:17.656316   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:04:17.656322   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:17.656332   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:17.656341   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:17.659786   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:17.660507   26827 pod_ready.go:93] pod "kube-apiserver-ha-091565-m03" in "kube-system" namespace has status "Ready":"True"
	I0918 20:04:17.660540   26827 pod_ready.go:82] duration metric: took 400.946368ms for pod "kube-apiserver-ha-091565-m03" in "kube-system" namespace to be "Ready" ...
	I0918 20:04:17.660565   26827 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-091565" in "kube-system" namespace to be "Ready" ...
	I0918 20:04:17.855951   26827 request.go:632] Waited for 195.288141ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-091565
	I0918 20:04:17.856066   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-091565
	I0918 20:04:17.856076   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:17.856086   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:17.856095   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:17.859991   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:18.055205   26827 request.go:632] Waited for 194.285561ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/nodes/ha-091565
	I0918 20:04:18.055268   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565
	I0918 20:04:18.055274   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:18.055281   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:18.055284   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:18.058520   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:18.059072   26827 pod_ready.go:93] pod "kube-controller-manager-ha-091565" in "kube-system" namespace has status "Ready":"True"
	I0918 20:04:18.059095   26827 pod_ready.go:82] duration metric: took 398.501653ms for pod "kube-controller-manager-ha-091565" in "kube-system" namespace to be "Ready" ...
	I0918 20:04:18.059105   26827 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-091565-m02" in "kube-system" namespace to be "Ready" ...
	I0918 20:04:18.256047   26827 request.go:632] Waited for 196.849365ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-091565-m02
	I0918 20:04:18.256125   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-091565-m02
	I0918 20:04:18.256133   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:18.256147   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:18.256156   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:18.260076   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:18.455423   26827 request.go:632] Waited for 194.302275ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:04:18.455494   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:04:18.455502   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:18.455513   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:18.455524   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:18.460052   26827 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0918 20:04:18.460616   26827 pod_ready.go:93] pod "kube-controller-manager-ha-091565-m02" in "kube-system" namespace has status "Ready":"True"
	I0918 20:04:18.460634   26827 pod_ready.go:82] duration metric: took 401.521777ms for pod "kube-controller-manager-ha-091565-m02" in "kube-system" namespace to be "Ready" ...
	I0918 20:04:18.460645   26827 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-091565-m03" in "kube-system" namespace to be "Ready" ...
	I0918 20:04:18.655830   26827 request.go:632] Waited for 195.117473ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-091565-m03
	I0918 20:04:18.655906   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-091565-m03
	I0918 20:04:18.655912   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:18.655926   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:18.655934   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:18.661181   26827 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0918 20:04:18.855471   26827 request.go:632] Waited for 193.339141ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:04:18.855546   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:04:18.855553   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:18.855560   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:18.855565   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:18.859369   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:18.860202   26827 pod_ready.go:93] pod "kube-controller-manager-ha-091565-m03" in "kube-system" namespace has status "Ready":"True"
	I0918 20:04:18.860225   26827 pod_ready.go:82] duration metric: took 399.570485ms for pod "kube-controller-manager-ha-091565-m03" in "kube-system" namespace to be "Ready" ...
	I0918 20:04:18.860239   26827 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4p8rj" in "kube-system" namespace to be "Ready" ...
	I0918 20:04:19.055323   26827 request.go:632] Waited for 195.018584ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4p8rj
	I0918 20:04:19.055407   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4p8rj
	I0918 20:04:19.055415   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:19.055425   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:19.055434   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:19.058851   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:19.255631   26827 request.go:632] Waited for 196.124849ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:04:19.255685   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:04:19.255692   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:19.255702   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:19.255710   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:19.260421   26827 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0918 20:04:19.261253   26827 pod_ready.go:93] pod "kube-proxy-4p8rj" in "kube-system" namespace has status "Ready":"True"
	I0918 20:04:19.261276   26827 pod_ready.go:82] duration metric: took 401.027744ms for pod "kube-proxy-4p8rj" in "kube-system" namespace to be "Ready" ...
	I0918 20:04:19.261289   26827 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4wm6h" in "kube-system" namespace to be "Ready" ...
	I0918 20:04:19.455210   26827 request.go:632] Waited for 193.843238ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4wm6h
	I0918 20:04:19.455295   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4wm6h
	I0918 20:04:19.455303   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:19.455314   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:19.455322   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:19.458975   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:19.656036   26827 request.go:632] Waited for 196.360424ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/nodes/ha-091565
	I0918 20:04:19.656109   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565
	I0918 20:04:19.656115   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:19.656122   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:19.656126   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:19.659749   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:19.660473   26827 pod_ready.go:93] pod "kube-proxy-4wm6h" in "kube-system" namespace has status "Ready":"True"
	I0918 20:04:19.660500   26827 pod_ready.go:82] duration metric: took 399.202104ms for pod "kube-proxy-4wm6h" in "kube-system" namespace to be "Ready" ...
	I0918 20:04:19.660513   26827 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-bxblp" in "kube-system" namespace to be "Ready" ...
	I0918 20:04:19.855602   26827 request.go:632] Waited for 195.016629ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bxblp
	I0918 20:04:19.855669   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bxblp
	I0918 20:04:19.855674   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:19.855684   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:19.855688   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:19.859561   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:20.055770   26827 request.go:632] Waited for 195.418705ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:04:20.055846   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:04:20.055852   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:20.055859   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:20.055866   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:20.059482   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:20.060369   26827 pod_ready.go:93] pod "kube-proxy-bxblp" in "kube-system" namespace has status "Ready":"True"
	I0918 20:04:20.060396   26827 pod_ready.go:82] duration metric: took 399.875436ms for pod "kube-proxy-bxblp" in "kube-system" namespace to be "Ready" ...
	I0918 20:04:20.060408   26827 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-091565" in "kube-system" namespace to be "Ready" ...
	I0918 20:04:20.255225   26827 request.go:632] Waited for 194.753676ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-091565
	I0918 20:04:20.255322   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-091565
	I0918 20:04:20.255331   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:20.255341   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:20.255351   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:20.259061   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:20.456103   26827 request.go:632] Waited for 196.430637ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/nodes/ha-091565
	I0918 20:04:20.456163   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565
	I0918 20:04:20.456168   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:20.456175   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:20.456179   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:20.459797   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:20.460332   26827 pod_ready.go:93] pod "kube-scheduler-ha-091565" in "kube-system" namespace has status "Ready":"True"
	I0918 20:04:20.460355   26827 pod_ready.go:82] duration metric: took 399.937556ms for pod "kube-scheduler-ha-091565" in "kube-system" namespace to be "Ready" ...
	I0918 20:04:20.460365   26827 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-091565-m02" in "kube-system" namespace to be "Ready" ...
	I0918 20:04:20.655303   26827 request.go:632] Waited for 194.860443ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-091565-m02
	I0918 20:04:20.655387   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-091565-m02
	I0918 20:04:20.655395   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:20.655405   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:20.655425   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:20.658807   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:20.855714   26827 request.go:632] Waited for 196.369108ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:04:20.855780   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:04:20.855787   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:20.855798   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:20.855804   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:20.859686   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:20.860506   26827 pod_ready.go:93] pod "kube-scheduler-ha-091565-m02" in "kube-system" namespace has status "Ready":"True"
	I0918 20:04:20.860527   26827 pod_ready.go:82] duration metric: took 400.151195ms for pod "kube-scheduler-ha-091565-m02" in "kube-system" namespace to be "Ready" ...
	I0918 20:04:20.860539   26827 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-091565-m03" in "kube-system" namespace to be "Ready" ...
	I0918 20:04:21.056006   26827 request.go:632] Waited for 195.380183ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-091565-m03
	I0918 20:04:21.056089   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-091565-m03
	I0918 20:04:21.056096   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:21.056104   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:21.056108   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:21.059632   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:21.255734   26827 request.go:632] Waited for 195.357475ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:04:21.255796   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:04:21.255801   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:21.255808   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:21.255813   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:21.259440   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:21.260300   26827 pod_ready.go:93] pod "kube-scheduler-ha-091565-m03" in "kube-system" namespace has status "Ready":"True"
	I0918 20:04:21.260322   26827 pod_ready.go:82] duration metric: took 399.775629ms for pod "kube-scheduler-ha-091565-m03" in "kube-system" namespace to be "Ready" ...
	I0918 20:04:21.260332   26827 pod_ready.go:39] duration metric: took 5.200469523s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
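
The sequence above walks every system-critical pod (coredns, etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler) and reports each one Ready. A rough sketch of that per-pod check with client-go follows; it is not minikube's code, the kubeconfig path is a placeholder, and only one of the label selectors from the log is used.

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podIsReady reports whether the pod's Ready condition is True.
    func podIsReady(pod *corev1.Pod) bool {
        for _, cond := range pod.Status.Conditions {
            if cond.Type == corev1.PodReady {
                return cond.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // One selector from the log; the others (k8s-app=kube-dns, component=kube-apiserver, ...)
        // work the same way.
        pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(),
            metav1.ListOptions{LabelSelector: "component=etcd"})
        if err != nil {
            panic(err)
        }
        for i := range pods.Items {
            fmt.Printf("%s ready=%v\n", pods.Items[i].Name, podIsReady(&pods.Items[i]))
        }
    }
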
	I0918 20:04:21.260346   26827 api_server.go:52] waiting for apiserver process to appear ...
	I0918 20:04:21.260416   26827 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 20:04:21.276372   26827 api_server.go:72] duration metric: took 24.03316608s to wait for apiserver process to appear ...
	I0918 20:04:21.276400   26827 api_server.go:88] waiting for apiserver healthz status ...
	I0918 20:04:21.276422   26827 api_server.go:253] Checking apiserver healthz at https://192.168.39.215:8443/healthz ...
	I0918 20:04:21.282493   26827 api_server.go:279] https://192.168.39.215:8443/healthz returned 200:
	ok
	I0918 20:04:21.282563   26827 round_trippers.go:463] GET https://192.168.39.215:8443/version
	I0918 20:04:21.282571   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:21.282579   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:21.282586   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:21.283373   26827 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0918 20:04:21.283434   26827 api_server.go:141] control plane version: v1.31.1
	I0918 20:04:21.283445   26827 api_server.go:131] duration metric: took 7.03877ms to wait for apiserver health ...
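
Before moving on, the test confirms a kube-apiserver process exists (via pgrep over SSH) and that /healthz answers 200 with body "ok". A small illustrative sketch of both checks follows; the endpoint is taken from the log, the pgrep pattern is simplified, and certificate verification is skipped only to keep the example short (a real client would trust the cluster CA).

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "os/exec"
    )

    func main() {
        // Process check; the test runs the equivalent remotely over SSH.
        if err := exec.Command("pgrep", "-x", "kube-apiserver").Run(); err != nil {
            fmt.Println("kube-apiserver process not found:", err)
        }

        // healthz check against the endpoint seen in the log.
        client := &http.Client{Transport: &http.Transport{
            TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
        }}
        resp, err := client.Get("https://192.168.39.215:8443/healthz")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("healthz: %d %s\n", resp.StatusCode, string(body)) // expect: 200 ok
    }
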
	I0918 20:04:21.283452   26827 system_pods.go:43] waiting for kube-system pods to appear ...
	I0918 20:04:21.455842   26827 request.go:632] Waited for 172.326435ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods
	I0918 20:04:21.455906   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods
	I0918 20:04:21.455913   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:21.455920   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:21.455924   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:21.461721   26827 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0918 20:04:21.469221   26827 system_pods.go:59] 24 kube-system pods found
	I0918 20:04:21.469250   26827 system_pods.go:61] "coredns-7c65d6cfc9-8zcqk" [644e8147-96e9-41a1-99b8-d2de17e4798c] Running
	I0918 20:04:21.469256   26827 system_pods.go:61] "coredns-7c65d6cfc9-w97kk" [70428cd6-0523-44c8-89f3-62837b52ca80] Running
	I0918 20:04:21.469260   26827 system_pods.go:61] "etcd-ha-091565" [c5af3e98-c375-448c-9cac-6a83b115ca71] Running
	I0918 20:04:21.469263   26827 system_pods.go:61] "etcd-ha-091565-m02" [71e1c78e-6a11-4b46-baa2-83c98d666cee] Running
	I0918 20:04:21.469267   26827 system_pods.go:61] "etcd-ha-091565-m03" [9c1e9878-8b36-4e4d-9fc1-b81e4cd49c08] Running
	I0918 20:04:21.469270   26827 system_pods.go:61] "kindnet-5rh2w" [8fbd3b35-4d3a-497f-bbcf-0cc0b04ec495] Running
	I0918 20:04:21.469273   26827 system_pods.go:61] "kindnet-7fl5w" [5c3a9d82-3815-4aa1-8d04-14be25394dcf] Running
	I0918 20:04:21.469278   26827 system_pods.go:61] "kindnet-bzsqr" [bf1e6f1b-d3ad-439a-9b9a-882e6e989a56] Running
	I0918 20:04:21.469282   26827 system_pods.go:61] "kube-apiserver-ha-091565" [1c2ddbd9-3f78-415b-af21-1d43925382c5] Running
	I0918 20:04:21.469285   26827 system_pods.go:61] "kube-apiserver-ha-091565-m02" [b482ba6c-e415-4e3c-aa56-c17a4f3bcc13] Running
	I0918 20:04:21.469288   26827 system_pods.go:61] "kube-apiserver-ha-091565-m03" [597eb4b7-df02-430e-98f9-24de20295e3b] Running
	I0918 20:04:21.469291   26827 system_pods.go:61] "kube-controller-manager-ha-091565" [3812cd1b-619d-4da8-b039-18f7adb53647] Running
	I0918 20:04:21.469295   26827 system_pods.go:61] "kube-controller-manager-ha-091565-m02" [2c3ecce6-b148-498b-b66e-e5c000f51940] Running
	I0918 20:04:21.469298   26827 system_pods.go:61] "kube-controller-manager-ha-091565-m03" [d9871df2-6370-47a6-98d4-fd9acfddd11a] Running
	I0918 20:04:21.469301   26827 system_pods.go:61] "kube-proxy-4p8rj" [ebe65af8-abb1-4ed3-a12f-b822ec09e891] Running
	I0918 20:04:21.469305   26827 system_pods.go:61] "kube-proxy-4wm6h" [d6904231-6f64-4447-9932-0cd5d692978b] Running
	I0918 20:04:21.469310   26827 system_pods.go:61] "kube-proxy-bxblp" [68143b05-afd1-409a-aaa6-8b3c6841bbfb] Running
	I0918 20:04:21.469314   26827 system_pods.go:61] "kube-scheduler-ha-091565" [ad2ac667-3328-4ad3-b736-c8c633140e2d] Running
	I0918 20:04:21.469319   26827 system_pods.go:61] "kube-scheduler-ha-091565-m02" [fd27db83-43f3-40a8-9ca7-5570f78d562b] Running
	I0918 20:04:21.469322   26827 system_pods.go:61] "kube-scheduler-ha-091565-m03" [c8432a2a-548b-4a97-852a-a18f82f406d2] Running
	I0918 20:04:21.469326   26827 system_pods.go:61] "kube-vip-ha-091565" [b3fccd60-ac88-4dda-b8bb-b1b8c45cbfe5] Running
	I0918 20:04:21.469332   26827 system_pods.go:61] "kube-vip-ha-091565-m02" [12ca12ad-6dee-4b50-9273-c48d8f06acf4] Running
	I0918 20:04:21.469336   26827 system_pods.go:61] "kube-vip-ha-091565-m03" [8389ddfd-fca7-4698-a747-4eedf299dc4a] Running
	I0918 20:04:21.469341   26827 system_pods.go:61] "storage-provisioner" [b7dffb85-905b-4166-a680-34c77cf87d09] Running
	I0918 20:04:21.469347   26827 system_pods.go:74] duration metric: took 185.890335ms to wait for pod list to return data ...
	I0918 20:04:21.469357   26827 default_sa.go:34] waiting for default service account to be created ...
	I0918 20:04:21.655850   26827 request.go:632] Waited for 186.415202ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/namespaces/default/serviceaccounts
	I0918 20:04:21.655922   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/namespaces/default/serviceaccounts
	I0918 20:04:21.655931   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:21.655941   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:21.655949   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:21.659629   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:21.659759   26827 default_sa.go:45] found service account: "default"
	I0918 20:04:21.659777   26827 default_sa.go:55] duration metric: took 190.414417ms for default service account to be created ...
	I0918 20:04:21.659788   26827 system_pods.go:116] waiting for k8s-apps to be running ...
	I0918 20:04:21.856111   26827 request.go:632] Waited for 196.255287ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods
	I0918 20:04:21.856170   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods
	I0918 20:04:21.856175   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:21.856182   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:21.856186   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:21.863662   26827 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0918 20:04:21.871644   26827 system_pods.go:86] 24 kube-system pods found
	I0918 20:04:21.871682   26827 system_pods.go:89] "coredns-7c65d6cfc9-8zcqk" [644e8147-96e9-41a1-99b8-d2de17e4798c] Running
	I0918 20:04:21.871691   26827 system_pods.go:89] "coredns-7c65d6cfc9-w97kk" [70428cd6-0523-44c8-89f3-62837b52ca80] Running
	I0918 20:04:21.871696   26827 system_pods.go:89] "etcd-ha-091565" [c5af3e98-c375-448c-9cac-6a83b115ca71] Running
	I0918 20:04:21.871703   26827 system_pods.go:89] "etcd-ha-091565-m02" [71e1c78e-6a11-4b46-baa2-83c98d666cee] Running
	I0918 20:04:21.871708   26827 system_pods.go:89] "etcd-ha-091565-m03" [9c1e9878-8b36-4e4d-9fc1-b81e4cd49c08] Running
	I0918 20:04:21.871713   26827 system_pods.go:89] "kindnet-5rh2w" [8fbd3b35-4d3a-497f-bbcf-0cc0b04ec495] Running
	I0918 20:04:21.871719   26827 system_pods.go:89] "kindnet-7fl5w" [5c3a9d82-3815-4aa1-8d04-14be25394dcf] Running
	I0918 20:04:21.871725   26827 system_pods.go:89] "kindnet-bzsqr" [bf1e6f1b-d3ad-439a-9b9a-882e6e989a56] Running
	I0918 20:04:21.871731   26827 system_pods.go:89] "kube-apiserver-ha-091565" [1c2ddbd9-3f78-415b-af21-1d43925382c5] Running
	I0918 20:04:21.871739   26827 system_pods.go:89] "kube-apiserver-ha-091565-m02" [b482ba6c-e415-4e3c-aa56-c17a4f3bcc13] Running
	I0918 20:04:21.871746   26827 system_pods.go:89] "kube-apiserver-ha-091565-m03" [597eb4b7-df02-430e-98f9-24de20295e3b] Running
	I0918 20:04:21.871756   26827 system_pods.go:89] "kube-controller-manager-ha-091565" [3812cd1b-619d-4da8-b039-18f7adb53647] Running
	I0918 20:04:21.871763   26827 system_pods.go:89] "kube-controller-manager-ha-091565-m02" [2c3ecce6-b148-498b-b66e-e5c000f51940] Running
	I0918 20:04:21.871771   26827 system_pods.go:89] "kube-controller-manager-ha-091565-m03" [d9871df2-6370-47a6-98d4-fd9acfddd11a] Running
	I0918 20:04:21.871778   26827 system_pods.go:89] "kube-proxy-4p8rj" [ebe65af8-abb1-4ed3-a12f-b822ec09e891] Running
	I0918 20:04:21.871786   26827 system_pods.go:89] "kube-proxy-4wm6h" [d6904231-6f64-4447-9932-0cd5d692978b] Running
	I0918 20:04:21.871792   26827 system_pods.go:89] "kube-proxy-bxblp" [68143b05-afd1-409a-aaa6-8b3c6841bbfb] Running
	I0918 20:04:21.871799   26827 system_pods.go:89] "kube-scheduler-ha-091565" [ad2ac667-3328-4ad3-b736-c8c633140e2d] Running
	I0918 20:04:21.871805   26827 system_pods.go:89] "kube-scheduler-ha-091565-m02" [fd27db83-43f3-40a8-9ca7-5570f78d562b] Running
	I0918 20:04:21.871813   26827 system_pods.go:89] "kube-scheduler-ha-091565-m03" [c8432a2a-548b-4a97-852a-a18f82f406d2] Running
	I0918 20:04:21.871819   26827 system_pods.go:89] "kube-vip-ha-091565" [b3fccd60-ac88-4dda-b8bb-b1b8c45cbfe5] Running
	I0918 20:04:21.871827   26827 system_pods.go:89] "kube-vip-ha-091565-m02" [12ca12ad-6dee-4b50-9273-c48d8f06acf4] Running
	I0918 20:04:21.871833   26827 system_pods.go:89] "kube-vip-ha-091565-m03" [8389ddfd-fca7-4698-a747-4eedf299dc4a] Running
	I0918 20:04:21.871838   26827 system_pods.go:89] "storage-provisioner" [b7dffb85-905b-4166-a680-34c77cf87d09] Running
	I0918 20:04:21.871847   26827 system_pods.go:126] duration metric: took 212.052235ms to wait for k8s-apps to be running ...
	I0918 20:04:21.871859   26827 system_svc.go:44] waiting for kubelet service to be running ....
	I0918 20:04:21.871912   26827 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0918 20:04:21.890997   26827 system_svc.go:56] duration metric: took 19.130745ms WaitForService to wait for kubelet
	I0918 20:04:21.891029   26827 kubeadm.go:582] duration metric: took 24.647829851s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0918 20:04:21.891052   26827 node_conditions.go:102] verifying NodePressure condition ...
	I0918 20:04:22.055297   26827 request.go:632] Waited for 164.164035ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/nodes
	I0918 20:04:22.055364   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes
	I0918 20:04:22.055371   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:22.055381   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:22.055387   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:22.060147   26827 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0918 20:04:22.061184   26827 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0918 20:04:22.061208   26827 node_conditions.go:123] node cpu capacity is 2
	I0918 20:04:22.061221   26827 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0918 20:04:22.061227   26827 node_conditions.go:123] node cpu capacity is 2
	I0918 20:04:22.061232   26827 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0918 20:04:22.061235   26827 node_conditions.go:123] node cpu capacity is 2
	I0918 20:04:22.061240   26827 node_conditions.go:105] duration metric: took 170.183013ms to run NodePressure ...
	I0918 20:04:22.061274   26827 start.go:241] waiting for startup goroutines ...
	I0918 20:04:22.061303   26827 start.go:255] writing updated cluster config ...
	I0918 20:04:22.061591   26827 ssh_runner.go:195] Run: rm -f paused
	I0918 20:04:22.113181   26827 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0918 20:04:22.115218   26827 out.go:177] * Done! kubectl is now configured to use "ha-091565" cluster and "default" namespace by default
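
For context on the readiness gates logged above (pod list, default service account, k8s-apps running, kubelet service, NodePressure), the following is a minimal client-go sketch of the same kind of wait loop — it is not minikube's implementation; the kubeconfig path, namespace, and timeout are illustrative assumptions only.

// Sketch: poll kube-system pods until all report Running, mirroring the
// "waiting for k8s-apps to be running" step seen in the log above.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: kubeconfig at the default location (~/.kube/config).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	deadline := time.Now().Add(6 * time.Minute) // illustrative timeout, not the test's value
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		running := 0
		for _, p := range pods.Items {
			if p.Status.Phase == corev1.PodRunning {
				running++
			}
		}
		if running == len(pods.Items) {
			fmt.Printf("all %d kube-system pods Running\n", running)
			return
		}
		// client-go additionally rate-limits bursts client-side, which is what the
		// "Waited ... due to client-side throttling" messages above report.
		time.Sleep(2 * time.Second)
	}
	panic("timed out waiting for kube-system pods")
}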
	
	
	==> CRI-O <==
	Sep 18 20:08:07 ha-091565 crio[663]: time="2024-09-18 20:08:07.639691077Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726690087639666317,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=eff0c3a1-9ae6-43ae-93e1-f7baae0aebbe name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 20:08:07 ha-091565 crio[663]: time="2024-09-18 20:08:07.640237663Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f34b1be8-79eb-4a03-b67e-9842d60b9ff5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 20:08:07 ha-091565 crio[663]: time="2024-09-18 20:08:07.640287937Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f34b1be8-79eb-4a03-b67e-9842d60b9ff5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 20:08:07 ha-091565 crio[663]: time="2024-09-18 20:08:07.640545387Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7e40397db0622fd3967d20535016d957560de3976f7ca7e977fa2b186ff9f3ec,PodSandboxId:32509037cc4e4898a637b1de6a87eabd6d7504444bcf3f541a5234fb1ed4197a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726689867249474588,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-xhmzx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 16808919-56d0-40cd-b88e-28fb5a40b3a2,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f8cab8eef59352ca937c9b8e7056a419bb426b183a841f9e8a43366ecb1b283,PodSandboxId:16c38fe68d94e679d8aaead9d1be6f411a6984d3a367813f390d2ffc174a7047,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726689721940579425,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8zcqk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 644e8147-96e9-41a1-99b8-d2de17e4798c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26162985f4a281a161b88fc27b5cf0dbedef6117a63f0b64cef28f41346dbd3e,PodSandboxId:12355cb306ab12e321590a927e5f0cd88fa0f505aa7bd470359b1d9c47a7b425,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726689721876307714,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: b7dffb85-905b-4166-a680-34c77cf87d09,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b5c6773eef44d848fd26ebcf13ea82ebc3c1e1bd23618a944bf5f6d6a0e7bb8,PodSandboxId:b0c496c53b4c9e8442d0d8a6bcef8269a3baad2995c3647784331bd1b03a34df,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726689721549289388,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-w97kk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70428cd6-05
23-44c8-89f3-62837b52ca80,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52ae20a53e17beece735496b8e65537bbebfef5da53e8a2e7a74820fee56cd63,PodSandboxId:e5053f7183e292c9c0d41caef5020bdc6b59ce14f38a3c98750b9667d61f9d14,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17266897
09419057741,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7fl5w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c3a9d82-3815-4aa1-8d04-14be25394dcf,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9aa80c6b1f558ba9ef5b7e70df3690995e271e9296b0cc3e71ade739843f53a,PodSandboxId:e7fdb7e540529f305bf2eb00c376fe4bdf92019e975ef0252046f5f4a250d965,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726689709057454293,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4wm6h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6904231-6f64-4447-9932-0cd5d692978b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f40b55a2539761c632fe964601680ae492c74cf2cad4ef1130393c06f576f943,PodSandboxId:db3221d8284579089e2d011bb197abfc5cd3d1fbb8db7264f21262b09bca210e,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726689699933969237,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-091565,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfccef71d5ed662cfd9075de2c23ae11,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c435dbd5b540cdb87d1f79fc2aadca19db9c46aff42e67d78c5d9c8eee1b6de,PodSandboxId:01b7098c9837574b56ca73c576ce61c56e6190d34e8f2c4814bae7a1f5952e5c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726689697296263323,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-091565,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42a715c9fe466585ee83dba1966182d5,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4358e16fe123bb31210776fce1969072fb954b1f7cc3b51c459cd155992c1be5,PodSandboxId:ae412aa32e14f8d37184d672500203218827aa56a04162cf4a4e1bde7a1c9833,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726689697193494552,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-
ha-091565,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed5bb72bdd2d49ac86ea107effb85714,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f141188bda32520e26d392de6e6e49a863d49aacb461b7f3d5649d68557e96d3,PodSandboxId:bfb245c345b6c6bef07d02ba22a1fa12793ef353b2c4551e8ccd42337869e4b3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726689697212123129,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-091565,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: fb44b1ca5e1af25a31cc20be38506f2d,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97b3f8978c2594307cdfef72e8b8aebac842152f9c1893bb34700b6e0932027e,PodSandboxId:0555602e8b34d796a1a62f9ff273a78ebf4b7a7f1459439beaa95a2d3c02577b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726689697128272787,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-091565,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7f1e1feaa8e654300c84052131dd12a,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f34b1be8-79eb-4a03-b67e-9842d60b9ff5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 20:08:07 ha-091565 crio[663]: time="2024-09-18 20:08:07.680407118Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=737aa1b0-327c-4698-86ea-680112d3ef66 name=/runtime.v1.RuntimeService/Version
	Sep 18 20:08:07 ha-091565 crio[663]: time="2024-09-18 20:08:07.680519722Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=737aa1b0-327c-4698-86ea-680112d3ef66 name=/runtime.v1.RuntimeService/Version
	Sep 18 20:08:07 ha-091565 crio[663]: time="2024-09-18 20:08:07.684779872Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ed2af418-495d-43c3-8ef2-5dae4dbd1a35 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 20:08:07 ha-091565 crio[663]: time="2024-09-18 20:08:07.685277528Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726690087685250993,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ed2af418-495d-43c3-8ef2-5dae4dbd1a35 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 20:08:07 ha-091565 crio[663]: time="2024-09-18 20:08:07.685971532Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=00165690-42dd-49ca-abba-dab496e737e3 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 20:08:07 ha-091565 crio[663]: time="2024-09-18 20:08:07.686049592Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=00165690-42dd-49ca-abba-dab496e737e3 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 20:08:07 ha-091565 crio[663]: time="2024-09-18 20:08:07.686339099Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7e40397db0622fd3967d20535016d957560de3976f7ca7e977fa2b186ff9f3ec,PodSandboxId:32509037cc4e4898a637b1de6a87eabd6d7504444bcf3f541a5234fb1ed4197a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726689867249474588,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-xhmzx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 16808919-56d0-40cd-b88e-28fb5a40b3a2,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f8cab8eef59352ca937c9b8e7056a419bb426b183a841f9e8a43366ecb1b283,PodSandboxId:16c38fe68d94e679d8aaead9d1be6f411a6984d3a367813f390d2ffc174a7047,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726689721940579425,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8zcqk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 644e8147-96e9-41a1-99b8-d2de17e4798c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26162985f4a281a161b88fc27b5cf0dbedef6117a63f0b64cef28f41346dbd3e,PodSandboxId:12355cb306ab12e321590a927e5f0cd88fa0f505aa7bd470359b1d9c47a7b425,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726689721876307714,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: b7dffb85-905b-4166-a680-34c77cf87d09,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b5c6773eef44d848fd26ebcf13ea82ebc3c1e1bd23618a944bf5f6d6a0e7bb8,PodSandboxId:b0c496c53b4c9e8442d0d8a6bcef8269a3baad2995c3647784331bd1b03a34df,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726689721549289388,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-w97kk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70428cd6-05
23-44c8-89f3-62837b52ca80,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52ae20a53e17beece735496b8e65537bbebfef5da53e8a2e7a74820fee56cd63,PodSandboxId:e5053f7183e292c9c0d41caef5020bdc6b59ce14f38a3c98750b9667d61f9d14,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17266897
09419057741,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7fl5w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c3a9d82-3815-4aa1-8d04-14be25394dcf,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9aa80c6b1f558ba9ef5b7e70df3690995e271e9296b0cc3e71ade739843f53a,PodSandboxId:e7fdb7e540529f305bf2eb00c376fe4bdf92019e975ef0252046f5f4a250d965,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726689709057454293,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4wm6h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6904231-6f64-4447-9932-0cd5d692978b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f40b55a2539761c632fe964601680ae492c74cf2cad4ef1130393c06f576f943,PodSandboxId:db3221d8284579089e2d011bb197abfc5cd3d1fbb8db7264f21262b09bca210e,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726689699933969237,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-091565,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfccef71d5ed662cfd9075de2c23ae11,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c435dbd5b540cdb87d1f79fc2aadca19db9c46aff42e67d78c5d9c8eee1b6de,PodSandboxId:01b7098c9837574b56ca73c576ce61c56e6190d34e8f2c4814bae7a1f5952e5c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726689697296263323,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-091565,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42a715c9fe466585ee83dba1966182d5,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4358e16fe123bb31210776fce1969072fb954b1f7cc3b51c459cd155992c1be5,PodSandboxId:ae412aa32e14f8d37184d672500203218827aa56a04162cf4a4e1bde7a1c9833,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726689697193494552,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-
ha-091565,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed5bb72bdd2d49ac86ea107effb85714,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f141188bda32520e26d392de6e6e49a863d49aacb461b7f3d5649d68557e96d3,PodSandboxId:bfb245c345b6c6bef07d02ba22a1fa12793ef353b2c4551e8ccd42337869e4b3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726689697212123129,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-091565,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: fb44b1ca5e1af25a31cc20be38506f2d,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97b3f8978c2594307cdfef72e8b8aebac842152f9c1893bb34700b6e0932027e,PodSandboxId:0555602e8b34d796a1a62f9ff273a78ebf4b7a7f1459439beaa95a2d3c02577b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726689697128272787,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-091565,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7f1e1feaa8e654300c84052131dd12a,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=00165690-42dd-49ca-abba-dab496e737e3 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 20:08:07 ha-091565 crio[663]: time="2024-09-18 20:08:07.727550945Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3bcc4b1e-c2f1-413d-96f9-4b202846829b name=/runtime.v1.RuntimeService/Version
	Sep 18 20:08:07 ha-091565 crio[663]: time="2024-09-18 20:08:07.727644581Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3bcc4b1e-c2f1-413d-96f9-4b202846829b name=/runtime.v1.RuntimeService/Version
	Sep 18 20:08:07 ha-091565 crio[663]: time="2024-09-18 20:08:07.729184575Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=567e219d-15ec-4093-82a5-6e68e1c18799 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 20:08:07 ha-091565 crio[663]: time="2024-09-18 20:08:07.729617439Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726690087729584123,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=567e219d-15ec-4093-82a5-6e68e1c18799 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 20:08:07 ha-091565 crio[663]: time="2024-09-18 20:08:07.730133390Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c3d670dd-8434-41be-8815-8c323e05a81f name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 20:08:07 ha-091565 crio[663]: time="2024-09-18 20:08:07.730188134Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c3d670dd-8434-41be-8815-8c323e05a81f name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 20:08:07 ha-091565 crio[663]: time="2024-09-18 20:08:07.730400589Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7e40397db0622fd3967d20535016d957560de3976f7ca7e977fa2b186ff9f3ec,PodSandboxId:32509037cc4e4898a637b1de6a87eabd6d7504444bcf3f541a5234fb1ed4197a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726689867249474588,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-xhmzx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 16808919-56d0-40cd-b88e-28fb5a40b3a2,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f8cab8eef59352ca937c9b8e7056a419bb426b183a841f9e8a43366ecb1b283,PodSandboxId:16c38fe68d94e679d8aaead9d1be6f411a6984d3a367813f390d2ffc174a7047,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726689721940579425,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8zcqk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 644e8147-96e9-41a1-99b8-d2de17e4798c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26162985f4a281a161b88fc27b5cf0dbedef6117a63f0b64cef28f41346dbd3e,PodSandboxId:12355cb306ab12e321590a927e5f0cd88fa0f505aa7bd470359b1d9c47a7b425,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726689721876307714,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: b7dffb85-905b-4166-a680-34c77cf87d09,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b5c6773eef44d848fd26ebcf13ea82ebc3c1e1bd23618a944bf5f6d6a0e7bb8,PodSandboxId:b0c496c53b4c9e8442d0d8a6bcef8269a3baad2995c3647784331bd1b03a34df,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726689721549289388,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-w97kk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70428cd6-05
23-44c8-89f3-62837b52ca80,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52ae20a53e17beece735496b8e65537bbebfef5da53e8a2e7a74820fee56cd63,PodSandboxId:e5053f7183e292c9c0d41caef5020bdc6b59ce14f38a3c98750b9667d61f9d14,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17266897
09419057741,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7fl5w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c3a9d82-3815-4aa1-8d04-14be25394dcf,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9aa80c6b1f558ba9ef5b7e70df3690995e271e9296b0cc3e71ade739843f53a,PodSandboxId:e7fdb7e540529f305bf2eb00c376fe4bdf92019e975ef0252046f5f4a250d965,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726689709057454293,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4wm6h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6904231-6f64-4447-9932-0cd5d692978b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f40b55a2539761c632fe964601680ae492c74cf2cad4ef1130393c06f576f943,PodSandboxId:db3221d8284579089e2d011bb197abfc5cd3d1fbb8db7264f21262b09bca210e,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726689699933969237,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-091565,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfccef71d5ed662cfd9075de2c23ae11,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c435dbd5b540cdb87d1f79fc2aadca19db9c46aff42e67d78c5d9c8eee1b6de,PodSandboxId:01b7098c9837574b56ca73c576ce61c56e6190d34e8f2c4814bae7a1f5952e5c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726689697296263323,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-091565,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42a715c9fe466585ee83dba1966182d5,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4358e16fe123bb31210776fce1969072fb954b1f7cc3b51c459cd155992c1be5,PodSandboxId:ae412aa32e14f8d37184d672500203218827aa56a04162cf4a4e1bde7a1c9833,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726689697193494552,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-
ha-091565,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed5bb72bdd2d49ac86ea107effb85714,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f141188bda32520e26d392de6e6e49a863d49aacb461b7f3d5649d68557e96d3,PodSandboxId:bfb245c345b6c6bef07d02ba22a1fa12793ef353b2c4551e8ccd42337869e4b3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726689697212123129,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-091565,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: fb44b1ca5e1af25a31cc20be38506f2d,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97b3f8978c2594307cdfef72e8b8aebac842152f9c1893bb34700b6e0932027e,PodSandboxId:0555602e8b34d796a1a62f9ff273a78ebf4b7a7f1459439beaa95a2d3c02577b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726689697128272787,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-091565,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7f1e1feaa8e654300c84052131dd12a,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c3d670dd-8434-41be-8815-8c323e05a81f name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 20:08:07 ha-091565 crio[663]: time="2024-09-18 20:08:07.770847651Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=45ff0d87-3e51-4a73-be64-c40b79d1e49b name=/runtime.v1.RuntimeService/Version
	Sep 18 20:08:07 ha-091565 crio[663]: time="2024-09-18 20:08:07.770986099Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=45ff0d87-3e51-4a73-be64-c40b79d1e49b name=/runtime.v1.RuntimeService/Version
	Sep 18 20:08:07 ha-091565 crio[663]: time="2024-09-18 20:08:07.772290619Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=be124d5c-5f4c-4417-abcd-0be285536545 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 20:08:07 ha-091565 crio[663]: time="2024-09-18 20:08:07.772937797Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726690087772841526,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=be124d5c-5f4c-4417-abcd-0be285536545 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 20:08:07 ha-091565 crio[663]: time="2024-09-18 20:08:07.774017332Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=066f79b0-0a43-42f8-aa8e-70b2d408ec9b name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 20:08:07 ha-091565 crio[663]: time="2024-09-18 20:08:07.774092153Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=066f79b0-0a43-42f8-aa8e-70b2d408ec9b name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 20:08:07 ha-091565 crio[663]: time="2024-09-18 20:08:07.774360757Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7e40397db0622fd3967d20535016d957560de3976f7ca7e977fa2b186ff9f3ec,PodSandboxId:32509037cc4e4898a637b1de6a87eabd6d7504444bcf3f541a5234fb1ed4197a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726689867249474588,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-xhmzx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 16808919-56d0-40cd-b88e-28fb5a40b3a2,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f8cab8eef59352ca937c9b8e7056a419bb426b183a841f9e8a43366ecb1b283,PodSandboxId:16c38fe68d94e679d8aaead9d1be6f411a6984d3a367813f390d2ffc174a7047,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726689721940579425,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8zcqk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 644e8147-96e9-41a1-99b8-d2de17e4798c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26162985f4a281a161b88fc27b5cf0dbedef6117a63f0b64cef28f41346dbd3e,PodSandboxId:12355cb306ab12e321590a927e5f0cd88fa0f505aa7bd470359b1d9c47a7b425,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726689721876307714,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: b7dffb85-905b-4166-a680-34c77cf87d09,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b5c6773eef44d848fd26ebcf13ea82ebc3c1e1bd23618a944bf5f6d6a0e7bb8,PodSandboxId:b0c496c53b4c9e8442d0d8a6bcef8269a3baad2995c3647784331bd1b03a34df,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726689721549289388,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-w97kk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70428cd6-05
23-44c8-89f3-62837b52ca80,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52ae20a53e17beece735496b8e65537bbebfef5da53e8a2e7a74820fee56cd63,PodSandboxId:e5053f7183e292c9c0d41caef5020bdc6b59ce14f38a3c98750b9667d61f9d14,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17266897
09419057741,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7fl5w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c3a9d82-3815-4aa1-8d04-14be25394dcf,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9aa80c6b1f558ba9ef5b7e70df3690995e271e9296b0cc3e71ade739843f53a,PodSandboxId:e7fdb7e540529f305bf2eb00c376fe4bdf92019e975ef0252046f5f4a250d965,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726689709057454293,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4wm6h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6904231-6f64-4447-9932-0cd5d692978b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f40b55a2539761c632fe964601680ae492c74cf2cad4ef1130393c06f576f943,PodSandboxId:db3221d8284579089e2d011bb197abfc5cd3d1fbb8db7264f21262b09bca210e,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726689699933969237,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-091565,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfccef71d5ed662cfd9075de2c23ae11,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c435dbd5b540cdb87d1f79fc2aadca19db9c46aff42e67d78c5d9c8eee1b6de,PodSandboxId:01b7098c9837574b56ca73c576ce61c56e6190d34e8f2c4814bae7a1f5952e5c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726689697296263323,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-091565,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42a715c9fe466585ee83dba1966182d5,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4358e16fe123bb31210776fce1969072fb954b1f7cc3b51c459cd155992c1be5,PodSandboxId:ae412aa32e14f8d37184d672500203218827aa56a04162cf4a4e1bde7a1c9833,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726689697193494552,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-
ha-091565,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed5bb72bdd2d49ac86ea107effb85714,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f141188bda32520e26d392de6e6e49a863d49aacb461b7f3d5649d68557e96d3,PodSandboxId:bfb245c345b6c6bef07d02ba22a1fa12793ef353b2c4551e8ccd42337869e4b3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726689697212123129,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-091565,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: fb44b1ca5e1af25a31cc20be38506f2d,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97b3f8978c2594307cdfef72e8b8aebac842152f9c1893bb34700b6e0932027e,PodSandboxId:0555602e8b34d796a1a62f9ff273a78ebf4b7a7f1459439beaa95a2d3c02577b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726689697128272787,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-091565,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7f1e1feaa8e654300c84052131dd12a,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=066f79b0-0a43-42f8-aa8e-70b2d408ec9b name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	7e40397db0622       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   32509037cc4e4       busybox-7dff88458-xhmzx
	4f8cab8eef593       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   16c38fe68d94e       coredns-7c65d6cfc9-8zcqk
	26162985f4a28       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   12355cb306ab1       storage-provisioner
	9b5c6773eef44       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   b0c496c53b4c9       coredns-7c65d6cfc9-w97kk
	52ae20a53e17b       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      6 minutes ago       Running             kindnet-cni               0                   e5053f7183e29       kindnet-7fl5w
	c9aa80c6b1f55       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      6 minutes ago       Running             kube-proxy                0                   e7fdb7e540529       kube-proxy-4wm6h
	f40b55a253976       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     6 minutes ago       Running             kube-vip                  0                   db3221d828457       kube-vip-ha-091565
	8c435dbd5b540       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      6 minutes ago       Running             kube-scheduler            0                   01b7098c98375       kube-scheduler-ha-091565
	f141188bda325       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      6 minutes ago       Running             kube-apiserver            0                   bfb245c345b6c       kube-apiserver-ha-091565
	4358e16fe123b       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   ae412aa32e14f       etcd-ha-091565
	97b3f8978c259       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      6 minutes ago       Running             kube-controller-manager   0                   0555602e8b34d       kube-controller-manager-ha-091565
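For context, the container status table above is CRI-level output gathered from the primary node. A minimal way to reproduce it by hand (a sketch, assuming the ha-091565 profile named in the node labels and that crictl is available inside the minikube VM as usual) is:

    # list all containers known to CRI-O on the control-plane node
    minikube ssh -p ha-091565 -- sudo crictl ps -a
    # list the pod sandboxes referenced in the POD ID column
    minikube ssh -p ha-091565 -- sudo crictl pods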
	
	
	==> coredns [4f8cab8eef59352ca937c9b8e7056a419bb426b183a841f9e8a43366ecb1b283] <==
	[INFO] 10.244.0.4:46368 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000070924s
	[INFO] 10.244.1.2:33610 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000192256s
	[INFO] 10.244.1.2:44224 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.004970814s
	[INFO] 10.244.1.2:38504 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000245166s
	[INFO] 10.244.1.2:33749 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000201604s
	[INFO] 10.244.1.2:44283 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003884102s
	[INFO] 10.244.1.2:32970 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000204769s
	[INFO] 10.244.1.2:52008 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000243831s
	[INFO] 10.244.2.2:50260 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000163913s
	[INFO] 10.244.2.2:55732 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001811166s
	[INFO] 10.244.2.2:39226 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00012772s
	[INFO] 10.244.2.2:53709 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0000925s
	[INFO] 10.244.2.2:41092 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000125187s
	[INFO] 10.244.0.4:40054 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000124612s
	[INFO] 10.244.0.4:38790 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001299276s
	[INFO] 10.244.0.4:59253 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000062856s
	[INFO] 10.244.0.4:38256 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000094015s
	[INFO] 10.244.1.2:44940 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000153669s
	[INFO] 10.244.1.2:48450 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000097947s
	[INFO] 10.244.0.4:38580 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000117553s
	[INFO] 10.244.2.2:59546 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000170402s
	[INFO] 10.244.2.2:49026 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000189642s
	[INFO] 10.244.2.2:45658 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000151371s
	[INFO] 10.244.0.4:51397 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000169114s
	[INFO] 10.244.0.4:47813 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000155527s
	
	
	==> coredns [9b5c6773eef44d848fd26ebcf13ea82ebc3c1e1bd23618a944bf5f6d6a0e7bb8] <==
	[INFO] 10.244.0.4:40496 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001977875s
	[INFO] 10.244.1.2:55891 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000166003s
	[INFO] 10.244.2.2:51576 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001523061s
	[INFO] 10.244.2.2:45932 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000147698s
	[INFO] 10.244.2.2:48639 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000087315s
	[INFO] 10.244.0.4:52361 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001834081s
	[INFO] 10.244.0.4:55907 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000221265s
	[INFO] 10.244.0.4:58409 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000117627s
	[INFO] 10.244.0.4:50242 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000115347s
	[INFO] 10.244.1.2:47046 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000136453s
	[INFO] 10.244.1.2:43799 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000196628s
	[INFO] 10.244.2.2:55965 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000123662s
	[INFO] 10.244.2.2:50433 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000098915s
	[INFO] 10.244.2.2:53589 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000068105s
	[INFO] 10.244.2.2:34234 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000074s
	[INFO] 10.244.0.4:45468 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000084304s
	[INFO] 10.244.0.4:51889 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000073683s
	[INFO] 10.244.0.4:50414 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000047051s
	[INFO] 10.244.1.2:45104 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000139109s
	[INFO] 10.244.1.2:42703 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00019857s
	[INFO] 10.244.1.2:45604 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000184516s
	[INFO] 10.244.1.2:54679 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00010429s
	[INFO] 10.244.2.2:37265 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000089491s
	[INFO] 10.244.0.4:58464 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000108633s
	[INFO] 10.244.0.4:60733 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0000682s
	
	
	==> describe nodes <==
	Name:               ha-091565
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-091565
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=85073601a832bd4bbda5d11fa91feafff6ec6b91
	                    minikube.k8s.io/name=ha-091565
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_18T20_01_44_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 18 Sep 2024 20:01:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-091565
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 18 Sep 2024 20:08:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 18 Sep 2024 20:04:47 +0000   Wed, 18 Sep 2024 20:01:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 18 Sep 2024 20:04:47 +0000   Wed, 18 Sep 2024 20:01:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 18 Sep 2024 20:04:47 +0000   Wed, 18 Sep 2024 20:01:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 18 Sep 2024 20:04:47 +0000   Wed, 18 Sep 2024 20:02:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.215
	  Hostname:    ha-091565
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a62ed2f9eda04eb9bbdd5bc2c8925018
	  System UUID:                a62ed2f9-eda0-4eb9-bbdd-5bc2c8925018
	  Boot ID:                    e0c4d56b-81dc-4d69-9fe6-35f1341e336d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-xhmzx              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m46s
	  kube-system                 coredns-7c65d6cfc9-8zcqk             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m21s
	  kube-system                 coredns-7c65d6cfc9-w97kk             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m21s
	  kube-system                 etcd-ha-091565                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m25s
	  kube-system                 kindnet-7fl5w                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m21s
	  kube-system                 kube-apiserver-ha-091565             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m25s
	  kube-system                 kube-controller-manager-ha-091565    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m26s
	  kube-system                 kube-proxy-4wm6h                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m21s
	  kube-system                 kube-scheduler-ha-091565             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m25s
	  kube-system                 kube-vip-ha-091565                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m27s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m20s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m18s  kube-proxy       
	  Normal  Starting                 6m25s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m25s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m25s  kubelet          Node ha-091565 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m25s  kubelet          Node ha-091565 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m25s  kubelet          Node ha-091565 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m22s  node-controller  Node ha-091565 event: Registered Node ha-091565 in Controller
	  Normal  NodeReady                6m7s   kubelet          Node ha-091565 status is now: NodeReady
	  Normal  RegisteredNode           5m23s  node-controller  Node ha-091565 event: Registered Node ha-091565 in Controller
	  Normal  RegisteredNode           4m6s   node-controller  Node ha-091565 event: Registered Node ha-091565 in Controller
	
	
	Name:               ha-091565-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-091565-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=85073601a832bd4bbda5d11fa91feafff6ec6b91
	                    minikube.k8s.io/name=ha-091565
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_18T20_02_40_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 18 Sep 2024 20:02:37 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-091565-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 18 Sep 2024 20:05:41 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 18 Sep 2024 20:04:41 +0000   Wed, 18 Sep 2024 20:06:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 18 Sep 2024 20:04:41 +0000   Wed, 18 Sep 2024 20:06:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 18 Sep 2024 20:04:41 +0000   Wed, 18 Sep 2024 20:06:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 18 Sep 2024 20:04:41 +0000   Wed, 18 Sep 2024 20:06:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.92
	  Hostname:    ha-091565-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 725aeac5e21d42d69ce571d302d9f7bc
	  System UUID:                725aeac5-e21d-42d6-9ce5-71d302d9f7bc
	  Boot ID:                    e1d66727-ad6e-4cce-aca1-07f5fd60d891
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-45phf                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m46s
	  kube-system                 etcd-ha-091565-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m29s
	  kube-system                 kindnet-bzsqr                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m31s
	  kube-system                 kube-apiserver-ha-091565-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m30s
	  kube-system                 kube-controller-manager-ha-091565-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m30s
	  kube-system                 kube-proxy-bxblp                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m31s
	  kube-system                 kube-scheduler-ha-091565-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m30s
	  kube-system                 kube-vip-ha-091565-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m26s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m31s (x8 over 5m31s)  kubelet          Node ha-091565-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m31s (x8 over 5m31s)  kubelet          Node ha-091565-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m31s (x7 over 5m31s)  kubelet          Node ha-091565-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m31s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m27s                  node-controller  Node ha-091565-m02 event: Registered Node ha-091565-m02 in Controller
	  Normal  RegisteredNode           5m23s                  node-controller  Node ha-091565-m02 event: Registered Node ha-091565-m02 in Controller
	  Normal  RegisteredNode           4m6s                   node-controller  Node ha-091565-m02 event: Registered Node ha-091565-m02 in Controller
	  Normal  NodeNotReady             107s                   node-controller  Node ha-091565-m02 status is now: NodeNotReady
	
	
	Name:               ha-091565-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-091565-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=85073601a832bd4bbda5d11fa91feafff6ec6b91
	                    minikube.k8s.io/name=ha-091565
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_18T20_03_57_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 18 Sep 2024 20:03:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-091565-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 18 Sep 2024 20:08:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 18 Sep 2024 20:04:55 +0000   Wed, 18 Sep 2024 20:03:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 18 Sep 2024 20:04:55 +0000   Wed, 18 Sep 2024 20:03:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 18 Sep 2024 20:04:55 +0000   Wed, 18 Sep 2024 20:03:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 18 Sep 2024 20:04:55 +0000   Wed, 18 Sep 2024 20:04:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.53
	  Hostname:    ha-091565-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d7cb71d27a4f4e8b92a5e72c1afd8865
	  System UUID:                d7cb71d2-7a4f-4e8b-92a5-e72c1afd8865
	  Boot ID:                    df33972c-453a-48d6-99c0-49951abc69d7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-jjr2n                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m46s
	  kube-system                 etcd-ha-091565-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m13s
	  kube-system                 kindnet-5rh2w                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m14s
	  kube-system                 kube-apiserver-ha-091565-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m13s
	  kube-system                 kube-controller-manager-ha-091565-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m13s
	  kube-system                 kube-proxy-4p8rj                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m14s
	  kube-system                 kube-scheduler-ha-091565-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m13s
	  kube-system                 kube-vip-ha-091565-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m10s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m9s                   kube-proxy       
	  Normal  NodeAllocatableEnforced  4m15s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m14s (x8 over 4m15s)  kubelet          Node ha-091565-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m14s (x8 over 4m15s)  kubelet          Node ha-091565-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m14s (x7 over 4m15s)  kubelet          Node ha-091565-m03 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m13s                  node-controller  Node ha-091565-m03 event: Registered Node ha-091565-m03 in Controller
	  Normal  RegisteredNode           4m12s                  node-controller  Node ha-091565-m03 event: Registered Node ha-091565-m03 in Controller
	  Normal  RegisteredNode           4m6s                   node-controller  Node ha-091565-m03 event: Registered Node ha-091565-m03 in Controller
	
	
	Name:               ha-091565-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-091565-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=85073601a832bd4bbda5d11fa91feafff6ec6b91
	                    minikube.k8s.io/name=ha-091565
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_18T20_05_02_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 18 Sep 2024 20:05:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-091565-m04
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 18 Sep 2024 20:08:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 18 Sep 2024 20:05:31 +0000   Wed, 18 Sep 2024 20:05:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 18 Sep 2024 20:05:31 +0000   Wed, 18 Sep 2024 20:05:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 18 Sep 2024 20:05:31 +0000   Wed, 18 Sep 2024 20:05:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 18 Sep 2024 20:05:31 +0000   Wed, 18 Sep 2024 20:05:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.9
	  Hostname:    ha-091565-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 cb0096492d0c441d8778e11eb51e77d3
	  System UUID:                cb009649-2d0c-441d-8778-e11eb51e77d3
	  Boot ID:                    c3da5972-b725-4116-9206-7ac2fefa29cb
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-4xtjm       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m7s
	  kube-system                 kube-proxy-8qkpk    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m7s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 3m1s                 kube-proxy       
	  Normal  NodeAllocatableEnforced  3m8s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m7s                 node-controller  Node ha-091565-m04 event: Registered Node ha-091565-m04 in Controller
	  Normal  NodeHasSufficientMemory  3m7s (x2 over 3m8s)  kubelet          Node ha-091565-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m7s (x2 over 3m8s)  kubelet          Node ha-091565-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m7s (x2 over 3m8s)  kubelet          Node ha-091565-m04 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3m6s                 node-controller  Node ha-091565-m04 event: Registered Node ha-091565-m04 in Controller
	  Normal  RegisteredNode           3m3s                 node-controller  Node ha-091565-m04 event: Registered Node ha-091565-m04 in Controller
	  Normal  NodeReady                2m47s                kubelet          Node ha-091565-m04 status is now: NodeReady
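The node summaries above are the standard kubectl view of the cluster; note that ha-091565-m02 carries the unreachable NoExecute/NoSchedule taints and reports Unknown conditions ("Kubelet stopped posting node status"), while the other three nodes report Ready. A quick way to re-check this (a sketch, assuming kubectl has a context named ha-091565 for this cluster) is:

    # one-line status per node, including roles and internal IPs
    kubectl --context ha-091565 get nodes -o wide
    # full condition and taint detail for the stopped secondary
    kubectl --context ha-091565 describe node ha-091565-m02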
	
	
	==> dmesg <==
	[Sep18 20:01] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051316] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039151] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.792349] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.893273] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.904226] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +13.896131] systemd-fstab-generator[585]: Ignoring "noauto" option for root device
	[  +0.067482] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.062052] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.180384] systemd-fstab-generator[611]: Ignoring "noauto" option for root device
	[  +0.116835] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.268512] systemd-fstab-generator[653]: Ignoring "noauto" option for root device
	[  +3.829963] systemd-fstab-generator[747]: Ignoring "noauto" option for root device
	[  +4.147936] systemd-fstab-generator[884]: Ignoring "noauto" option for root device
	[  +0.060572] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.397640] kauditd_printk_skb: 79 callbacks suppressed
	[  +0.774401] systemd-fstab-generator[1308]: Ignoring "noauto" option for root device
	[  +5.898362] kauditd_printk_skb: 15 callbacks suppressed
	[Sep18 20:02] kauditd_printk_skb: 38 callbacks suppressed
	[ +41.961999] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [4358e16fe123bb31210776fce1969072fb954b1f7cc3b51c459cd155992c1be5] <==
	{"level":"warn","ts":"2024-09-18T20:08:08.051083Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce9e8f286885b37e","from":"ce9e8f286885b37e","remote-peer-id":"7208e3715ec3d11b","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-18T20:08:08.058493Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce9e8f286885b37e","from":"ce9e8f286885b37e","remote-peer-id":"7208e3715ec3d11b","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-18T20:08:08.062941Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce9e8f286885b37e","from":"ce9e8f286885b37e","remote-peer-id":"7208e3715ec3d11b","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-18T20:08:08.073693Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce9e8f286885b37e","from":"ce9e8f286885b37e","remote-peer-id":"7208e3715ec3d11b","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-18T20:08:08.080832Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce9e8f286885b37e","from":"ce9e8f286885b37e","remote-peer-id":"7208e3715ec3d11b","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-18T20:08:08.087929Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce9e8f286885b37e","from":"ce9e8f286885b37e","remote-peer-id":"7208e3715ec3d11b","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-18T20:08:08.091758Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce9e8f286885b37e","from":"ce9e8f286885b37e","remote-peer-id":"7208e3715ec3d11b","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-18T20:08:08.092738Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.92:2380/version","remote-member-id":"7208e3715ec3d11b","error":"Get \"https://192.168.39.92:2380/version\": dial tcp 192.168.39.92:2380: i/o timeout"}
	{"level":"warn","ts":"2024-09-18T20:08:08.092794Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"7208e3715ec3d11b","error":"Get \"https://192.168.39.92:2380/version\": dial tcp 192.168.39.92:2380: i/o timeout"}
	{"level":"warn","ts":"2024-09-18T20:08:08.095462Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce9e8f286885b37e","from":"ce9e8f286885b37e","remote-peer-id":"7208e3715ec3d11b","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-18T20:08:08.102593Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce9e8f286885b37e","from":"ce9e8f286885b37e","remote-peer-id":"7208e3715ec3d11b","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-18T20:08:08.108517Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce9e8f286885b37e","from":"ce9e8f286885b37e","remote-peer-id":"7208e3715ec3d11b","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-18T20:08:08.111135Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce9e8f286885b37e","from":"ce9e8f286885b37e","remote-peer-id":"7208e3715ec3d11b","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-18T20:08:08.116349Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce9e8f286885b37e","from":"ce9e8f286885b37e","remote-peer-id":"7208e3715ec3d11b","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-18T20:08:08.120448Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce9e8f286885b37e","from":"ce9e8f286885b37e","remote-peer-id":"7208e3715ec3d11b","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-18T20:08:08.123920Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce9e8f286885b37e","from":"ce9e8f286885b37e","remote-peer-id":"7208e3715ec3d11b","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-18T20:08:08.130188Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce9e8f286885b37e","from":"ce9e8f286885b37e","remote-peer-id":"7208e3715ec3d11b","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-18T20:08:08.136526Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce9e8f286885b37e","from":"ce9e8f286885b37e","remote-peer-id":"7208e3715ec3d11b","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-18T20:08:08.143077Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce9e8f286885b37e","from":"ce9e8f286885b37e","remote-peer-id":"7208e3715ec3d11b","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-18T20:08:08.147676Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce9e8f286885b37e","from":"ce9e8f286885b37e","remote-peer-id":"7208e3715ec3d11b","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-18T20:08:08.151193Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce9e8f286885b37e","from":"ce9e8f286885b37e","remote-peer-id":"7208e3715ec3d11b","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-18T20:08:08.155397Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce9e8f286885b37e","from":"ce9e8f286885b37e","remote-peer-id":"7208e3715ec3d11b","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-18T20:08:08.161431Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce9e8f286885b37e","from":"ce9e8f286885b37e","remote-peer-id":"7208e3715ec3d11b","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-18T20:08:08.168142Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce9e8f286885b37e","from":"ce9e8f286885b37e","remote-peer-id":"7208e3715ec3d11b","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-18T20:08:08.211230Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce9e8f286885b37e","from":"ce9e8f286885b37e","remote-peer-id":"7208e3715ec3d11b","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 20:08:08 up 7 min,  0 users,  load average: 0.32, 0.24, 0.12
	Linux ha-091565 5.10.207 #1 SMP Mon Sep 16 15:00:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [52ae20a53e17beece735496b8e65537bbebfef5da53e8a2e7a74820fee56cd63] <==
	I0918 20:07:30.557919       1 main.go:322] Node ha-091565-m04 has CIDR [10.244.3.0/24] 
	I0918 20:07:40.563803       1 main.go:295] Handling node with IPs: map[192.168.39.53:{}]
	I0918 20:07:40.564003       1 main.go:322] Node ha-091565-m03 has CIDR [10.244.2.0/24] 
	I0918 20:07:40.564181       1 main.go:295] Handling node with IPs: map[192.168.39.9:{}]
	I0918 20:07:40.564217       1 main.go:322] Node ha-091565-m04 has CIDR [10.244.3.0/24] 
	I0918 20:07:40.564314       1 main.go:295] Handling node with IPs: map[192.168.39.215:{}]
	I0918 20:07:40.564335       1 main.go:299] handling current node
	I0918 20:07:40.564375       1 main.go:295] Handling node with IPs: map[192.168.39.92:{}]
	I0918 20:07:40.564407       1 main.go:322] Node ha-091565-m02 has CIDR [10.244.1.0/24] 
	I0918 20:07:50.558115       1 main.go:295] Handling node with IPs: map[192.168.39.215:{}]
	I0918 20:07:50.558147       1 main.go:299] handling current node
	I0918 20:07:50.558160       1 main.go:295] Handling node with IPs: map[192.168.39.92:{}]
	I0918 20:07:50.558164       1 main.go:322] Node ha-091565-m02 has CIDR [10.244.1.0/24] 
	I0918 20:07:50.558360       1 main.go:295] Handling node with IPs: map[192.168.39.53:{}]
	I0918 20:07:50.558384       1 main.go:322] Node ha-091565-m03 has CIDR [10.244.2.0/24] 
	I0918 20:07:50.558429       1 main.go:295] Handling node with IPs: map[192.168.39.9:{}]
	I0918 20:07:50.558435       1 main.go:322] Node ha-091565-m04 has CIDR [10.244.3.0/24] 
	I0918 20:08:00.565020       1 main.go:295] Handling node with IPs: map[192.168.39.215:{}]
	I0918 20:08:00.565144       1 main.go:299] handling current node
	I0918 20:08:00.565175       1 main.go:295] Handling node with IPs: map[192.168.39.92:{}]
	I0918 20:08:00.565192       1 main.go:322] Node ha-091565-m02 has CIDR [10.244.1.0/24] 
	I0918 20:08:00.565349       1 main.go:295] Handling node with IPs: map[192.168.39.53:{}]
	I0918 20:08:00.565373       1 main.go:322] Node ha-091565-m03 has CIDR [10.244.2.0/24] 
	I0918 20:08:00.565427       1 main.go:295] Handling node with IPs: map[192.168.39.9:{}]
	I0918 20:08:00.565444       1 main.go:322] Node ha-091565-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [f141188bda32520e26d392de6e6e49a863d49aacb461b7f3d5649d68557e96d3] <==
	I0918 20:01:41.805351       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0918 20:01:41.812255       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.215]
	I0918 20:01:41.813303       1 controller.go:615] quota admission added evaluator for: endpoints
	I0918 20:01:41.817812       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0918 20:01:41.927112       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0918 20:01:43.444505       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0918 20:01:43.474356       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0918 20:01:43.499285       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0918 20:01:47.177380       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0918 20:01:47.677666       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0918 20:04:28.622821       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:38922: use of closed network connection
	E0918 20:04:28.826011       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:38948: use of closed network connection
	E0918 20:04:29.020534       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:38954: use of closed network connection
	E0918 20:04:29.215686       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:38960: use of closed network connection
	E0918 20:04:29.393565       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:38968: use of closed network connection
	E0918 20:04:29.590605       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:38998: use of closed network connection
	E0918 20:04:29.776838       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:39018: use of closed network connection
	E0918 20:04:29.951140       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:39034: use of closed network connection
	E0918 20:04:30.119473       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:39042: use of closed network connection
	E0918 20:04:30.426734       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:39086: use of closed network connection
	E0918 20:04:30.592391       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:39108: use of closed network connection
	E0918 20:04:30.769818       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:39130: use of closed network connection
	E0918 20:04:30.943725       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:39150: use of closed network connection
	E0918 20:04:31.126781       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:39162: use of closed network connection
	E0918 20:04:31.297785       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:39182: use of closed network connection
	
	
	==> kube-controller-manager [97b3f8978c2594307cdfef72e8b8aebac842152f9c1893bb34700b6e0932027e] <==
	I0918 20:05:01.138017       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-091565-m04" podCIDRs=["10.244.3.0/24"]
	I0918 20:05:01.138080       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-091565-m04"
	I0918 20:05:01.138115       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-091565-m04"
	I0918 20:05:01.151572       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-091565-m04"
	I0918 20:05:01.738364       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-091565-m04"
	I0918 20:05:01.838841       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-091565-m04"
	I0918 20:05:01.852257       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-091565-m04"
	I0918 20:05:02.344310       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-091565-m04"
	I0918 20:05:03.003621       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-091565-m04"
	I0918 20:05:03.051402       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-091565-m04"
	I0918 20:05:05.442431       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-091565-m04"
	I0918 20:05:05.579185       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-091565-m04"
	I0918 20:05:11.327273       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-091565-m04"
	I0918 20:05:21.548407       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-091565-m04"
	I0918 20:05:21.548588       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-091565-m04"
	I0918 20:05:21.567996       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-091565-m04"
	I0918 20:05:21.857696       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-091565-m04"
	I0918 20:05:31.710527       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-091565-m04"
	I0918 20:06:21.883753       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-091565-m04"
	I0918 20:06:21.884037       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-091565-m02"
	I0918 20:06:21.905558       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-091565-m02"
	I0918 20:06:21.987284       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="62.125575ms"
	I0918 20:06:21.987469       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="76.464µs"
	I0918 20:06:23.082191       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-091565-m02"
	I0918 20:06:27.127364       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-091565-m02"
	
	
	==> kube-proxy [c9aa80c6b1f558ba9ef5b7e70df3690995e271e9296b0cc3e71ade739843f53a] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0918 20:01:49.308011       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0918 20:01:49.335379       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.215"]
	E0918 20:01:49.335598       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0918 20:01:49.418096       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0918 20:01:49.418149       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0918 20:01:49.418183       1 server_linux.go:169] "Using iptables Proxier"
	I0918 20:01:49.424497       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0918 20:01:49.425362       1 server.go:483] "Version info" version="v1.31.1"
	I0918 20:01:49.425380       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0918 20:01:49.427370       1 config.go:199] "Starting service config controller"
	I0918 20:01:49.427801       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0918 20:01:49.427983       1 config.go:105] "Starting endpoint slice config controller"
	I0918 20:01:49.427991       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0918 20:01:49.431014       1 config.go:328] "Starting node config controller"
	I0918 20:01:49.431036       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0918 20:01:49.528624       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0918 20:01:49.528643       1 shared_informer.go:320] Caches are synced for service config
	I0918 20:01:49.531423       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [8c435dbd5b540cdb87d1f79fc2aadca19db9c46aff42e67d78c5d9c8eee1b6de] <==
	E0918 20:03:54.130068       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod d1fea214-55d3-4291-bc7b-cfa3d01a8ead(kube-system/kube-proxy-j766p) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-j766p"
	E0918 20:03:54.131984       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-j766p\": pod kube-proxy-j766p is already assigned to node \"ha-091565-m03\"" pod="kube-system/kube-proxy-j766p"
	I0918 20:03:54.132134       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-j766p" node="ha-091565-m03"
	E0918 20:03:54.204764       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-zdpnz\": pod kindnet-zdpnz is already assigned to node \"ha-091565-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-zdpnz" node="ha-091565-m03"
	E0918 20:03:54.204930       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod bf784ea9-bf66-4fa3-bb04-e893d228713d(kube-system/kindnet-zdpnz) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-zdpnz"
	E0918 20:03:54.205020       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-zdpnz\": pod kindnet-zdpnz is already assigned to node \"ha-091565-m03\"" pod="kube-system/kindnet-zdpnz"
	I0918 20:03:54.205131       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-zdpnz" node="ha-091565-m03"
	E0918 20:04:22.999076       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-45phf\": pod busybox-7dff88458-45phf is already assigned to node \"ha-091565-m02\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-45phf" node="ha-091565-m02"
	E0918 20:04:23.000005       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 8c26f72c-f562-47cb-bb92-9cc60a901f36(default/busybox-7dff88458-45phf) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-45phf"
	E0918 20:04:23.000126       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-45phf\": pod busybox-7dff88458-45phf is already assigned to node \"ha-091565-m02\"" pod="default/busybox-7dff88458-45phf"
	I0918 20:04:23.000204       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-45phf" node="ha-091565-m02"
	E0918 20:05:01.199076       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-4xtjm\": pod kindnet-4xtjm is already assigned to node \"ha-091565-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-4xtjm" node="ha-091565-m04"
	E0918 20:05:01.199468       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 74b52b58-c5d1-4de5-8a71-97a1e9263ee6(kube-system/kindnet-4xtjm) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-4xtjm"
	E0918 20:05:01.199594       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-4xtjm\": pod kindnet-4xtjm is already assigned to node \"ha-091565-m04\"" pod="kube-system/kindnet-4xtjm"
	I0918 20:05:01.199786       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-4xtjm" node="ha-091565-m04"
	E0918 20:05:01.220390       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-8qkpk\": pod kube-proxy-8qkpk is already assigned to node \"ha-091565-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-8qkpk" node="ha-091565-m04"
	E0918 20:05:01.223994       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 819d89b8-2f9d-4a41-ad66-7bfa5e99e840(kube-system/kube-proxy-8qkpk) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-8qkpk"
	E0918 20:05:01.224205       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-8qkpk\": pod kube-proxy-8qkpk is already assigned to node \"ha-091565-m04\"" pod="kube-system/kube-proxy-8qkpk"
	I0918 20:05:01.224300       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-8qkpk" node="ha-091565-m04"
	E0918 20:05:01.248133       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-zmf96\": pod kindnet-zmf96 is already assigned to node \"ha-091565-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-zmf96" node="ha-091565-m04"
	E0918 20:05:01.248459       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-zmf96\": pod kindnet-zmf96 is already assigned to node \"ha-091565-m04\"" pod="kube-system/kindnet-zmf96"
	I0918 20:05:01.248547       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-zmf96" node="ha-091565-m04"
	E0918 20:05:01.248362       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-t72tx\": pod kube-proxy-t72tx is already assigned to node \"ha-091565-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-t72tx" node="ha-091565-m04"
	E0918 20:05:01.249494       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-t72tx\": pod kube-proxy-t72tx is already assigned to node \"ha-091565-m04\"" pod="kube-system/kube-proxy-t72tx"
	I0918 20:05:01.249666       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-t72tx" node="ha-091565-m04"
	
	
	==> kubelet <==
	Sep 18 20:06:43 ha-091565 kubelet[1316]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 18 20:06:43 ha-091565 kubelet[1316]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 18 20:06:43 ha-091565 kubelet[1316]: E0918 20:06:43.476171    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726690003475792506,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 20:06:43 ha-091565 kubelet[1316]: E0918 20:06:43.476227    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726690003475792506,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 20:06:53 ha-091565 kubelet[1316]: E0918 20:06:53.477743    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726690013477221732,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 20:06:53 ha-091565 kubelet[1316]: E0918 20:06:53.477786    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726690013477221732,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 20:07:03 ha-091565 kubelet[1316]: E0918 20:07:03.479043    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726690023478737206,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 20:07:03 ha-091565 kubelet[1316]: E0918 20:07:03.479081    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726690023478737206,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 20:07:13 ha-091565 kubelet[1316]: E0918 20:07:13.481181    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726690033480916901,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 20:07:13 ha-091565 kubelet[1316]: E0918 20:07:13.481262    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726690033480916901,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 20:07:23 ha-091565 kubelet[1316]: E0918 20:07:23.483563    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726690043483012211,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 20:07:23 ha-091565 kubelet[1316]: E0918 20:07:23.483953    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726690043483012211,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 20:07:33 ha-091565 kubelet[1316]: E0918 20:07:33.488007    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726690053486820309,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 20:07:33 ha-091565 kubelet[1316]: E0918 20:07:33.488449    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726690053486820309,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 20:07:43 ha-091565 kubelet[1316]: E0918 20:07:43.398570    1316 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 18 20:07:43 ha-091565 kubelet[1316]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 18 20:07:43 ha-091565 kubelet[1316]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 18 20:07:43 ha-091565 kubelet[1316]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 18 20:07:43 ha-091565 kubelet[1316]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 18 20:07:43 ha-091565 kubelet[1316]: E0918 20:07:43.490989    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726690063490690150,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 20:07:43 ha-091565 kubelet[1316]: E0918 20:07:43.491031    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726690063490690150,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 20:07:53 ha-091565 kubelet[1316]: E0918 20:07:53.492968    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726690073492462129,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 20:07:53 ha-091565 kubelet[1316]: E0918 20:07:53.493287    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726690073492462129,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 20:08:03 ha-091565 kubelet[1316]: E0918 20:08:03.495263    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726690083494829193,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 20:08:03 ha-091565 kubelet[1316]: E0918 20:08:03.495287    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726690083494829193,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-091565 -n ha-091565
helpers_test.go:261: (dbg) Run:  kubectl --context ha-091565 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (5.65s)
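Note on the kube-scheduler entries above: each "Operation cannot be fulfilled on pods/binding ..." error is a 409 Conflict returned when this scheduler tries to bind a DaemonSet pod (kindnet, kube-proxy) that another control-plane scheduler has already bound, and the following "Pod has been assigned to node. Abort adding it back to queue." line shows the conflict being treated as benign. The sketch below is a minimal, hypothetical illustration of that pattern using client-go; the clientset, namespace, and pod/node names are placeholders, and this is not the scheduler's actual code path.

// Hedged sketch: issue the same pods/binding subresource call the default binder
// uses, and treat the "already assigned" 409 Conflict seen in the log as benign.
package sketch

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// bindPod binds podName to nodeName; a conflict means another scheduler won the race.
func bindPod(ctx context.Context, cs kubernetes.Interface, ns, podName, nodeName string) error {
	binding := &corev1.Binding{
		ObjectMeta: metav1.ObjectMeta{Namespace: ns, Name: podName},
		Target:     corev1.ObjectReference{Kind: "Node", Name: nodeName},
	}
	err := cs.CoreV1().Pods(ns).Bind(ctx, binding, metav1.CreateOptions{})
	if apierrors.IsConflict(err) {
		// Pod is already bound elsewhere; nothing to retry, just drop it from the queue.
		fmt.Printf("pod %s/%s already assigned, skipping\n", ns, podName)
		return nil
	}
	return err
}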

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (6.48s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-091565 node start m02 -v=7 --alsologtostderr
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-091565 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-linux-amd64 -p ha-091565 status -v=7 --alsologtostderr: (4.068805167s)
ha_test.go:435: status says not all three control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-091565 status -v=7 --alsologtostderr": 
ha_test.go:438: status says not all four hosts are running: args "out/minikube-linux-amd64 -p ha-091565 status -v=7 --alsologtostderr": 
ha_test.go:441: status says not all four kubelets are running: args "out/minikube-linux-amd64 -p ha-091565 status -v=7 --alsologtostderr": 
ha_test.go:444: status says not all three apiservers are running: args "out/minikube-linux-amd64 -p ha-091565 status -v=7 --alsologtostderr": 
ha_test.go:448: (dbg) Run:  kubectl get nodes
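The failed assertions at ha_test.go:435-444 above expect the post-restart status output to report three control-plane nodes and four running hosts, kubelets, and apiservers. The sketch below is a hedged illustration of that kind of check, not the test's actual implementation; the binary path, profile name, and expected count are illustrative assumptions.

// Hedged sketch: run "minikube status" for a profile and count how many nodes
// report a running host, mirroring the style of assertion made in ha_test.go.
package sketch

import (
	"fmt"
	"os/exec"
	"strings"
)

func allHostsRunning(profile string, wantHosts int) error {
	// "minikube status" exits non-zero when any node is stopped, so keep the output either way.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", profile, "status", "-v=7", "--alsologtostderr").CombinedOutput()
	text := string(out)
	if got := strings.Count(text, "host: Running"); got != wantHosts {
		return fmt.Errorf("expected %d running hosts, status reported %d (err=%v)", wantHosts, got, err)
	}
	return nil
}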
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-091565 -n ha-091565
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-091565 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-091565 logs -n 25: (1.415005056s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-091565 ssh -n                                                                 | ha-091565 | jenkins | v1.34.0 | 18 Sep 24 20:05 UTC | 18 Sep 24 20:05 UTC |
	|         | ha-091565-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-091565 cp ha-091565-m03:/home/docker/cp-test.txt                              | ha-091565 | jenkins | v1.34.0 | 18 Sep 24 20:05 UTC | 18 Sep 24 20:05 UTC |
	|         | ha-091565:/home/docker/cp-test_ha-091565-m03_ha-091565.txt                       |           |         |         |                     |                     |
	| ssh     | ha-091565 ssh -n                                                                 | ha-091565 | jenkins | v1.34.0 | 18 Sep 24 20:05 UTC | 18 Sep 24 20:05 UTC |
	|         | ha-091565-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-091565 ssh -n ha-091565 sudo cat                                              | ha-091565 | jenkins | v1.34.0 | 18 Sep 24 20:05 UTC | 18 Sep 24 20:05 UTC |
	|         | /home/docker/cp-test_ha-091565-m03_ha-091565.txt                                 |           |         |         |                     |                     |
	| cp      | ha-091565 cp ha-091565-m03:/home/docker/cp-test.txt                              | ha-091565 | jenkins | v1.34.0 | 18 Sep 24 20:05 UTC | 18 Sep 24 20:05 UTC |
	|         | ha-091565-m02:/home/docker/cp-test_ha-091565-m03_ha-091565-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-091565 ssh -n                                                                 | ha-091565 | jenkins | v1.34.0 | 18 Sep 24 20:05 UTC | 18 Sep 24 20:05 UTC |
	|         | ha-091565-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-091565 ssh -n ha-091565-m02 sudo cat                                          | ha-091565 | jenkins | v1.34.0 | 18 Sep 24 20:05 UTC | 18 Sep 24 20:05 UTC |
	|         | /home/docker/cp-test_ha-091565-m03_ha-091565-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-091565 cp ha-091565-m03:/home/docker/cp-test.txt                              | ha-091565 | jenkins | v1.34.0 | 18 Sep 24 20:05 UTC | 18 Sep 24 20:05 UTC |
	|         | ha-091565-m04:/home/docker/cp-test_ha-091565-m03_ha-091565-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-091565 ssh -n                                                                 | ha-091565 | jenkins | v1.34.0 | 18 Sep 24 20:05 UTC | 18 Sep 24 20:05 UTC |
	|         | ha-091565-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-091565 ssh -n ha-091565-m04 sudo cat                                          | ha-091565 | jenkins | v1.34.0 | 18 Sep 24 20:05 UTC | 18 Sep 24 20:05 UTC |
	|         | /home/docker/cp-test_ha-091565-m03_ha-091565-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-091565 cp testdata/cp-test.txt                                                | ha-091565 | jenkins | v1.34.0 | 18 Sep 24 20:05 UTC | 18 Sep 24 20:05 UTC |
	|         | ha-091565-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-091565 ssh -n                                                                 | ha-091565 | jenkins | v1.34.0 | 18 Sep 24 20:05 UTC | 18 Sep 24 20:05 UTC |
	|         | ha-091565-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-091565 cp ha-091565-m04:/home/docker/cp-test.txt                              | ha-091565 | jenkins | v1.34.0 | 18 Sep 24 20:05 UTC | 18 Sep 24 20:05 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3131576438/001/cp-test_ha-091565-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-091565 ssh -n                                                                 | ha-091565 | jenkins | v1.34.0 | 18 Sep 24 20:05 UTC | 18 Sep 24 20:05 UTC |
	|         | ha-091565-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-091565 cp ha-091565-m04:/home/docker/cp-test.txt                              | ha-091565 | jenkins | v1.34.0 | 18 Sep 24 20:05 UTC | 18 Sep 24 20:05 UTC |
	|         | ha-091565:/home/docker/cp-test_ha-091565-m04_ha-091565.txt                       |           |         |         |                     |                     |
	| ssh     | ha-091565 ssh -n                                                                 | ha-091565 | jenkins | v1.34.0 | 18 Sep 24 20:05 UTC | 18 Sep 24 20:05 UTC |
	|         | ha-091565-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-091565 ssh -n ha-091565 sudo cat                                              | ha-091565 | jenkins | v1.34.0 | 18 Sep 24 20:05 UTC | 18 Sep 24 20:05 UTC |
	|         | /home/docker/cp-test_ha-091565-m04_ha-091565.txt                                 |           |         |         |                     |                     |
	| cp      | ha-091565 cp ha-091565-m04:/home/docker/cp-test.txt                              | ha-091565 | jenkins | v1.34.0 | 18 Sep 24 20:05 UTC | 18 Sep 24 20:05 UTC |
	|         | ha-091565-m02:/home/docker/cp-test_ha-091565-m04_ha-091565-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-091565 ssh -n                                                                 | ha-091565 | jenkins | v1.34.0 | 18 Sep 24 20:05 UTC | 18 Sep 24 20:05 UTC |
	|         | ha-091565-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-091565 ssh -n ha-091565-m02 sudo cat                                          | ha-091565 | jenkins | v1.34.0 | 18 Sep 24 20:05 UTC | 18 Sep 24 20:05 UTC |
	|         | /home/docker/cp-test_ha-091565-m04_ha-091565-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-091565 cp ha-091565-m04:/home/docker/cp-test.txt                              | ha-091565 | jenkins | v1.34.0 | 18 Sep 24 20:05 UTC | 18 Sep 24 20:05 UTC |
	|         | ha-091565-m03:/home/docker/cp-test_ha-091565-m04_ha-091565-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-091565 ssh -n                                                                 | ha-091565 | jenkins | v1.34.0 | 18 Sep 24 20:05 UTC | 18 Sep 24 20:05 UTC |
	|         | ha-091565-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-091565 ssh -n ha-091565-m03 sudo cat                                          | ha-091565 | jenkins | v1.34.0 | 18 Sep 24 20:05 UTC | 18 Sep 24 20:05 UTC |
	|         | /home/docker/cp-test_ha-091565-m04_ha-091565-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-091565 node stop m02 -v=7                                                     | ha-091565 | jenkins | v1.34.0 | 18 Sep 24 20:05 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-091565 node start m02 -v=7                                                    | ha-091565 | jenkins | v1.34.0 | 18 Sep 24 20:08 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/18 20:00:57
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0918 20:00:57.640467   26827 out.go:345] Setting OutFile to fd 1 ...
	I0918 20:00:57.640561   26827 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 20:00:57.640569   26827 out.go:358] Setting ErrFile to fd 2...
	I0918 20:00:57.640573   26827 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 20:00:57.640761   26827 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19667-7671/.minikube/bin
	I0918 20:00:57.641318   26827 out.go:352] Setting JSON to false
	I0918 20:00:57.642141   26827 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2602,"bootTime":1726687056,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0918 20:00:57.642239   26827 start.go:139] virtualization: kvm guest
	I0918 20:00:57.644428   26827 out.go:177] * [ha-091565] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0918 20:00:57.645728   26827 notify.go:220] Checking for updates...
	I0918 20:00:57.645758   26827 out.go:177]   - MINIKUBE_LOCATION=19667
	I0918 20:00:57.647179   26827 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 20:00:57.648500   26827 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19667-7671/kubeconfig
	I0918 20:00:57.649839   26827 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19667-7671/.minikube
	I0918 20:00:57.651097   26827 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0918 20:00:57.652502   26827 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0918 20:00:57.653976   26827 driver.go:394] Setting default libvirt URI to qemu:///system
	I0918 20:00:57.687513   26827 out.go:177] * Using the kvm2 driver based on user configuration
	I0918 20:00:57.688577   26827 start.go:297] selected driver: kvm2
	I0918 20:00:57.688601   26827 start.go:901] validating driver "kvm2" against <nil>
	I0918 20:00:57.688623   26827 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 20:00:57.689634   26827 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 20:00:57.689741   26827 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19667-7671/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0918 20:00:57.704974   26827 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0918 20:00:57.705031   26827 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0918 20:00:57.705320   26827 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0918 20:00:57.705370   26827 cni.go:84] Creating CNI manager for ""
	I0918 20:00:57.705425   26827 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0918 20:00:57.705440   26827 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0918 20:00:57.705520   26827 start.go:340] cluster config:
	{Name:ha-091565 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-091565 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 20:00:57.705651   26827 iso.go:125] acquiring lock: {Name:mkbcf4d84f3091567a39802cd5e773cb10b8c545 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 20:00:57.707426   26827 out.go:177] * Starting "ha-091565" primary control-plane node in "ha-091565" cluster
	I0918 20:00:57.708558   26827 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0918 20:00:57.708602   26827 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0918 20:00:57.708622   26827 cache.go:56] Caching tarball of preloaded images
	I0918 20:00:57.708700   26827 preload.go:172] Found /home/jenkins/minikube-integration/19667-7671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0918 20:00:57.708710   26827 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0918 20:00:57.708999   26827 profile.go:143] Saving config to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/config.json ...
	I0918 20:00:57.709019   26827 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/config.json: {Name:mk6751feb5fedaf9ba97f9b527df45d961607c7d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 20:00:57.709176   26827 start.go:360] acquireMachinesLock for ha-091565: {Name:mk1b0d68a0b7d9d4bc8204d5d81cb9eb77222526 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 20:00:57.709206   26827 start.go:364] duration metric: took 18.41µs to acquireMachinesLock for "ha-091565"
	I0918 20:00:57.709221   26827 start.go:93] Provisioning new machine with config: &{Name:ha-091565 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-091565 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0918 20:00:57.709299   26827 start.go:125] createHost starting for "" (driver="kvm2")
	I0918 20:00:57.710894   26827 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0918 20:00:57.711003   26827 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 20:00:57.711035   26827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 20:00:57.725443   26827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33447
	I0918 20:00:57.725903   26827 main.go:141] libmachine: () Calling .GetVersion
	I0918 20:00:57.726425   26827 main.go:141] libmachine: Using API Version  1
	I0918 20:00:57.726445   26827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 20:00:57.726722   26827 main.go:141] libmachine: () Calling .GetMachineName
	I0918 20:00:57.726883   26827 main.go:141] libmachine: (ha-091565) Calling .GetMachineName
	I0918 20:00:57.727025   26827 main.go:141] libmachine: (ha-091565) Calling .DriverName
	I0918 20:00:57.727181   26827 start.go:159] libmachine.API.Create for "ha-091565" (driver="kvm2")
	I0918 20:00:57.727222   26827 client.go:168] LocalClient.Create starting
	I0918 20:00:57.727261   26827 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem
	I0918 20:00:57.727293   26827 main.go:141] libmachine: Decoding PEM data...
	I0918 20:00:57.727312   26827 main.go:141] libmachine: Parsing certificate...
	I0918 20:00:57.727377   26827 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem
	I0918 20:00:57.727407   26827 main.go:141] libmachine: Decoding PEM data...
	I0918 20:00:57.727427   26827 main.go:141] libmachine: Parsing certificate...
	I0918 20:00:57.727451   26827 main.go:141] libmachine: Running pre-create checks...
	I0918 20:00:57.727462   26827 main.go:141] libmachine: (ha-091565) Calling .PreCreateCheck
	I0918 20:00:57.727741   26827 main.go:141] libmachine: (ha-091565) Calling .GetConfigRaw
	I0918 20:00:57.728143   26827 main.go:141] libmachine: Creating machine...
	I0918 20:00:57.728157   26827 main.go:141] libmachine: (ha-091565) Calling .Create
	I0918 20:00:57.728286   26827 main.go:141] libmachine: (ha-091565) Creating KVM machine...
	I0918 20:00:57.729703   26827 main.go:141] libmachine: (ha-091565) DBG | found existing default KVM network
	I0918 20:00:57.730516   26827 main.go:141] libmachine: (ha-091565) DBG | I0918 20:00:57.730387   26850 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002211e0}
	I0918 20:00:57.730578   26827 main.go:141] libmachine: (ha-091565) DBG | created network xml: 
	I0918 20:00:57.730605   26827 main.go:141] libmachine: (ha-091565) DBG | <network>
	I0918 20:00:57.730618   26827 main.go:141] libmachine: (ha-091565) DBG |   <name>mk-ha-091565</name>
	I0918 20:00:57.730631   26827 main.go:141] libmachine: (ha-091565) DBG |   <dns enable='no'/>
	I0918 20:00:57.730660   26827 main.go:141] libmachine: (ha-091565) DBG |   
	I0918 20:00:57.730680   26827 main.go:141] libmachine: (ha-091565) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0918 20:00:57.730693   26827 main.go:141] libmachine: (ha-091565) DBG |     <dhcp>
	I0918 20:00:57.730703   26827 main.go:141] libmachine: (ha-091565) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0918 20:00:57.730715   26827 main.go:141] libmachine: (ha-091565) DBG |     </dhcp>
	I0918 20:00:57.730736   26827 main.go:141] libmachine: (ha-091565) DBG |   </ip>
	I0918 20:00:57.730748   26827 main.go:141] libmachine: (ha-091565) DBG |   
	I0918 20:00:57.730757   26827 main.go:141] libmachine: (ha-091565) DBG | </network>
	I0918 20:00:57.730768   26827 main.go:141] libmachine: (ha-091565) DBG | 
	I0918 20:00:57.735618   26827 main.go:141] libmachine: (ha-091565) DBG | trying to create private KVM network mk-ha-091565 192.168.39.0/24...
	I0918 20:00:57.800998   26827 main.go:141] libmachine: (ha-091565) DBG | private KVM network mk-ha-091565 192.168.39.0/24 created
	I0918 20:00:57.801029   26827 main.go:141] libmachine: (ha-091565) Setting up store path in /home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565 ...
	I0918 20:00:57.801041   26827 main.go:141] libmachine: (ha-091565) DBG | I0918 20:00:57.800989   26850 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19667-7671/.minikube
	I0918 20:00:57.801133   26827 main.go:141] libmachine: (ha-091565) Building disk image from file:///home/jenkins/minikube-integration/19667-7671/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso
	I0918 20:00:57.801206   26827 main.go:141] libmachine: (ha-091565) Downloading /home/jenkins/minikube-integration/19667-7671/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19667-7671/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso...
	I0918 20:00:58.046606   26827 main.go:141] libmachine: (ha-091565) DBG | I0918 20:00:58.046472   26850 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565/id_rsa...
	I0918 20:00:58.328818   26827 main.go:141] libmachine: (ha-091565) DBG | I0918 20:00:58.328673   26850 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565/ha-091565.rawdisk...
	I0918 20:00:58.328844   26827 main.go:141] libmachine: (ha-091565) DBG | Writing magic tar header
	I0918 20:00:58.328853   26827 main.go:141] libmachine: (ha-091565) DBG | Writing SSH key tar header
	I0918 20:00:58.328860   26827 main.go:141] libmachine: (ha-091565) DBG | I0918 20:00:58.328794   26850 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565 ...
	I0918 20:00:58.328961   26827 main.go:141] libmachine: (ha-091565) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565
	I0918 20:00:58.328984   26827 main.go:141] libmachine: (ha-091565) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19667-7671/.minikube/machines
	I0918 20:00:58.328999   26827 main.go:141] libmachine: (ha-091565) Setting executable bit set on /home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565 (perms=drwx------)
	I0918 20:00:58.329013   26827 main.go:141] libmachine: (ha-091565) Setting executable bit set on /home/jenkins/minikube-integration/19667-7671/.minikube/machines (perms=drwxr-xr-x)
	I0918 20:00:58.329024   26827 main.go:141] libmachine: (ha-091565) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19667-7671/.minikube
	I0918 20:00:58.329034   26827 main.go:141] libmachine: (ha-091565) Setting executable bit set on /home/jenkins/minikube-integration/19667-7671/.minikube (perms=drwxr-xr-x)
	I0918 20:00:58.329045   26827 main.go:141] libmachine: (ha-091565) Setting executable bit set on /home/jenkins/minikube-integration/19667-7671 (perms=drwxrwxr-x)
	I0918 20:00:58.329050   26827 main.go:141] libmachine: (ha-091565) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0918 20:00:58.329063   26827 main.go:141] libmachine: (ha-091565) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0918 20:00:58.329069   26827 main.go:141] libmachine: (ha-091565) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19667-7671
	I0918 20:00:58.329081   26827 main.go:141] libmachine: (ha-091565) Creating domain...
	I0918 20:00:58.329099   26827 main.go:141] libmachine: (ha-091565) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0918 20:00:58.329114   26827 main.go:141] libmachine: (ha-091565) DBG | Checking permissions on dir: /home/jenkins
	I0918 20:00:58.329136   26827 main.go:141] libmachine: (ha-091565) DBG | Checking permissions on dir: /home
	I0918 20:00:58.329143   26827 main.go:141] libmachine: (ha-091565) DBG | Skipping /home - not owner
	I0918 20:00:58.330265   26827 main.go:141] libmachine: (ha-091565) define libvirt domain using xml: 
	I0918 20:00:58.330282   26827 main.go:141] libmachine: (ha-091565) <domain type='kvm'>
	I0918 20:00:58.330289   26827 main.go:141] libmachine: (ha-091565)   <name>ha-091565</name>
	I0918 20:00:58.330298   26827 main.go:141] libmachine: (ha-091565)   <memory unit='MiB'>2200</memory>
	I0918 20:00:58.330305   26827 main.go:141] libmachine: (ha-091565)   <vcpu>2</vcpu>
	I0918 20:00:58.330311   26827 main.go:141] libmachine: (ha-091565)   <features>
	I0918 20:00:58.330318   26827 main.go:141] libmachine: (ha-091565)     <acpi/>
	I0918 20:00:58.330326   26827 main.go:141] libmachine: (ha-091565)     <apic/>
	I0918 20:00:58.330334   26827 main.go:141] libmachine: (ha-091565)     <pae/>
	I0918 20:00:58.330345   26827 main.go:141] libmachine: (ha-091565)     
	I0918 20:00:58.330353   26827 main.go:141] libmachine: (ha-091565)   </features>
	I0918 20:00:58.330358   26827 main.go:141] libmachine: (ha-091565)   <cpu mode='host-passthrough'>
	I0918 20:00:58.330364   26827 main.go:141] libmachine: (ha-091565)   
	I0918 20:00:58.330372   26827 main.go:141] libmachine: (ha-091565)   </cpu>
	I0918 20:00:58.330400   26827 main.go:141] libmachine: (ha-091565)   <os>
	I0918 20:00:58.330421   26827 main.go:141] libmachine: (ha-091565)     <type>hvm</type>
	I0918 20:00:58.330446   26827 main.go:141] libmachine: (ha-091565)     <boot dev='cdrom'/>
	I0918 20:00:58.330464   26827 main.go:141] libmachine: (ha-091565)     <boot dev='hd'/>
	I0918 20:00:58.330471   26827 main.go:141] libmachine: (ha-091565)     <bootmenu enable='no'/>
	I0918 20:00:58.330481   26827 main.go:141] libmachine: (ha-091565)   </os>
	I0918 20:00:58.330492   26827 main.go:141] libmachine: (ha-091565)   <devices>
	I0918 20:00:58.330501   26827 main.go:141] libmachine: (ha-091565)     <disk type='file' device='cdrom'>
	I0918 20:00:58.330523   26827 main.go:141] libmachine: (ha-091565)       <source file='/home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565/boot2docker.iso'/>
	I0918 20:00:58.330530   26827 main.go:141] libmachine: (ha-091565)       <target dev='hdc' bus='scsi'/>
	I0918 20:00:58.330535   26827 main.go:141] libmachine: (ha-091565)       <readonly/>
	I0918 20:00:58.330541   26827 main.go:141] libmachine: (ha-091565)     </disk>
	I0918 20:00:58.330546   26827 main.go:141] libmachine: (ha-091565)     <disk type='file' device='disk'>
	I0918 20:00:58.330551   26827 main.go:141] libmachine: (ha-091565)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0918 20:00:58.330560   26827 main.go:141] libmachine: (ha-091565)       <source file='/home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565/ha-091565.rawdisk'/>
	I0918 20:00:58.330569   26827 main.go:141] libmachine: (ha-091565)       <target dev='hda' bus='virtio'/>
	I0918 20:00:58.330586   26827 main.go:141] libmachine: (ha-091565)     </disk>
	I0918 20:00:58.330591   26827 main.go:141] libmachine: (ha-091565)     <interface type='network'>
	I0918 20:00:58.330601   26827 main.go:141] libmachine: (ha-091565)       <source network='mk-ha-091565'/>
	I0918 20:00:58.330608   26827 main.go:141] libmachine: (ha-091565)       <model type='virtio'/>
	I0918 20:00:58.330612   26827 main.go:141] libmachine: (ha-091565)     </interface>
	I0918 20:00:58.330618   26827 main.go:141] libmachine: (ha-091565)     <interface type='network'>
	I0918 20:00:58.330625   26827 main.go:141] libmachine: (ha-091565)       <source network='default'/>
	I0918 20:00:58.330635   26827 main.go:141] libmachine: (ha-091565)       <model type='virtio'/>
	I0918 20:00:58.330641   26827 main.go:141] libmachine: (ha-091565)     </interface>
	I0918 20:00:58.330646   26827 main.go:141] libmachine: (ha-091565)     <serial type='pty'>
	I0918 20:00:58.330652   26827 main.go:141] libmachine: (ha-091565)       <target port='0'/>
	I0918 20:00:58.330656   26827 main.go:141] libmachine: (ha-091565)     </serial>
	I0918 20:00:58.330664   26827 main.go:141] libmachine: (ha-091565)     <console type='pty'>
	I0918 20:00:58.330671   26827 main.go:141] libmachine: (ha-091565)       <target type='serial' port='0'/>
	I0918 20:00:58.330684   26827 main.go:141] libmachine: (ha-091565)     </console>
	I0918 20:00:58.330693   26827 main.go:141] libmachine: (ha-091565)     <rng model='virtio'>
	I0918 20:00:58.330702   26827 main.go:141] libmachine: (ha-091565)       <backend model='random'>/dev/random</backend>
	I0918 20:00:58.330710   26827 main.go:141] libmachine: (ha-091565)     </rng>
	I0918 20:00:58.330716   26827 main.go:141] libmachine: (ha-091565)     
	I0918 20:00:58.330722   26827 main.go:141] libmachine: (ha-091565)     
	I0918 20:00:58.330726   26827 main.go:141] libmachine: (ha-091565)   </devices>
	I0918 20:00:58.330730   26827 main.go:141] libmachine: (ha-091565) </domain>
	I0918 20:00:58.330736   26827 main.go:141] libmachine: (ha-091565) 
	I0918 20:00:58.335391   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:62:68:64 in network default
	I0918 20:00:58.335905   26827 main.go:141] libmachine: (ha-091565) Ensuring networks are active...
	I0918 20:00:58.335918   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:00:58.336784   26827 main.go:141] libmachine: (ha-091565) Ensuring network default is active
	I0918 20:00:58.337204   26827 main.go:141] libmachine: (ha-091565) Ensuring network mk-ha-091565 is active
	I0918 20:00:58.337781   26827 main.go:141] libmachine: (ha-091565) Getting domain xml...
	I0918 20:00:58.338545   26827 main.go:141] libmachine: (ha-091565) Creating domain...
	I0918 20:00:59.533947   26827 main.go:141] libmachine: (ha-091565) Waiting to get IP...
	I0918 20:00:59.534657   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:00:59.535035   26827 main.go:141] libmachine: (ha-091565) DBG | unable to find current IP address of domain ha-091565 in network mk-ha-091565
	I0918 20:00:59.535072   26827 main.go:141] libmachine: (ha-091565) DBG | I0918 20:00:59.535025   26850 retry.go:31] will retry after 237.916234ms: waiting for machine to come up
	I0918 20:00:59.774780   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:00:59.775260   26827 main.go:141] libmachine: (ha-091565) DBG | unable to find current IP address of domain ha-091565 in network mk-ha-091565
	I0918 20:00:59.775295   26827 main.go:141] libmachine: (ha-091565) DBG | I0918 20:00:59.775205   26850 retry.go:31] will retry after 262.842806ms: waiting for machine to come up
	I0918 20:01:00.039656   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:00.040069   26827 main.go:141] libmachine: (ha-091565) DBG | unable to find current IP address of domain ha-091565 in network mk-ha-091565
	I0918 20:01:00.040093   26827 main.go:141] libmachine: (ha-091565) DBG | I0918 20:01:00.040046   26850 retry.go:31] will retry after 393.798982ms: waiting for machine to come up
	I0918 20:01:00.435673   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:00.436127   26827 main.go:141] libmachine: (ha-091565) DBG | unable to find current IP address of domain ha-091565 in network mk-ha-091565
	I0918 20:01:00.436161   26827 main.go:141] libmachine: (ha-091565) DBG | I0918 20:01:00.436100   26850 retry.go:31] will retry after 446.519452ms: waiting for machine to come up
	I0918 20:01:00.883844   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:00.884367   26827 main.go:141] libmachine: (ha-091565) DBG | unable to find current IP address of domain ha-091565 in network mk-ha-091565
	I0918 20:01:00.884396   26827 main.go:141] libmachine: (ha-091565) DBG | I0918 20:01:00.884301   26850 retry.go:31] will retry after 528.125995ms: waiting for machine to come up
	I0918 20:01:01.414131   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:01.414641   26827 main.go:141] libmachine: (ha-091565) DBG | unable to find current IP address of domain ha-091565 in network mk-ha-091565
	I0918 20:01:01.414662   26827 main.go:141] libmachine: (ha-091565) DBG | I0918 20:01:01.414600   26850 retry.go:31] will retry after 935.867422ms: waiting for machine to come up
	I0918 20:01:02.352501   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:02.353101   26827 main.go:141] libmachine: (ha-091565) DBG | unable to find current IP address of domain ha-091565 in network mk-ha-091565
	I0918 20:01:02.353136   26827 main.go:141] libmachine: (ha-091565) DBG | I0918 20:01:02.353036   26850 retry.go:31] will retry after 916.470629ms: waiting for machine to come up
	I0918 20:01:03.270901   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:03.271592   26827 main.go:141] libmachine: (ha-091565) DBG | unable to find current IP address of domain ha-091565 in network mk-ha-091565
	I0918 20:01:03.271617   26827 main.go:141] libmachine: (ha-091565) DBG | I0918 20:01:03.271544   26850 retry.go:31] will retry after 1.230905631s: waiting for machine to come up
	I0918 20:01:04.504061   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:04.504573   26827 main.go:141] libmachine: (ha-091565) DBG | unable to find current IP address of domain ha-091565 in network mk-ha-091565
	I0918 20:01:04.504600   26827 main.go:141] libmachine: (ha-091565) DBG | I0918 20:01:04.504501   26850 retry.go:31] will retry after 1.334656049s: waiting for machine to come up
	I0918 20:01:05.841225   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:05.841603   26827 main.go:141] libmachine: (ha-091565) DBG | unable to find current IP address of domain ha-091565 in network mk-ha-091565
	I0918 20:01:05.841627   26827 main.go:141] libmachine: (ha-091565) DBG | I0918 20:01:05.841542   26850 retry.go:31] will retry after 1.509327207s: waiting for machine to come up
	I0918 20:01:07.353477   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:07.353907   26827 main.go:141] libmachine: (ha-091565) DBG | unable to find current IP address of domain ha-091565 in network mk-ha-091565
	I0918 20:01:07.353958   26827 main.go:141] libmachine: (ha-091565) DBG | I0918 20:01:07.353878   26850 retry.go:31] will retry after 2.403908861s: waiting for machine to come up
	I0918 20:01:09.760703   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:09.761285   26827 main.go:141] libmachine: (ha-091565) DBG | unable to find current IP address of domain ha-091565 in network mk-ha-091565
	I0918 20:01:09.761311   26827 main.go:141] libmachine: (ha-091565) DBG | I0918 20:01:09.761245   26850 retry.go:31] will retry after 3.18859433s: waiting for machine to come up
	I0918 20:01:12.951021   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:12.951436   26827 main.go:141] libmachine: (ha-091565) DBG | unable to find current IP address of domain ha-091565 in network mk-ha-091565
	I0918 20:01:12.951466   26827 main.go:141] libmachine: (ha-091565) DBG | I0918 20:01:12.951387   26850 retry.go:31] will retry after 4.080420969s: waiting for machine to come up
	I0918 20:01:17.036664   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:17.037090   26827 main.go:141] libmachine: (ha-091565) DBG | unable to find current IP address of domain ha-091565 in network mk-ha-091565
	I0918 20:01:17.037112   26827 main.go:141] libmachine: (ha-091565) DBG | I0918 20:01:17.037044   26850 retry.go:31] will retry after 5.244932355s: waiting for machine to come up
	I0918 20:01:22.287118   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:22.287574   26827 main.go:141] libmachine: (ha-091565) Found IP for machine: 192.168.39.215
	I0918 20:01:22.287594   26827 main.go:141] libmachine: (ha-091565) Reserving static IP address...
	I0918 20:01:22.287606   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has current primary IP address 192.168.39.215 and MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:22.287959   26827 main.go:141] libmachine: (ha-091565) DBG | unable to find host DHCP lease matching {name: "ha-091565", mac: "52:54:00:2a:13:d8", ip: "192.168.39.215"} in network mk-ha-091565
	I0918 20:01:22.360495   26827 main.go:141] libmachine: (ha-091565) DBG | Getting to WaitForSSH function...
	I0918 20:01:22.360523   26827 main.go:141] libmachine: (ha-091565) Reserved static IP address: 192.168.39.215
	I0918 20:01:22.360535   26827 main.go:141] libmachine: (ha-091565) Waiting for SSH to be available...
	I0918 20:01:22.362885   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:22.363193   26827 main.go:141] libmachine: (ha-091565) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:2a:13:d8", ip: ""} in network mk-ha-091565
	I0918 20:01:22.363217   26827 main.go:141] libmachine: (ha-091565) DBG | unable to find defined IP address of network mk-ha-091565 interface with MAC address 52:54:00:2a:13:d8
	I0918 20:01:22.363387   26827 main.go:141] libmachine: (ha-091565) DBG | Using SSH client type: external
	I0918 20:01:22.363410   26827 main.go:141] libmachine: (ha-091565) DBG | Using SSH private key: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565/id_rsa (-rw-------)
	I0918 20:01:22.363445   26827 main.go:141] libmachine: (ha-091565) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0918 20:01:22.363470   26827 main.go:141] libmachine: (ha-091565) DBG | About to run SSH command:
	I0918 20:01:22.363487   26827 main.go:141] libmachine: (ha-091565) DBG | exit 0
	I0918 20:01:22.367035   26827 main.go:141] libmachine: (ha-091565) DBG | SSH cmd err, output: exit status 255: 
	I0918 20:01:22.367062   26827 main.go:141] libmachine: (ha-091565) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0918 20:01:22.367069   26827 main.go:141] libmachine: (ha-091565) DBG | command : exit 0
	I0918 20:01:22.367074   26827 main.go:141] libmachine: (ha-091565) DBG | err     : exit status 255
	I0918 20:01:22.367081   26827 main.go:141] libmachine: (ha-091565) DBG | output  : 
	I0918 20:01:25.368924   26827 main.go:141] libmachine: (ha-091565) DBG | Getting to WaitForSSH function...
	I0918 20:01:25.371732   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:25.372247   26827 main.go:141] libmachine: (ha-091565) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:13:d8", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:01:12 +0000 UTC Type:0 Mac:52:54:00:2a:13:d8 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-091565 Clientid:01:52:54:00:2a:13:d8}
	I0918 20:01:25.372276   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined IP address 192.168.39.215 and MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:25.372360   26827 main.go:141] libmachine: (ha-091565) DBG | Using SSH client type: external
	I0918 20:01:25.372393   26827 main.go:141] libmachine: (ha-091565) DBG | Using SSH private key: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565/id_rsa (-rw-------)
	I0918 20:01:25.372430   26827 main.go:141] libmachine: (ha-091565) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.215 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0918 20:01:25.372447   26827 main.go:141] libmachine: (ha-091565) DBG | About to run SSH command:
	I0918 20:01:25.372458   26827 main.go:141] libmachine: (ha-091565) DBG | exit 0
	I0918 20:01:25.500108   26827 main.go:141] libmachine: (ha-091565) DBG | SSH cmd err, output: <nil>: 
	I0918 20:01:25.500382   26827 main.go:141] libmachine: (ha-091565) KVM machine creation complete!
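	The creation phase above polls libvirt for the guest's DHCP lease, sleeping a little longer after each failed lookup ("will retry after …") before falling through to the SSH wait. The Go sketch below illustrates that poll-with-growing-delay pattern under stated assumptions: lookupLeaseIP is a hypothetical stand-in for the lease query, and the initial delay, growth factor, and jitter are illustrative values rather than minikube's actual retry.go behaviour.

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupLeaseIP is a hypothetical stand-in for querying the libvirt DHCP
// leases of a network for a given MAC address. It fails until the guest
// has actually requested a lease.
func lookupLeaseIP(network, mac string) (string, error) {
	return "", errors.New("no lease yet") // placeholder for illustration
}

// waitForIP polls lookupLeaseIP, sleeping a little longer (plus jitter)
// after every failed attempt, until it finds an IP or the deadline passes.
func waitForIP(network, mac string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupLeaseIP(network, mac); err == nil {
			return ip, nil
		}
		// Grow the delay and add jitter so repeated lookups do not hammer libvirt.
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		delay = delay * 3 / 2
	}
	return "", fmt.Errorf("timed out waiting for a DHCP lease for %s on %s", mac, network)
}

func main() {
	if ip, err := waitForIP("mk-ha-091565", "52:54:00:2a:13:d8", 2*time.Second); err != nil {
		fmt.Println("error:", err)
	} else {
		fmt.Println("found IP:", ip)
	}
}
```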
	I0918 20:01:25.500836   26827 main.go:141] libmachine: (ha-091565) Calling .GetConfigRaw
	I0918 20:01:25.501392   26827 main.go:141] libmachine: (ha-091565) Calling .DriverName
	I0918 20:01:25.501585   26827 main.go:141] libmachine: (ha-091565) Calling .DriverName
	I0918 20:01:25.501791   26827 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0918 20:01:25.501803   26827 main.go:141] libmachine: (ha-091565) Calling .GetState
	I0918 20:01:25.503113   26827 main.go:141] libmachine: Detecting operating system of created instance...
	I0918 20:01:25.503144   26827 main.go:141] libmachine: Waiting for SSH to be available...
	I0918 20:01:25.503151   26827 main.go:141] libmachine: Getting to WaitForSSH function...
	I0918 20:01:25.503163   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHHostname
	I0918 20:01:25.505584   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:25.505981   26827 main.go:141] libmachine: (ha-091565) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:13:d8", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:01:12 +0000 UTC Type:0 Mac:52:54:00:2a:13:d8 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-091565 Clientid:01:52:54:00:2a:13:d8}
	I0918 20:01:25.506016   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined IP address 192.168.39.215 and MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:25.506132   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHPort
	I0918 20:01:25.506286   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHKeyPath
	I0918 20:01:25.506450   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHKeyPath
	I0918 20:01:25.506567   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHUsername
	I0918 20:01:25.506705   26827 main.go:141] libmachine: Using SSH client type: native
	I0918 20:01:25.506964   26827 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I0918 20:01:25.506980   26827 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0918 20:01:25.615489   26827 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0918 20:01:25.615512   26827 main.go:141] libmachine: Detecting the provisioner...
	I0918 20:01:25.615519   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHHostname
	I0918 20:01:25.618058   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:25.618343   26827 main.go:141] libmachine: (ha-091565) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:13:d8", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:01:12 +0000 UTC Type:0 Mac:52:54:00:2a:13:d8 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-091565 Clientid:01:52:54:00:2a:13:d8}
	I0918 20:01:25.618365   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined IP address 192.168.39.215 and MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:25.618476   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHPort
	I0918 20:01:25.618650   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHKeyPath
	I0918 20:01:25.618786   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHKeyPath
	I0918 20:01:25.618935   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHUsername
	I0918 20:01:25.619044   26827 main.go:141] libmachine: Using SSH client type: native
	I0918 20:01:25.619200   26827 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I0918 20:01:25.619210   26827 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0918 20:01:25.732502   26827 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0918 20:01:25.732589   26827 main.go:141] libmachine: found compatible host: buildroot
	I0918 20:01:25.732599   26827 main.go:141] libmachine: Provisioning with buildroot...
	I0918 20:01:25.732606   26827 main.go:141] libmachine: (ha-091565) Calling .GetMachineName
	I0918 20:01:25.732852   26827 buildroot.go:166] provisioning hostname "ha-091565"
	I0918 20:01:25.732880   26827 main.go:141] libmachine: (ha-091565) Calling .GetMachineName
	I0918 20:01:25.733067   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHHostname
	I0918 20:01:25.735789   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:25.736134   26827 main.go:141] libmachine: (ha-091565) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:13:d8", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:01:12 +0000 UTC Type:0 Mac:52:54:00:2a:13:d8 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-091565 Clientid:01:52:54:00:2a:13:d8}
	I0918 20:01:25.736170   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined IP address 192.168.39.215 and MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:25.736303   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHPort
	I0918 20:01:25.736498   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHKeyPath
	I0918 20:01:25.736664   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHKeyPath
	I0918 20:01:25.736815   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHUsername
	I0918 20:01:25.736962   26827 main.go:141] libmachine: Using SSH client type: native
	I0918 20:01:25.737181   26827 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I0918 20:01:25.737194   26827 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-091565 && echo "ha-091565" | sudo tee /etc/hostname
	I0918 20:01:25.862508   26827 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-091565
	
	I0918 20:01:25.862540   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHHostname
	I0918 20:01:25.866613   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:25.867074   26827 main.go:141] libmachine: (ha-091565) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:13:d8", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:01:12 +0000 UTC Type:0 Mac:52:54:00:2a:13:d8 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-091565 Clientid:01:52:54:00:2a:13:d8}
	I0918 20:01:25.867104   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined IP address 192.168.39.215 and MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:25.867538   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHPort
	I0918 20:01:25.867789   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHKeyPath
	I0918 20:01:25.867962   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHKeyPath
	I0918 20:01:25.868230   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHUsername
	I0918 20:01:25.868389   26827 main.go:141] libmachine: Using SSH client type: native
	I0918 20:01:25.868588   26827 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I0918 20:01:25.868607   26827 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-091565' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-091565/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-091565' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0918 20:01:25.988748   26827 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0918 20:01:25.988798   26827 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19667-7671/.minikube CaCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19667-7671/.minikube}
	I0918 20:01:25.988838   26827 buildroot.go:174] setting up certificates
	I0918 20:01:25.988848   26827 provision.go:84] configureAuth start
	I0918 20:01:25.988857   26827 main.go:141] libmachine: (ha-091565) Calling .GetMachineName
	I0918 20:01:25.989144   26827 main.go:141] libmachine: (ha-091565) Calling .GetIP
	I0918 20:01:25.991863   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:25.992270   26827 main.go:141] libmachine: (ha-091565) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:13:d8", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:01:12 +0000 UTC Type:0 Mac:52:54:00:2a:13:d8 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-091565 Clientid:01:52:54:00:2a:13:d8}
	I0918 20:01:25.992315   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined IP address 192.168.39.215 and MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:25.992456   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHHostname
	I0918 20:01:25.994511   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:25.994809   26827 main.go:141] libmachine: (ha-091565) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:13:d8", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:01:12 +0000 UTC Type:0 Mac:52:54:00:2a:13:d8 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-091565 Clientid:01:52:54:00:2a:13:d8}
	I0918 20:01:25.994834   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined IP address 192.168.39.215 and MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:25.994954   26827 provision.go:143] copyHostCerts
	I0918 20:01:25.994981   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem
	I0918 20:01:25.995025   26827 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem, removing ...
	I0918 20:01:25.995039   26827 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem
	I0918 20:01:25.995103   26827 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem (1082 bytes)
	I0918 20:01:25.995191   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem
	I0918 20:01:25.995209   26827 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem, removing ...
	I0918 20:01:25.995213   26827 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem
	I0918 20:01:25.995242   26827 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem (1123 bytes)
	I0918 20:01:25.995301   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem
	I0918 20:01:25.995316   26827 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem, removing ...
	I0918 20:01:25.995322   26827 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem
	I0918 20:01:25.995343   26827 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem (1679 bytes)
	I0918 20:01:25.995405   26827 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem org=jenkins.ha-091565 san=[127.0.0.1 192.168.39.215 ha-091565 localhost minikube]
	I0918 20:01:26.117902   26827 provision.go:177] copyRemoteCerts
	I0918 20:01:26.117954   26827 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0918 20:01:26.117977   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHHostname
	I0918 20:01:26.120733   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:26.121075   26827 main.go:141] libmachine: (ha-091565) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:13:d8", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:01:12 +0000 UTC Type:0 Mac:52:54:00:2a:13:d8 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-091565 Clientid:01:52:54:00:2a:13:d8}
	I0918 20:01:26.121091   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined IP address 192.168.39.215 and MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:26.121297   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHPort
	I0918 20:01:26.121502   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHKeyPath
	I0918 20:01:26.121666   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHUsername
	I0918 20:01:26.121786   26827 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565/id_rsa Username:docker}
	I0918 20:01:26.205619   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0918 20:01:26.205705   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0918 20:01:26.228613   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0918 20:01:26.228682   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0918 20:01:26.252879   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0918 20:01:26.252953   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0918 20:01:26.277029   26827 provision.go:87] duration metric: took 288.170096ms to configureAuth
	I0918 20:01:26.277056   26827 buildroot.go:189] setting minikube options for container-runtime
	I0918 20:01:26.277264   26827 config.go:182] Loaded profile config "ha-091565": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0918 20:01:26.277380   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHHostname
	I0918 20:01:26.279749   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:26.280128   26827 main.go:141] libmachine: (ha-091565) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:13:d8", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:01:12 +0000 UTC Type:0 Mac:52:54:00:2a:13:d8 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-091565 Clientid:01:52:54:00:2a:13:d8}
	I0918 20:01:26.280154   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined IP address 192.168.39.215 and MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:26.280280   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHPort
	I0918 20:01:26.280444   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHKeyPath
	I0918 20:01:26.280617   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHKeyPath
	I0918 20:01:26.280788   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHUsername
	I0918 20:01:26.280946   26827 main.go:141] libmachine: Using SSH client type: native
	I0918 20:01:26.281114   26827 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I0918 20:01:26.281127   26827 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0918 20:01:26.505775   26827 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0918 20:01:26.505808   26827 main.go:141] libmachine: Checking connection to Docker...
	I0918 20:01:26.505817   26827 main.go:141] libmachine: (ha-091565) Calling .GetURL
	I0918 20:01:26.507070   26827 main.go:141] libmachine: (ha-091565) DBG | Using libvirt version 6000000
	I0918 20:01:26.509239   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:26.509623   26827 main.go:141] libmachine: (ha-091565) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:13:d8", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:01:12 +0000 UTC Type:0 Mac:52:54:00:2a:13:d8 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-091565 Clientid:01:52:54:00:2a:13:d8}
	I0918 20:01:26.509653   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined IP address 192.168.39.215 and MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:26.509837   26827 main.go:141] libmachine: Docker is up and running!
	I0918 20:01:26.509859   26827 main.go:141] libmachine: Reticulating splines...
	I0918 20:01:26.509874   26827 client.go:171] duration metric: took 28.782642826s to LocalClient.Create
	I0918 20:01:26.509892   26827 start.go:167] duration metric: took 28.782711953s to libmachine.API.Create "ha-091565"
	I0918 20:01:26.509901   26827 start.go:293] postStartSetup for "ha-091565" (driver="kvm2")
	I0918 20:01:26.509909   26827 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0918 20:01:26.509925   26827 main.go:141] libmachine: (ha-091565) Calling .DriverName
	I0918 20:01:26.510174   26827 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0918 20:01:26.510198   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHHostname
	I0918 20:01:26.512537   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:26.512896   26827 main.go:141] libmachine: (ha-091565) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:13:d8", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:01:12 +0000 UTC Type:0 Mac:52:54:00:2a:13:d8 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-091565 Clientid:01:52:54:00:2a:13:d8}
	I0918 20:01:26.512927   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined IP address 192.168.39.215 and MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:26.513099   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHPort
	I0918 20:01:26.513302   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHKeyPath
	I0918 20:01:26.513485   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHUsername
	I0918 20:01:26.513627   26827 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565/id_rsa Username:docker}
	I0918 20:01:26.598408   26827 ssh_runner.go:195] Run: cat /etc/os-release
	I0918 20:01:26.602627   26827 info.go:137] Remote host: Buildroot 2023.02.9
	I0918 20:01:26.602663   26827 filesync.go:126] Scanning /home/jenkins/minikube-integration/19667-7671/.minikube/addons for local assets ...
	I0918 20:01:26.602726   26827 filesync.go:126] Scanning /home/jenkins/minikube-integration/19667-7671/.minikube/files for local assets ...
	I0918 20:01:26.602800   26827 filesync.go:149] local asset: /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem -> 148782.pem in /etc/ssl/certs
	I0918 20:01:26.602810   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem -> /etc/ssl/certs/148782.pem
	I0918 20:01:26.602901   26827 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0918 20:01:26.612359   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem --> /etc/ssl/certs/148782.pem (1708 bytes)
	I0918 20:01:26.635555   26827 start.go:296] duration metric: took 125.639833ms for postStartSetup
	I0918 20:01:26.635626   26827 main.go:141] libmachine: (ha-091565) Calling .GetConfigRaw
	I0918 20:01:26.636227   26827 main.go:141] libmachine: (ha-091565) Calling .GetIP
	I0918 20:01:26.638938   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:26.639246   26827 main.go:141] libmachine: (ha-091565) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:13:d8", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:01:12 +0000 UTC Type:0 Mac:52:54:00:2a:13:d8 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-091565 Clientid:01:52:54:00:2a:13:d8}
	I0918 20:01:26.639274   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined IP address 192.168.39.215 and MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:26.639496   26827 profile.go:143] Saving config to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/config.json ...
	I0918 20:01:26.639737   26827 start.go:128] duration metric: took 28.930427667s to createHost
	I0918 20:01:26.639765   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHHostname
	I0918 20:01:26.642131   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:26.642460   26827 main.go:141] libmachine: (ha-091565) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:13:d8", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:01:12 +0000 UTC Type:0 Mac:52:54:00:2a:13:d8 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-091565 Clientid:01:52:54:00:2a:13:d8}
	I0918 20:01:26.642482   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined IP address 192.168.39.215 and MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:26.642675   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHPort
	I0918 20:01:26.642866   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHKeyPath
	I0918 20:01:26.643104   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHKeyPath
	I0918 20:01:26.643258   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHUsername
	I0918 20:01:26.643412   26827 main.go:141] libmachine: Using SSH client type: native
	I0918 20:01:26.643644   26827 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I0918 20:01:26.643661   26827 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0918 20:01:26.756537   26827 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726689686.738518611
	
	I0918 20:01:26.756561   26827 fix.go:216] guest clock: 1726689686.738518611
	I0918 20:01:26.756568   26827 fix.go:229] Guest: 2024-09-18 20:01:26.738518611 +0000 UTC Remote: 2024-09-18 20:01:26.639754618 +0000 UTC m=+29.034479506 (delta=98.763993ms)
	I0918 20:01:26.756587   26827 fix.go:200] guest clock delta is within tolerance: 98.763993ms
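	The step above runs `date +%s.%N` on the guest and compares the result with the host clock, accepting it because the ~99ms delta is within tolerance. Below is a minimal Go sketch of that comparison; the 1-second tolerance and the hard-coded timestamps are assumptions for illustration, not the thresholds used by minikube's fix logic.

```go
package main

import (
	"fmt"
	"strconv"
	"time"
)

// parseGuestClock converts the output of `date +%s.%N` into a time.Time.
// float64 parsing loses some nanosecond precision, which is fine for a
// tolerance check at millisecond scale.
func parseGuestClock(out string) (time.Time, error) {
	secs, err := strconv.ParseFloat(out, 64)
	if err != nil {
		return time.Time{}, err
	}
	sec := int64(secs)
	nsec := int64((secs - float64(sec)) * 1e9)
	return time.Unix(sec, nsec), nil
}

func main() {
	const tolerance = time.Second // assumed tolerance, for illustration only

	// In the real flow this string comes back over SSH; here it is hard-coded.
	guest, err := parseGuestClock("1726689686.738518611")
	if err != nil {
		panic(err)
	}
	host := time.Unix(1726689686, 639754618) // host-side timestamp of the same moment

	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	if delta <= tolerance {
		fmt.Printf("guest clock delta %v is within tolerance %v\n", delta, tolerance)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance %v; would resync\n", delta, tolerance)
	}
}
```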
	I0918 20:01:26.756592   26827 start.go:83] releasing machines lock for "ha-091565", held for 29.047378188s
	I0918 20:01:26.756612   26827 main.go:141] libmachine: (ha-091565) Calling .DriverName
	I0918 20:01:26.756891   26827 main.go:141] libmachine: (ha-091565) Calling .GetIP
	I0918 20:01:26.759638   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:26.759950   26827 main.go:141] libmachine: (ha-091565) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:13:d8", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:01:12 +0000 UTC Type:0 Mac:52:54:00:2a:13:d8 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-091565 Clientid:01:52:54:00:2a:13:d8}
	I0918 20:01:26.759972   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined IP address 192.168.39.215 and MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:26.760128   26827 main.go:141] libmachine: (ha-091565) Calling .DriverName
	I0918 20:01:26.760656   26827 main.go:141] libmachine: (ha-091565) Calling .DriverName
	I0918 20:01:26.760816   26827 main.go:141] libmachine: (ha-091565) Calling .DriverName
	I0918 20:01:26.760919   26827 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0918 20:01:26.760970   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHHostname
	I0918 20:01:26.761017   26827 ssh_runner.go:195] Run: cat /version.json
	I0918 20:01:26.761043   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHHostname
	I0918 20:01:26.763588   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:26.763617   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:26.763927   26827 main.go:141] libmachine: (ha-091565) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:13:d8", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:01:12 +0000 UTC Type:0 Mac:52:54:00:2a:13:d8 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-091565 Clientid:01:52:54:00:2a:13:d8}
	I0918 20:01:26.763960   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined IP address 192.168.39.215 and MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:26.763986   26827 main.go:141] libmachine: (ha-091565) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:13:d8", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:01:12 +0000 UTC Type:0 Mac:52:54:00:2a:13:d8 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-091565 Clientid:01:52:54:00:2a:13:d8}
	I0918 20:01:26.764000   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined IP address 192.168.39.215 and MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:26.764093   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHPort
	I0918 20:01:26.764219   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHPort
	I0918 20:01:26.764334   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHKeyPath
	I0918 20:01:26.764352   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHKeyPath
	I0918 20:01:26.764485   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHUsername
	I0918 20:01:26.764503   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHUsername
	I0918 20:01:26.764654   26827 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565/id_rsa Username:docker}
	I0918 20:01:26.764655   26827 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565/id_rsa Username:docker}
	I0918 20:01:26.887790   26827 ssh_runner.go:195] Run: systemctl --version
	I0918 20:01:26.893767   26827 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0918 20:01:27.057963   26827 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0918 20:01:27.064172   26827 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0918 20:01:27.064252   26827 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0918 20:01:27.080537   26827 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0918 20:01:27.080566   26827 start.go:495] detecting cgroup driver to use...
	I0918 20:01:27.080726   26827 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0918 20:01:27.098904   26827 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0918 20:01:27.113999   26827 docker.go:217] disabling cri-docker service (if available) ...
	I0918 20:01:27.114063   26827 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0918 20:01:27.127448   26827 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0918 20:01:27.140971   26827 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0918 20:01:27.277092   26827 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0918 20:01:27.438944   26827 docker.go:233] disabling docker service ...
	I0918 20:01:27.439019   26827 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0918 20:01:27.452578   26827 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0918 20:01:27.465616   26827 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0918 20:01:27.576240   26827 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0918 20:01:27.692187   26827 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0918 20:01:27.706450   26827 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0918 20:01:27.724470   26827 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0918 20:01:27.724548   26827 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 20:01:27.734691   26827 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0918 20:01:27.734759   26827 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 20:01:27.744841   26827 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 20:01:27.754941   26827 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 20:01:27.765749   26827 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0918 20:01:27.776994   26827 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 20:01:27.787772   26827 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 20:01:27.805476   26827 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
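	The block above rewrites /etc/crio/crio.conf.d/02-crio.conf through a series of sed invocations: setting the pause image, forcing the cgroupfs cgroup manager, pinning conmon's cgroup, and enabling the unprivileged-port sysctl. As a rough sketch, the Go snippet below drives such an edit list; it runs the commands locally through `sh -c`, whereas the log runs them over the SSH runner, and the list is a trimmed illustration rather than minikube's source.

```go
package main

import (
	"fmt"
	"os/exec"
)

// crioEdits mirrors the kind of in-place sed edits shown in the log above
// against /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup manager,
// conmon cgroup). The list is illustrative and not exhaustive.
var crioEdits = []string{
	`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf`,
	`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`,
	`sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf`,
	`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf`,
}

// applyCrioEdits runs each edit through `sh -c`, stopping at the first failure.
func applyCrioEdits() error {
	for _, edit := range crioEdits {
		if out, err := exec.Command("sh", "-c", edit).CombinedOutput(); err != nil {
			return fmt.Errorf("%q failed: %v: %s", edit, err, out)
		}
	}
	return nil
}

func main() {
	if err := applyCrioEdits(); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("cri-o configured; restart crio to pick up the changes")
}
```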
	I0918 20:01:27.815577   26827 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0918 20:01:27.824923   26827 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0918 20:01:27.825000   26827 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0918 20:01:27.837394   26827 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0918 20:01:27.847278   26827 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 20:01:27.957450   26827 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0918 20:01:28.049268   26827 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0918 20:01:28.049347   26827 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0918 20:01:28.053609   26827 start.go:563] Will wait 60s for crictl version
	I0918 20:01:28.053664   26827 ssh_runner.go:195] Run: which crictl
	I0918 20:01:28.057561   26827 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0918 20:01:28.095781   26827 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0918 20:01:28.095855   26827 ssh_runner.go:195] Run: crio --version
	I0918 20:01:28.122990   26827 ssh_runner.go:195] Run: crio --version
	I0918 20:01:28.151689   26827 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0918 20:01:28.153185   26827 main.go:141] libmachine: (ha-091565) Calling .GetIP
	I0918 20:01:28.155727   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:28.156071   26827 main.go:141] libmachine: (ha-091565) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:13:d8", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:01:12 +0000 UTC Type:0 Mac:52:54:00:2a:13:d8 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-091565 Clientid:01:52:54:00:2a:13:d8}
	I0918 20:01:28.156102   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined IP address 192.168.39.215 and MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:28.156291   26827 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0918 20:01:28.160094   26827 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0918 20:01:28.172348   26827 kubeadm.go:883] updating cluster {Name:ha-091565 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cl
usterName:ha-091565 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.215 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0918 20:01:28.172455   26827 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0918 20:01:28.172495   26827 ssh_runner.go:195] Run: sudo crictl images --output json
	I0918 20:01:28.202903   26827 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0918 20:01:28.202968   26827 ssh_runner.go:195] Run: which lz4
	I0918 20:01:28.206524   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0918 20:01:28.206640   26827 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0918 20:01:28.210309   26827 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0918 20:01:28.210346   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0918 20:01:29.428932   26827 crio.go:462] duration metric: took 1.222324485s to copy over tarball
	I0918 20:01:29.428998   26827 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0918 20:01:31.427670   26827 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.998650683s)
	I0918 20:01:31.427701   26827 crio.go:469] duration metric: took 1.998743987s to extract the tarball
	I0918 20:01:31.427710   26827 ssh_runner.go:146] rm: /preloaded.tar.lz4
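	The preload step above stats /preloaded.tar.lz4 on the guest, scp's the ~388 MB image tarball when it is missing, extracts it into /var with lz4 while preserving xattrs, and then deletes it. A simplified Go sketch of that check-copy-extract-remove flow follows; it runs everything locally with os/exec (the real flow goes through the SSH runner), and the cache path in main is an assumed placeholder.

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// ensurePreload makes sure the preloaded image tarball has been extracted
// under destDir: stat the target, copy the tarball if it is missing,
// extract it with lz4, then delete it.
func ensurePreload(localTarball, remoteTarball, destDir string) error {
	if _, err := os.Stat(remoteTarball); os.IsNotExist(err) {
		// In minikube this is an scp over the SSH runner; locally a copy stands in.
		if out, err := exec.Command("cp", localTarball, remoteTarball).CombinedOutput(); err != nil {
			return fmt.Errorf("copy tarball: %v: %s", err, out)
		}
	}
	// Preserve xattrs (security.capability) so extracted binaries keep file capabilities.
	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", destDir, "-xf", remoteTarball)
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("extract tarball: %v: %s", err, out)
	}
	return os.Remove(remoteTarball)
}

func main() {
	// The local cache path below is an assumed placeholder for illustration.
	if err := ensurePreload(
		"/home/jenkins/.minikube/cache/preloaded-images.tar.lz4",
		"/preloaded.tar.lz4",
		"/var",
	); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```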
	I0918 20:01:31.465115   26827 ssh_runner.go:195] Run: sudo crictl images --output json
	I0918 20:01:31.512315   26827 crio.go:514] all images are preloaded for cri-o runtime.
	I0918 20:01:31.512340   26827 cache_images.go:84] Images are preloaded, skipping loading
	I0918 20:01:31.512349   26827 kubeadm.go:934] updating node { 192.168.39.215 8443 v1.31.1 crio true true} ...
	I0918 20:01:31.512489   26827 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-091565 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.215
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-091565 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0918 20:01:31.512625   26827 ssh_runner.go:195] Run: crio config
	I0918 20:01:31.557297   26827 cni.go:84] Creating CNI manager for ""
	I0918 20:01:31.557325   26827 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0918 20:01:31.557342   26827 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0918 20:01:31.557362   26827 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.215 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-091565 NodeName:ha-091565 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.215"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.215 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0918 20:01:31.557481   26827 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.215
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-091565"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.215
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.215"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0918 20:01:31.557515   26827 kube-vip.go:115] generating kube-vip config ...
	I0918 20:01:31.557571   26827 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0918 20:01:31.573497   26827 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0918 20:01:31.573622   26827 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
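	The manifest above is what gets dropped into /etc/kubernetes/manifests as the kube-vip static pod: leader election plus an ARP-advertised VIP (192.168.39.254) with control-plane load balancing on port 8443. As a rough illustration of how such a manifest can be parameterized, here is a hypothetical text/template sketch; the trimmed template and its field names are assumptions, not minikube's kube-vip.go template.

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// kubeVIPTmpl is a trimmed-down, hypothetical version of the manifest above;
// only the fields that vary per cluster are templated.
const kubeVIPTmpl = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - name: kube-vip
    image: {{.Image}}
    env:
    - name: address
      value: {{.VIP | printf "%q"}}
    - name: port
      value: "{{.Port}}"
    - name: lb_enable
      value: "{{.EnableLB}}"
  hostNetwork: true
`

type kubeVIPParams struct {
	Image    string
	VIP      string
	Port     int
	EnableLB bool
}

// renderKubeVIP executes the template with the cluster-specific values.
func renderKubeVIP(p kubeVIPParams) (string, error) {
	var buf bytes.Buffer
	t := template.Must(template.New("kube-vip").Parse(kubeVIPTmpl))
	if err := t.Execute(&buf, p); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	out, err := renderKubeVIP(kubeVIPParams{
		Image:    "ghcr.io/kube-vip/kube-vip:v0.8.0",
		VIP:      "192.168.39.254",
		Port:     8443,
		EnableLB: true, // enabled above because the ip_vs modules loaded successfully
	})
	if err != nil {
		panic(err)
	}
	fmt.Print(out)
}
```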
	I0918 20:01:31.573693   26827 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0918 20:01:31.583548   26827 binaries.go:44] Found k8s binaries, skipping transfer
	I0918 20:01:31.583630   26827 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0918 20:01:31.592787   26827 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0918 20:01:31.608721   26827 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0918 20:01:31.624827   26827 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0918 20:01:31.640691   26827 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0918 20:01:31.656477   26827 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0918 20:01:31.660115   26827 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0918 20:01:31.671977   26827 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 20:01:31.797641   26827 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0918 20:01:31.815122   26827 certs.go:68] Setting up /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565 for IP: 192.168.39.215
	I0918 20:01:31.815151   26827 certs.go:194] generating shared ca certs ...
	I0918 20:01:31.815173   26827 certs.go:226] acquiring lock for ca certs: {Name:mkf54e0e71ed884c4372f9d3d4cb308ea4600185 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 20:01:31.815382   26827 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.key
	I0918 20:01:31.815442   26827 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.key
	I0918 20:01:31.815465   26827 certs.go:256] generating profile certs ...
	I0918 20:01:31.815537   26827 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/client.key
	I0918 20:01:31.815566   26827 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/client.crt with IP's: []
	I0918 20:01:31.882711   26827 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/client.crt ...
	I0918 20:01:31.882735   26827 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/client.crt: {Name:mk22393d10a62db8be4ee96423eb8999dca92051 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 20:01:31.882908   26827 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/client.key ...
	I0918 20:01:31.882923   26827 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/client.key: {Name:mk40398d3c215962d47b7b1ac3b33466404e1ae5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 20:01:31.883062   26827 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.key.286a620e
	I0918 20:01:31.883085   26827 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.crt.286a620e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.215 192.168.39.254]
	I0918 20:01:32.176911   26827 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.crt.286a620e ...
	I0918 20:01:32.176938   26827 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.crt.286a620e: {Name:mk6e12e8d7297caa8349fc6fe030d9b3d69c43ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 20:01:32.177087   26827 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.key.286a620e ...
	I0918 20:01:32.177099   26827 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.key.286a620e: {Name:mkbac5b4ddde2084fa4364c4dee4c3ed0d321a5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 20:01:32.177161   26827 certs.go:381] copying /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.crt.286a620e -> /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.crt
	I0918 20:01:32.177247   26827 certs.go:385] copying /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.key.286a620e -> /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.key
	I0918 20:01:32.177297   26827 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/proxy-client.key
	I0918 20:01:32.177310   26827 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/proxy-client.crt with IP's: []
	I0918 20:01:32.272727   26827 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/proxy-client.crt ...
	I0918 20:01:32.272755   26827 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/proxy-client.crt: {Name:mk83a2402d1ff78c6dd742b96bf8c90e2537b4cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 20:01:32.272892   26827 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/proxy-client.key ...
	I0918 20:01:32.272902   26827 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/proxy-client.key: {Name:mk377a0949cdb8c08e373abce1488149f3aaff34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 20:01:32.272968   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0918 20:01:32.272985   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0918 20:01:32.272998   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0918 20:01:32.273010   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0918 20:01:32.273031   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0918 20:01:32.273043   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0918 20:01:32.273055   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0918 20:01:32.273066   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0918 20:01:32.273127   26827 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878.pem (1338 bytes)
	W0918 20:01:32.273161   26827 certs.go:480] ignoring /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878_empty.pem, impossibly tiny 0 bytes
	I0918 20:01:32.273170   26827 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem (1675 bytes)
	I0918 20:01:32.273195   26827 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem (1082 bytes)
	I0918 20:01:32.273219   26827 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem (1123 bytes)
	I0918 20:01:32.273239   26827 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem (1679 bytes)
	I0918 20:01:32.273274   26827 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem (1708 bytes)
	I0918 20:01:32.273302   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0918 20:01:32.273315   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878.pem -> /usr/share/ca-certificates/14878.pem
	I0918 20:01:32.273327   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem -> /usr/share/ca-certificates/148782.pem
	I0918 20:01:32.273874   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0918 20:01:32.300229   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0918 20:01:32.325896   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0918 20:01:32.351512   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0918 20:01:32.377318   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0918 20:01:32.402367   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0918 20:01:32.427668   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0918 20:01:32.452847   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0918 20:01:32.478252   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0918 20:01:32.502486   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878.pem --> /usr/share/ca-certificates/14878.pem (1338 bytes)
	I0918 20:01:32.525747   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem --> /usr/share/ca-certificates/148782.pem (1708 bytes)
	I0918 20:01:32.548776   26827 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0918 20:01:32.568576   26827 ssh_runner.go:195] Run: openssl version
	I0918 20:01:32.574892   26827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14878.pem && ln -fs /usr/share/ca-certificates/14878.pem /etc/ssl/certs/14878.pem"
	I0918 20:01:32.589112   26827 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14878.pem
	I0918 20:01:32.594154   26827 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 18 19:57 /usr/share/ca-certificates/14878.pem
	I0918 20:01:32.594216   26827 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14878.pem
	I0918 20:01:32.601293   26827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14878.pem /etc/ssl/certs/51391683.0"
	I0918 20:01:32.612847   26827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148782.pem && ln -fs /usr/share/ca-certificates/148782.pem /etc/ssl/certs/148782.pem"
	I0918 20:01:32.626745   26827 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148782.pem
	I0918 20:01:32.631036   26827 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 18 19:57 /usr/share/ca-certificates/148782.pem
	I0918 20:01:32.631097   26827 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148782.pem
	I0918 20:01:32.636840   26827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148782.pem /etc/ssl/certs/3ec20f2e.0"
	I0918 20:01:32.647396   26827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0918 20:01:32.658543   26827 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0918 20:01:32.663199   26827 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 18 19:39 /usr/share/ca-certificates/minikubeCA.pem
	I0918 20:01:32.663269   26827 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0918 20:01:32.669178   26827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
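The ls / openssl x509 -hash / ln -fs sequences above follow the standard OpenSSL CA-directory convention: every trusted certificate copied to /usr/share/ca-certificates gets a symlink in /etc/ssl/certs named <subject-hash>.0, which is where the 51391683.0, 3ec20f2e.0 and b5213941.0 names come from. Done by hand for one certificate it looks roughly like this (a sketch, assuming a shell on the node):

    cert=/usr/share/ca-certificates/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$cert")      # subject-name hash, e.g. b5213941
    sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"     # .0 suffix; increment it on a hash collision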
	I0918 20:01:32.680536   26827 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0918 20:01:32.684596   26827 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0918 20:01:32.684652   26827 kubeadm.go:392] StartCluster: {Name:ha-091565 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Clust
erName:ha-091565 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.215 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 20:01:32.684723   26827 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0918 20:01:32.684781   26827 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0918 20:01:32.725657   26827 cri.go:89] found id: ""
	I0918 20:01:32.725738   26827 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0918 20:01:32.736032   26827 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0918 20:01:32.745809   26827 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0918 20:01:32.755660   26827 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0918 20:01:32.755683   26827 kubeadm.go:157] found existing configuration files:
	
	I0918 20:01:32.755734   26827 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0918 20:01:32.765360   26827 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0918 20:01:32.765422   26827 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0918 20:01:32.774977   26827 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0918 20:01:32.784236   26827 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0918 20:01:32.784323   26827 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0918 20:01:32.794385   26827 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0918 20:01:32.803877   26827 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0918 20:01:32.803962   26827 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0918 20:01:32.813974   26827 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0918 20:01:32.824307   26827 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0918 20:01:32.824372   26827 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0918 20:01:32.833810   26827 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0918 20:01:32.930760   26827 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0918 20:01:32.930831   26827 kubeadm.go:310] [preflight] Running pre-flight checks
	I0918 20:01:33.036305   26827 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0918 20:01:33.036446   26827 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0918 20:01:33.036572   26827 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0918 20:01:33.048889   26827 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0918 20:01:33.216902   26827 out.go:235]   - Generating certificates and keys ...
	I0918 20:01:33.217021   26827 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0918 20:01:33.217118   26827 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0918 20:01:33.410022   26827 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0918 20:01:33.571042   26827 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0918 20:01:34.285080   26827 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0918 20:01:34.386506   26827 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0918 20:01:34.560257   26827 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0918 20:01:34.560457   26827 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-091565 localhost] and IPs [192.168.39.215 127.0.0.1 ::1]
	I0918 20:01:34.830386   26827 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0918 20:01:34.830530   26827 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-091565 localhost] and IPs [192.168.39.215 127.0.0.1 ::1]
	I0918 20:01:34.951453   26827 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0918 20:01:35.138903   26827 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0918 20:01:35.238989   26827 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0918 20:01:35.239055   26827 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0918 20:01:35.347180   26827 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0918 20:01:35.486849   26827 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0918 20:01:35.625355   26827 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0918 20:01:35.747961   26827 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0918 20:01:35.790004   26827 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0918 20:01:35.790529   26827 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0918 20:01:35.794055   26827 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0918 20:01:35.796153   26827 out.go:235]   - Booting up control plane ...
	I0918 20:01:35.796260   26827 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0918 20:01:35.796362   26827 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0918 20:01:35.796717   26827 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0918 20:01:35.811747   26827 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0918 20:01:35.820566   26827 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0918 20:01:35.820644   26827 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0918 20:01:35.959348   26827 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0918 20:01:35.959478   26827 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0918 20:01:36.960132   26827 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.00167882s
	I0918 20:01:36.960220   26827 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0918 20:01:42.633375   26827 kubeadm.go:310] [api-check] The API server is healthy after 5.675608776s
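kubeadm gates progress on two local health endpoints here: the kubelet's healthz on 127.0.0.1:10248 and then the apiserver's /healthz on port 8443. The equivalent manual probes from the node would be roughly (a sketch, assuming SSH access):

    curl -s http://127.0.0.1:10248/healthz            # kubelet health; plain HTTP, bound to localhost
    curl -sk https://192.168.39.215:8443/healthz      # apiserver health; -k since the cluster CA is not system-trusted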
	I0918 20:01:42.646137   26827 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0918 20:01:42.670455   26827 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0918 20:01:42.705148   26827 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0918 20:01:42.705327   26827 kubeadm.go:310] [mark-control-plane] Marking the node ha-091565 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0918 20:01:42.722155   26827 kubeadm.go:310] [bootstrap-token] Using token: 1ejtyk.26hc6xxbyyyx578s
	I0918 20:01:42.723458   26827 out.go:235]   - Configuring RBAC rules ...
	I0918 20:01:42.723598   26827 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0918 20:01:42.732040   26827 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0918 20:01:42.744976   26827 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0918 20:01:42.750140   26827 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0918 20:01:42.755732   26827 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0918 20:01:42.762953   26827 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0918 20:01:43.043394   26827 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0918 20:01:43.485553   26827 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0918 20:01:44.041202   26827 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0918 20:01:44.041225   26827 kubeadm.go:310] 
	I0918 20:01:44.041318   26827 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0918 20:01:44.041338   26827 kubeadm.go:310] 
	I0918 20:01:44.041443   26827 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0918 20:01:44.041471   26827 kubeadm.go:310] 
	I0918 20:01:44.041497   26827 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0918 20:01:44.041547   26827 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0918 20:01:44.041640   26827 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0918 20:01:44.041659   26827 kubeadm.go:310] 
	I0918 20:01:44.041751   26827 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0918 20:01:44.041778   26827 kubeadm.go:310] 
	I0918 20:01:44.041846   26827 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0918 20:01:44.041857   26827 kubeadm.go:310] 
	I0918 20:01:44.041977   26827 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0918 20:01:44.042082   26827 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0918 20:01:44.042182   26827 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0918 20:01:44.042190   26827 kubeadm.go:310] 
	I0918 20:01:44.042302   26827 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0918 20:01:44.042416   26827 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0918 20:01:44.042425   26827 kubeadm.go:310] 
	I0918 20:01:44.042517   26827 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 1ejtyk.26hc6xxbyyyx578s \
	I0918 20:01:44.042666   26827 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:8ad46bf288e8cc2226f5a929a768e6c95cddecc97ffc4016be27ef0987b1e36f \
	I0918 20:01:44.042690   26827 kubeadm.go:310] 	--control-plane 
	I0918 20:01:44.042694   26827 kubeadm.go:310] 
	I0918 20:01:44.042795   26827 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0918 20:01:44.042811   26827 kubeadm.go:310] 
	I0918 20:01:44.042929   26827 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 1ejtyk.26hc6xxbyyyx578s \
	I0918 20:01:44.043079   26827 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:8ad46bf288e8cc2226f5a929a768e6c95cddecc97ffc4016be27ef0987b1e36f 
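The --discovery-token-ca-cert-hash in the join commands is the SHA-256 digest of the cluster CA's public key in DER form. It can be recomputed from the CA certificate with the usual kubeadm recipe (a sketch, assuming the certificatesDir /var/lib/minikube/certs shown in the config above):

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'
    # should print 8ad46bf288e8..., the value embedded in the kubeadm join commands above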
	I0918 20:01:44.043428   26827 kubeadm.go:310] W0918 20:01:32.914360     832 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0918 20:01:44.043697   26827 kubeadm.go:310] W0918 20:01:32.915480     832 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0918 20:01:44.043826   26827 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0918 20:01:44.043856   26827 cni.go:84] Creating CNI manager for ""
	I0918 20:01:44.043867   26827 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0918 20:01:44.045606   26827 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0918 20:01:44.046719   26827 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0918 20:01:44.052565   26827 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0918 20:01:44.052591   26827 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0918 20:01:44.074207   26827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0918 20:01:44.422814   26827 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0918 20:01:44.422902   26827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 20:01:44.422924   26827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-091565 minikube.k8s.io/updated_at=2024_09_18T20_01_44_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=85073601a832bd4bbda5d11fa91feafff6ec6b91 minikube.k8s.io/name=ha-091565 minikube.k8s.io/primary=true
	I0918 20:01:44.659852   26827 ops.go:34] apiserver oom_adj: -16
	I0918 20:01:44.660163   26827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 20:01:45.160146   26827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 20:01:45.660152   26827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 20:01:46.161013   26827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 20:01:46.660936   26827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 20:01:47.160166   26827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 20:01:47.266634   26827 kubeadm.go:1113] duration metric: took 2.843807989s to wait for elevateKubeSystemPrivileges
	I0918 20:01:47.266673   26827 kubeadm.go:394] duration metric: took 14.582024612s to StartCluster
	I0918 20:01:47.266695   26827 settings.go:142] acquiring lock: {Name:mk6ae95a69dcbe00c28846409e9f46945a88de2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 20:01:47.266765   26827 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19667-7671/kubeconfig
	I0918 20:01:47.267982   26827 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/kubeconfig: {Name:mk3a3eecadb0164dfdd5d3c4392b5b473a2a9bb0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 20:01:47.268278   26827 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.215 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0918 20:01:47.268306   26827 start.go:241] waiting for startup goroutines ...
	I0918 20:01:47.268323   26827 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0918 20:01:47.268480   26827 addons.go:69] Setting storage-provisioner=true in profile "ha-091565"
	I0918 20:01:47.268500   26827 addons.go:234] Setting addon storage-provisioner=true in "ha-091565"
	I0918 20:01:47.268535   26827 host.go:66] Checking if "ha-091565" exists ...
	I0918 20:01:47.268594   26827 addons.go:69] Setting default-storageclass=true in profile "ha-091565"
	I0918 20:01:47.268631   26827 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-091565"
	I0918 20:01:47.268658   26827 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0918 20:01:47.268843   26827 config.go:182] Loaded profile config "ha-091565": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0918 20:01:47.269530   26827 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 20:01:47.269576   26827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 20:01:47.269584   26827 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 20:01:47.269740   26827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 20:01:47.284536   26827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39723
	I0918 20:01:47.284536   26827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40865
	I0918 20:01:47.285102   26827 main.go:141] libmachine: () Calling .GetVersion
	I0918 20:01:47.285215   26827 main.go:141] libmachine: () Calling .GetVersion
	I0918 20:01:47.285649   26827 main.go:141] libmachine: Using API Version  1
	I0918 20:01:47.285665   26827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 20:01:47.285788   26827 main.go:141] libmachine: Using API Version  1
	I0918 20:01:47.285813   26827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 20:01:47.286000   26827 main.go:141] libmachine: () Calling .GetMachineName
	I0918 20:01:47.286165   26827 main.go:141] libmachine: () Calling .GetMachineName
	I0918 20:01:47.286188   26827 main.go:141] libmachine: (ha-091565) Calling .GetState
	I0918 20:01:47.286733   26827 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 20:01:47.286779   26827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 20:01:47.288227   26827 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19667-7671/kubeconfig
	I0918 20:01:47.288530   26827 kapi.go:59] client config for ha-091565: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/client.crt", KeyFile:"/home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/client.key", CAFile:"/home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0918 20:01:47.289088   26827 cert_rotation.go:140] Starting client certificate rotation controller
	I0918 20:01:47.289302   26827 addons.go:234] Setting addon default-storageclass=true in "ha-091565"
	I0918 20:01:47.289329   26827 host.go:66] Checking if "ha-091565" exists ...
	I0918 20:01:47.289569   26827 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 20:01:47.289600   26827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 20:01:47.302279   26827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33947
	I0918 20:01:47.302845   26827 main.go:141] libmachine: () Calling .GetVersion
	I0918 20:01:47.303361   26827 main.go:141] libmachine: Using API Version  1
	I0918 20:01:47.303390   26827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 20:01:47.303730   26827 main.go:141] libmachine: () Calling .GetMachineName
	I0918 20:01:47.303943   26827 main.go:141] libmachine: (ha-091565) Calling .GetState
	I0918 20:01:47.304502   26827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33887
	I0918 20:01:47.304796   26827 main.go:141] libmachine: () Calling .GetVersion
	I0918 20:01:47.305341   26827 main.go:141] libmachine: Using API Version  1
	I0918 20:01:47.305367   26827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 20:01:47.305641   26827 main.go:141] libmachine: () Calling .GetMachineName
	I0918 20:01:47.305684   26827 main.go:141] libmachine: (ha-091565) Calling .DriverName
	I0918 20:01:47.306081   26827 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 20:01:47.306112   26827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 20:01:47.307722   26827 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 20:01:47.309002   26827 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0918 20:01:47.309023   26827 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0918 20:01:47.309041   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHHostname
	I0918 20:01:47.311945   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:47.312427   26827 main.go:141] libmachine: (ha-091565) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:13:d8", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:01:12 +0000 UTC Type:0 Mac:52:54:00:2a:13:d8 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-091565 Clientid:01:52:54:00:2a:13:d8}
	I0918 20:01:47.312448   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined IP address 192.168.39.215 and MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:47.312599   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHPort
	I0918 20:01:47.312781   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHKeyPath
	I0918 20:01:47.312931   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHUsername
	I0918 20:01:47.313072   26827 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565/id_rsa Username:docker}
	I0918 20:01:47.321291   26827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43097
	I0918 20:01:47.321760   26827 main.go:141] libmachine: () Calling .GetVersion
	I0918 20:01:47.322322   26827 main.go:141] libmachine: Using API Version  1
	I0918 20:01:47.322343   26827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 20:01:47.322630   26827 main.go:141] libmachine: () Calling .GetMachineName
	I0918 20:01:47.322807   26827 main.go:141] libmachine: (ha-091565) Calling .GetState
	I0918 20:01:47.324450   26827 main.go:141] libmachine: (ha-091565) Calling .DriverName
	I0918 20:01:47.324624   26827 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0918 20:01:47.324639   26827 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0918 20:01:47.324656   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHHostname
	I0918 20:01:47.327553   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:47.328031   26827 main.go:141] libmachine: (ha-091565) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:13:d8", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:01:12 +0000 UTC Type:0 Mac:52:54:00:2a:13:d8 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-091565 Clientid:01:52:54:00:2a:13:d8}
	I0918 20:01:47.328103   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined IP address 192.168.39.215 and MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:01:47.328319   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHPort
	I0918 20:01:47.328490   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHKeyPath
	I0918 20:01:47.328627   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHUsername
	I0918 20:01:47.328755   26827 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565/id_rsa Username:docker}
	I0918 20:01:47.399915   26827 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0918 20:01:47.490020   26827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0918 20:01:47.507383   26827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0918 20:01:47.769102   26827 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
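The sed pipeline above patches the coredns ConfigMap so that host.minikube.internal resolves to the host gateway (192.168.39.1) from inside the cluster. Reconstructed from the sed expressions (not read back from the cluster), the injected Corefile stanza looks like:

    hosts {
       192.168.39.1 host.minikube.internal
       fallthrough
    }

together with a log directive added before the errors plugin.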
	I0918 20:01:48.124518   26827 main.go:141] libmachine: Making call to close driver server
	I0918 20:01:48.124546   26827 main.go:141] libmachine: (ha-091565) Calling .Close
	I0918 20:01:48.124566   26827 main.go:141] libmachine: Making call to close driver server
	I0918 20:01:48.124582   26827 main.go:141] libmachine: (ha-091565) Calling .Close
	I0918 20:01:48.124826   26827 main.go:141] libmachine: Successfully made call to close driver server
	I0918 20:01:48.124838   26827 main.go:141] libmachine: Successfully made call to close driver server
	I0918 20:01:48.124842   26827 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 20:01:48.124851   26827 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 20:01:48.124852   26827 main.go:141] libmachine: Making call to close driver server
	I0918 20:01:48.124854   26827 main.go:141] libmachine: (ha-091565) DBG | Closing plugin on server side
	I0918 20:01:48.124854   26827 main.go:141] libmachine: (ha-091565) DBG | Closing plugin on server side
	I0918 20:01:48.124860   26827 main.go:141] libmachine: Making call to close driver server
	I0918 20:01:48.124891   26827 main.go:141] libmachine: (ha-091565) Calling .Close
	I0918 20:01:48.124906   26827 main.go:141] libmachine: (ha-091565) Calling .Close
	I0918 20:01:48.125117   26827 main.go:141] libmachine: (ha-091565) DBG | Closing plugin on server side
	I0918 20:01:48.125151   26827 main.go:141] libmachine: Successfully made call to close driver server
	I0918 20:01:48.125160   26827 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 20:01:48.125197   26827 main.go:141] libmachine: Successfully made call to close driver server
	I0918 20:01:48.125206   26827 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 20:01:48.125293   26827 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0918 20:01:48.125321   26827 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0918 20:01:48.125410   26827 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0918 20:01:48.125420   26827 round_trippers.go:469] Request Headers:
	I0918 20:01:48.125433   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:01:48.125438   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:01:48.140920   26827 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0918 20:01:48.141439   26827 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0918 20:01:48.141452   26827 round_trippers.go:469] Request Headers:
	I0918 20:01:48.141459   26827 round_trippers.go:473]     Content-Type: application/json
	I0918 20:01:48.141463   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:01:48.141466   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:01:48.144763   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:01:48.144914   26827 main.go:141] libmachine: Making call to close driver server
	I0918 20:01:48.144928   26827 main.go:141] libmachine: (ha-091565) Calling .Close
	I0918 20:01:48.145191   26827 main.go:141] libmachine: Successfully made call to close driver server
	I0918 20:01:48.145213   26827 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 20:01:48.145197   26827 main.go:141] libmachine: (ha-091565) DBG | Closing plugin on server side
	I0918 20:01:48.146835   26827 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0918 20:01:48.148231   26827 addons.go:510] duration metric: took 879.91145ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0918 20:01:48.148269   26827 start.go:246] waiting for cluster config update ...
	I0918 20:01:48.148286   26827 start.go:255] writing updated cluster config ...
	I0918 20:01:48.150246   26827 out.go:201] 
	I0918 20:01:48.151820   26827 config.go:182] Loaded profile config "ha-091565": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0918 20:01:48.151905   26827 profile.go:143] Saving config to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/config.json ...
	I0918 20:01:48.153514   26827 out.go:177] * Starting "ha-091565-m02" control-plane node in "ha-091565" cluster
	I0918 20:01:48.154560   26827 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0918 20:01:48.154580   26827 cache.go:56] Caching tarball of preloaded images
	I0918 20:01:48.154669   26827 preload.go:172] Found /home/jenkins/minikube-integration/19667-7671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0918 20:01:48.154681   26827 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0918 20:01:48.154748   26827 profile.go:143] Saving config to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/config.json ...
	I0918 20:01:48.154916   26827 start.go:360] acquireMachinesLock for ha-091565-m02: {Name:mk1b0d68a0b7d9d4bc8204d5d81cb9eb77222526 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 20:01:48.154979   26827 start.go:364] duration metric: took 35.44µs to acquireMachinesLock for "ha-091565-m02"
	I0918 20:01:48.155003   26827 start.go:93] Provisioning new machine with config: &{Name:ha-091565 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.1 ClusterName:ha-091565 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.215 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 Cer
tExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0918 20:01:48.155077   26827 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0918 20:01:48.156472   26827 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0918 20:01:48.156553   26827 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 20:01:48.156597   26827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 20:01:48.171048   26827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41535
	I0918 20:01:48.171579   26827 main.go:141] libmachine: () Calling .GetVersion
	I0918 20:01:48.172102   26827 main.go:141] libmachine: Using API Version  1
	I0918 20:01:48.172121   26827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 20:01:48.172468   26827 main.go:141] libmachine: () Calling .GetMachineName
	I0918 20:01:48.172651   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetMachineName
	I0918 20:01:48.172786   26827 main.go:141] libmachine: (ha-091565-m02) Calling .DriverName
	I0918 20:01:48.172987   26827 start.go:159] libmachine.API.Create for "ha-091565" (driver="kvm2")
	I0918 20:01:48.173015   26827 client.go:168] LocalClient.Create starting
	I0918 20:01:48.173044   26827 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem
	I0918 20:01:48.173085   26827 main.go:141] libmachine: Decoding PEM data...
	I0918 20:01:48.173100   26827 main.go:141] libmachine: Parsing certificate...
	I0918 20:01:48.173147   26827 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem
	I0918 20:01:48.173164   26827 main.go:141] libmachine: Decoding PEM data...
	I0918 20:01:48.173174   26827 main.go:141] libmachine: Parsing certificate...
	I0918 20:01:48.173189   26827 main.go:141] libmachine: Running pre-create checks...
	I0918 20:01:48.173197   26827 main.go:141] libmachine: (ha-091565-m02) Calling .PreCreateCheck
	I0918 20:01:48.173330   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetConfigRaw
	I0918 20:01:48.173685   26827 main.go:141] libmachine: Creating machine...
	I0918 20:01:48.173707   26827 main.go:141] libmachine: (ha-091565-m02) Calling .Create
	I0918 20:01:48.173849   26827 main.go:141] libmachine: (ha-091565-m02) Creating KVM machine...
	I0918 20:01:48.175160   26827 main.go:141] libmachine: (ha-091565-m02) DBG | found existing default KVM network
	I0918 20:01:48.175336   26827 main.go:141] libmachine: (ha-091565-m02) DBG | found existing private KVM network mk-ha-091565
	I0918 20:01:48.175456   26827 main.go:141] libmachine: (ha-091565-m02) Setting up store path in /home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565-m02 ...
	I0918 20:01:48.175493   26827 main.go:141] libmachine: (ha-091565-m02) Building disk image from file:///home/jenkins/minikube-integration/19667-7671/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso
	I0918 20:01:48.175585   26827 main.go:141] libmachine: (ha-091565-m02) DBG | I0918 20:01:48.175471   27201 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19667-7671/.minikube
	I0918 20:01:48.175662   26827 main.go:141] libmachine: (ha-091565-m02) Downloading /home/jenkins/minikube-integration/19667-7671/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19667-7671/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso...
	I0918 20:01:48.401510   26827 main.go:141] libmachine: (ha-091565-m02) DBG | I0918 20:01:48.401363   27201 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565-m02/id_rsa...
	I0918 20:01:48.608450   26827 main.go:141] libmachine: (ha-091565-m02) DBG | I0918 20:01:48.608312   27201 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565-m02/ha-091565-m02.rawdisk...
	I0918 20:01:48.608478   26827 main.go:141] libmachine: (ha-091565-m02) DBG | Writing magic tar header
	I0918 20:01:48.608491   26827 main.go:141] libmachine: (ha-091565-m02) DBG | Writing SSH key tar header
	I0918 20:01:48.608498   26827 main.go:141] libmachine: (ha-091565-m02) DBG | I0918 20:01:48.608419   27201 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565-m02 ...
	I0918 20:01:48.608508   26827 main.go:141] libmachine: (ha-091565-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565-m02
	I0918 20:01:48.608550   26827 main.go:141] libmachine: (ha-091565-m02) Setting executable bit set on /home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565-m02 (perms=drwx------)
	I0918 20:01:48.608571   26827 main.go:141] libmachine: (ha-091565-m02) Setting executable bit set on /home/jenkins/minikube-integration/19667-7671/.minikube/machines (perms=drwxr-xr-x)
	I0918 20:01:48.608596   26827 main.go:141] libmachine: (ha-091565-m02) Setting executable bit set on /home/jenkins/minikube-integration/19667-7671/.minikube (perms=drwxr-xr-x)
	I0918 20:01:48.608618   26827 main.go:141] libmachine: (ha-091565-m02) Setting executable bit set on /home/jenkins/minikube-integration/19667-7671 (perms=drwxrwxr-x)
	I0918 20:01:48.608631   26827 main.go:141] libmachine: (ha-091565-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19667-7671/.minikube/machines
	I0918 20:01:48.608650   26827 main.go:141] libmachine: (ha-091565-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19667-7671/.minikube
	I0918 20:01:48.608662   26827 main.go:141] libmachine: (ha-091565-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19667-7671
	I0918 20:01:48.608675   26827 main.go:141] libmachine: (ha-091565-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0918 20:01:48.608686   26827 main.go:141] libmachine: (ha-091565-m02) DBG | Checking permissions on dir: /home/jenkins
	I0918 20:01:48.608698   26827 main.go:141] libmachine: (ha-091565-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0918 20:01:48.608710   26827 main.go:141] libmachine: (ha-091565-m02) DBG | Checking permissions on dir: /home
	I0918 20:01:48.608728   26827 main.go:141] libmachine: (ha-091565-m02) DBG | Skipping /home - not owner
	I0918 20:01:48.608744   26827 main.go:141] libmachine: (ha-091565-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0918 20:01:48.608754   26827 main.go:141] libmachine: (ha-091565-m02) Creating domain...
	I0918 20:01:48.609781   26827 main.go:141] libmachine: (ha-091565-m02) define libvirt domain using xml: 
	I0918 20:01:48.609802   26827 main.go:141] libmachine: (ha-091565-m02) <domain type='kvm'>
	I0918 20:01:48.609813   26827 main.go:141] libmachine: (ha-091565-m02)   <name>ha-091565-m02</name>
	I0918 20:01:48.609825   26827 main.go:141] libmachine: (ha-091565-m02)   <memory unit='MiB'>2200</memory>
	I0918 20:01:48.609846   26827 main.go:141] libmachine: (ha-091565-m02)   <vcpu>2</vcpu>
	I0918 20:01:48.609855   26827 main.go:141] libmachine: (ha-091565-m02)   <features>
	I0918 20:01:48.609866   26827 main.go:141] libmachine: (ha-091565-m02)     <acpi/>
	I0918 20:01:48.609874   26827 main.go:141] libmachine: (ha-091565-m02)     <apic/>
	I0918 20:01:48.609884   26827 main.go:141] libmachine: (ha-091565-m02)     <pae/>
	I0918 20:01:48.609891   26827 main.go:141] libmachine: (ha-091565-m02)     
	I0918 20:01:48.609898   26827 main.go:141] libmachine: (ha-091565-m02)   </features>
	I0918 20:01:48.609911   26827 main.go:141] libmachine: (ha-091565-m02)   <cpu mode='host-passthrough'>
	I0918 20:01:48.609932   26827 main.go:141] libmachine: (ha-091565-m02)   
	I0918 20:01:48.609948   26827 main.go:141] libmachine: (ha-091565-m02)   </cpu>
	I0918 20:01:48.609957   26827 main.go:141] libmachine: (ha-091565-m02)   <os>
	I0918 20:01:48.609972   26827 main.go:141] libmachine: (ha-091565-m02)     <type>hvm</type>
	I0918 20:01:48.609984   26827 main.go:141] libmachine: (ha-091565-m02)     <boot dev='cdrom'/>
	I0918 20:01:48.609994   26827 main.go:141] libmachine: (ha-091565-m02)     <boot dev='hd'/>
	I0918 20:01:48.610006   26827 main.go:141] libmachine: (ha-091565-m02)     <bootmenu enable='no'/>
	I0918 20:01:48.610016   26827 main.go:141] libmachine: (ha-091565-m02)   </os>
	I0918 20:01:48.610031   26827 main.go:141] libmachine: (ha-091565-m02)   <devices>
	I0918 20:01:48.610042   26827 main.go:141] libmachine: (ha-091565-m02)     <disk type='file' device='cdrom'>
	I0918 20:01:48.610058   26827 main.go:141] libmachine: (ha-091565-m02)       <source file='/home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565-m02/boot2docker.iso'/>
	I0918 20:01:48.610074   26827 main.go:141] libmachine: (ha-091565-m02)       <target dev='hdc' bus='scsi'/>
	I0918 20:01:48.610086   26827 main.go:141] libmachine: (ha-091565-m02)       <readonly/>
	I0918 20:01:48.610096   26827 main.go:141] libmachine: (ha-091565-m02)     </disk>
	I0918 20:01:48.610106   26827 main.go:141] libmachine: (ha-091565-m02)     <disk type='file' device='disk'>
	I0918 20:01:48.610120   26827 main.go:141] libmachine: (ha-091565-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0918 20:01:48.610136   26827 main.go:141] libmachine: (ha-091565-m02)       <source file='/home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565-m02/ha-091565-m02.rawdisk'/>
	I0918 20:01:48.610147   26827 main.go:141] libmachine: (ha-091565-m02)       <target dev='hda' bus='virtio'/>
	I0918 20:01:48.610170   26827 main.go:141] libmachine: (ha-091565-m02)     </disk>
	I0918 20:01:48.610187   26827 main.go:141] libmachine: (ha-091565-m02)     <interface type='network'>
	I0918 20:01:48.610207   26827 main.go:141] libmachine: (ha-091565-m02)       <source network='mk-ha-091565'/>
	I0918 20:01:48.610225   26827 main.go:141] libmachine: (ha-091565-m02)       <model type='virtio'/>
	I0918 20:01:48.610237   26827 main.go:141] libmachine: (ha-091565-m02)     </interface>
	I0918 20:01:48.610247   26827 main.go:141] libmachine: (ha-091565-m02)     <interface type='network'>
	I0918 20:01:48.610255   26827 main.go:141] libmachine: (ha-091565-m02)       <source network='default'/>
	I0918 20:01:48.610265   26827 main.go:141] libmachine: (ha-091565-m02)       <model type='virtio'/>
	I0918 20:01:48.610275   26827 main.go:141] libmachine: (ha-091565-m02)     </interface>
	I0918 20:01:48.610285   26827 main.go:141] libmachine: (ha-091565-m02)     <serial type='pty'>
	I0918 20:01:48.610296   26827 main.go:141] libmachine: (ha-091565-m02)       <target port='0'/>
	I0918 20:01:48.610310   26827 main.go:141] libmachine: (ha-091565-m02)     </serial>
	I0918 20:01:48.610325   26827 main.go:141] libmachine: (ha-091565-m02)     <console type='pty'>
	I0918 20:01:48.610342   26827 main.go:141] libmachine: (ha-091565-m02)       <target type='serial' port='0'/>
	I0918 20:01:48.610353   26827 main.go:141] libmachine: (ha-091565-m02)     </console>
	I0918 20:01:48.610360   26827 main.go:141] libmachine: (ha-091565-m02)     <rng model='virtio'>
	I0918 20:01:48.610371   26827 main.go:141] libmachine: (ha-091565-m02)       <backend model='random'>/dev/random</backend>
	I0918 20:01:48.610380   26827 main.go:141] libmachine: (ha-091565-m02)     </rng>
	I0918 20:01:48.610390   26827 main.go:141] libmachine: (ha-091565-m02)     
	I0918 20:01:48.610396   26827 main.go:141] libmachine: (ha-091565-m02)     
	I0918 20:01:48.610409   26827 main.go:141] libmachine: (ha-091565-m02)   </devices>
	I0918 20:01:48.610423   26827 main.go:141] libmachine: (ha-091565-m02) </domain>
	I0918 20:01:48.610436   26827 main.go:141] libmachine: (ha-091565-m02) 
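	The XML above is the complete libvirt domain definition for the m02 VM: the boot2docker ISO as a SCSI cdrom, the raw disk on virtio, one NIC on mk-ha-091565 plus one on the default network, a serial console, and a virtio RNG. A rough sketch of defining and starting such a domain, assuming the github.com/libvirt/libvirt-go bindings (which need the libvirt C headers at build time); this is illustrative only, not the code path minikube itself uses:

	// define_domain.go: sketch of "define libvirt domain using xml" followed by
	// "Creating domain..." via the libvirt API (assumed github.com/libvirt/libvirt-go).
	package main

	import (
		"log"
		"os"

		libvirt "github.com/libvirt/libvirt-go"
	)

	func main() {
		xmlBytes, err := os.ReadFile("ha-091565-m02.xml") // the <domain> definition logged above
		if err != nil {
			log.Fatal(err)
		}

		conn, err := libvirt.NewConnect("qemu:///system")
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()

		// Persistently define the domain from the XML...
		dom, err := conn.DomainDefineXML(string(xmlBytes))
		if err != nil {
			log.Fatal(err)
		}
		defer dom.Free()

		// ...then actually boot the VM.
		if err := dom.Create(); err != nil {
			log.Fatal(err)
		}
		log.Println("domain defined and started")
	}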
	I0918 20:01:48.617221   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined MAC address 52:54:00:15:ec:ae in network default
	I0918 20:01:48.617722   26827 main.go:141] libmachine: (ha-091565-m02) Ensuring networks are active...
	I0918 20:01:48.617752   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:01:48.618492   26827 main.go:141] libmachine: (ha-091565-m02) Ensuring network default is active
	I0918 20:01:48.618796   26827 main.go:141] libmachine: (ha-091565-m02) Ensuring network mk-ha-091565 is active
	I0918 20:01:48.619157   26827 main.go:141] libmachine: (ha-091565-m02) Getting domain xml...
	I0918 20:01:48.619865   26827 main.go:141] libmachine: (ha-091565-m02) Creating domain...
	I0918 20:01:49.853791   26827 main.go:141] libmachine: (ha-091565-m02) Waiting to get IP...
	I0918 20:01:49.854650   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:01:49.855084   26827 main.go:141] libmachine: (ha-091565-m02) DBG | unable to find current IP address of domain ha-091565-m02 in network mk-ha-091565
	I0918 20:01:49.855112   26827 main.go:141] libmachine: (ha-091565-m02) DBG | I0918 20:01:49.855067   27201 retry.go:31] will retry after 283.999691ms: waiting for machine to come up
	I0918 20:01:50.140266   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:01:50.140696   26827 main.go:141] libmachine: (ha-091565-m02) DBG | unable to find current IP address of domain ha-091565-m02 in network mk-ha-091565
	I0918 20:01:50.140718   26827 main.go:141] libmachine: (ha-091565-m02) DBG | I0918 20:01:50.140668   27201 retry.go:31] will retry after 243.982504ms: waiting for machine to come up
	I0918 20:01:50.386066   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:01:50.386487   26827 main.go:141] libmachine: (ha-091565-m02) DBG | unable to find current IP address of domain ha-091565-m02 in network mk-ha-091565
	I0918 20:01:50.386515   26827 main.go:141] libmachine: (ha-091565-m02) DBG | I0918 20:01:50.386440   27201 retry.go:31] will retry after 384.970289ms: waiting for machine to come up
	I0918 20:01:50.773049   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:01:50.773463   26827 main.go:141] libmachine: (ha-091565-m02) DBG | unable to find current IP address of domain ha-091565-m02 in network mk-ha-091565
	I0918 20:01:50.773490   26827 main.go:141] libmachine: (ha-091565-m02) DBG | I0918 20:01:50.773419   27201 retry.go:31] will retry after 383.687698ms: waiting for machine to come up
	I0918 20:01:51.158968   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:01:51.159478   26827 main.go:141] libmachine: (ha-091565-m02) DBG | unable to find current IP address of domain ha-091565-m02 in network mk-ha-091565
	I0918 20:01:51.159506   26827 main.go:141] libmachine: (ha-091565-m02) DBG | I0918 20:01:51.159430   27201 retry.go:31] will retry after 708.286443ms: waiting for machine to come up
	I0918 20:01:51.869406   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:01:51.869911   26827 main.go:141] libmachine: (ha-091565-m02) DBG | unable to find current IP address of domain ha-091565-m02 in network mk-ha-091565
	I0918 20:01:51.869932   26827 main.go:141] libmachine: (ha-091565-m02) DBG | I0918 20:01:51.869871   27201 retry.go:31] will retry after 693.038682ms: waiting for machine to come up
	I0918 20:01:52.564866   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:01:52.565352   26827 main.go:141] libmachine: (ha-091565-m02) DBG | unable to find current IP address of domain ha-091565-m02 in network mk-ha-091565
	I0918 20:01:52.565380   26827 main.go:141] libmachine: (ha-091565-m02) DBG | I0918 20:01:52.565257   27201 retry.go:31] will retry after 736.537004ms: waiting for machine to come up
	I0918 20:01:53.303205   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:01:53.303598   26827 main.go:141] libmachine: (ha-091565-m02) DBG | unable to find current IP address of domain ha-091565-m02 in network mk-ha-091565
	I0918 20:01:53.303630   26827 main.go:141] libmachine: (ha-091565-m02) DBG | I0918 20:01:53.303562   27201 retry.go:31] will retry after 1.042865785s: waiting for machine to come up
	I0918 20:01:54.347669   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:01:54.348067   26827 main.go:141] libmachine: (ha-091565-m02) DBG | unable to find current IP address of domain ha-091565-m02 in network mk-ha-091565
	I0918 20:01:54.348094   26827 main.go:141] libmachine: (ha-091565-m02) DBG | I0918 20:01:54.348054   27201 retry.go:31] will retry after 1.167725142s: waiting for machine to come up
	I0918 20:01:55.517065   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:01:55.517432   26827 main.go:141] libmachine: (ha-091565-m02) DBG | unable to find current IP address of domain ha-091565-m02 in network mk-ha-091565
	I0918 20:01:55.517468   26827 main.go:141] libmachine: (ha-091565-m02) DBG | I0918 20:01:55.517401   27201 retry.go:31] will retry after 1.527504069s: waiting for machine to come up
	I0918 20:01:57.046257   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:01:57.046707   26827 main.go:141] libmachine: (ha-091565-m02) DBG | unable to find current IP address of domain ha-091565-m02 in network mk-ha-091565
	I0918 20:01:57.046734   26827 main.go:141] libmachine: (ha-091565-m02) DBG | I0918 20:01:57.046662   27201 retry.go:31] will retry after 2.687348908s: waiting for machine to come up
	I0918 20:01:59.735480   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:01:59.736079   26827 main.go:141] libmachine: (ha-091565-m02) DBG | unable to find current IP address of domain ha-091565-m02 in network mk-ha-091565
	I0918 20:01:59.736176   26827 main.go:141] libmachine: (ha-091565-m02) DBG | I0918 20:01:59.736024   27201 retry.go:31] will retry after 2.655283124s: waiting for machine to come up
	I0918 20:02:02.393219   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:02:02.393704   26827 main.go:141] libmachine: (ha-091565-m02) DBG | unable to find current IP address of domain ha-091565-m02 in network mk-ha-091565
	I0918 20:02:02.393725   26827 main.go:141] libmachine: (ha-091565-m02) DBG | I0918 20:02:02.393678   27201 retry.go:31] will retry after 3.65154054s: waiting for machine to come up
	I0918 20:02:06.048480   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:02:06.048911   26827 main.go:141] libmachine: (ha-091565-m02) DBG | unable to find current IP address of domain ha-091565-m02 in network mk-ha-091565
	I0918 20:02:06.048932   26827 main.go:141] libmachine: (ha-091565-m02) DBG | I0918 20:02:06.048885   27201 retry.go:31] will retry after 4.061870544s: waiting for machine to come up
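	The repeated "will retry after ...: waiting for machine to come up" lines are a retry loop that re-checks the network's DHCP leases for the VM's MAC address with growing, jittered delays. A minimal standard-library sketch of that pattern; lookupLeaseIP and the backoff values are placeholders, not minikube's retry.go:

	// wait_for_ip.go: sketch of the retry-with-growing-delay pattern behind the
	// "waiting for machine to come up" lines. lookupLeaseIP stands in for querying
	// the libvirt network's DHCP leases; the delays are illustrative.
	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	var errNoLease = errors.New("unable to find current IP address of domain")

	func lookupLeaseIP(mac string) (string, error) {
		return "", errNoLease // pretend the lease has not appeared yet
	}

	func waitForIP(mac string, timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		delay := 250 * time.Millisecond
		for time.Now().Before(deadline) {
			if ip, err := lookupLeaseIP(mac); err == nil {
				return ip, nil
			}
			// jitter and grow the delay, roughly like the 283ms, 383ms, 708ms, ... sequence above
			sleep := delay + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
			time.Sleep(sleep)
			if delay < 4*time.Second {
				delay *= 2
			}
		}
		return "", fmt.Errorf("timed out waiting for DHCP lease of %s", mac)
	}

	func main() {
		if _, err := waitForIP("52:54:00:21:2b:96", 5*time.Second); err != nil {
			fmt.Println(err)
		}
	}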
	I0918 20:02:10.113660   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:02:10.114089   26827 main.go:141] libmachine: (ha-091565-m02) Found IP for machine: 192.168.39.92
	I0918 20:02:10.114110   26827 main.go:141] libmachine: (ha-091565-m02) Reserving static IP address...
	I0918 20:02:10.114118   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has current primary IP address 192.168.39.92 and MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:02:10.114476   26827 main.go:141] libmachine: (ha-091565-m02) DBG | unable to find host DHCP lease matching {name: "ha-091565-m02", mac: "52:54:00:21:2b:96", ip: "192.168.39.92"} in network mk-ha-091565
	I0918 20:02:10.190986   26827 main.go:141] libmachine: (ha-091565-m02) DBG | Getting to WaitForSSH function...
	I0918 20:02:10.191024   26827 main.go:141] libmachine: (ha-091565-m02) Reserved static IP address: 192.168.39.92
	I0918 20:02:10.191040   26827 main.go:141] libmachine: (ha-091565-m02) Waiting for SSH to be available...
	I0918 20:02:10.193580   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:02:10.194009   26827 main.go:141] libmachine: (ha-091565-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:2b:96", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:02:02 +0000 UTC Type:0 Mac:52:54:00:21:2b:96 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:minikube Clientid:01:52:54:00:21:2b:96}
	I0918 20:02:10.194037   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined IP address 192.168.39.92 and MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:02:10.194132   26827 main.go:141] libmachine: (ha-091565-m02) DBG | Using SSH client type: external
	I0918 20:02:10.194161   26827 main.go:141] libmachine: (ha-091565-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565-m02/id_rsa (-rw-------)
	I0918 20:02:10.194197   26827 main.go:141] libmachine: (ha-091565-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.92 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0918 20:02:10.194215   26827 main.go:141] libmachine: (ha-091565-m02) DBG | About to run SSH command:
	I0918 20:02:10.194223   26827 main.go:141] libmachine: (ha-091565-m02) DBG | exit 0
	I0918 20:02:10.323932   26827 main.go:141] libmachine: (ha-091565-m02) DBG | SSH cmd err, output: <nil>: 
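	The WaitForSSH step shells out to the system ssh client with the options logged above and repeats `exit 0` until the command exits cleanly. A small os/exec sketch of that check; the host, key path and options are copied from the log, while the polling loop itself is illustrative:

	// wait_for_ssh.go: sketch of the external-ssh availability check
	// ("About to run SSH command: exit 0") shown in the log above.
	package main

	import (
		"log"
		"os/exec"
		"time"
	)

	func sshReady(host, keyPath string) bool {
		cmd := exec.Command("ssh",
			"-F", "/dev/null",
			"-o", "ConnectionAttempts=3",
			"-o", "ConnectTimeout=10",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "IdentitiesOnly=yes",
			"-i", keyPath,
			"-p", "22",
			"docker@"+host,
			"exit 0",
		)
		return cmd.Run() == nil // exit status 0 means sshd is up and the key works
	}

	func main() {
		host := "192.168.39.92"
		key := "/home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565-m02/id_rsa"
		for !sshReady(host, key) {
			log.Println("SSH not available yet, retrying...")
			time.Sleep(2 * time.Second)
		}
		log.Println("SSH is available")
	}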
	I0918 20:02:10.324269   26827 main.go:141] libmachine: (ha-091565-m02) KVM machine creation complete!
	I0918 20:02:10.324574   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetConfigRaw
	I0918 20:02:10.325151   26827 main.go:141] libmachine: (ha-091565-m02) Calling .DriverName
	I0918 20:02:10.325341   26827 main.go:141] libmachine: (ha-091565-m02) Calling .DriverName
	I0918 20:02:10.325477   26827 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0918 20:02:10.325492   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetState
	I0918 20:02:10.326893   26827 main.go:141] libmachine: Detecting operating system of created instance...
	I0918 20:02:10.326917   26827 main.go:141] libmachine: Waiting for SSH to be available...
	I0918 20:02:10.326923   26827 main.go:141] libmachine: Getting to WaitForSSH function...
	I0918 20:02:10.326931   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHHostname
	I0918 20:02:10.329564   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:02:10.330006   26827 main.go:141] libmachine: (ha-091565-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:2b:96", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:02:02 +0000 UTC Type:0 Mac:52:54:00:21:2b:96 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-091565-m02 Clientid:01:52:54:00:21:2b:96}
	I0918 20:02:10.330033   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined IP address 192.168.39.92 and MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:02:10.330172   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHPort
	I0918 20:02:10.330344   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHKeyPath
	I0918 20:02:10.330500   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHKeyPath
	I0918 20:02:10.330636   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHUsername
	I0918 20:02:10.330796   26827 main.go:141] libmachine: Using SSH client type: native
	I0918 20:02:10.331010   26827 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I0918 20:02:10.331023   26827 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0918 20:02:10.443345   26827 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0918 20:02:10.443373   26827 main.go:141] libmachine: Detecting the provisioner...
	I0918 20:02:10.443397   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHHostname
	I0918 20:02:10.446214   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:02:10.446561   26827 main.go:141] libmachine: (ha-091565-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:2b:96", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:02:02 +0000 UTC Type:0 Mac:52:54:00:21:2b:96 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-091565-m02 Clientid:01:52:54:00:21:2b:96}
	I0918 20:02:10.446609   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined IP address 192.168.39.92 and MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:02:10.446805   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHPort
	I0918 20:02:10.447003   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHKeyPath
	I0918 20:02:10.447152   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHKeyPath
	I0918 20:02:10.447299   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHUsername
	I0918 20:02:10.447466   26827 main.go:141] libmachine: Using SSH client type: native
	I0918 20:02:10.447651   26827 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I0918 20:02:10.447661   26827 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0918 20:02:10.560498   26827 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0918 20:02:10.560569   26827 main.go:141] libmachine: found compatible host: buildroot
	I0918 20:02:10.560579   26827 main.go:141] libmachine: Provisioning with buildroot...
	I0918 20:02:10.560587   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetMachineName
	I0918 20:02:10.560807   26827 buildroot.go:166] provisioning hostname "ha-091565-m02"
	I0918 20:02:10.560829   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetMachineName
	I0918 20:02:10.561019   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHHostname
	I0918 20:02:10.563200   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:02:10.563504   26827 main.go:141] libmachine: (ha-091565-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:2b:96", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:02:02 +0000 UTC Type:0 Mac:52:54:00:21:2b:96 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-091565-m02 Clientid:01:52:54:00:21:2b:96}
	I0918 20:02:10.563529   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined IP address 192.168.39.92 and MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:02:10.563719   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHPort
	I0918 20:02:10.563862   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHKeyPath
	I0918 20:02:10.564010   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHKeyPath
	I0918 20:02:10.564147   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHUsername
	I0918 20:02:10.564297   26827 main.go:141] libmachine: Using SSH client type: native
	I0918 20:02:10.564453   26827 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I0918 20:02:10.564464   26827 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-091565-m02 && echo "ha-091565-m02" | sudo tee /etc/hostname
	I0918 20:02:10.691295   26827 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-091565-m02
	
	I0918 20:02:10.691325   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHHostname
	I0918 20:02:10.693996   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:02:10.694327   26827 main.go:141] libmachine: (ha-091565-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:2b:96", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:02:02 +0000 UTC Type:0 Mac:52:54:00:21:2b:96 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-091565-m02 Clientid:01:52:54:00:21:2b:96}
	I0918 20:02:10.694365   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined IP address 192.168.39.92 and MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:02:10.694501   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHPort
	I0918 20:02:10.694688   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHKeyPath
	I0918 20:02:10.694846   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHKeyPath
	I0918 20:02:10.694979   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHUsername
	I0918 20:02:10.695122   26827 main.go:141] libmachine: Using SSH client type: native
	I0918 20:02:10.695275   26827 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I0918 20:02:10.695290   26827 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-091565-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-091565-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-091565-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0918 20:02:10.816522   26827 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0918 20:02:10.816548   26827 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19667-7671/.minikube CaCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19667-7671/.minikube}
	I0918 20:02:10.816563   26827 buildroot.go:174] setting up certificates
	I0918 20:02:10.816571   26827 provision.go:84] configureAuth start
	I0918 20:02:10.816581   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetMachineName
	I0918 20:02:10.816839   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetIP
	I0918 20:02:10.819595   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:02:10.819999   26827 main.go:141] libmachine: (ha-091565-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:2b:96", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:02:02 +0000 UTC Type:0 Mac:52:54:00:21:2b:96 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-091565-m02 Clientid:01:52:54:00:21:2b:96}
	I0918 20:02:10.820045   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined IP address 192.168.39.92 and MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:02:10.820197   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHHostname
	I0918 20:02:10.822853   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:02:10.823229   26827 main.go:141] libmachine: (ha-091565-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:2b:96", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:02:02 +0000 UTC Type:0 Mac:52:54:00:21:2b:96 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-091565-m02 Clientid:01:52:54:00:21:2b:96}
	I0918 20:02:10.823283   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined IP address 192.168.39.92 and MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:02:10.823418   26827 provision.go:143] copyHostCerts
	I0918 20:02:10.823446   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem
	I0918 20:02:10.823472   26827 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem, removing ...
	I0918 20:02:10.823482   26827 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem
	I0918 20:02:10.823549   26827 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem (1679 bytes)
	I0918 20:02:10.823626   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem
	I0918 20:02:10.823644   26827 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem, removing ...
	I0918 20:02:10.823651   26827 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem
	I0918 20:02:10.823674   26827 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem (1082 bytes)
	I0918 20:02:10.823715   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem
	I0918 20:02:10.823731   26827 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem, removing ...
	I0918 20:02:10.823737   26827 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem
	I0918 20:02:10.823757   26827 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem (1123 bytes)
	I0918 20:02:10.823804   26827 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem org=jenkins.ha-091565-m02 san=[127.0.0.1 192.168.39.92 ha-091565-m02 localhost minikube]
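	The server certificate is issued from the existing minikube CA with the DNS and IP SANs listed in the log line. A compact crypto/x509 sketch of issuing such a certificate; it uses a throw-away CA and simplified key/serial handling rather than minikube's ca.pem/ca-key.pem files, and only the SAN values and org mirror the log:

	// server_cert.go: sketch of "generating server cert ... san=[...]" with crypto/x509.
	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"log"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Throw-away CA instead of loading ca.pem/ca-key.pem as the real code does.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(10, 0, 0),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Server certificate with the DNS and IP SANs from the log line above.
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-091565-m02"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(3, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"ha-091565-m02", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.92")},
		}
		srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		if err != nil {
			log.Fatal(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
	}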
	I0918 20:02:11.057033   26827 provision.go:177] copyRemoteCerts
	I0918 20:02:11.057095   26827 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0918 20:02:11.057117   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHHostname
	I0918 20:02:11.059721   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:02:11.060054   26827 main.go:141] libmachine: (ha-091565-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:2b:96", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:02:02 +0000 UTC Type:0 Mac:52:54:00:21:2b:96 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-091565-m02 Clientid:01:52:54:00:21:2b:96}
	I0918 20:02:11.060083   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined IP address 192.168.39.92 and MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:02:11.060241   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHPort
	I0918 20:02:11.060442   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHKeyPath
	I0918 20:02:11.060560   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHUsername
	I0918 20:02:11.060670   26827 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565-m02/id_rsa Username:docker}
	I0918 20:02:11.145946   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0918 20:02:11.146020   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0918 20:02:11.169808   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0918 20:02:11.169883   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0918 20:02:11.192067   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0918 20:02:11.192133   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0918 20:02:11.213945   26827 provision.go:87] duration metric: took 397.362437ms to configureAuth
	I0918 20:02:11.213974   26827 buildroot.go:189] setting minikube options for container-runtime
	I0918 20:02:11.214161   26827 config.go:182] Loaded profile config "ha-091565": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0918 20:02:11.214232   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHHostname
	I0918 20:02:11.216594   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:02:11.216996   26827 main.go:141] libmachine: (ha-091565-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:2b:96", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:02:02 +0000 UTC Type:0 Mac:52:54:00:21:2b:96 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-091565-m02 Clientid:01:52:54:00:21:2b:96}
	I0918 20:02:11.217027   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined IP address 192.168.39.92 and MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:02:11.217192   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHPort
	I0918 20:02:11.217382   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHKeyPath
	I0918 20:02:11.217568   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHKeyPath
	I0918 20:02:11.217782   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHUsername
	I0918 20:02:11.217991   26827 main.go:141] libmachine: Using SSH client type: native
	I0918 20:02:11.218183   26827 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I0918 20:02:11.218201   26827 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0918 20:02:11.450199   26827 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0918 20:02:11.450222   26827 main.go:141] libmachine: Checking connection to Docker...
	I0918 20:02:11.450231   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetURL
	I0918 20:02:11.451440   26827 main.go:141] libmachine: (ha-091565-m02) DBG | Using libvirt version 6000000
	I0918 20:02:11.453501   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:02:11.453892   26827 main.go:141] libmachine: (ha-091565-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:2b:96", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:02:02 +0000 UTC Type:0 Mac:52:54:00:21:2b:96 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-091565-m02 Clientid:01:52:54:00:21:2b:96}
	I0918 20:02:11.453920   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined IP address 192.168.39.92 and MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:02:11.454034   26827 main.go:141] libmachine: Docker is up and running!
	I0918 20:02:11.454051   26827 main.go:141] libmachine: Reticulating splines...
	I0918 20:02:11.454059   26827 client.go:171] duration metric: took 23.281034632s to LocalClient.Create
	I0918 20:02:11.454083   26827 start.go:167] duration metric: took 23.281096503s to libmachine.API.Create "ha-091565"
	I0918 20:02:11.454095   26827 start.go:293] postStartSetup for "ha-091565-m02" (driver="kvm2")
	I0918 20:02:11.454108   26827 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0918 20:02:11.454129   26827 main.go:141] libmachine: (ha-091565-m02) Calling .DriverName
	I0918 20:02:11.454363   26827 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0918 20:02:11.454391   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHHostname
	I0918 20:02:11.456695   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:02:11.457025   26827 main.go:141] libmachine: (ha-091565-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:2b:96", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:02:02 +0000 UTC Type:0 Mac:52:54:00:21:2b:96 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-091565-m02 Clientid:01:52:54:00:21:2b:96}
	I0918 20:02:11.457053   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined IP address 192.168.39.92 and MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:02:11.457216   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHPort
	I0918 20:02:11.457393   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHKeyPath
	I0918 20:02:11.457548   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHUsername
	I0918 20:02:11.457664   26827 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565-m02/id_rsa Username:docker}
	I0918 20:02:11.543806   26827 ssh_runner.go:195] Run: cat /etc/os-release
	I0918 20:02:11.548176   26827 info.go:137] Remote host: Buildroot 2023.02.9
	I0918 20:02:11.548212   26827 filesync.go:126] Scanning /home/jenkins/minikube-integration/19667-7671/.minikube/addons for local assets ...
	I0918 20:02:11.548288   26827 filesync.go:126] Scanning /home/jenkins/minikube-integration/19667-7671/.minikube/files for local assets ...
	I0918 20:02:11.548387   26827 filesync.go:149] local asset: /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem -> 148782.pem in /etc/ssl/certs
	I0918 20:02:11.548401   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem -> /etc/ssl/certs/148782.pem
	I0918 20:02:11.548509   26827 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0918 20:02:11.557991   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem --> /etc/ssl/certs/148782.pem (1708 bytes)
	I0918 20:02:11.580809   26827 start.go:296] duration metric: took 126.700515ms for postStartSetup
	I0918 20:02:11.580869   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetConfigRaw
	I0918 20:02:11.581461   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetIP
	I0918 20:02:11.583798   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:02:11.584145   26827 main.go:141] libmachine: (ha-091565-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:2b:96", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:02:02 +0000 UTC Type:0 Mac:52:54:00:21:2b:96 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-091565-m02 Clientid:01:52:54:00:21:2b:96}
	I0918 20:02:11.584166   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined IP address 192.168.39.92 and MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:02:11.584397   26827 profile.go:143] Saving config to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/config.json ...
	I0918 20:02:11.584590   26827 start.go:128] duration metric: took 23.429501872s to createHost
	I0918 20:02:11.584610   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHHostname
	I0918 20:02:11.586789   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:02:11.587088   26827 main.go:141] libmachine: (ha-091565-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:2b:96", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:02:02 +0000 UTC Type:0 Mac:52:54:00:21:2b:96 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-091565-m02 Clientid:01:52:54:00:21:2b:96}
	I0918 20:02:11.587104   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined IP address 192.168.39.92 and MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:02:11.587289   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHPort
	I0918 20:02:11.587470   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHKeyPath
	I0918 20:02:11.587595   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHKeyPath
	I0918 20:02:11.587738   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHUsername
	I0918 20:02:11.587870   26827 main.go:141] libmachine: Using SSH client type: native
	I0918 20:02:11.588036   26827 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I0918 20:02:11.588047   26827 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0918 20:02:11.700738   26827 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726689731.662490371
	
	I0918 20:02:11.700765   26827 fix.go:216] guest clock: 1726689731.662490371
	I0918 20:02:11.700775   26827 fix.go:229] Guest: 2024-09-18 20:02:11.662490371 +0000 UTC Remote: 2024-09-18 20:02:11.584601507 +0000 UTC m=+73.979326396 (delta=77.888864ms)
	I0918 20:02:11.700793   26827 fix.go:200] guest clock delta is within tolerance: 77.888864ms
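	The guest-clock check runs `date +%s.%N` over SSH, parses the seconds.nanoseconds output, and compares it with the host clock. A small sketch of that comparison; the parsed value mirrors the log, while the 2s tolerance is an assumption, since the log only reports that the 77.9ms delta is within tolerance:

	// clock_delta.go: sketch of the guest clock delta computation logged above.
	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// parseGuestClock turns "1726689731.662490371" into a time.Time.
	func parseGuestClock(out string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var nsec int64
		if len(parts) == 2 {
			if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(sec, nsec), nil
	}

	func main() {
		guest, err := parseGuestClock("1726689731.662490371") // output of `date +%s.%N` over SSH
		if err != nil {
			panic(err)
		}
		delta := time.Since(guest)
		if delta < 0 {
			delta = -delta
		}
		const tolerance = 2 * time.Second // assumed; the log only says "within tolerance"
		fmt.Printf("guest clock delta: %v (tolerance %v, ok=%v)\n", delta, tolerance, delta <= tolerance)
	}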
	I0918 20:02:11.700797   26827 start.go:83] releasing machines lock for "ha-091565-m02", held for 23.545807984s
	I0918 20:02:11.700814   26827 main.go:141] libmachine: (ha-091565-m02) Calling .DriverName
	I0918 20:02:11.701084   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetIP
	I0918 20:02:11.703834   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:02:11.704301   26827 main.go:141] libmachine: (ha-091565-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:2b:96", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:02:02 +0000 UTC Type:0 Mac:52:54:00:21:2b:96 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-091565-m02 Clientid:01:52:54:00:21:2b:96}
	I0918 20:02:11.704332   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined IP address 192.168.39.92 and MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:02:11.706825   26827 out.go:177] * Found network options:
	I0918 20:02:11.708191   26827 out.go:177]   - NO_PROXY=192.168.39.215
	W0918 20:02:11.709336   26827 proxy.go:119] fail to check proxy env: Error ip not in block
	I0918 20:02:11.709382   26827 main.go:141] libmachine: (ha-091565-m02) Calling .DriverName
	I0918 20:02:11.710083   26827 main.go:141] libmachine: (ha-091565-m02) Calling .DriverName
	I0918 20:02:11.710311   26827 main.go:141] libmachine: (ha-091565-m02) Calling .DriverName
	I0918 20:02:11.710420   26827 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0918 20:02:11.710463   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHHostname
	W0918 20:02:11.710532   26827 proxy.go:119] fail to check proxy env: Error ip not in block
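	The "fail to check proxy env: Error ip not in block" warnings come from checking whether the node IPs are already covered by the NO_PROXY value shown above. A minimal sketch of that kind of coverage check (exact IP or CIDR entries); this illustrates the idea rather than reproducing minikube's proxy.go:

	// no_proxy_check.go: sketch of deciding whether a node IP is covered by NO_PROXY.
	package main

	import (
		"fmt"
		"net"
		"strings"
	)

	func coveredByNoProxy(ip string, noProxy string) bool {
		target := net.ParseIP(ip)
		for _, entry := range strings.Split(noProxy, ",") {
			entry = strings.TrimSpace(entry)
			if entry == "" {
				continue
			}
			if entry == ip {
				return true // exact IP match
			}
			if _, block, err := net.ParseCIDR(entry); err == nil && block.Contains(target) {
				return true // the entry is a CIDR block containing the IP
			}
		}
		return false
	}

	func main() {
		// NO_PROXY only lists the first control plane, so the new node's IP is not covered.
		fmt.Println(coveredByNoProxy("192.168.39.92", "192.168.39.215"))  // false
		fmt.Println(coveredByNoProxy("192.168.39.92", "192.168.39.0/24")) // true
	}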
	I0918 20:02:11.710615   26827 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0918 20:02:11.710636   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHHostname
	I0918 20:02:11.714007   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:02:11.714090   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:02:11.714449   26827 main.go:141] libmachine: (ha-091565-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:2b:96", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:02:02 +0000 UTC Type:0 Mac:52:54:00:21:2b:96 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-091565-m02 Clientid:01:52:54:00:21:2b:96}
	I0918 20:02:11.714474   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined IP address 192.168.39.92 and MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:02:11.714500   26827 main.go:141] libmachine: (ha-091565-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:2b:96", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:02:02 +0000 UTC Type:0 Mac:52:54:00:21:2b:96 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-091565-m02 Clientid:01:52:54:00:21:2b:96}
	I0918 20:02:11.714515   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined IP address 192.168.39.92 and MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:02:11.714602   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHPort
	I0918 20:02:11.714757   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHPort
	I0918 20:02:11.714809   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHKeyPath
	I0918 20:02:11.714897   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHKeyPath
	I0918 20:02:11.714955   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHUsername
	I0918 20:02:11.715014   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetSSHUsername
	I0918 20:02:11.715075   26827 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565-m02/id_rsa Username:docker}
	I0918 20:02:11.715103   26827 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565-m02/id_rsa Username:docker}
	I0918 20:02:11.951540   26827 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0918 20:02:11.958397   26827 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0918 20:02:11.958472   26827 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0918 20:02:11.975402   26827 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0918 20:02:11.975429   26827 start.go:495] detecting cgroup driver to use...
	I0918 20:02:11.975517   26827 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0918 20:02:11.992284   26827 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0918 20:02:12.006780   26827 docker.go:217] disabling cri-docker service (if available) ...
	I0918 20:02:12.006835   26827 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0918 20:02:12.021223   26827 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0918 20:02:12.035137   26827 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0918 20:02:12.152314   26827 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0918 20:02:12.308984   26827 docker.go:233] disabling docker service ...
	I0918 20:02:12.309056   26827 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0918 20:02:12.322897   26827 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0918 20:02:12.336617   26827 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0918 20:02:12.473473   26827 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0918 20:02:12.584374   26827 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0918 20:02:12.597923   26827 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0918 20:02:12.615683   26827 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0918 20:02:12.615759   26827 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 20:02:12.625760   26827 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0918 20:02:12.625817   26827 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 20:02:12.635917   26827 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 20:02:12.645924   26827 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 20:02:12.655813   26827 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0918 20:02:12.666525   26827 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 20:02:12.676621   26827 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 20:02:12.693200   26827 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 20:02:12.703365   26827 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0918 20:02:12.713885   26827 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0918 20:02:12.713948   26827 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0918 20:02:12.728888   26827 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0918 20:02:12.749626   26827 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 20:02:12.881747   26827 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0918 20:02:12.971475   26827 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0918 20:02:12.971567   26827 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
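	After restarting CRI-O, the provisioner waits up to 60s for /var/run/crio/crio.sock to appear before asking crictl for the runtime version. A small sketch of that wait-for-socket step; the path and timeout come from the log, while the poll interval is an assumption:

	// wait_for_socket.go: sketch of "Will wait 60s for socket path /var/run/crio/crio.sock".
	package main

	import (
		"fmt"
		"os"
		"time"
	)

	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			if _, err := os.Stat(path); err == nil {
				return nil // socket file exists, CRI-O is back up after the restart
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
			}
			time.Sleep(500 * time.Millisecond)
		}
	}

	func main() {
		if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
			fmt.Println(err)
		} else {
			fmt.Println("crio socket is ready")
		}
	}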
	I0918 20:02:12.976879   26827 start.go:563] Will wait 60s for crictl version
	I0918 20:02:12.976965   26827 ssh_runner.go:195] Run: which crictl
	I0918 20:02:12.980716   26827 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0918 20:02:13.019156   26827 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0918 20:02:13.019245   26827 ssh_runner.go:195] Run: crio --version
	I0918 20:02:13.046401   26827 ssh_runner.go:195] Run: crio --version
	I0918 20:02:13.075823   26827 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0918 20:02:13.077052   26827 out.go:177]   - env NO_PROXY=192.168.39.215
	I0918 20:02:13.078258   26827 main.go:141] libmachine: (ha-091565-m02) Calling .GetIP
	I0918 20:02:13.081042   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:02:13.081379   26827 main.go:141] libmachine: (ha-091565-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:2b:96", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:02:02 +0000 UTC Type:0 Mac:52:54:00:21:2b:96 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-091565-m02 Clientid:01:52:54:00:21:2b:96}
	I0918 20:02:13.081410   26827 main.go:141] libmachine: (ha-091565-m02) DBG | domain ha-091565-m02 has defined IP address 192.168.39.92 and MAC address 52:54:00:21:2b:96 in network mk-ha-091565
	I0918 20:02:13.081604   26827 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0918 20:02:13.085957   26827 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0918 20:02:13.098025   26827 mustload.go:65] Loading cluster: ha-091565
	I0918 20:02:13.098236   26827 config.go:182] Loaded profile config "ha-091565": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0918 20:02:13.098500   26827 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 20:02:13.098540   26827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 20:02:13.113020   26827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43137
	I0918 20:02:13.113466   26827 main.go:141] libmachine: () Calling .GetVersion
	I0918 20:02:13.113910   26827 main.go:141] libmachine: Using API Version  1
	I0918 20:02:13.113932   26827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 20:02:13.114242   26827 main.go:141] libmachine: () Calling .GetMachineName
	I0918 20:02:13.114415   26827 main.go:141] libmachine: (ha-091565) Calling .GetState
	I0918 20:02:13.115854   26827 host.go:66] Checking if "ha-091565" exists ...
	I0918 20:02:13.116211   26827 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 20:02:13.116246   26827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 20:02:13.130542   26827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42385
	I0918 20:02:13.130887   26827 main.go:141] libmachine: () Calling .GetVersion
	I0918 20:02:13.131305   26827 main.go:141] libmachine: Using API Version  1
	I0918 20:02:13.131334   26827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 20:02:13.131650   26827 main.go:141] libmachine: () Calling .GetMachineName
	I0918 20:02:13.131812   26827 main.go:141] libmachine: (ha-091565) Calling .DriverName
	I0918 20:02:13.131970   26827 certs.go:68] Setting up /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565 for IP: 192.168.39.92
	I0918 20:02:13.131980   26827 certs.go:194] generating shared ca certs ...
	I0918 20:02:13.131999   26827 certs.go:226] acquiring lock for ca certs: {Name:mkf54e0e71ed884c4372f9d3d4cb308ea4600185 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 20:02:13.132147   26827 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.key
	I0918 20:02:13.132196   26827 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.key
	I0918 20:02:13.132210   26827 certs.go:256] generating profile certs ...
	I0918 20:02:13.132298   26827 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/client.key
	I0918 20:02:13.132328   26827 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.key.7629692a
	I0918 20:02:13.132349   26827 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.crt.7629692a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.215 192.168.39.92 192.168.39.254]
	I0918 20:02:13.381001   26827 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.crt.7629692a ...
	I0918 20:02:13.381032   26827 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.crt.7629692a: {Name:mk24fda3fc7efba8ec26d63c4d1c3262bef6ab2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 20:02:13.381214   26827 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.key.7629692a ...
	I0918 20:02:13.381231   26827 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.key.7629692a: {Name:mk2ca0cef4c9dc7b760b7f2d962b84f60a94bd6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 20:02:13.381333   26827 certs.go:381] copying /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.crt.7629692a -> /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.crt
	I0918 20:02:13.381891   26827 certs.go:385] copying /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.key.7629692a -> /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.key
	I0918 20:02:13.382099   26827 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/proxy-client.key
	I0918 20:02:13.382115   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0918 20:02:13.382140   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0918 20:02:13.382158   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0918 20:02:13.382174   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0918 20:02:13.382188   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0918 20:02:13.382203   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0918 20:02:13.382217   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0918 20:02:13.382242   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0918 20:02:13.382310   26827 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878.pem (1338 bytes)
	W0918 20:02:13.382346   26827 certs.go:480] ignoring /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878_empty.pem, impossibly tiny 0 bytes
	I0918 20:02:13.382356   26827 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem (1675 bytes)
	I0918 20:02:13.382393   26827 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem (1082 bytes)
	I0918 20:02:13.382425   26827 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem (1123 bytes)
	I0918 20:02:13.382456   26827 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem (1679 bytes)
	I0918 20:02:13.382505   26827 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem (1708 bytes)
	I0918 20:02:13.382538   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem -> /usr/share/ca-certificates/148782.pem
	I0918 20:02:13.382565   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0918 20:02:13.382604   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878.pem -> /usr/share/ca-certificates/14878.pem
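For reference, the apiserver serving certificate generated above (crypto.go:68) carries the service VIP 10.96.0.1, loopback, both node IPs and the HA VIP 192.168.39.254 as SANs. Below is a minimal Go sketch of issuing such a certificate from the profile CA with crypto/x509; the file names, the standalone main and the assumption that the CA key is PKCS#1 RSA are illustrative, not minikube's actual code path.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

// loadPEM decodes the first PEM block from a file and returns its DER bytes.
func loadPEM(path string) []byte {
	raw, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		log.Fatalf("no PEM data in %s", path)
	}
	return block.Bytes
}

func main() {
	// Illustrative paths: the profile CA referenced earlier in the log.
	caCert, err := x509.ParseCertificate(loadPEM("ca.crt"))
	if err != nil {
		log.Fatal(err)
	}
	caKey, err := x509.ParsePKCS1PrivateKey(loadPEM("ca.key")) // assumes an "RSA PRIVATE KEY" PEM
	if err != nil {
		log.Fatal(err)
	}

	// SANs mirror the IP list logged by crypto.go:68 above.
	sans := []net.IP{
		net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
		net.ParseIP("192.168.39.215"), net.ParseIP("192.168.39.92"), net.ParseIP("192.168.39.254"),
	}

	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  sans,
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("apiserver.crt", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0644); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("apiserver.key", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)}), 0600); err != nil {
		log.Fatal(err)
	}
}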
	I0918 20:02:13.382670   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHHostname
	I0918 20:02:13.385533   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:02:13.385884   26827 main.go:141] libmachine: (ha-091565) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:13:d8", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:01:12 +0000 UTC Type:0 Mac:52:54:00:2a:13:d8 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-091565 Clientid:01:52:54:00:2a:13:d8}
	I0918 20:02:13.385914   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined IP address 192.168.39.215 and MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:02:13.386036   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHPort
	I0918 20:02:13.386204   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHKeyPath
	I0918 20:02:13.386359   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHUsername
	I0918 20:02:13.386456   26827 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565/id_rsa Username:docker}
	I0918 20:02:13.464434   26827 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0918 20:02:13.469316   26827 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0918 20:02:13.479828   26827 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0918 20:02:13.484029   26827 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0918 20:02:13.493840   26827 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0918 20:02:13.497931   26827 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0918 20:02:13.507815   26827 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0918 20:02:13.512123   26827 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0918 20:02:13.522655   26827 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0918 20:02:13.527051   26827 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0918 20:02:13.538403   26827 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0918 20:02:13.542432   26827 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0918 20:02:13.553060   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0918 20:02:13.579635   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0918 20:02:13.603368   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0918 20:02:13.625998   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0918 20:02:13.648303   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0918 20:02:13.671000   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0918 20:02:13.694050   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0918 20:02:13.719216   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0918 20:02:13.742544   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem --> /usr/share/ca-certificates/148782.pem (1708 bytes)
	I0918 20:02:13.765706   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0918 20:02:13.789848   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878.pem --> /usr/share/ca-certificates/14878.pem (1338 bytes)
	I0918 20:02:13.814441   26827 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0918 20:02:13.831542   26827 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0918 20:02:13.848254   26827 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0918 20:02:13.865737   26827 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0918 20:02:13.881778   26827 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0918 20:02:13.898086   26827 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0918 20:02:13.913537   26827 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0918 20:02:13.929503   26827 ssh_runner.go:195] Run: openssl version
	I0918 20:02:13.934878   26827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0918 20:02:13.945006   26827 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0918 20:02:13.949290   26827 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 18 19:39 /usr/share/ca-certificates/minikubeCA.pem
	I0918 20:02:13.949360   26827 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0918 20:02:13.955252   26827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0918 20:02:13.965953   26827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14878.pem && ln -fs /usr/share/ca-certificates/14878.pem /etc/ssl/certs/14878.pem"
	I0918 20:02:13.976794   26827 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14878.pem
	I0918 20:02:13.981192   26827 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 18 19:57 /usr/share/ca-certificates/14878.pem
	I0918 20:02:13.981245   26827 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14878.pem
	I0918 20:02:13.986694   26827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14878.pem /etc/ssl/certs/51391683.0"
	I0918 20:02:13.996869   26827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148782.pem && ln -fs /usr/share/ca-certificates/148782.pem /etc/ssl/certs/148782.pem"
	I0918 20:02:14.006855   26827 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148782.pem
	I0918 20:02:14.010785   26827 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 18 19:57 /usr/share/ca-certificates/148782.pem
	I0918 20:02:14.010831   26827 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148782.pem
	I0918 20:02:14.016603   26827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148782.pem /etc/ssl/certs/3ec20f2e.0"
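The three command groups above install each CA into the guest's trust store: the PEM is placed under /usr/share/ca-certificates and a /etc/ssl/certs/<subject-hash>.0 symlink (b5213941.0, 51391683.0, 3ec20f2e.0 here) is created so OpenSSL can locate it. A small Go sketch of the same sequence, assuming openssl is on PATH and the process has the needed privileges; paths and the helper name are illustrative:

package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCA mirrors the shell sequence in the log: copy a CA into
// /usr/share/ca-certificates and link /etc/ssl/certs/<subject-hash>.0 to it.
func installCA(pemPath, name string) error {
	dst := filepath.Join("/usr/share/ca-certificates", name)
	data, err := os.ReadFile(pemPath)
	if err != nil {
		return err
	}
	if err := os.WriteFile(dst, data, 0644); err != nil {
		return err
	}
	// "openssl x509 -hash -noout -in <file>" prints the subject hash that
	// becomes the symlink name (e.g. b5213941 for minikubeCA above).
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", dst).Output()
	if err != nil {
		return err
	}
	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
	os.Remove(link) // replace a stale link if present
	return os.Symlink(dst, link)
}

func main() {
	if err := installCA("minikubeCA.pem", "minikubeCA.pem"); err != nil {
		log.Fatal(err)
	}
	fmt.Println("CA installed")
}

The hash printed by openssl x509 -hash is derived from the certificate subject, which is why each CA gets a different link name.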
	I0918 20:02:14.026923   26827 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0918 20:02:14.030483   26827 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0918 20:02:14.030540   26827 kubeadm.go:934] updating node {m02 192.168.39.92 8443 v1.31.1 crio true true} ...
	I0918 20:02:14.030615   26827 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-091565-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.92
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-091565 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0918 20:02:14.030638   26827 kube-vip.go:115] generating kube-vip config ...
	I0918 20:02:14.030669   26827 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0918 20:02:14.046531   26827 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0918 20:02:14.046601   26827 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
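The manifest above is rendered by kube-vip.go:137 from the cluster's HA VIP (192.168.39.254) and API server port (8443). A trimmed text/template sketch that produces an equivalent, abbreviated manifest; the template constant is illustrative and much shorter than the real one:

package main

import (
	"log"
	"os"
	"text/template"
)

// Only the per-cluster fields (the VIP and the API server port) are
// parameterized; the fixed fields match the manifest printed above.
const kubeVipTmpl = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - args: ["manager"]
    env:
    - name: port
      value: "{{ .Port }}"
    - name: address
      value: {{ .VIP }}
    image: ghcr.io/kube-vip/kube-vip:v0.8.0
    name: kube-vip
  hostNetwork: true
`

func main() {
	t := template.Must(template.New("kube-vip").Parse(kubeVipTmpl))
	// Values taken from the log: APIServerHAVIP and APIServerPort.
	err := t.Execute(os.Stdout, struct {
		VIP  string
		Port int
	}{VIP: "192.168.39.254", Port: 8443})
	if err != nil {
		log.Fatal(err)
	}
}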
	I0918 20:02:14.046656   26827 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0918 20:02:14.056509   26827 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0918 20:02:14.056563   26827 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0918 20:02:14.065775   26827 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0918 20:02:14.065800   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0918 20:02:14.065850   26827 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19667-7671/.minikube/cache/linux/amd64/v1.31.1/kubeadm
	I0918 20:02:14.065881   26827 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19667-7671/.minikube/cache/linux/amd64/v1.31.1/kubelet
	I0918 20:02:14.065857   26827 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0918 20:02:14.069919   26827 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0918 20:02:14.069943   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0918 20:02:15.108841   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0918 20:02:15.108916   26827 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0918 20:02:15.113741   26827 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0918 20:02:15.113786   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0918 20:02:15.268546   26827 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0918 20:02:15.304643   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0918 20:02:15.304757   26827 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0918 20:02:15.316920   26827 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0918 20:02:15.316964   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
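The kubectl/kubeadm/kubelet downloads above use URLs of the form ...?checksum=file:<url>.sha256, which tells minikube's downloader to verify each binary against the published SHA-256 before caching it. A hand-rolled Go equivalent of that check, assuming the .sha256 file holds the hex digest (optionally followed by a file name); the helper name is illustrative:

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"log"
	"net/http"
	"os"
	"strings"
)

// fetch downloads url into path and returns the SHA-256 of what was written.
func fetch(url, path string) (string, error) {
	resp, err := http.Get(url)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	f, err := os.Create(path)
	if err != nil {
		return "", err
	}
	defer f.Close()
	h := sha256.New()
	if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
		return "", err
	}
	return hex.EncodeToString(h.Sum(nil)), nil
}

func main() {
	base := "https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet"
	got, err := fetch(base, "kubelet")
	if err != nil {
		log.Fatal(err)
	}
	resp, err := http.Get(base + ".sha256")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	want, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Fatal(err)
	}
	fields := strings.Fields(string(want))
	if len(fields) == 0 || got != fields[0] {
		log.Fatalf("checksum mismatch: got %s want %v", got, fields)
	}
	fmt.Println("kubelet verified")
}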
	I0918 20:02:15.681051   26827 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0918 20:02:15.690458   26827 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0918 20:02:15.707147   26827 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0918 20:02:15.723671   26827 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0918 20:02:15.740654   26827 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0918 20:02:15.744145   26827 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0918 20:02:15.755908   26827 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 20:02:15.867566   26827 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0918 20:02:15.884693   26827 host.go:66] Checking if "ha-091565" exists ...
	I0918 20:02:15.885015   26827 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 20:02:15.885055   26827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 20:02:15.899922   26827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46063
	I0918 20:02:15.900446   26827 main.go:141] libmachine: () Calling .GetVersion
	I0918 20:02:15.900956   26827 main.go:141] libmachine: Using API Version  1
	I0918 20:02:15.900978   26827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 20:02:15.901391   26827 main.go:141] libmachine: () Calling .GetMachineName
	I0918 20:02:15.901591   26827 main.go:141] libmachine: (ha-091565) Calling .DriverName
	I0918 20:02:15.901775   26827 start.go:317] joinCluster: &{Name:ha-091565 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-091565 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.215 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.92 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 20:02:15.901868   26827 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0918 20:02:15.901882   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHHostname
	I0918 20:02:15.904812   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:02:15.905340   26827 main.go:141] libmachine: (ha-091565) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:13:d8", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:01:12 +0000 UTC Type:0 Mac:52:54:00:2a:13:d8 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-091565 Clientid:01:52:54:00:2a:13:d8}
	I0918 20:02:15.905365   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined IP address 192.168.39.215 and MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:02:15.905530   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHPort
	I0918 20:02:15.905692   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHKeyPath
	I0918 20:02:15.905842   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHUsername
	I0918 20:02:15.905998   26827 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565/id_rsa Username:docker}
	I0918 20:02:16.056145   26827 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.92 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0918 20:02:16.056188   26827 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token c3chy6.pphzks8qg9r6i1q7 --discovery-token-ca-cert-hash sha256:8ad46bf288e8cc2226f5a929a768e6c95cddecc97ffc4016be27ef0987b1e36f --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-091565-m02 --control-plane --apiserver-advertise-address=192.168.39.92 --apiserver-bind-port=8443"
	I0918 20:02:39.534299   26827 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token c3chy6.pphzks8qg9r6i1q7 --discovery-token-ca-cert-hash sha256:8ad46bf288e8cc2226f5a929a768e6c95cddecc97ffc4016be27ef0987b1e36f --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-091565-m02 --control-plane --apiserver-advertise-address=192.168.39.92 --apiserver-bind-port=8443": (23.478085214s)
	I0918 20:02:39.534349   26827 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0918 20:02:40.082157   26827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-091565-m02 minikube.k8s.io/updated_at=2024_09_18T20_02_40_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=85073601a832bd4bbda5d11fa91feafff6ec6b91 minikube.k8s.io/name=ha-091565 minikube.k8s.io/primary=false
	I0918 20:02:40.225760   26827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-091565-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0918 20:02:40.371807   26827 start.go:319] duration metric: took 24.470025441s to joinCluster
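After the join, the new node is labeled and its control-plane NoSchedule taint is removed via kubectl over SSH (the two commands above). The same label update could be applied through client-go with a strategic-merge patch; KUBECONFIG handling and the label subset shown here are illustrative, not what minikube actually runs:

package main

import (
	"context"
	"fmt"
	"log"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes KUBECONFIG points at the ha-091565 cluster.
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// A subset of the labels set by the kubectl invocation in the log.
	patch := []byte(`{"metadata":{"labels":{"minikube.k8s.io/name":"ha-091565","minikube.k8s.io/primary":"false"}}}`)
	node, err := cs.CoreV1().Nodes().Patch(context.TODO(), "ha-091565-m02",
		types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("labeled", node.Name)
}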
	I0918 20:02:40.371885   26827 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.92 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0918 20:02:40.372206   26827 config.go:182] Loaded profile config "ha-091565": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0918 20:02:40.373180   26827 out.go:177] * Verifying Kubernetes components...
	I0918 20:02:40.374584   26827 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 20:02:40.624879   26827 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0918 20:02:40.676856   26827 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19667-7671/kubeconfig
	I0918 20:02:40.677129   26827 kapi.go:59] client config for ha-091565: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/client.crt", KeyFile:"/home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/client.key", CAFile:"/home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0918 20:02:40.677196   26827 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.215:8443
	I0918 20:02:40.677413   26827 node_ready.go:35] waiting up to 6m0s for node "ha-091565-m02" to be "Ready" ...
	I0918 20:02:40.677523   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:02:40.677531   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:40.677538   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:40.677545   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:40.686192   26827 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0918 20:02:41.177691   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:02:41.177719   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:41.177732   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:41.177740   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:41.183226   26827 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0918 20:02:41.678101   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:02:41.678120   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:41.678127   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:41.678130   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:41.692857   26827 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0918 20:02:42.177589   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:02:42.177610   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:42.177621   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:42.177625   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:42.180992   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:02:42.677789   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:02:42.677810   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:42.677818   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:42.677822   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:42.682783   26827 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0918 20:02:42.683426   26827 node_ready.go:53] node "ha-091565-m02" has status "Ready":"False"
	I0918 20:02:43.178132   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:02:43.178152   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:43.178164   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:43.178170   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:43.181084   26827 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 20:02:43.678483   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:02:43.678502   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:43.678510   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:43.678515   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:43.683496   26827 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0918 20:02:44.178547   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:02:44.178567   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:44.178576   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:44.178579   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:44.181977   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:02:44.677784   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:02:44.677816   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:44.677827   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:44.677835   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:44.682556   26827 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0918 20:02:45.177682   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:02:45.177710   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:45.177723   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:45.177731   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:45.181803   26827 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0918 20:02:45.182526   26827 node_ready.go:53] node "ha-091565-m02" has status "Ready":"False"
	I0918 20:02:45.677703   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:02:45.677727   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:45.677735   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:45.677739   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:45.684776   26827 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0918 20:02:46.178417   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:02:46.178441   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:46.178448   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:46.178456   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:46.181952   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:02:46.677961   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:02:46.677985   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:46.677992   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:46.677996   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:46.681910   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:02:47.178442   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:02:47.178466   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:47.178474   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:47.178479   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:47.212429   26827 round_trippers.go:574] Response Status: 200 OK in 33 milliseconds
	I0918 20:02:47.213077   26827 node_ready.go:53] node "ha-091565-m02" has status "Ready":"False"
	I0918 20:02:47.678191   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:02:47.678213   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:47.678221   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:47.678225   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:47.682040   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:02:48.178008   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:02:48.178028   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:48.178038   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:48.178043   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:48.181099   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:02:48.677668   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:02:48.677698   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:48.677711   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:48.677717   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:48.681381   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:02:49.178444   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:02:49.178465   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:49.178472   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:49.178475   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:49.182036   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:02:49.678042   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:02:49.678068   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:49.678080   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:49.678088   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:49.690181   26827 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0918 20:02:49.690997   26827 node_ready.go:53] node "ha-091565-m02" has status "Ready":"False"
	I0918 20:02:50.178273   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:02:50.178297   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:50.178304   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:50.178308   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:50.181653   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:02:50.677625   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:02:50.677648   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:50.677656   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:50.677661   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:50.681751   26827 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0918 20:02:51.178317   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:02:51.178366   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:51.178378   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:51.178384   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:51.181883   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:02:51.678030   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:02:51.678058   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:51.678069   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:51.678074   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:51.681343   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:02:52.178201   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:02:52.178228   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:52.178239   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:52.178246   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:52.181149   26827 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 20:02:52.181830   26827 node_ready.go:53] node "ha-091565-m02" has status "Ready":"False"
	I0918 20:02:52.678195   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:02:52.678219   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:52.678227   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:52.678230   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:52.681789   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:02:53.178242   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:02:53.178268   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:53.178279   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:53.178284   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:53.181682   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:02:53.677884   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:02:53.677907   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:53.677916   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:53.677921   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:53.681477   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:02:54.178412   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:02:54.178438   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:54.178445   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:54.178449   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:54.182375   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:02:54.182956   26827 node_ready.go:53] node "ha-091565-m02" has status "Ready":"False"
	I0918 20:02:54.678270   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:02:54.678294   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:54.678301   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:54.678306   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:54.681439   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:02:55.178343   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:02:55.178364   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:55.178372   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:55.178376   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:55.181349   26827 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 20:02:55.678277   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:02:55.678299   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:55.678307   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:55.678312   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:55.681665   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:02:56.177994   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:02:56.178018   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:56.178025   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:56.178030   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:56.181355   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:02:56.678444   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:02:56.678487   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:56.678502   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:56.678506   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:56.682256   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:02:56.683058   26827 node_ready.go:53] node "ha-091565-m02" has status "Ready":"False"
	I0918 20:02:57.178486   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:02:57.178510   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:57.178517   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:57.178521   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:57.182538   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:02:57.678060   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:02:57.678084   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:57.678091   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:57.678096   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:57.681385   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:02:58.177838   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:02:58.177866   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:58.177876   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:58.177887   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:58.181116   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:02:58.677581   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:02:58.677623   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:58.677631   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:58.677634   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:58.681025   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:02:59.178037   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:02:59.178075   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:59.178083   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:59.178087   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:59.182040   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:02:59.182593   26827 node_ready.go:49] node "ha-091565-m02" has status "Ready":"True"
	I0918 20:02:59.182614   26827 node_ready.go:38] duration metric: took 18.505159093s for node "ha-091565-m02" to be "Ready" ...
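node_ready.go polls GET /api/v1/nodes/ha-091565-m02 roughly every 500ms until the node reports Ready, which took about 18.5s above. A client-go sketch of the same wait loop, assuming KUBECONFIG points at the cluster; the node name and the 6-minute budget mirror the log, the rest is illustrative:

package main

import (
	"context"
	"fmt"
	"log"
	"os"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the node's Ready condition is True.
func nodeReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	deadline := time.Now().Add(6 * time.Minute) // matches the 6m0s wait in the log
	for time.Now().Before(deadline) {
		n, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-091565-m02", metav1.GetOptions{})
		if err == nil && nodeReady(n) {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond) // the log polls on roughly this interval
	}
	log.Fatal("timed out waiting for node to become Ready")
}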
	I0918 20:02:59.182625   26827 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0918 20:02:59.182713   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods
	I0918 20:02:59.182724   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:59.182731   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:59.182736   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:59.187930   26827 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0918 20:02:59.193874   26827 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-8zcqk" in "kube-system" namespace to be "Ready" ...
	I0918 20:02:59.193977   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-8zcqk
	I0918 20:02:59.193988   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:59.193999   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:59.194007   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:59.197103   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:02:59.198209   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565
	I0918 20:02:59.198228   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:59.198238   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:59.198256   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:59.201933   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:02:59.202515   26827 pod_ready.go:93] pod "coredns-7c65d6cfc9-8zcqk" in "kube-system" namespace has status "Ready":"True"
	I0918 20:02:59.202532   26827 pod_ready.go:82] duration metric: took 8.636844ms for pod "coredns-7c65d6cfc9-8zcqk" in "kube-system" namespace to be "Ready" ...
	I0918 20:02:59.202541   26827 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-w97kk" in "kube-system" namespace to be "Ready" ...
	I0918 20:02:59.202613   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-w97kk
	I0918 20:02:59.202622   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:59.202631   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:59.202639   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:59.206149   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:02:59.206923   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565
	I0918 20:02:59.206938   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:59.206945   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:59.206948   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:59.210089   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:02:59.211132   26827 pod_ready.go:93] pod "coredns-7c65d6cfc9-w97kk" in "kube-system" namespace has status "Ready":"True"
	I0918 20:02:59.211152   26827 pod_ready.go:82] duration metric: took 8.603074ms for pod "coredns-7c65d6cfc9-w97kk" in "kube-system" namespace to be "Ready" ...
	I0918 20:02:59.211164   26827 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-091565" in "kube-system" namespace to be "Ready" ...
	I0918 20:02:59.211226   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/etcd-ha-091565
	I0918 20:02:59.211237   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:59.211248   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:59.211257   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:59.214280   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:02:59.214888   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565
	I0918 20:02:59.214903   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:59.214912   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:59.214917   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:59.217599   26827 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 20:02:59.218135   26827 pod_ready.go:93] pod "etcd-ha-091565" in "kube-system" namespace has status "Ready":"True"
	I0918 20:02:59.218154   26827 pod_ready.go:82] duration metric: took 6.982451ms for pod "etcd-ha-091565" in "kube-system" namespace to be "Ready" ...
	I0918 20:02:59.218164   26827 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-091565-m02" in "kube-system" namespace to be "Ready" ...
	I0918 20:02:59.218230   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/etcd-ha-091565-m02
	I0918 20:02:59.218241   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:59.218251   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:59.218257   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:59.221067   26827 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 20:02:59.221787   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:02:59.221803   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:59.221813   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:59.221821   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:59.224586   26827 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 20:02:59.225580   26827 pod_ready.go:93] pod "etcd-ha-091565-m02" in "kube-system" namespace has status "Ready":"True"
	I0918 20:02:59.225600   26827 pod_ready.go:82] duration metric: took 7.424608ms for pod "etcd-ha-091565-m02" in "kube-system" namespace to be "Ready" ...
	I0918 20:02:59.225619   26827 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-091565" in "kube-system" namespace to be "Ready" ...
	I0918 20:02:59.379036   26827 request.go:632] Waited for 153.330309ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-091565
	I0918 20:02:59.379109   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-091565
	I0918 20:02:59.379118   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:59.379133   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:59.379139   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:59.384080   26827 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0918 20:02:59.578427   26827 request.go:632] Waited for 193.345723ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/nodes/ha-091565
	I0918 20:02:59.578498   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565
	I0918 20:02:59.578503   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:59.578510   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:59.578515   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:59.581538   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:02:59.581992   26827 pod_ready.go:93] pod "kube-apiserver-ha-091565" in "kube-system" namespace has status "Ready":"True"
	I0918 20:02:59.582010   26827 pod_ready.go:82] duration metric: took 356.380215ms for pod "kube-apiserver-ha-091565" in "kube-system" namespace to be "Ready" ...
	I0918 20:02:59.582019   26827 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-091565-m02" in "kube-system" namespace to be "Ready" ...
	I0918 20:02:59.778110   26827 request.go:632] Waited for 196.027349ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-091565-m02
	I0918 20:02:59.778193   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-091565-m02
	I0918 20:02:59.778199   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:59.778206   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:59.778215   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:59.781615   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:02:59.978660   26827 request.go:632] Waited for 196.397557ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:02:59.978711   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:02:59.978716   26827 round_trippers.go:469] Request Headers:
	I0918 20:02:59.978723   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:02:59.978730   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:02:59.982057   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:02:59.982534   26827 pod_ready.go:93] pod "kube-apiserver-ha-091565-m02" in "kube-system" namespace has status "Ready":"True"
	I0918 20:02:59.982552   26827 pod_ready.go:82] duration metric: took 400.527398ms for pod "kube-apiserver-ha-091565-m02" in "kube-system" namespace to be "Ready" ...
	I0918 20:02:59.982561   26827 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-091565" in "kube-system" namespace to be "Ready" ...
	I0918 20:03:00.178731   26827 request.go:632] Waited for 196.108369ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-091565
	I0918 20:03:00.178818   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-091565
	I0918 20:03:00.178826   26827 round_trippers.go:469] Request Headers:
	I0918 20:03:00.178835   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:03:00.178842   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:03:00.182695   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:03:00.378911   26827 request.go:632] Waited for 195.422738ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/nodes/ha-091565
	I0918 20:03:00.378963   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565
	I0918 20:03:00.378972   26827 round_trippers.go:469] Request Headers:
	I0918 20:03:00.378980   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:03:00.378983   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:03:00.382498   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:03:00.383092   26827 pod_ready.go:93] pod "kube-controller-manager-ha-091565" in "kube-system" namespace has status "Ready":"True"
	I0918 20:03:00.383121   26827 pod_ready.go:82] duration metric: took 400.554078ms for pod "kube-controller-manager-ha-091565" in "kube-system" namespace to be "Ready" ...
	I0918 20:03:00.383131   26827 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-091565-m02" in "kube-system" namespace to be "Ready" ...
	I0918 20:03:00.578098   26827 request.go:632] Waited for 194.899438ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-091565-m02
	I0918 20:03:00.578185   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-091565-m02
	I0918 20:03:00.578193   26827 round_trippers.go:469] Request Headers:
	I0918 20:03:00.578204   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:03:00.578210   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:03:00.581985   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:03:00.779051   26827 request.go:632] Waited for 196.416005ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:03:00.779104   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:03:00.779109   26827 round_trippers.go:469] Request Headers:
	I0918 20:03:00.779116   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:03:00.779121   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:03:00.782383   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:03:00.782978   26827 pod_ready.go:93] pod "kube-controller-manager-ha-091565-m02" in "kube-system" namespace has status "Ready":"True"
	I0918 20:03:00.782999   26827 pod_ready.go:82] duration metric: took 399.861964ms for pod "kube-controller-manager-ha-091565-m02" in "kube-system" namespace to be "Ready" ...
	I0918 20:03:00.783008   26827 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4wm6h" in "kube-system" namespace to be "Ready" ...
	I0918 20:03:00.978573   26827 request.go:632] Waited for 195.502032ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4wm6h
	I0918 20:03:00.978651   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4wm6h
	I0918 20:03:00.978672   26827 round_trippers.go:469] Request Headers:
	I0918 20:03:00.978683   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:03:00.978689   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:03:00.982275   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:03:01.178232   26827 request.go:632] Waited for 195.323029ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/nodes/ha-091565
	I0918 20:03:01.178304   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565
	I0918 20:03:01.178310   26827 round_trippers.go:469] Request Headers:
	I0918 20:03:01.178317   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:03:01.178320   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:03:01.181251   26827 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 20:03:01.181856   26827 pod_ready.go:93] pod "kube-proxy-4wm6h" in "kube-system" namespace has status "Ready":"True"
	I0918 20:03:01.181875   26827 pod_ready.go:82] duration metric: took 398.861474ms for pod "kube-proxy-4wm6h" in "kube-system" namespace to be "Ready" ...
	I0918 20:03:01.181884   26827 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-bxblp" in "kube-system" namespace to be "Ready" ...
	I0918 20:03:01.379020   26827 request.go:632] Waited for 197.061195ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bxblp
	I0918 20:03:01.379094   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bxblp
	I0918 20:03:01.379101   26827 round_trippers.go:469] Request Headers:
	I0918 20:03:01.379112   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:03:01.379117   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:03:01.384213   26827 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0918 20:03:01.578259   26827 request.go:632] Waited for 193.306434ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:03:01.578314   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:03:01.578319   26827 round_trippers.go:469] Request Headers:
	I0918 20:03:01.578326   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:03:01.578331   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:03:01.581837   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:03:01.582292   26827 pod_ready.go:93] pod "kube-proxy-bxblp" in "kube-system" namespace has status "Ready":"True"
	I0918 20:03:01.582308   26827 pod_ready.go:82] duration metric: took 400.4182ms for pod "kube-proxy-bxblp" in "kube-system" namespace to be "Ready" ...
	I0918 20:03:01.582315   26827 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-091565" in "kube-system" namespace to be "Ready" ...
	I0918 20:03:01.778453   26827 request.go:632] Waited for 196.055453ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-091565
	I0918 20:03:01.778506   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-091565
	I0918 20:03:01.778511   26827 round_trippers.go:469] Request Headers:
	I0918 20:03:01.778518   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:03:01.778522   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:03:01.782644   26827 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0918 20:03:01.978591   26827 request.go:632] Waited for 195.380537ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/nodes/ha-091565
	I0918 20:03:01.978678   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565
	I0918 20:03:01.978686   26827 round_trippers.go:469] Request Headers:
	I0918 20:03:01.978700   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:03:01.978707   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:03:01.982445   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:03:01.982967   26827 pod_ready.go:93] pod "kube-scheduler-ha-091565" in "kube-system" namespace has status "Ready":"True"
	I0918 20:03:01.982989   26827 pod_ready.go:82] duration metric: took 400.667605ms for pod "kube-scheduler-ha-091565" in "kube-system" namespace to be "Ready" ...
	I0918 20:03:01.982998   26827 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-091565-m02" in "kube-system" namespace to be "Ready" ...
	I0918 20:03:02.179055   26827 request.go:632] Waited for 195.997204ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-091565-m02
	I0918 20:03:02.179125   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-091565-m02
	I0918 20:03:02.179132   26827 round_trippers.go:469] Request Headers:
	I0918 20:03:02.179144   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:03:02.179150   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:03:02.182779   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:03:02.378680   26827 request.go:632] Waited for 195.344249ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:03:02.378732   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:03:02.378737   26827 round_trippers.go:469] Request Headers:
	I0918 20:03:02.378744   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:03:02.378749   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:03:02.387672   26827 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0918 20:03:02.388432   26827 pod_ready.go:93] pod "kube-scheduler-ha-091565-m02" in "kube-system" namespace has status "Ready":"True"
	I0918 20:03:02.388454   26827 pod_ready.go:82] duration metric: took 405.448688ms for pod "kube-scheduler-ha-091565-m02" in "kube-system" namespace to be "Ready" ...
	I0918 20:03:02.388468   26827 pod_ready.go:39] duration metric: took 3.205828816s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0918 20:03:02.388484   26827 api_server.go:52] waiting for apiserver process to appear ...
	I0918 20:03:02.388545   26827 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 20:03:02.403691   26827 api_server.go:72] duration metric: took 22.031762634s to wait for apiserver process to appear ...
	I0918 20:03:02.403716   26827 api_server.go:88] waiting for apiserver healthz status ...
	I0918 20:03:02.403738   26827 api_server.go:253] Checking apiserver healthz at https://192.168.39.215:8443/healthz ...
	I0918 20:03:02.408810   26827 api_server.go:279] https://192.168.39.215:8443/healthz returned 200:
	ok
	I0918 20:03:02.408891   26827 round_trippers.go:463] GET https://192.168.39.215:8443/version
	I0918 20:03:02.408903   26827 round_trippers.go:469] Request Headers:
	I0918 20:03:02.408914   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:03:02.408923   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:03:02.409886   26827 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0918 20:03:02.409963   26827 api_server.go:141] control plane version: v1.31.1
	I0918 20:03:02.409977   26827 api_server.go:131] duration metric: took 6.255647ms to wait for apiserver health ...
	I0918 20:03:02.409986   26827 system_pods.go:43] waiting for kube-system pods to appear ...
	I0918 20:03:02.578323   26827 request.go:632] Waited for 168.279427ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods
	I0918 20:03:02.578410   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods
	I0918 20:03:02.578421   26827 round_trippers.go:469] Request Headers:
	I0918 20:03:02.578429   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:03:02.578435   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:03:02.583311   26827 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0918 20:03:02.589108   26827 system_pods.go:59] 17 kube-system pods found
	I0918 20:03:02.589162   26827 system_pods.go:61] "coredns-7c65d6cfc9-8zcqk" [644e8147-96e9-41a1-99b8-d2de17e4798c] Running
	I0918 20:03:02.589168   26827 system_pods.go:61] "coredns-7c65d6cfc9-w97kk" [70428cd6-0523-44c8-89f3-62837b52ca80] Running
	I0918 20:03:02.589172   26827 system_pods.go:61] "etcd-ha-091565" [c5af3e98-c375-448c-9cac-6a83b115ca71] Running
	I0918 20:03:02.589176   26827 system_pods.go:61] "etcd-ha-091565-m02" [71e1c78e-6a11-4b46-baa2-83c98d666cee] Running
	I0918 20:03:02.589180   26827 system_pods.go:61] "kindnet-7fl5w" [5c3a9d82-3815-4aa1-8d04-14be25394dcf] Running
	I0918 20:03:02.589183   26827 system_pods.go:61] "kindnet-bzsqr" [bf1e6f1b-d3ad-439a-9b9a-882e6e989a56] Running
	I0918 20:03:02.589188   26827 system_pods.go:61] "kube-apiserver-ha-091565" [1c2ddbd9-3f78-415b-af21-1d43925382c5] Running
	I0918 20:03:02.589193   26827 system_pods.go:61] "kube-apiserver-ha-091565-m02" [b482ba6c-e415-4e3c-aa56-c17a4f3bcc13] Running
	I0918 20:03:02.589197   26827 system_pods.go:61] "kube-controller-manager-ha-091565" [3812cd1b-619d-4da8-b039-18f7adb53647] Running
	I0918 20:03:02.589206   26827 system_pods.go:61] "kube-controller-manager-ha-091565-m02" [2c3ecce6-b148-498b-b66e-e5c000f51940] Running
	I0918 20:03:02.589210   26827 system_pods.go:61] "kube-proxy-4wm6h" [d6904231-6f64-4447-9932-0cd5d692978b] Running
	I0918 20:03:02.589213   26827 system_pods.go:61] "kube-proxy-bxblp" [68143b05-afd1-409a-aaa6-8b3c6841bbfb] Running
	I0918 20:03:02.589217   26827 system_pods.go:61] "kube-scheduler-ha-091565" [ad2ac667-3328-4ad3-b736-c8c633140e2d] Running
	I0918 20:03:02.589222   26827 system_pods.go:61] "kube-scheduler-ha-091565-m02" [fd27db83-43f3-40a8-9ca7-5570f78d562b] Running
	I0918 20:03:02.589226   26827 system_pods.go:61] "kube-vip-ha-091565" [b3fccd60-ac88-4dda-b8bb-b1b8c45cbfe5] Running
	I0918 20:03:02.589233   26827 system_pods.go:61] "kube-vip-ha-091565-m02" [12ca12ad-6dee-4b50-9273-c48d8f06acf4] Running
	I0918 20:03:02.589236   26827 system_pods.go:61] "storage-provisioner" [b7dffb85-905b-4166-a680-34c77cf87d09] Running
	I0918 20:03:02.589247   26827 system_pods.go:74] duration metric: took 179.252102ms to wait for pod list to return data ...
	I0918 20:03:02.589258   26827 default_sa.go:34] waiting for default service account to be created ...
	I0918 20:03:02.778073   26827 request.go:632] Waited for 188.733447ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/namespaces/default/serviceaccounts
	I0918 20:03:02.778127   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/namespaces/default/serviceaccounts
	I0918 20:03:02.778132   26827 round_trippers.go:469] Request Headers:
	I0918 20:03:02.778141   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:03:02.778148   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:03:02.781930   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:03:02.782168   26827 default_sa.go:45] found service account: "default"
	I0918 20:03:02.782184   26827 default_sa.go:55] duration metric: took 192.91745ms for default service account to be created ...
	I0918 20:03:02.782192   26827 system_pods.go:116] waiting for k8s-apps to be running ...
	I0918 20:03:02.978682   26827 request.go:632] Waited for 196.414466ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods
	I0918 20:03:02.978755   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods
	I0918 20:03:02.978762   26827 round_trippers.go:469] Request Headers:
	I0918 20:03:02.978771   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:03:02.978775   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:03:02.983628   26827 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0918 20:03:02.989503   26827 system_pods.go:86] 17 kube-system pods found
	I0918 20:03:02.989531   26827 system_pods.go:89] "coredns-7c65d6cfc9-8zcqk" [644e8147-96e9-41a1-99b8-d2de17e4798c] Running
	I0918 20:03:02.989536   26827 system_pods.go:89] "coredns-7c65d6cfc9-w97kk" [70428cd6-0523-44c8-89f3-62837b52ca80] Running
	I0918 20:03:02.989540   26827 system_pods.go:89] "etcd-ha-091565" [c5af3e98-c375-448c-9cac-6a83b115ca71] Running
	I0918 20:03:02.989543   26827 system_pods.go:89] "etcd-ha-091565-m02" [71e1c78e-6a11-4b46-baa2-83c98d666cee] Running
	I0918 20:03:02.989547   26827 system_pods.go:89] "kindnet-7fl5w" [5c3a9d82-3815-4aa1-8d04-14be25394dcf] Running
	I0918 20:03:02.989550   26827 system_pods.go:89] "kindnet-bzsqr" [bf1e6f1b-d3ad-439a-9b9a-882e6e989a56] Running
	I0918 20:03:02.989555   26827 system_pods.go:89] "kube-apiserver-ha-091565" [1c2ddbd9-3f78-415b-af21-1d43925382c5] Running
	I0918 20:03:02.989558   26827 system_pods.go:89] "kube-apiserver-ha-091565-m02" [b482ba6c-e415-4e3c-aa56-c17a4f3bcc13] Running
	I0918 20:03:02.989562   26827 system_pods.go:89] "kube-controller-manager-ha-091565" [3812cd1b-619d-4da8-b039-18f7adb53647] Running
	I0918 20:03:02.989565   26827 system_pods.go:89] "kube-controller-manager-ha-091565-m02" [2c3ecce6-b148-498b-b66e-e5c000f51940] Running
	I0918 20:03:02.989568   26827 system_pods.go:89] "kube-proxy-4wm6h" [d6904231-6f64-4447-9932-0cd5d692978b] Running
	I0918 20:03:02.989571   26827 system_pods.go:89] "kube-proxy-bxblp" [68143b05-afd1-409a-aaa6-8b3c6841bbfb] Running
	I0918 20:03:02.989574   26827 system_pods.go:89] "kube-scheduler-ha-091565" [ad2ac667-3328-4ad3-b736-c8c633140e2d] Running
	I0918 20:03:02.989577   26827 system_pods.go:89] "kube-scheduler-ha-091565-m02" [fd27db83-43f3-40a8-9ca7-5570f78d562b] Running
	I0918 20:03:02.989580   26827 system_pods.go:89] "kube-vip-ha-091565" [b3fccd60-ac88-4dda-b8bb-b1b8c45cbfe5] Running
	I0918 20:03:02.989583   26827 system_pods.go:89] "kube-vip-ha-091565-m02" [12ca12ad-6dee-4b50-9273-c48d8f06acf4] Running
	I0918 20:03:02.989590   26827 system_pods.go:89] "storage-provisioner" [b7dffb85-905b-4166-a680-34c77cf87d09] Running
	I0918 20:03:02.989597   26827 system_pods.go:126] duration metric: took 207.397178ms to wait for k8s-apps to be running ...
	I0918 20:03:02.989610   26827 system_svc.go:44] waiting for kubelet service to be running ....
	I0918 20:03:02.989698   26827 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0918 20:03:03.003927   26827 system_svc.go:56] duration metric: took 14.306514ms WaitForService to wait for kubelet
	I0918 20:03:03.003954   26827 kubeadm.go:582] duration metric: took 22.632027977s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0918 20:03:03.003974   26827 node_conditions.go:102] verifying NodePressure condition ...
	I0918 20:03:03.179047   26827 request.go:632] Waited for 174.972185ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/nodes
	I0918 20:03:03.179141   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes
	I0918 20:03:03.179150   26827 round_trippers.go:469] Request Headers:
	I0918 20:03:03.179161   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:03:03.179171   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:03:03.183675   26827 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0918 20:03:03.184384   26827 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0918 20:03:03.184407   26827 node_conditions.go:123] node cpu capacity is 2
	I0918 20:03:03.184443   26827 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0918 20:03:03.184452   26827 node_conditions.go:123] node cpu capacity is 2
	I0918 20:03:03.184459   26827 node_conditions.go:105] duration metric: took 180.479849ms to run NodePressure ...
	I0918 20:03:03.184475   26827 start.go:241] waiting for startup goroutines ...
	I0918 20:03:03.184509   26827 start.go:255] writing updated cluster config ...
	I0918 20:03:03.186759   26827 out.go:201] 
	I0918 20:03:03.188291   26827 config.go:182] Loaded profile config "ha-091565": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0918 20:03:03.188401   26827 profile.go:143] Saving config to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/config.json ...
	I0918 20:03:03.189951   26827 out.go:177] * Starting "ha-091565-m03" control-plane node in "ha-091565" cluster
	I0918 20:03:03.191020   26827 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0918 20:03:03.191045   26827 cache.go:56] Caching tarball of preloaded images
	I0918 20:03:03.191138   26827 preload.go:172] Found /home/jenkins/minikube-integration/19667-7671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0918 20:03:03.191150   26827 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0918 20:03:03.191241   26827 profile.go:143] Saving config to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/config.json ...
	I0918 20:03:03.191410   26827 start.go:360] acquireMachinesLock for ha-091565-m03: {Name:mk1b0d68a0b7d9d4bc8204d5d81cb9eb77222526 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 20:03:03.191465   26827 start.go:364] duration metric: took 34.695µs to acquireMachinesLock for "ha-091565-m03"
	I0918 20:03:03.191486   26827 start.go:93] Provisioning new machine with config: &{Name:ha-091565 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-091565 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.215 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.92 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0918 20:03:03.191596   26827 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0918 20:03:03.193058   26827 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0918 20:03:03.193149   26827 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 20:03:03.193188   26827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 20:03:03.208171   26827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42195
	I0918 20:03:03.208580   26827 main.go:141] libmachine: () Calling .GetVersion
	I0918 20:03:03.209079   26827 main.go:141] libmachine: Using API Version  1
	I0918 20:03:03.209101   26827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 20:03:03.209382   26827 main.go:141] libmachine: () Calling .GetMachineName
	I0918 20:03:03.209530   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetMachineName
	I0918 20:03:03.209649   26827 main.go:141] libmachine: (ha-091565-m03) Calling .DriverName
	I0918 20:03:03.209778   26827 start.go:159] libmachine.API.Create for "ha-091565" (driver="kvm2")
	I0918 20:03:03.209809   26827 client.go:168] LocalClient.Create starting
	I0918 20:03:03.209839   26827 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem
	I0918 20:03:03.209872   26827 main.go:141] libmachine: Decoding PEM data...
	I0918 20:03:03.209887   26827 main.go:141] libmachine: Parsing certificate...
	I0918 20:03:03.209935   26827 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem
	I0918 20:03:03.209954   26827 main.go:141] libmachine: Decoding PEM data...
	I0918 20:03:03.209965   26827 main.go:141] libmachine: Parsing certificate...
	I0918 20:03:03.209982   26827 main.go:141] libmachine: Running pre-create checks...
	I0918 20:03:03.209989   26827 main.go:141] libmachine: (ha-091565-m03) Calling .PreCreateCheck
	I0918 20:03:03.210137   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetConfigRaw
	I0918 20:03:03.210522   26827 main.go:141] libmachine: Creating machine...
	I0918 20:03:03.210535   26827 main.go:141] libmachine: (ha-091565-m03) Calling .Create
	I0918 20:03:03.210656   26827 main.go:141] libmachine: (ha-091565-m03) Creating KVM machine...
	I0918 20:03:03.211861   26827 main.go:141] libmachine: (ha-091565-m03) DBG | found existing default KVM network
	I0918 20:03:03.212028   26827 main.go:141] libmachine: (ha-091565-m03) DBG | found existing private KVM network mk-ha-091565
	I0918 20:03:03.212185   26827 main.go:141] libmachine: (ha-091565-m03) Setting up store path in /home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565-m03 ...
	I0918 20:03:03.212211   26827 main.go:141] libmachine: (ha-091565-m03) Building disk image from file:///home/jenkins/minikube-integration/19667-7671/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso
	I0918 20:03:03.212251   26827 main.go:141] libmachine: (ha-091565-m03) DBG | I0918 20:03:03.212170   27609 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19667-7671/.minikube
	I0918 20:03:03.212315   26827 main.go:141] libmachine: (ha-091565-m03) Downloading /home/jenkins/minikube-integration/19667-7671/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19667-7671/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso...
	I0918 20:03:03.448950   26827 main.go:141] libmachine: (ha-091565-m03) DBG | I0918 20:03:03.448813   27609 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565-m03/id_rsa...
	I0918 20:03:03.656714   26827 main.go:141] libmachine: (ha-091565-m03) DBG | I0918 20:03:03.656571   27609 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565-m03/ha-091565-m03.rawdisk...
	I0918 20:03:03.656743   26827 main.go:141] libmachine: (ha-091565-m03) DBG | Writing magic tar header
	I0918 20:03:03.656757   26827 main.go:141] libmachine: (ha-091565-m03) DBG | Writing SSH key tar header
	I0918 20:03:03.656767   26827 main.go:141] libmachine: (ha-091565-m03) DBG | I0918 20:03:03.656684   27609 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565-m03 ...
	I0918 20:03:03.656796   26827 main.go:141] libmachine: (ha-091565-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565-m03
	I0918 20:03:03.656816   26827 main.go:141] libmachine: (ha-091565-m03) Setting executable bit set on /home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565-m03 (perms=drwx------)
	I0918 20:03:03.656843   26827 main.go:141] libmachine: (ha-091565-m03) Setting executable bit set on /home/jenkins/minikube-integration/19667-7671/.minikube/machines (perms=drwxr-xr-x)
	I0918 20:03:03.656855   26827 main.go:141] libmachine: (ha-091565-m03) Setting executable bit set on /home/jenkins/minikube-integration/19667-7671/.minikube (perms=drwxr-xr-x)
	I0918 20:03:03.656870   26827 main.go:141] libmachine: (ha-091565-m03) Setting executable bit set on /home/jenkins/minikube-integration/19667-7671 (perms=drwxrwxr-x)
	I0918 20:03:03.656884   26827 main.go:141] libmachine: (ha-091565-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0918 20:03:03.656898   26827 main.go:141] libmachine: (ha-091565-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19667-7671/.minikube/machines
	I0918 20:03:03.656911   26827 main.go:141] libmachine: (ha-091565-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19667-7671/.minikube
	I0918 20:03:03.656924   26827 main.go:141] libmachine: (ha-091565-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0918 20:03:03.656938   26827 main.go:141] libmachine: (ha-091565-m03) Creating domain...
	I0918 20:03:03.656953   26827 main.go:141] libmachine: (ha-091565-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19667-7671
	I0918 20:03:03.656966   26827 main.go:141] libmachine: (ha-091565-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0918 20:03:03.656984   26827 main.go:141] libmachine: (ha-091565-m03) DBG | Checking permissions on dir: /home/jenkins
	I0918 20:03:03.656999   26827 main.go:141] libmachine: (ha-091565-m03) DBG | Checking permissions on dir: /home
	I0918 20:03:03.657013   26827 main.go:141] libmachine: (ha-091565-m03) DBG | Skipping /home - not owner
	I0918 20:03:03.657931   26827 main.go:141] libmachine: (ha-091565-m03) define libvirt domain using xml: 
	I0918 20:03:03.657960   26827 main.go:141] libmachine: (ha-091565-m03) <domain type='kvm'>
	I0918 20:03:03.657971   26827 main.go:141] libmachine: (ha-091565-m03)   <name>ha-091565-m03</name>
	I0918 20:03:03.657985   26827 main.go:141] libmachine: (ha-091565-m03)   <memory unit='MiB'>2200</memory>
	I0918 20:03:03.657993   26827 main.go:141] libmachine: (ha-091565-m03)   <vcpu>2</vcpu>
	I0918 20:03:03.658002   26827 main.go:141] libmachine: (ha-091565-m03)   <features>
	I0918 20:03:03.658008   26827 main.go:141] libmachine: (ha-091565-m03)     <acpi/>
	I0918 20:03:03.658012   26827 main.go:141] libmachine: (ha-091565-m03)     <apic/>
	I0918 20:03:03.658017   26827 main.go:141] libmachine: (ha-091565-m03)     <pae/>
	I0918 20:03:03.658024   26827 main.go:141] libmachine: (ha-091565-m03)     
	I0918 20:03:03.658028   26827 main.go:141] libmachine: (ha-091565-m03)   </features>
	I0918 20:03:03.658035   26827 main.go:141] libmachine: (ha-091565-m03)   <cpu mode='host-passthrough'>
	I0918 20:03:03.658040   26827 main.go:141] libmachine: (ha-091565-m03)   
	I0918 20:03:03.658051   26827 main.go:141] libmachine: (ha-091565-m03)   </cpu>
	I0918 20:03:03.658072   26827 main.go:141] libmachine: (ha-091565-m03)   <os>
	I0918 20:03:03.658091   26827 main.go:141] libmachine: (ha-091565-m03)     <type>hvm</type>
	I0918 20:03:03.658100   26827 main.go:141] libmachine: (ha-091565-m03)     <boot dev='cdrom'/>
	I0918 20:03:03.658104   26827 main.go:141] libmachine: (ha-091565-m03)     <boot dev='hd'/>
	I0918 20:03:03.658112   26827 main.go:141] libmachine: (ha-091565-m03)     <bootmenu enable='no'/>
	I0918 20:03:03.658119   26827 main.go:141] libmachine: (ha-091565-m03)   </os>
	I0918 20:03:03.658127   26827 main.go:141] libmachine: (ha-091565-m03)   <devices>
	I0918 20:03:03.658137   26827 main.go:141] libmachine: (ha-091565-m03)     <disk type='file' device='cdrom'>
	I0918 20:03:03.658153   26827 main.go:141] libmachine: (ha-091565-m03)       <source file='/home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565-m03/boot2docker.iso'/>
	I0918 20:03:03.658166   26827 main.go:141] libmachine: (ha-091565-m03)       <target dev='hdc' bus='scsi'/>
	I0918 20:03:03.658176   26827 main.go:141] libmachine: (ha-091565-m03)       <readonly/>
	I0918 20:03:03.658181   26827 main.go:141] libmachine: (ha-091565-m03)     </disk>
	I0918 20:03:03.658187   26827 main.go:141] libmachine: (ha-091565-m03)     <disk type='file' device='disk'>
	I0918 20:03:03.658196   26827 main.go:141] libmachine: (ha-091565-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0918 20:03:03.658208   26827 main.go:141] libmachine: (ha-091565-m03)       <source file='/home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565-m03/ha-091565-m03.rawdisk'/>
	I0918 20:03:03.658218   26827 main.go:141] libmachine: (ha-091565-m03)       <target dev='hda' bus='virtio'/>
	I0918 20:03:03.658230   26827 main.go:141] libmachine: (ha-091565-m03)     </disk>
	I0918 20:03:03.658240   26827 main.go:141] libmachine: (ha-091565-m03)     <interface type='network'>
	I0918 20:03:03.658251   26827 main.go:141] libmachine: (ha-091565-m03)       <source network='mk-ha-091565'/>
	I0918 20:03:03.658261   26827 main.go:141] libmachine: (ha-091565-m03)       <model type='virtio'/>
	I0918 20:03:03.658268   26827 main.go:141] libmachine: (ha-091565-m03)     </interface>
	I0918 20:03:03.658277   26827 main.go:141] libmachine: (ha-091565-m03)     <interface type='network'>
	I0918 20:03:03.658286   26827 main.go:141] libmachine: (ha-091565-m03)       <source network='default'/>
	I0918 20:03:03.658301   26827 main.go:141] libmachine: (ha-091565-m03)       <model type='virtio'/>
	I0918 20:03:03.658313   26827 main.go:141] libmachine: (ha-091565-m03)     </interface>
	I0918 20:03:03.658320   26827 main.go:141] libmachine: (ha-091565-m03)     <serial type='pty'>
	I0918 20:03:03.658333   26827 main.go:141] libmachine: (ha-091565-m03)       <target port='0'/>
	I0918 20:03:03.658342   26827 main.go:141] libmachine: (ha-091565-m03)     </serial>
	I0918 20:03:03.658350   26827 main.go:141] libmachine: (ha-091565-m03)     <console type='pty'>
	I0918 20:03:03.658360   26827 main.go:141] libmachine: (ha-091565-m03)       <target type='serial' port='0'/>
	I0918 20:03:03.658368   26827 main.go:141] libmachine: (ha-091565-m03)     </console>
	I0918 20:03:03.658381   26827 main.go:141] libmachine: (ha-091565-m03)     <rng model='virtio'>
	I0918 20:03:03.658393   26827 main.go:141] libmachine: (ha-091565-m03)       <backend model='random'>/dev/random</backend>
	I0918 20:03:03.658402   26827 main.go:141] libmachine: (ha-091565-m03)     </rng>
	I0918 20:03:03.658410   26827 main.go:141] libmachine: (ha-091565-m03)     
	I0918 20:03:03.658418   26827 main.go:141] libmachine: (ha-091565-m03)     
	I0918 20:03:03.658425   26827 main.go:141] libmachine: (ha-091565-m03)   </devices>
	I0918 20:03:03.658434   26827 main.go:141] libmachine: (ha-091565-m03) </domain>
	I0918 20:03:03.658445   26827 main.go:141] libmachine: (ha-091565-m03) 
	I0918 20:03:03.665123   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined MAC address 52:54:00:28:9c:e9 in network default
	I0918 20:03:03.665651   26827 main.go:141] libmachine: (ha-091565-m03) Ensuring networks are active...
	I0918 20:03:03.665672   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:03.666384   26827 main.go:141] libmachine: (ha-091565-m03) Ensuring network default is active
	I0918 20:03:03.666733   26827 main.go:141] libmachine: (ha-091565-m03) Ensuring network mk-ha-091565 is active
	I0918 20:03:03.667154   26827 main.go:141] libmachine: (ha-091565-m03) Getting domain xml...
	I0918 20:03:03.668052   26827 main.go:141] libmachine: (ha-091565-m03) Creating domain...
	I0918 20:03:04.935268   26827 main.go:141] libmachine: (ha-091565-m03) Waiting to get IP...
	I0918 20:03:04.936028   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:04.936415   26827 main.go:141] libmachine: (ha-091565-m03) DBG | unable to find current IP address of domain ha-091565-m03 in network mk-ha-091565
	I0918 20:03:04.936435   26827 main.go:141] libmachine: (ha-091565-m03) DBG | I0918 20:03:04.936394   27609 retry.go:31] will retry after 190.945774ms: waiting for machine to come up
	I0918 20:03:05.128750   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:05.129236   26827 main.go:141] libmachine: (ha-091565-m03) DBG | unable to find current IP address of domain ha-091565-m03 in network mk-ha-091565
	I0918 20:03:05.129261   26827 main.go:141] libmachine: (ha-091565-m03) DBG | I0918 20:03:05.129196   27609 retry.go:31] will retry after 291.266146ms: waiting for machine to come up
	I0918 20:03:05.422550   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:05.423137   26827 main.go:141] libmachine: (ha-091565-m03) DBG | unable to find current IP address of domain ha-091565-m03 in network mk-ha-091565
	I0918 20:03:05.423170   26827 main.go:141] libmachine: (ha-091565-m03) DBG | I0918 20:03:05.423078   27609 retry.go:31] will retry after 371.409086ms: waiting for machine to come up
	I0918 20:03:05.795700   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:05.796222   26827 main.go:141] libmachine: (ha-091565-m03) DBG | unable to find current IP address of domain ha-091565-m03 in network mk-ha-091565
	I0918 20:03:05.796248   26827 main.go:141] libmachine: (ha-091565-m03) DBG | I0918 20:03:05.796182   27609 retry.go:31] will retry after 527.63812ms: waiting for machine to come up
	I0918 20:03:06.325912   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:06.326349   26827 main.go:141] libmachine: (ha-091565-m03) DBG | unable to find current IP address of domain ha-091565-m03 in network mk-ha-091565
	I0918 20:03:06.326379   26827 main.go:141] libmachine: (ha-091565-m03) DBG | I0918 20:03:06.326307   27609 retry.go:31] will retry after 471.938108ms: waiting for machine to come up
	I0918 20:03:06.799896   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:06.800358   26827 main.go:141] libmachine: (ha-091565-m03) DBG | unable to find current IP address of domain ha-091565-m03 in network mk-ha-091565
	I0918 20:03:06.800384   26827 main.go:141] libmachine: (ha-091565-m03) DBG | I0918 20:03:06.800288   27609 retry.go:31] will retry after 607.364821ms: waiting for machine to come up
	I0918 20:03:07.408959   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:07.409429   26827 main.go:141] libmachine: (ha-091565-m03) DBG | unable to find current IP address of domain ha-091565-m03 in network mk-ha-091565
	I0918 20:03:07.409459   26827 main.go:141] libmachine: (ha-091565-m03) DBG | I0918 20:03:07.409383   27609 retry.go:31] will retry after 864.680144ms: waiting for machine to come up
	I0918 20:03:08.275959   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:08.276377   26827 main.go:141] libmachine: (ha-091565-m03) DBG | unable to find current IP address of domain ha-091565-m03 in network mk-ha-091565
	I0918 20:03:08.276404   26827 main.go:141] libmachine: (ha-091565-m03) DBG | I0918 20:03:08.276319   27609 retry.go:31] will retry after 900.946411ms: waiting for machine to come up
	I0918 20:03:09.178488   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:09.178913   26827 main.go:141] libmachine: (ha-091565-m03) DBG | unable to find current IP address of domain ha-091565-m03 in network mk-ha-091565
	I0918 20:03:09.178936   26827 main.go:141] libmachine: (ha-091565-m03) DBG | I0918 20:03:09.178885   27609 retry.go:31] will retry after 1.803312814s: waiting for machine to come up
	I0918 20:03:10.983480   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:10.983921   26827 main.go:141] libmachine: (ha-091565-m03) DBG | unable to find current IP address of domain ha-091565-m03 in network mk-ha-091565
	I0918 20:03:10.983943   26827 main.go:141] libmachine: (ha-091565-m03) DBG | I0918 20:03:10.983874   27609 retry.go:31] will retry after 2.318003161s: waiting for machine to come up
	I0918 20:03:13.303826   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:13.304364   26827 main.go:141] libmachine: (ha-091565-m03) DBG | unable to find current IP address of domain ha-091565-m03 in network mk-ha-091565
	I0918 20:03:13.304389   26827 main.go:141] libmachine: (ha-091565-m03) DBG | I0918 20:03:13.304319   27609 retry.go:31] will retry after 2.309847279s: waiting for machine to come up
	I0918 20:03:15.615522   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:15.616142   26827 main.go:141] libmachine: (ha-091565-m03) DBG | unable to find current IP address of domain ha-091565-m03 in network mk-ha-091565
	I0918 20:03:15.616170   26827 main.go:141] libmachine: (ha-091565-m03) DBG | I0918 20:03:15.616108   27609 retry.go:31] will retry after 2.559399773s: waiting for machine to come up
	I0918 20:03:18.176689   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:18.177086   26827 main.go:141] libmachine: (ha-091565-m03) DBG | unable to find current IP address of domain ha-091565-m03 in network mk-ha-091565
	I0918 20:03:18.177108   26827 main.go:141] libmachine: (ha-091565-m03) DBG | I0918 20:03:18.177044   27609 retry.go:31] will retry after 4.502260419s: waiting for machine to come up
	I0918 20:03:22.681016   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:22.681368   26827 main.go:141] libmachine: (ha-091565-m03) DBG | unable to find current IP address of domain ha-091565-m03 in network mk-ha-091565
	I0918 20:03:22.681391   26827 main.go:141] libmachine: (ha-091565-m03) DBG | I0918 20:03:22.681330   27609 retry.go:31] will retry after 3.82668599s: waiting for machine to come up
	I0918 20:03:26.510988   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:26.511503   26827 main.go:141] libmachine: (ha-091565-m03) Found IP for machine: 192.168.39.53
	I0918 20:03:26.511523   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has current primary IP address 192.168.39.53 and MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:26.511529   26827 main.go:141] libmachine: (ha-091565-m03) Reserving static IP address...
	I0918 20:03:26.511838   26827 main.go:141] libmachine: (ha-091565-m03) DBG | unable to find host DHCP lease matching {name: "ha-091565-m03", mac: "52:54:00:7c:50:95", ip: "192.168.39.53"} in network mk-ha-091565
	I0918 20:03:26.588090   26827 main.go:141] libmachine: (ha-091565-m03) Reserved static IP address: 192.168.39.53
	I0918 20:03:26.588125   26827 main.go:141] libmachine: (ha-091565-m03) Waiting for SSH to be available...
	I0918 20:03:26.588134   26827 main.go:141] libmachine: (ha-091565-m03) DBG | Getting to WaitForSSH function...
	I0918 20:03:26.590288   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:26.590706   26827 main.go:141] libmachine: (ha-091565-m03) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:7c:50:95", ip: ""} in network mk-ha-091565
	I0918 20:03:26.590731   26827 main.go:141] libmachine: (ha-091565-m03) DBG | unable to find defined IP address of network mk-ha-091565 interface with MAC address 52:54:00:7c:50:95
	I0918 20:03:26.590858   26827 main.go:141] libmachine: (ha-091565-m03) DBG | Using SSH client type: external
	I0918 20:03:26.590882   26827 main.go:141] libmachine: (ha-091565-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565-m03/id_rsa (-rw-------)
	I0918 20:03:26.590920   26827 main.go:141] libmachine: (ha-091565-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0918 20:03:26.590933   26827 main.go:141] libmachine: (ha-091565-m03) DBG | About to run SSH command:
	I0918 20:03:26.590946   26827 main.go:141] libmachine: (ha-091565-m03) DBG | exit 0
	I0918 20:03:26.594686   26827 main.go:141] libmachine: (ha-091565-m03) DBG | SSH cmd err, output: exit status 255: 
	I0918 20:03:26.594715   26827 main.go:141] libmachine: (ha-091565-m03) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0918 20:03:26.594726   26827 main.go:141] libmachine: (ha-091565-m03) DBG | command : exit 0
	I0918 20:03:26.594733   26827 main.go:141] libmachine: (ha-091565-m03) DBG | err     : exit status 255
	I0918 20:03:26.594744   26827 main.go:141] libmachine: (ha-091565-m03) DBG | output  : 
	I0918 20:03:29.596158   26827 main.go:141] libmachine: (ha-091565-m03) DBG | Getting to WaitForSSH function...
	I0918 20:03:29.598576   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:29.598871   26827 main.go:141] libmachine: (ha-091565-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:50:95", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:03:17 +0000 UTC Type:0 Mac:52:54:00:7c:50:95 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-091565-m03 Clientid:01:52:54:00:7c:50:95}
	I0918 20:03:29.598894   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:29.599022   26827 main.go:141] libmachine: (ha-091565-m03) DBG | Using SSH client type: external
	I0918 20:03:29.599043   26827 main.go:141] libmachine: (ha-091565-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565-m03/id_rsa (-rw-------)
	I0918 20:03:29.599071   26827 main.go:141] libmachine: (ha-091565-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.53 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0918 20:03:29.599088   26827 main.go:141] libmachine: (ha-091565-m03) DBG | About to run SSH command:
	I0918 20:03:29.599104   26827 main.go:141] libmachine: (ha-091565-m03) DBG | exit 0
	I0918 20:03:29.719912   26827 main.go:141] libmachine: (ha-091565-m03) DBG | SSH cmd err, output: <nil>: 
	I0918 20:03:29.720164   26827 main.go:141] libmachine: (ha-091565-m03) KVM machine creation complete!
	I0918 20:03:29.720484   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetConfigRaw
	I0918 20:03:29.720974   26827 main.go:141] libmachine: (ha-091565-m03) Calling .DriverName
	I0918 20:03:29.721178   26827 main.go:141] libmachine: (ha-091565-m03) Calling .DriverName
	I0918 20:03:29.721342   26827 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0918 20:03:29.721355   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetState
	I0918 20:03:29.722748   26827 main.go:141] libmachine: Detecting operating system of created instance...
	I0918 20:03:29.722760   26827 main.go:141] libmachine: Waiting for SSH to be available...
	I0918 20:03:29.722765   26827 main.go:141] libmachine: Getting to WaitForSSH function...
	I0918 20:03:29.722771   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHHostname
	I0918 20:03:29.725146   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:29.725535   26827 main.go:141] libmachine: (ha-091565-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:50:95", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:03:17 +0000 UTC Type:0 Mac:52:54:00:7c:50:95 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-091565-m03 Clientid:01:52:54:00:7c:50:95}
	I0918 20:03:29.725560   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:29.725856   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHPort
	I0918 20:03:29.726005   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHKeyPath
	I0918 20:03:29.726172   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHKeyPath
	I0918 20:03:29.726341   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHUsername
	I0918 20:03:29.726485   26827 main.go:141] libmachine: Using SSH client type: native
	I0918 20:03:29.726681   26827 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I0918 20:03:29.726692   26827 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0918 20:03:29.823579   26827 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0918 20:03:29.823600   26827 main.go:141] libmachine: Detecting the provisioner...
	I0918 20:03:29.823610   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHHostname
	I0918 20:03:29.826127   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:29.826487   26827 main.go:141] libmachine: (ha-091565-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:50:95", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:03:17 +0000 UTC Type:0 Mac:52:54:00:7c:50:95 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-091565-m03 Clientid:01:52:54:00:7c:50:95}
	I0918 20:03:29.826524   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:29.826650   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHPort
	I0918 20:03:29.826822   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHKeyPath
	I0918 20:03:29.826946   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHKeyPath
	I0918 20:03:29.827049   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHUsername
	I0918 20:03:29.827203   26827 main.go:141] libmachine: Using SSH client type: native
	I0918 20:03:29.827417   26827 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I0918 20:03:29.827434   26827 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0918 20:03:29.932519   26827 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0918 20:03:29.932589   26827 main.go:141] libmachine: found compatible host: buildroot
	I0918 20:03:29.932601   26827 main.go:141] libmachine: Provisioning with buildroot...
	I0918 20:03:29.932612   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetMachineName
	I0918 20:03:29.932841   26827 buildroot.go:166] provisioning hostname "ha-091565-m03"
	I0918 20:03:29.932860   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetMachineName
	I0918 20:03:29.933042   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHHostname
	I0918 20:03:29.935764   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:29.936201   26827 main.go:141] libmachine: (ha-091565-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:50:95", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:03:17 +0000 UTC Type:0 Mac:52:54:00:7c:50:95 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-091565-m03 Clientid:01:52:54:00:7c:50:95}
	I0918 20:03:29.936227   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:29.936365   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHPort
	I0918 20:03:29.936539   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHKeyPath
	I0918 20:03:29.936695   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHKeyPath
	I0918 20:03:29.936848   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHUsername
	I0918 20:03:29.937078   26827 main.go:141] libmachine: Using SSH client type: native
	I0918 20:03:29.937287   26827 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I0918 20:03:29.937301   26827 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-091565-m03 && echo "ha-091565-m03" | sudo tee /etc/hostname
	I0918 20:03:30.050382   26827 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-091565-m03
	
	I0918 20:03:30.050410   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHHostname
	I0918 20:03:30.053336   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:30.053858   26827 main.go:141] libmachine: (ha-091565-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:50:95", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:03:17 +0000 UTC Type:0 Mac:52:54:00:7c:50:95 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-091565-m03 Clientid:01:52:54:00:7c:50:95}
	I0918 20:03:30.053888   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:30.054088   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHPort
	I0918 20:03:30.054256   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHKeyPath
	I0918 20:03:30.054372   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHKeyPath
	I0918 20:03:30.054537   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHUsername
	I0918 20:03:30.054678   26827 main.go:141] libmachine: Using SSH client type: native
	I0918 20:03:30.054886   26827 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I0918 20:03:30.054906   26827 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-091565-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-091565-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-091565-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0918 20:03:30.160725   26827 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0918 20:03:30.160756   26827 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19667-7671/.minikube CaCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19667-7671/.minikube}
	I0918 20:03:30.160770   26827 buildroot.go:174] setting up certificates
	I0918 20:03:30.160779   26827 provision.go:84] configureAuth start
	I0918 20:03:30.160787   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetMachineName
	I0918 20:03:30.161095   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetIP
	I0918 20:03:30.164061   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:30.164503   26827 main.go:141] libmachine: (ha-091565-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:50:95", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:03:17 +0000 UTC Type:0 Mac:52:54:00:7c:50:95 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-091565-m03 Clientid:01:52:54:00:7c:50:95}
	I0918 20:03:30.164540   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:30.164704   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHHostname
	I0918 20:03:30.167047   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:30.167370   26827 main.go:141] libmachine: (ha-091565-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:50:95", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:03:17 +0000 UTC Type:0 Mac:52:54:00:7c:50:95 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-091565-m03 Clientid:01:52:54:00:7c:50:95}
	I0918 20:03:30.167392   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:30.167538   26827 provision.go:143] copyHostCerts
	I0918 20:03:30.167573   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem
	I0918 20:03:30.167622   26827 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem, removing ...
	I0918 20:03:30.167633   26827 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem
	I0918 20:03:30.167703   26827 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem (1123 bytes)
	I0918 20:03:30.167779   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem
	I0918 20:03:30.167796   26827 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem, removing ...
	I0918 20:03:30.167812   26827 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem
	I0918 20:03:30.167845   26827 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem (1679 bytes)
	I0918 20:03:30.167891   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem
	I0918 20:03:30.167910   26827 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem, removing ...
	I0918 20:03:30.167916   26827 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem
	I0918 20:03:30.167937   26827 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem (1082 bytes)
	I0918 20:03:30.167986   26827 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem org=jenkins.ha-091565-m03 san=[127.0.0.1 192.168.39.53 ha-091565-m03 localhost minikube]
	I0918 20:03:30.213280   26827 provision.go:177] copyRemoteCerts
	I0918 20:03:30.213334   26827 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0918 20:03:30.213360   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHHostname
	I0918 20:03:30.215750   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:30.216074   26827 main.go:141] libmachine: (ha-091565-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:50:95", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:03:17 +0000 UTC Type:0 Mac:52:54:00:7c:50:95 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-091565-m03 Clientid:01:52:54:00:7c:50:95}
	I0918 20:03:30.216102   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:30.216270   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHPort
	I0918 20:03:30.216448   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHKeyPath
	I0918 20:03:30.216580   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHUsername
	I0918 20:03:30.216699   26827 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565-m03/id_rsa Username:docker}
	I0918 20:03:30.298100   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0918 20:03:30.298182   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0918 20:03:30.322613   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0918 20:03:30.322696   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0918 20:03:30.345951   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0918 20:03:30.346039   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0918 20:03:30.368781   26827 provision.go:87] duration metric: took 207.991221ms to configureAuth
	I0918 20:03:30.368806   26827 buildroot.go:189] setting minikube options for container-runtime
	I0918 20:03:30.369006   26827 config.go:182] Loaded profile config "ha-091565": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0918 20:03:30.369075   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHHostname
	I0918 20:03:30.372054   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:30.372443   26827 main.go:141] libmachine: (ha-091565-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:50:95", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:03:17 +0000 UTC Type:0 Mac:52:54:00:7c:50:95 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-091565-m03 Clientid:01:52:54:00:7c:50:95}
	I0918 20:03:30.372472   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:30.372725   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHPort
	I0918 20:03:30.372907   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHKeyPath
	I0918 20:03:30.373069   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHKeyPath
	I0918 20:03:30.373164   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHUsername
	I0918 20:03:30.373299   26827 main.go:141] libmachine: Using SSH client type: native
	I0918 20:03:30.373493   26827 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I0918 20:03:30.373508   26827 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0918 20:03:30.578858   26827 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0918 20:03:30.578882   26827 main.go:141] libmachine: Checking connection to Docker...
	I0918 20:03:30.578892   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetURL
	I0918 20:03:30.580144   26827 main.go:141] libmachine: (ha-091565-m03) DBG | Using libvirt version 6000000
	I0918 20:03:30.582476   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:30.582791   26827 main.go:141] libmachine: (ha-091565-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:50:95", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:03:17 +0000 UTC Type:0 Mac:52:54:00:7c:50:95 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-091565-m03 Clientid:01:52:54:00:7c:50:95}
	I0918 20:03:30.582820   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:30.582956   26827 main.go:141] libmachine: Docker is up and running!
	I0918 20:03:30.582970   26827 main.go:141] libmachine: Reticulating splines...
	I0918 20:03:30.582978   26827 client.go:171] duration metric: took 27.373159137s to LocalClient.Create
	I0918 20:03:30.583008   26827 start.go:167] duration metric: took 27.373230204s to libmachine.API.Create "ha-091565"
	I0918 20:03:30.583021   26827 start.go:293] postStartSetup for "ha-091565-m03" (driver="kvm2")
	I0918 20:03:30.583039   26827 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0918 20:03:30.583062   26827 main.go:141] libmachine: (ha-091565-m03) Calling .DriverName
	I0918 20:03:30.583373   26827 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0918 20:03:30.583399   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHHostname
	I0918 20:03:30.585622   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:30.585919   26827 main.go:141] libmachine: (ha-091565-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:50:95", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:03:17 +0000 UTC Type:0 Mac:52:54:00:7c:50:95 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-091565-m03 Clientid:01:52:54:00:7c:50:95}
	I0918 20:03:30.585944   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:30.586091   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHPort
	I0918 20:03:30.586267   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHKeyPath
	I0918 20:03:30.586429   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHUsername
	I0918 20:03:30.586561   26827 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565-m03/id_rsa Username:docker}
	I0918 20:03:30.666586   26827 ssh_runner.go:195] Run: cat /etc/os-release
	I0918 20:03:30.670835   26827 info.go:137] Remote host: Buildroot 2023.02.9
	I0918 20:03:30.670865   26827 filesync.go:126] Scanning /home/jenkins/minikube-integration/19667-7671/.minikube/addons for local assets ...
	I0918 20:03:30.670930   26827 filesync.go:126] Scanning /home/jenkins/minikube-integration/19667-7671/.minikube/files for local assets ...
	I0918 20:03:30.671001   26827 filesync.go:149] local asset: /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem -> 148782.pem in /etc/ssl/certs
	I0918 20:03:30.671010   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem -> /etc/ssl/certs/148782.pem
	I0918 20:03:30.671101   26827 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0918 20:03:30.680354   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem --> /etc/ssl/certs/148782.pem (1708 bytes)
	I0918 20:03:30.703833   26827 start.go:296] duration metric: took 120.797692ms for postStartSetup
	I0918 20:03:30.703888   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetConfigRaw
	I0918 20:03:30.704508   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetIP
	I0918 20:03:30.707440   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:30.707936   26827 main.go:141] libmachine: (ha-091565-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:50:95", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:03:17 +0000 UTC Type:0 Mac:52:54:00:7c:50:95 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-091565-m03 Clientid:01:52:54:00:7c:50:95}
	I0918 20:03:30.707965   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:30.708291   26827 profile.go:143] Saving config to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/config.json ...
	I0918 20:03:30.708542   26827 start.go:128] duration metric: took 27.516932332s to createHost
	I0918 20:03:30.708573   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHHostname
	I0918 20:03:30.711228   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:30.711630   26827 main.go:141] libmachine: (ha-091565-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:50:95", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:03:17 +0000 UTC Type:0 Mac:52:54:00:7c:50:95 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-091565-m03 Clientid:01:52:54:00:7c:50:95}
	I0918 20:03:30.711656   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:30.711872   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHPort
	I0918 20:03:30.712061   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHKeyPath
	I0918 20:03:30.712192   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHKeyPath
	I0918 20:03:30.712327   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHUsername
	I0918 20:03:30.712477   26827 main.go:141] libmachine: Using SSH client type: native
	I0918 20:03:30.712684   26827 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I0918 20:03:30.712697   26827 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0918 20:03:30.812539   26827 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726689810.794368232
	
	I0918 20:03:30.812561   26827 fix.go:216] guest clock: 1726689810.794368232
	I0918 20:03:30.812570   26827 fix.go:229] Guest: 2024-09-18 20:03:30.794368232 +0000 UTC Remote: 2024-09-18 20:03:30.708558501 +0000 UTC m=+153.103283397 (delta=85.809731ms)
	I0918 20:03:30.812588   26827 fix.go:200] guest clock delta is within tolerance: 85.809731ms
	I0918 20:03:30.812595   26827 start.go:83] releasing machines lock for "ha-091565-m03", held for 27.621119617s
	I0918 20:03:30.812619   26827 main.go:141] libmachine: (ha-091565-m03) Calling .DriverName
	I0918 20:03:30.812898   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetIP
	I0918 20:03:30.815402   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:30.815769   26827 main.go:141] libmachine: (ha-091565-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:50:95", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:03:17 +0000 UTC Type:0 Mac:52:54:00:7c:50:95 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-091565-m03 Clientid:01:52:54:00:7c:50:95}
	I0918 20:03:30.815791   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:30.817414   26827 out.go:177] * Found network options:
	I0918 20:03:30.818426   26827 out.go:177]   - NO_PROXY=192.168.39.215,192.168.39.92
	W0918 20:03:30.819353   26827 proxy.go:119] fail to check proxy env: Error ip not in block
	W0918 20:03:30.819370   26827 proxy.go:119] fail to check proxy env: Error ip not in block
	I0918 20:03:30.819384   26827 main.go:141] libmachine: (ha-091565-m03) Calling .DriverName
	I0918 20:03:30.820044   26827 main.go:141] libmachine: (ha-091565-m03) Calling .DriverName
	I0918 20:03:30.820235   26827 main.go:141] libmachine: (ha-091565-m03) Calling .DriverName
	I0918 20:03:30.820315   26827 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0918 20:03:30.820362   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHHostname
	W0918 20:03:30.820405   26827 proxy.go:119] fail to check proxy env: Error ip not in block
	W0918 20:03:30.820438   26827 proxy.go:119] fail to check proxy env: Error ip not in block
	I0918 20:03:30.820512   26827 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0918 20:03:30.820534   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHHostname
	I0918 20:03:30.823394   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:30.823660   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:30.823821   26827 main.go:141] libmachine: (ha-091565-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:50:95", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:03:17 +0000 UTC Type:0 Mac:52:54:00:7c:50:95 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-091565-m03 Clientid:01:52:54:00:7c:50:95}
	I0918 20:03:30.823857   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:30.824042   26827 main.go:141] libmachine: (ha-091565-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:50:95", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:03:17 +0000 UTC Type:0 Mac:52:54:00:7c:50:95 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-091565-m03 Clientid:01:52:54:00:7c:50:95}
	I0918 20:03:30.824069   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:30.824075   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHPort
	I0918 20:03:30.824246   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHPort
	I0918 20:03:30.824249   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHKeyPath
	I0918 20:03:30.824447   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHKeyPath
	I0918 20:03:30.824451   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHUsername
	I0918 20:03:30.824629   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHUsername
	I0918 20:03:30.824648   26827 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565-m03/id_rsa Username:docker}
	I0918 20:03:30.824774   26827 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565-m03/id_rsa Username:docker}
	I0918 20:03:31.051973   26827 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0918 20:03:31.057939   26827 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0918 20:03:31.058015   26827 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0918 20:03:31.075034   26827 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0918 20:03:31.075060   26827 start.go:495] detecting cgroup driver to use...
	I0918 20:03:31.075137   26827 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0918 20:03:31.091617   26827 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0918 20:03:31.105746   26827 docker.go:217] disabling cri-docker service (if available) ...
	I0918 20:03:31.105817   26827 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0918 20:03:31.120080   26827 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0918 20:03:31.134004   26827 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0918 20:03:31.254184   26827 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0918 20:03:31.414257   26827 docker.go:233] disabling docker service ...
	I0918 20:03:31.414322   26827 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0918 20:03:31.428960   26827 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0918 20:03:31.442338   26827 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0918 20:03:31.584328   26827 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0918 20:03:31.721005   26827 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0918 20:03:31.735675   26827 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0918 20:03:31.753606   26827 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0918 20:03:31.753676   26827 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 20:03:31.764390   26827 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0918 20:03:31.764453   26827 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 20:03:31.775371   26827 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 20:03:31.786080   26827 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 20:03:31.797003   26827 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0918 20:03:31.807848   26827 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 20:03:31.821134   26827 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 20:03:31.840511   26827 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 20:03:31.851912   26827 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0918 20:03:31.861895   26827 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0918 20:03:31.861971   26827 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0918 20:03:31.875783   26827 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0918 20:03:31.887581   26827 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 20:03:32.009173   26827 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0918 20:03:32.097676   26827 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0918 20:03:32.097742   26827 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0918 20:03:32.102640   26827 start.go:563] Will wait 60s for crictl version
	I0918 20:03:32.102696   26827 ssh_runner.go:195] Run: which crictl
	I0918 20:03:32.106231   26827 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0918 20:03:32.142182   26827 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0918 20:03:32.142270   26827 ssh_runner.go:195] Run: crio --version
	I0918 20:03:32.169659   26827 ssh_runner.go:195] Run: crio --version
	I0918 20:03:32.199737   26827 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0918 20:03:32.201225   26827 out.go:177]   - env NO_PROXY=192.168.39.215
	I0918 20:03:32.202507   26827 out.go:177]   - env NO_PROXY=192.168.39.215,192.168.39.92
	I0918 20:03:32.203714   26827 main.go:141] libmachine: (ha-091565-m03) Calling .GetIP
	I0918 20:03:32.206442   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:32.206810   26827 main.go:141] libmachine: (ha-091565-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:50:95", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:03:17 +0000 UTC Type:0 Mac:52:54:00:7c:50:95 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-091565-m03 Clientid:01:52:54:00:7c:50:95}
	I0918 20:03:32.206850   26827 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:03:32.207043   26827 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0918 20:03:32.211258   26827 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0918 20:03:32.223734   26827 mustload.go:65] Loading cluster: ha-091565
	I0918 20:03:32.224039   26827 config.go:182] Loaded profile config "ha-091565": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0918 20:03:32.224319   26827 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 20:03:32.224365   26827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 20:03:32.239611   26827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37387
	I0918 20:03:32.240066   26827 main.go:141] libmachine: () Calling .GetVersion
	I0918 20:03:32.240552   26827 main.go:141] libmachine: Using API Version  1
	I0918 20:03:32.240576   26827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 20:03:32.240920   26827 main.go:141] libmachine: () Calling .GetMachineName
	I0918 20:03:32.241082   26827 main.go:141] libmachine: (ha-091565) Calling .GetState
	I0918 20:03:32.242720   26827 host.go:66] Checking if "ha-091565" exists ...
	I0918 20:03:32.243009   26827 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 20:03:32.243043   26827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 20:03:32.258246   26827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40335
	I0918 20:03:32.258705   26827 main.go:141] libmachine: () Calling .GetVersion
	I0918 20:03:32.259124   26827 main.go:141] libmachine: Using API Version  1
	I0918 20:03:32.259146   26827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 20:03:32.259417   26827 main.go:141] libmachine: () Calling .GetMachineName
	I0918 20:03:32.259553   26827 main.go:141] libmachine: (ha-091565) Calling .DriverName
	I0918 20:03:32.259662   26827 certs.go:68] Setting up /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565 for IP: 192.168.39.53
	I0918 20:03:32.259671   26827 certs.go:194] generating shared ca certs ...
	I0918 20:03:32.259683   26827 certs.go:226] acquiring lock for ca certs: {Name:mkf54e0e71ed884c4372f9d3d4cb308ea4600185 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 20:03:32.259810   26827 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.key
	I0918 20:03:32.259850   26827 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.key
	I0918 20:03:32.259860   26827 certs.go:256] generating profile certs ...
	I0918 20:03:32.259928   26827 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/client.key
	I0918 20:03:32.259953   26827 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.key.4abf4119
	I0918 20:03:32.259967   26827 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.crt.4abf4119 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.215 192.168.39.92 192.168.39.53 192.168.39.254]
	I0918 20:03:32.391787   26827 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.crt.4abf4119 ...
	I0918 20:03:32.391818   26827 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.crt.4abf4119: {Name:mkb34973ffb4d10e1c252f20090951c99d9a8a6d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 20:03:32.392002   26827 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.key.4abf4119 ...
	I0918 20:03:32.392039   26827 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.key.4abf4119: {Name:mk8dda3654eb1370812c69b5ca23990ee4bb5898 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 20:03:32.392142   26827 certs.go:381] copying /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.crt.4abf4119 -> /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.crt
	I0918 20:03:32.392302   26827 certs.go:385] copying /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.key.4abf4119 -> /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.key
	I0918 20:03:32.392476   26827 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/proxy-client.key
	I0918 20:03:32.392495   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0918 20:03:32.392514   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0918 20:03:32.392532   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0918 20:03:32.392556   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0918 20:03:32.392573   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0918 20:03:32.392588   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0918 20:03:32.392606   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0918 20:03:32.416080   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0918 20:03:32.416180   26827 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878.pem (1338 bytes)
	W0918 20:03:32.416223   26827 certs.go:480] ignoring /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878_empty.pem, impossibly tiny 0 bytes
	I0918 20:03:32.416236   26827 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem (1675 bytes)
	I0918 20:03:32.416259   26827 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem (1082 bytes)
	I0918 20:03:32.416280   26827 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem (1123 bytes)
	I0918 20:03:32.416312   26827 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem (1679 bytes)
	I0918 20:03:32.416373   26827 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem (1708 bytes)
	I0918 20:03:32.416406   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0918 20:03:32.416423   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878.pem -> /usr/share/ca-certificates/14878.pem
	I0918 20:03:32.416442   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem -> /usr/share/ca-certificates/148782.pem
	I0918 20:03:32.416482   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHHostname
	I0918 20:03:32.419323   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:03:32.419709   26827 main.go:141] libmachine: (ha-091565) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:13:d8", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:01:12 +0000 UTC Type:0 Mac:52:54:00:2a:13:d8 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-091565 Clientid:01:52:54:00:2a:13:d8}
	I0918 20:03:32.419736   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined IP address 192.168.39.215 and MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:03:32.419880   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHPort
	I0918 20:03:32.420098   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHKeyPath
	I0918 20:03:32.420242   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHUsername
	I0918 20:03:32.420374   26827 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565/id_rsa Username:docker}
	I0918 20:03:32.496485   26827 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0918 20:03:32.501230   26827 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0918 20:03:32.512278   26827 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0918 20:03:32.516258   26827 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0918 20:03:32.526925   26827 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0918 20:03:32.530942   26827 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0918 20:03:32.541480   26827 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0918 20:03:32.545232   26827 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0918 20:03:32.555472   26827 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0918 20:03:32.559397   26827 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0918 20:03:32.569567   26827 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0918 20:03:32.573499   26827 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0918 20:03:32.583358   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0918 20:03:32.611524   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0918 20:03:32.636264   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0918 20:03:32.660205   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0918 20:03:32.686819   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0918 20:03:32.710441   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0918 20:03:32.737760   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0918 20:03:32.763299   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0918 20:03:32.788066   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0918 20:03:32.811311   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878.pem --> /usr/share/ca-certificates/14878.pem (1338 bytes)
	I0918 20:03:32.837707   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem --> /usr/share/ca-certificates/148782.pem (1708 bytes)
	I0918 20:03:32.862254   26827 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0918 20:03:32.879051   26827 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0918 20:03:32.895538   26827 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0918 20:03:32.911669   26827 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0918 20:03:32.927230   26827 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0918 20:03:32.943165   26827 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0918 20:03:32.959777   26827 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0918 20:03:32.976941   26827 ssh_runner.go:195] Run: openssl version
	I0918 20:03:32.982956   26827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0918 20:03:32.994065   26827 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0918 20:03:32.998638   26827 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 18 19:39 /usr/share/ca-certificates/minikubeCA.pem
	I0918 20:03:32.998702   26827 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0918 20:03:33.004856   26827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0918 20:03:33.016234   26827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14878.pem && ln -fs /usr/share/ca-certificates/14878.pem /etc/ssl/certs/14878.pem"
	I0918 20:03:33.027625   26827 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14878.pem
	I0918 20:03:33.032333   26827 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 18 19:57 /usr/share/ca-certificates/14878.pem
	I0918 20:03:33.032408   26827 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14878.pem
	I0918 20:03:33.038142   26827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14878.pem /etc/ssl/certs/51391683.0"
	I0918 20:03:33.049048   26827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148782.pem && ln -fs /usr/share/ca-certificates/148782.pem /etc/ssl/certs/148782.pem"
	I0918 20:03:33.060201   26827 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148782.pem
	I0918 20:03:33.064969   26827 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 18 19:57 /usr/share/ca-certificates/148782.pem
	I0918 20:03:33.065039   26827 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148782.pem
	I0918 20:03:33.070737   26827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148782.pem /etc/ssl/certs/3ec20f2e.0"
	I0918 20:03:33.082171   26827 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0918 20:03:33.086441   26827 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0918 20:03:33.086499   26827 kubeadm.go:934] updating node {m03 192.168.39.53 8443 v1.31.1 crio true true} ...
	I0918 20:03:33.086588   26827 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-091565-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.53
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-091565 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0918 20:03:33.086614   26827 kube-vip.go:115] generating kube-vip config ...
	I0918 20:03:33.086658   26827 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0918 20:03:33.104138   26827 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0918 20:03:33.104231   26827 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0918 20:03:33.104297   26827 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0918 20:03:33.114293   26827 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0918 20:03:33.114356   26827 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0918 20:03:33.124170   26827 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I0918 20:03:33.124182   26827 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I0918 20:03:33.124199   26827 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0918 20:03:33.124207   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0918 20:03:33.124216   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0918 20:03:33.124219   26827 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0918 20:03:33.124273   26827 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0918 20:03:33.124275   26827 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0918 20:03:33.141327   26827 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0918 20:03:33.141375   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0918 20:03:33.141401   26827 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0918 20:03:33.141433   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0918 20:03:33.141477   26827 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0918 20:03:33.141555   26827 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0918 20:03:33.173036   26827 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0918 20:03:33.173093   26827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
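The checksum=file:... suffix on the download URLs above indicates each binary is verified against its published .sha256 file before being copied into /var/lib/minikube/binaries. A minimal stdlib sketch of that verification step, assuming the digest file sits next to the binary (function name and error wording are illustrative, not minikube's code):

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"os"
	"strings"
)

// verifySHA256 checks a downloaded binary against the hex digest published
// in its companion .sha256 file (for example kubelet.sha256).
func verifySHA256(binaryPath, sha256FilePath string) error {
	want, err := os.ReadFile(sha256FilePath)
	if err != nil {
		return err
	}
	f, err := os.Open(binaryPath)
	if err != nil {
		return err
	}
	defer f.Close()

	h := sha256.New()
	if _, err := io.Copy(h, f); err != nil {
		return err
	}
	got := hex.EncodeToString(h.Sum(nil))
	// The .sha256 files contain just the digest, optionally followed by a filename.
	if !strings.HasPrefix(strings.TrimSpace(string(want)), got) {
		return fmt.Errorf("checksum mismatch for %s: got %s", binaryPath, got)
	}
	return nil
}

func main() {
	if err := verifySHA256("kubelet", "kubelet.sha256"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("checksum OK")
}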
	I0918 20:03:33.972939   26827 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0918 20:03:33.982247   26827 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0918 20:03:34.000126   26827 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0918 20:03:34.018674   26827 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0918 20:03:34.036270   26827 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0918 20:03:34.040368   26827 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
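The one-liner above rewrites /etc/hosts so that control-plane.minikube.internal resolves to the VIP 192.168.39.254 on this node: any existing line ending in a tab plus that hostname is dropped, then a fresh mapping is appended. A small sketch of the same transformation (function name is an assumption):

package main

import (
	"fmt"
	"strings"
)

// ensureHostsEntry mirrors the bash one-liner in the log: drop any existing
// "<ip>\tcontrol-plane.minikube.internal" line and append a fresh mapping
// to the control-plane VIP.
func ensureHostsEntry(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(hosts, "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // remove the stale entry, like `grep -v $'\t<name>$'`
		}
		kept = append(kept, line)
	}
	return strings.Join(kept, "\n") + fmt.Sprintf("\n%s\t%s\n", ip, name)
}

func main() {
	existing := "127.0.0.1\tlocalhost\n192.168.39.1\tcontrol-plane.minikube.internal"
	fmt.Print(ensureHostsEntry(existing, "192.168.39.254", "control-plane.minikube.internal"))
}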
	I0918 20:03:34.053122   26827 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 20:03:34.171306   26827 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0918 20:03:34.188115   26827 host.go:66] Checking if "ha-091565" exists ...
	I0918 20:03:34.188456   26827 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 20:03:34.188496   26827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 20:03:34.204519   26827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33565
	I0918 20:03:34.205017   26827 main.go:141] libmachine: () Calling .GetVersion
	I0918 20:03:34.205836   26827 main.go:141] libmachine: Using API Version  1
	I0918 20:03:34.205858   26827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 20:03:34.206189   26827 main.go:141] libmachine: () Calling .GetMachineName
	I0918 20:03:34.206366   26827 main.go:141] libmachine: (ha-091565) Calling .DriverName
	I0918 20:03:34.206499   26827 start.go:317] joinCluster: &{Name:ha-091565 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-091565 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.215 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.92 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.53 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 20:03:34.206634   26827 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0918 20:03:34.206657   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHHostname
	I0918 20:03:34.210032   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:03:34.210517   26827 main.go:141] libmachine: (ha-091565) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:13:d8", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:01:12 +0000 UTC Type:0 Mac:52:54:00:2a:13:d8 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-091565 Clientid:01:52:54:00:2a:13:d8}
	I0918 20:03:34.210550   26827 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined IP address 192.168.39.215 and MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:03:34.210721   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHPort
	I0918 20:03:34.210878   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHKeyPath
	I0918 20:03:34.211058   26827 main.go:141] libmachine: (ha-091565) Calling .GetSSHUsername
	I0918 20:03:34.211223   26827 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565/id_rsa Username:docker}
	I0918 20:03:34.497537   26827 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.53 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0918 20:03:34.497597   26827 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token i0u1iv.ilurlcyw4668mpw6 --discovery-token-ca-cert-hash sha256:8ad46bf288e8cc2226f5a929a768e6c95cddecc97ffc4016be27ef0987b1e36f --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-091565-m03 --control-plane --apiserver-advertise-address=192.168.39.53 --apiserver-bind-port=8443"
	I0918 20:03:56.510162   26827 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token i0u1iv.ilurlcyw4668mpw6 --discovery-token-ca-cert-hash sha256:8ad46bf288e8cc2226f5a929a768e6c95cddecc97ffc4016be27ef0987b1e36f --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-091565-m03 --control-plane --apiserver-advertise-address=192.168.39.53 --apiserver-bind-port=8443": (22.012541289s)
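The join above goes through the VIP endpoint control-plane.minikube.internal:8443 rather than any single node, carrying a bootstrap token plus the CA certificate hash so the new node can authenticate the cluster, and --control-plane/--apiserver-advertise-address so kubeadm stands up etcd and an API server on 192.168.39.53 as well. A rough sketch of assembling such a command (types and placeholder values are illustrative, not minikube's joinCluster code):

package main

import (
	"fmt"
	"strings"
)

// joinParams captures what an additional control-plane node needs to join:
// the shared endpoint (here the kube-vip VIP behind control-plane.minikube.internal),
// a bootstrap token, and the CA public-key hash that lets the new node trust
// the cluster it is joining.
type joinParams struct {
	Endpoint         string
	Token            string
	CACertHash       string
	CRISocket        string
	NodeName         string
	AdvertiseAddress string
}

func (p joinParams) command() string {
	args := []string{
		"kubeadm join " + p.Endpoint,
		"--token " + p.Token,
		"--discovery-token-ca-cert-hash " + p.CACertHash,
		"--ignore-preflight-errors=all",
		"--cri-socket " + p.CRISocket,
		"--node-name=" + p.NodeName,
		"--control-plane", // ask kubeadm to set up etcd/apiserver here too
		"--apiserver-advertise-address=" + p.AdvertiseAddress,
		"--apiserver-bind-port=8443",
	}
	return strings.Join(args, " ")
}

func main() {
	fmt.Println(joinParams{
		Endpoint:         "control-plane.minikube.internal:8443",
		Token:            "<bootstrap-token>",
		CACertHash:       "sha256:<ca-cert-hash>",
		CRISocket:        "unix:///var/run/crio/crio.sock",
		NodeName:         "ha-091565-m03",
		AdvertiseAddress: "192.168.39.53",
	}.command())
}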
	I0918 20:03:56.510194   26827 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0918 20:03:57.007413   26827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-091565-m03 minikube.k8s.io/updated_at=2024_09_18T20_03_57_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=85073601a832bd4bbda5d11fa91feafff6ec6b91 minikube.k8s.io/name=ha-091565 minikube.k8s.io/primary=false
	I0918 20:03:57.136553   26827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-091565-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0918 20:03:57.243081   26827 start.go:319] duration metric: took 23.036576923s to joinCluster
	I0918 20:03:57.243171   26827 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.53 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0918 20:03:57.243516   26827 config.go:182] Loaded profile config "ha-091565": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0918 20:03:57.244463   26827 out.go:177] * Verifying Kubernetes components...
	I0918 20:03:57.245675   26827 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 20:03:57.491302   26827 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0918 20:03:57.553167   26827 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19667-7671/kubeconfig
	I0918 20:03:57.553587   26827 kapi.go:59] client config for ha-091565: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/client.crt", KeyFile:"/home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/client.key", CAFile:"/home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0918 20:03:57.553676   26827 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.215:8443
	I0918 20:03:57.554162   26827 node_ready.go:35] waiting up to 6m0s for node "ha-091565-m03" to be "Ready" ...
	I0918 20:03:57.554529   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:03:57.554540   26827 round_trippers.go:469] Request Headers:
	I0918 20:03:57.554551   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:03:57.554560   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:03:57.558531   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:03:58.055469   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:03:58.055497   26827 round_trippers.go:469] Request Headers:
	I0918 20:03:58.055509   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:03:58.055515   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:03:58.065944   26827 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0918 20:03:58.555709   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:03:58.555741   26827 round_trippers.go:469] Request Headers:
	I0918 20:03:58.555751   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:03:58.555755   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:03:58.559403   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:03:59.055396   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:03:59.055421   26827 round_trippers.go:469] Request Headers:
	I0918 20:03:59.055432   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:03:59.055439   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:03:59.058942   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:03:59.555365   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:03:59.555390   26827 round_trippers.go:469] Request Headers:
	I0918 20:03:59.555400   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:03:59.555406   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:03:59.558786   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:03:59.559242   26827 node_ready.go:53] node "ha-091565-m03" has status "Ready":"False"
	I0918 20:04:00.054633   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:04:00.054659   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:00.054669   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:00.054674   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:00.058075   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:00.555492   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:04:00.555516   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:00.555526   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:00.555529   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:00.559811   26827 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0918 20:04:01.055537   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:04:01.055563   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:01.055575   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:01.055580   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:01.059555   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:01.555672   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:04:01.555697   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:01.555706   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:01.555711   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:01.559137   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:01.559627   26827 node_ready.go:53] node "ha-091565-m03" has status "Ready":"False"
	I0918 20:04:02.054683   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:04:02.054723   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:02.054731   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:02.054745   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:02.058557   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:02.555203   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:04:02.555226   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:02.555234   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:02.555238   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:02.558769   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:03.055525   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:04:03.055564   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:03.055574   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:03.055577   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:03.059340   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:03.554931   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:04:03.554959   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:03.554970   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:03.554979   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:03.558864   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:03.559650   26827 node_ready.go:53] node "ha-091565-m03" has status "Ready":"False"
	I0918 20:04:04.054716   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:04:04.054744   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:04.054755   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:04.054761   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:04.058693   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:04.555064   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:04:04.555088   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:04.555100   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:04.555106   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:04.558892   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:05.054691   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:04:05.054712   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:05.054719   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:05.054741   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:05.059560   26827 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0918 20:04:05.555504   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:04:05.555527   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:05.555534   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:05.555539   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:05.558864   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:06.055334   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:04:06.055377   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:06.055389   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:06.055397   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:06.059156   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:06.059757   26827 node_ready.go:53] node "ha-091565-m03" has status "Ready":"False"
	I0918 20:04:06.555030   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:04:06.555053   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:06.555063   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:06.555069   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:06.558335   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:07.055192   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:04:07.055215   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:07.055224   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:07.055227   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:07.059362   26827 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0918 20:04:07.555236   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:04:07.555261   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:07.555269   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:07.555274   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:07.558863   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:08.055465   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:04:08.055488   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:08.055495   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:08.055498   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:08.059132   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:08.555504   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:04:08.555526   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:08.555535   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:08.555538   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:08.559353   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:08.559819   26827 node_ready.go:53] node "ha-091565-m03" has status "Ready":"False"
	I0918 20:04:09.055283   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:04:09.055306   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:09.055314   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:09.055317   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:09.058873   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:09.555171   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:04:09.555196   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:09.555204   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:09.555208   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:09.559068   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:10.055288   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:04:10.055311   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:10.055320   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:10.055325   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:10.059182   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:10.555106   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:04:10.555128   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:10.555139   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:10.555144   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:10.558578   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:11.054941   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:04:11.054964   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:11.054972   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:11.054975   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:11.059278   26827 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0918 20:04:11.059847   26827 node_ready.go:53] node "ha-091565-m03" has status "Ready":"False"
	I0918 20:04:11.555315   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:04:11.555339   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:11.555347   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:11.555355   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:11.558773   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:12.054728   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:04:12.054751   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:12.054765   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:12.054770   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:12.058180   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:12.554816   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:04:12.554836   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:12.554844   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:12.554849   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:12.558473   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:13.055199   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:04:13.055227   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:13.055245   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:13.055254   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:13.058868   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:13.554700   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:04:13.554723   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:13.554732   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:13.554736   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:13.559302   26827 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0918 20:04:13.560622   26827 node_ready.go:53] node "ha-091565-m03" has status "Ready":"False"
	I0918 20:04:14.054755   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:04:14.054786   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:14.054798   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:14.054803   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:14.058095   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:14.555493   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:04:14.555515   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:14.555524   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:14.555528   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:14.559446   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:15.055291   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:04:15.055323   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:15.055333   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:15.055336   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:15.059042   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:15.555105   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:04:15.555127   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:15.555135   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:15.555138   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:15.558918   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:16.055211   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:04:16.055237   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:16.055246   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:16.055251   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:16.059232   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:16.059819   26827 node_ready.go:49] node "ha-091565-m03" has status "Ready":"True"
	I0918 20:04:16.059841   26827 node_ready.go:38] duration metric: took 18.505389798s for node "ha-091565-m03" to be "Ready" ...
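The loop above simply re-fetches the Node object roughly every 500ms until its Ready condition flips to True, which took about 18.5s for ha-091565-m03. A minimal client-go sketch of the same wait, assuming a kubeconfig on disk (this is an illustration, not minikube's node_ready.go):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForNodeReady polls the Node object until its Ready condition is True,
// roughly mirroring the GET-every-500ms loop in the log above.
func waitForNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // keep retrying on transient API errors
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitForNodeReady(context.Background(), cs, "ha-091565-m03", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("node is Ready")
}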
	I0918 20:04:16.059852   26827 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0918 20:04:16.059929   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods
	I0918 20:04:16.059941   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:16.059951   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:16.059957   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:16.065715   26827 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0918 20:04:16.071783   26827 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-8zcqk" in "kube-system" namespace to be "Ready" ...
	I0918 20:04:16.071882   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-8zcqk
	I0918 20:04:16.071891   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:16.071899   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:16.071903   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:16.075405   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:16.075962   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565
	I0918 20:04:16.075978   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:16.075987   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:16.075992   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:16.078716   26827 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 20:04:16.079267   26827 pod_ready.go:93] pod "coredns-7c65d6cfc9-8zcqk" in "kube-system" namespace has status "Ready":"True"
	I0918 20:04:16.079293   26827 pod_ready.go:82] duration metric: took 7.472161ms for pod "coredns-7c65d6cfc9-8zcqk" in "kube-system" namespace to be "Ready" ...
	I0918 20:04:16.079302   26827 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-w97kk" in "kube-system" namespace to be "Ready" ...
	I0918 20:04:16.079361   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-w97kk
	I0918 20:04:16.079369   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:16.079376   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:16.079380   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:16.082131   26827 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 20:04:16.082926   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565
	I0918 20:04:16.082939   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:16.082946   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:16.082949   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:16.085556   26827 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 20:04:16.085896   26827 pod_ready.go:93] pod "coredns-7c65d6cfc9-w97kk" in "kube-system" namespace has status "Ready":"True"
	I0918 20:04:16.085910   26827 pod_ready.go:82] duration metric: took 6.602392ms for pod "coredns-7c65d6cfc9-w97kk" in "kube-system" namespace to be "Ready" ...
	I0918 20:04:16.085919   26827 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-091565" in "kube-system" namespace to be "Ready" ...
	I0918 20:04:16.085972   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/etcd-ha-091565
	I0918 20:04:16.085980   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:16.085986   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:16.085989   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:16.089699   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:16.090300   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565
	I0918 20:04:16.090315   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:16.090322   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:16.090326   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:16.093063   26827 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 20:04:16.093596   26827 pod_ready.go:93] pod "etcd-ha-091565" in "kube-system" namespace has status "Ready":"True"
	I0918 20:04:16.093612   26827 pod_ready.go:82] duration metric: took 7.687899ms for pod "etcd-ha-091565" in "kube-system" namespace to be "Ready" ...
	I0918 20:04:16.093621   26827 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-091565-m02" in "kube-system" namespace to be "Ready" ...
	I0918 20:04:16.093672   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/etcd-ha-091565-m02
	I0918 20:04:16.093679   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:16.093686   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:16.093691   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:16.096387   26827 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 20:04:16.097042   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:04:16.097062   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:16.097072   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:16.097077   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:16.099762   26827 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 20:04:16.100164   26827 pod_ready.go:93] pod "etcd-ha-091565-m02" in "kube-system" namespace has status "Ready":"True"
	I0918 20:04:16.100182   26827 pod_ready.go:82] duration metric: took 6.554191ms for pod "etcd-ha-091565-m02" in "kube-system" namespace to be "Ready" ...
	I0918 20:04:16.100193   26827 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-091565-m03" in "kube-system" namespace to be "Ready" ...
	I0918 20:04:16.255579   26827 request.go:632] Waited for 155.319903ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/etcd-ha-091565-m03
	I0918 20:04:16.255651   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/etcd-ha-091565-m03
	I0918 20:04:16.255659   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:16.255691   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:16.255699   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:16.259105   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:16.456134   26827 request.go:632] Waited for 196.426863ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:04:16.456200   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:04:16.456206   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:16.456215   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:16.456220   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:16.460303   26827 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0918 20:04:16.460816   26827 pod_ready.go:93] pod "etcd-ha-091565-m03" in "kube-system" namespace has status "Ready":"True"
	I0918 20:04:16.460835   26827 pod_ready.go:82] duration metric: took 360.633247ms for pod "etcd-ha-091565-m03" in "kube-system" namespace to be "Ready" ...
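The "Waited for ... due to client-side throttling, not priority and fairness" messages come from client-go's client-side rate limiter: the rest.Config dump earlier shows QPS:0, Burst:0, so the client falls back to its built-in defaults, and the tight node/pod polling here periodically exhausts the burst and has to wait for tokens. A sketch of raising those limits when building the client (the values are illustrative):

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	// With QPS/Burst left at zero, client-go uses its defaults (historically
	// 5 QPS with a burst of 10), which is what produces the client-side
	// throttling waits seen in the log once a polling loop spends the burst.
	// Raising them trades API-server load for latency.
	cfg.QPS = 50
	cfg.Burst = 100
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Printf("client configured with QPS=%v Burst=%v (%T ready)\n", cfg.QPS, cfg.Burst, cs)
}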
	I0918 20:04:16.460857   26827 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-091565" in "kube-system" namespace to be "Ready" ...
	I0918 20:04:16.656076   26827 request.go:632] Waited for 195.151124ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-091565
	I0918 20:04:16.656159   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-091565
	I0918 20:04:16.656167   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:16.656176   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:16.656192   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:16.659916   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:16.856095   26827 request.go:632] Waited for 195.376851ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/nodes/ha-091565
	I0918 20:04:16.856174   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565
	I0918 20:04:16.856181   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:16.856191   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:16.856204   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:16.859780   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:16.860437   26827 pod_ready.go:93] pod "kube-apiserver-ha-091565" in "kube-system" namespace has status "Ready":"True"
	I0918 20:04:16.860458   26827 pod_ready.go:82] duration metric: took 399.594161ms for pod "kube-apiserver-ha-091565" in "kube-system" namespace to be "Ready" ...
	I0918 20:04:16.860467   26827 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-091565-m02" in "kube-system" namespace to be "Ready" ...
	I0918 20:04:17.055619   26827 request.go:632] Waited for 195.084711ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-091565-m02
	I0918 20:04:17.055737   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-091565-m02
	I0918 20:04:17.055750   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:17.055759   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:17.055765   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:17.059273   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:17.255382   26827 request.go:632] Waited for 195.243567ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:04:17.255449   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:04:17.255457   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:17.255464   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:17.255468   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:17.258940   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:17.259557   26827 pod_ready.go:93] pod "kube-apiserver-ha-091565-m02" in "kube-system" namespace has status "Ready":"True"
	I0918 20:04:17.259575   26827 pod_ready.go:82] duration metric: took 399.101471ms for pod "kube-apiserver-ha-091565-m02" in "kube-system" namespace to be "Ready" ...
	I0918 20:04:17.259586   26827 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-091565-m03" in "kube-system" namespace to be "Ready" ...
	I0918 20:04:17.455306   26827 request.go:632] Waited for 195.656133ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-091565-m03
	I0918 20:04:17.455375   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-091565-m03
	I0918 20:04:17.455381   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:17.455391   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:17.455398   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:17.459141   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:17.656266   26827 request.go:632] Waited for 196.147408ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:04:17.656316   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:04:17.656322   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:17.656332   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:17.656341   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:17.659786   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:17.660507   26827 pod_ready.go:93] pod "kube-apiserver-ha-091565-m03" in "kube-system" namespace has status "Ready":"True"
	I0918 20:04:17.660540   26827 pod_ready.go:82] duration metric: took 400.946368ms for pod "kube-apiserver-ha-091565-m03" in "kube-system" namespace to be "Ready" ...
	I0918 20:04:17.660565   26827 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-091565" in "kube-system" namespace to be "Ready" ...
	I0918 20:04:17.855951   26827 request.go:632] Waited for 195.288141ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-091565
	I0918 20:04:17.856066   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-091565
	I0918 20:04:17.856076   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:17.856086   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:17.856095   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:17.859991   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:18.055205   26827 request.go:632] Waited for 194.285561ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/nodes/ha-091565
	I0918 20:04:18.055268   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565
	I0918 20:04:18.055274   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:18.055281   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:18.055284   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:18.058520   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:18.059072   26827 pod_ready.go:93] pod "kube-controller-manager-ha-091565" in "kube-system" namespace has status "Ready":"True"
	I0918 20:04:18.059095   26827 pod_ready.go:82] duration metric: took 398.501653ms for pod "kube-controller-manager-ha-091565" in "kube-system" namespace to be "Ready" ...
	I0918 20:04:18.059105   26827 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-091565-m02" in "kube-system" namespace to be "Ready" ...
	I0918 20:04:18.256047   26827 request.go:632] Waited for 196.849365ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-091565-m02
	I0918 20:04:18.256125   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-091565-m02
	I0918 20:04:18.256133   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:18.256147   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:18.256156   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:18.260076   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:18.455423   26827 request.go:632] Waited for 194.302275ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:04:18.455494   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:04:18.455502   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:18.455513   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:18.455524   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:18.460052   26827 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0918 20:04:18.460616   26827 pod_ready.go:93] pod "kube-controller-manager-ha-091565-m02" in "kube-system" namespace has status "Ready":"True"
	I0918 20:04:18.460634   26827 pod_ready.go:82] duration metric: took 401.521777ms for pod "kube-controller-manager-ha-091565-m02" in "kube-system" namespace to be "Ready" ...
	I0918 20:04:18.460645   26827 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-091565-m03" in "kube-system" namespace to be "Ready" ...
	I0918 20:04:18.655830   26827 request.go:632] Waited for 195.117473ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-091565-m03
	I0918 20:04:18.655906   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-091565-m03
	I0918 20:04:18.655912   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:18.655926   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:18.655934   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:18.661181   26827 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0918 20:04:18.855471   26827 request.go:632] Waited for 193.339141ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:04:18.855546   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:04:18.855553   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:18.855560   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:18.855565   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:18.859369   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:18.860202   26827 pod_ready.go:93] pod "kube-controller-manager-ha-091565-m03" in "kube-system" namespace has status "Ready":"True"
	I0918 20:04:18.860225   26827 pod_ready.go:82] duration metric: took 399.570485ms for pod "kube-controller-manager-ha-091565-m03" in "kube-system" namespace to be "Ready" ...
	I0918 20:04:18.860239   26827 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4p8rj" in "kube-system" namespace to be "Ready" ...
	I0918 20:04:19.055323   26827 request.go:632] Waited for 195.018584ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4p8rj
	I0918 20:04:19.055407   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4p8rj
	I0918 20:04:19.055415   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:19.055425   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:19.055434   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:19.058851   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:19.255631   26827 request.go:632] Waited for 196.124849ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:04:19.255685   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:04:19.255692   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:19.255702   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:19.255710   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:19.260421   26827 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0918 20:04:19.261253   26827 pod_ready.go:93] pod "kube-proxy-4p8rj" in "kube-system" namespace has status "Ready":"True"
	I0918 20:04:19.261276   26827 pod_ready.go:82] duration metric: took 401.027744ms for pod "kube-proxy-4p8rj" in "kube-system" namespace to be "Ready" ...
	I0918 20:04:19.261289   26827 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4wm6h" in "kube-system" namespace to be "Ready" ...
	I0918 20:04:19.455210   26827 request.go:632] Waited for 193.843238ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4wm6h
	I0918 20:04:19.455295   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4wm6h
	I0918 20:04:19.455303   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:19.455314   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:19.455322   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:19.458975   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:19.656036   26827 request.go:632] Waited for 196.360424ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/nodes/ha-091565
	I0918 20:04:19.656109   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565
	I0918 20:04:19.656115   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:19.656122   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:19.656126   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:19.659749   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:19.660473   26827 pod_ready.go:93] pod "kube-proxy-4wm6h" in "kube-system" namespace has status "Ready":"True"
	I0918 20:04:19.660500   26827 pod_ready.go:82] duration metric: took 399.202104ms for pod "kube-proxy-4wm6h" in "kube-system" namespace to be "Ready" ...
	I0918 20:04:19.660513   26827 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-bxblp" in "kube-system" namespace to be "Ready" ...
	I0918 20:04:19.855602   26827 request.go:632] Waited for 195.016629ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bxblp
	I0918 20:04:19.855669   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bxblp
	I0918 20:04:19.855674   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:19.855684   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:19.855688   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:19.859561   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:20.055770   26827 request.go:632] Waited for 195.418705ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:04:20.055846   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:04:20.055852   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:20.055859   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:20.055866   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:20.059482   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:20.060369   26827 pod_ready.go:93] pod "kube-proxy-bxblp" in "kube-system" namespace has status "Ready":"True"
	I0918 20:04:20.060396   26827 pod_ready.go:82] duration metric: took 399.875436ms for pod "kube-proxy-bxblp" in "kube-system" namespace to be "Ready" ...
	I0918 20:04:20.060408   26827 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-091565" in "kube-system" namespace to be "Ready" ...
	I0918 20:04:20.255225   26827 request.go:632] Waited for 194.753676ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-091565
	I0918 20:04:20.255322   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-091565
	I0918 20:04:20.255331   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:20.255341   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:20.255351   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:20.259061   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:20.456103   26827 request.go:632] Waited for 196.430637ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/nodes/ha-091565
	I0918 20:04:20.456163   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565
	I0918 20:04:20.456168   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:20.456175   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:20.456179   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:20.459797   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:20.460332   26827 pod_ready.go:93] pod "kube-scheduler-ha-091565" in "kube-system" namespace has status "Ready":"True"
	I0918 20:04:20.460355   26827 pod_ready.go:82] duration metric: took 399.937556ms for pod "kube-scheduler-ha-091565" in "kube-system" namespace to be "Ready" ...
	I0918 20:04:20.460365   26827 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-091565-m02" in "kube-system" namespace to be "Ready" ...
	I0918 20:04:20.655303   26827 request.go:632] Waited for 194.860443ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-091565-m02
	I0918 20:04:20.655387   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-091565-m02
	I0918 20:04:20.655395   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:20.655405   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:20.655425   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:20.658807   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:20.855714   26827 request.go:632] Waited for 196.369108ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:04:20.855780   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m02
	I0918 20:04:20.855787   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:20.855798   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:20.855804   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:20.859686   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:20.860506   26827 pod_ready.go:93] pod "kube-scheduler-ha-091565-m02" in "kube-system" namespace has status "Ready":"True"
	I0918 20:04:20.860527   26827 pod_ready.go:82] duration metric: took 400.151195ms for pod "kube-scheduler-ha-091565-m02" in "kube-system" namespace to be "Ready" ...
	I0918 20:04:20.860539   26827 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-091565-m03" in "kube-system" namespace to be "Ready" ...
	I0918 20:04:21.056006   26827 request.go:632] Waited for 195.380183ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-091565-m03
	I0918 20:04:21.056089   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-091565-m03
	I0918 20:04:21.056096   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:21.056104   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:21.056108   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:21.059632   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:21.255734   26827 request.go:632] Waited for 195.357475ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:04:21.255796   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes/ha-091565-m03
	I0918 20:04:21.255801   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:21.255808   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:21.255813   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:21.259440   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:21.260300   26827 pod_ready.go:93] pod "kube-scheduler-ha-091565-m03" in "kube-system" namespace has status "Ready":"True"
	I0918 20:04:21.260322   26827 pod_ready.go:82] duration metric: took 399.775629ms for pod "kube-scheduler-ha-091565-m03" in "kube-system" namespace to be "Ready" ...
	I0918 20:04:21.260332   26827 pod_ready.go:39] duration metric: took 5.200469523s of extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0918 20:04:21.260346   26827 api_server.go:52] waiting for apiserver process to appear ...
	I0918 20:04:21.260416   26827 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 20:04:21.276372   26827 api_server.go:72] duration metric: took 24.03316608s to wait for apiserver process to appear ...
	I0918 20:04:21.276400   26827 api_server.go:88] waiting for apiserver healthz status ...
	I0918 20:04:21.276422   26827 api_server.go:253] Checking apiserver healthz at https://192.168.39.215:8443/healthz ...
	I0918 20:04:21.282493   26827 api_server.go:279] https://192.168.39.215:8443/healthz returned 200:
	ok
	I0918 20:04:21.282563   26827 round_trippers.go:463] GET https://192.168.39.215:8443/version
	I0918 20:04:21.282571   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:21.282579   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:21.282586   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:21.283373   26827 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0918 20:04:21.283434   26827 api_server.go:141] control plane version: v1.31.1
	I0918 20:04:21.283445   26827 api_server.go:131] duration metric: took 7.03877ms to wait for apiserver health ...
	I0918 20:04:21.283452   26827 system_pods.go:43] waiting for kube-system pods to appear ...
	I0918 20:04:21.455842   26827 request.go:632] Waited for 172.326435ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods
	I0918 20:04:21.455906   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods
	I0918 20:04:21.455913   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:21.455920   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:21.455924   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:21.461721   26827 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0918 20:04:21.469221   26827 system_pods.go:59] 24 kube-system pods found
	I0918 20:04:21.469250   26827 system_pods.go:61] "coredns-7c65d6cfc9-8zcqk" [644e8147-96e9-41a1-99b8-d2de17e4798c] Running
	I0918 20:04:21.469256   26827 system_pods.go:61] "coredns-7c65d6cfc9-w97kk" [70428cd6-0523-44c8-89f3-62837b52ca80] Running
	I0918 20:04:21.469260   26827 system_pods.go:61] "etcd-ha-091565" [c5af3e98-c375-448c-9cac-6a83b115ca71] Running
	I0918 20:04:21.469263   26827 system_pods.go:61] "etcd-ha-091565-m02" [71e1c78e-6a11-4b46-baa2-83c98d666cee] Running
	I0918 20:04:21.469267   26827 system_pods.go:61] "etcd-ha-091565-m03" [9c1e9878-8b36-4e4d-9fc1-b81e4cd49c08] Running
	I0918 20:04:21.469270   26827 system_pods.go:61] "kindnet-5rh2w" [8fbd3b35-4d3a-497f-bbcf-0cc0b04ec495] Running
	I0918 20:04:21.469273   26827 system_pods.go:61] "kindnet-7fl5w" [5c3a9d82-3815-4aa1-8d04-14be25394dcf] Running
	I0918 20:04:21.469278   26827 system_pods.go:61] "kindnet-bzsqr" [bf1e6f1b-d3ad-439a-9b9a-882e6e989a56] Running
	I0918 20:04:21.469282   26827 system_pods.go:61] "kube-apiserver-ha-091565" [1c2ddbd9-3f78-415b-af21-1d43925382c5] Running
	I0918 20:04:21.469285   26827 system_pods.go:61] "kube-apiserver-ha-091565-m02" [b482ba6c-e415-4e3c-aa56-c17a4f3bcc13] Running
	I0918 20:04:21.469288   26827 system_pods.go:61] "kube-apiserver-ha-091565-m03" [597eb4b7-df02-430e-98f9-24de20295e3b] Running
	I0918 20:04:21.469291   26827 system_pods.go:61] "kube-controller-manager-ha-091565" [3812cd1b-619d-4da8-b039-18f7adb53647] Running
	I0918 20:04:21.469295   26827 system_pods.go:61] "kube-controller-manager-ha-091565-m02" [2c3ecce6-b148-498b-b66e-e5c000f51940] Running
	I0918 20:04:21.469298   26827 system_pods.go:61] "kube-controller-manager-ha-091565-m03" [d9871df2-6370-47a6-98d4-fd9acfddd11a] Running
	I0918 20:04:21.469301   26827 system_pods.go:61] "kube-proxy-4p8rj" [ebe65af8-abb1-4ed3-a12f-b822ec09e891] Running
	I0918 20:04:21.469305   26827 system_pods.go:61] "kube-proxy-4wm6h" [d6904231-6f64-4447-9932-0cd5d692978b] Running
	I0918 20:04:21.469310   26827 system_pods.go:61] "kube-proxy-bxblp" [68143b05-afd1-409a-aaa6-8b3c6841bbfb] Running
	I0918 20:04:21.469314   26827 system_pods.go:61] "kube-scheduler-ha-091565" [ad2ac667-3328-4ad3-b736-c8c633140e2d] Running
	I0918 20:04:21.469319   26827 system_pods.go:61] "kube-scheduler-ha-091565-m02" [fd27db83-43f3-40a8-9ca7-5570f78d562b] Running
	I0918 20:04:21.469322   26827 system_pods.go:61] "kube-scheduler-ha-091565-m03" [c8432a2a-548b-4a97-852a-a18f82f406d2] Running
	I0918 20:04:21.469326   26827 system_pods.go:61] "kube-vip-ha-091565" [b3fccd60-ac88-4dda-b8bb-b1b8c45cbfe5] Running
	I0918 20:04:21.469332   26827 system_pods.go:61] "kube-vip-ha-091565-m02" [12ca12ad-6dee-4b50-9273-c48d8f06acf4] Running
	I0918 20:04:21.469336   26827 system_pods.go:61] "kube-vip-ha-091565-m03" [8389ddfd-fca7-4698-a747-4eedf299dc4a] Running
	I0918 20:04:21.469341   26827 system_pods.go:61] "storage-provisioner" [b7dffb85-905b-4166-a680-34c77cf87d09] Running
	I0918 20:04:21.469347   26827 system_pods.go:74] duration metric: took 185.890335ms to wait for pod list to return data ...
	I0918 20:04:21.469357   26827 default_sa.go:34] waiting for default service account to be created ...
	I0918 20:04:21.655850   26827 request.go:632] Waited for 186.415202ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/namespaces/default/serviceaccounts
	I0918 20:04:21.655922   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/namespaces/default/serviceaccounts
	I0918 20:04:21.655931   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:21.655941   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:21.655949   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:21.659629   26827 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 20:04:21.659759   26827 default_sa.go:45] found service account: "default"
	I0918 20:04:21.659777   26827 default_sa.go:55] duration metric: took 190.414417ms for default service account to be created ...
	I0918 20:04:21.659788   26827 system_pods.go:116] waiting for k8s-apps to be running ...
	I0918 20:04:21.856111   26827 request.go:632] Waited for 196.255287ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods
	I0918 20:04:21.856170   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/namespaces/kube-system/pods
	I0918 20:04:21.856175   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:21.856182   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:21.856186   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:21.863662   26827 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0918 20:04:21.871644   26827 system_pods.go:86] 24 kube-system pods found
	I0918 20:04:21.871682   26827 system_pods.go:89] "coredns-7c65d6cfc9-8zcqk" [644e8147-96e9-41a1-99b8-d2de17e4798c] Running
	I0918 20:04:21.871691   26827 system_pods.go:89] "coredns-7c65d6cfc9-w97kk" [70428cd6-0523-44c8-89f3-62837b52ca80] Running
	I0918 20:04:21.871696   26827 system_pods.go:89] "etcd-ha-091565" [c5af3e98-c375-448c-9cac-6a83b115ca71] Running
	I0918 20:04:21.871703   26827 system_pods.go:89] "etcd-ha-091565-m02" [71e1c78e-6a11-4b46-baa2-83c98d666cee] Running
	I0918 20:04:21.871708   26827 system_pods.go:89] "etcd-ha-091565-m03" [9c1e9878-8b36-4e4d-9fc1-b81e4cd49c08] Running
	I0918 20:04:21.871713   26827 system_pods.go:89] "kindnet-5rh2w" [8fbd3b35-4d3a-497f-bbcf-0cc0b04ec495] Running
	I0918 20:04:21.871719   26827 system_pods.go:89] "kindnet-7fl5w" [5c3a9d82-3815-4aa1-8d04-14be25394dcf] Running
	I0918 20:04:21.871725   26827 system_pods.go:89] "kindnet-bzsqr" [bf1e6f1b-d3ad-439a-9b9a-882e6e989a56] Running
	I0918 20:04:21.871731   26827 system_pods.go:89] "kube-apiserver-ha-091565" [1c2ddbd9-3f78-415b-af21-1d43925382c5] Running
	I0918 20:04:21.871739   26827 system_pods.go:89] "kube-apiserver-ha-091565-m02" [b482ba6c-e415-4e3c-aa56-c17a4f3bcc13] Running
	I0918 20:04:21.871746   26827 system_pods.go:89] "kube-apiserver-ha-091565-m03" [597eb4b7-df02-430e-98f9-24de20295e3b] Running
	I0918 20:04:21.871756   26827 system_pods.go:89] "kube-controller-manager-ha-091565" [3812cd1b-619d-4da8-b039-18f7adb53647] Running
	I0918 20:04:21.871763   26827 system_pods.go:89] "kube-controller-manager-ha-091565-m02" [2c3ecce6-b148-498b-b66e-e5c000f51940] Running
	I0918 20:04:21.871771   26827 system_pods.go:89] "kube-controller-manager-ha-091565-m03" [d9871df2-6370-47a6-98d4-fd9acfddd11a] Running
	I0918 20:04:21.871778   26827 system_pods.go:89] "kube-proxy-4p8rj" [ebe65af8-abb1-4ed3-a12f-b822ec09e891] Running
	I0918 20:04:21.871786   26827 system_pods.go:89] "kube-proxy-4wm6h" [d6904231-6f64-4447-9932-0cd5d692978b] Running
	I0918 20:04:21.871792   26827 system_pods.go:89] "kube-proxy-bxblp" [68143b05-afd1-409a-aaa6-8b3c6841bbfb] Running
	I0918 20:04:21.871799   26827 system_pods.go:89] "kube-scheduler-ha-091565" [ad2ac667-3328-4ad3-b736-c8c633140e2d] Running
	I0918 20:04:21.871805   26827 system_pods.go:89] "kube-scheduler-ha-091565-m02" [fd27db83-43f3-40a8-9ca7-5570f78d562b] Running
	I0918 20:04:21.871813   26827 system_pods.go:89] "kube-scheduler-ha-091565-m03" [c8432a2a-548b-4a97-852a-a18f82f406d2] Running
	I0918 20:04:21.871819   26827 system_pods.go:89] "kube-vip-ha-091565" [b3fccd60-ac88-4dda-b8bb-b1b8c45cbfe5] Running
	I0918 20:04:21.871827   26827 system_pods.go:89] "kube-vip-ha-091565-m02" [12ca12ad-6dee-4b50-9273-c48d8f06acf4] Running
	I0918 20:04:21.871833   26827 system_pods.go:89] "kube-vip-ha-091565-m03" [8389ddfd-fca7-4698-a747-4eedf299dc4a] Running
	I0918 20:04:21.871838   26827 system_pods.go:89] "storage-provisioner" [b7dffb85-905b-4166-a680-34c77cf87d09] Running
	I0918 20:04:21.871847   26827 system_pods.go:126] duration metric: took 212.052235ms to wait for k8s-apps to be running ...
	I0918 20:04:21.871859   26827 system_svc.go:44] waiting for kubelet service to be running ....
	I0918 20:04:21.871912   26827 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0918 20:04:21.890997   26827 system_svc.go:56] duration metric: WaitForService took 19.130745ms to wait for the kubelet service
	I0918 20:04:21.891029   26827 kubeadm.go:582] duration metric: took 24.647829851s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0918 20:04:21.891052   26827 node_conditions.go:102] verifying NodePressure condition ...
	I0918 20:04:22.055297   26827 request.go:632] Waited for 164.164035ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.215:8443/api/v1/nodes
	I0918 20:04:22.055364   26827 round_trippers.go:463] GET https://192.168.39.215:8443/api/v1/nodes
	I0918 20:04:22.055371   26827 round_trippers.go:469] Request Headers:
	I0918 20:04:22.055381   26827 round_trippers.go:473]     Accept: application/json, */*
	I0918 20:04:22.055387   26827 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0918 20:04:22.060147   26827 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0918 20:04:22.061184   26827 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0918 20:04:22.061208   26827 node_conditions.go:123] node cpu capacity is 2
	I0918 20:04:22.061221   26827 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0918 20:04:22.061227   26827 node_conditions.go:123] node cpu capacity is 2
	I0918 20:04:22.061232   26827 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0918 20:04:22.061235   26827 node_conditions.go:123] node cpu capacity is 2
	I0918 20:04:22.061240   26827 node_conditions.go:105] duration metric: took 170.183013ms to run NodePressure ...
	I0918 20:04:22.061274   26827 start.go:241] waiting for startup goroutines ...
	I0918 20:04:22.061303   26827 start.go:255] writing updated cluster config ...
	I0918 20:04:22.061591   26827 ssh_runner.go:195] Run: rm -f paused
	I0918 20:04:22.113181   26827 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0918 20:04:22.115218   26827 out.go:177] * Done! kubectl is now configured to use "ha-091565" cluster and "default" namespace by default
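	
	The startup log above ends with minikube's readiness checks: each kube-system pod is polled until it reports "Ready", the kube-apiserver process is confirmed with pgrep, and https://192.168.39.215:8443/healthz is probed until it returns 200/ok. The same state can be re-checked by hand against the finished cluster, assuming the ha-091565 kubectl context written above is still the current one:
	
	    kubectl --context ha-091565 get --raw='/healthz'
	    kubectl --context ha-091565 get pods -n kube-system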
	
	
	==> CRI-O <==
	Sep 18 20:08:14 ha-091565 crio[663]: time="2024-09-18 20:08:14.077618752Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726690094077572387,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=76bd55f0-7cbb-4974-832c-61fd954e0deb name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 20:08:14 ha-091565 crio[663]: time="2024-09-18 20:08:14.078222493Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4918fca6-b0a2-4b5d-b820-6aab927319de name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 20:08:14 ha-091565 crio[663]: time="2024-09-18 20:08:14.078276693Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4918fca6-b0a2-4b5d-b820-6aab927319de name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 20:08:14 ha-091565 crio[663]: time="2024-09-18 20:08:14.078786899Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7e40397db0622fd3967d20535016d957560de3976f7ca7e977fa2b186ff9f3ec,PodSandboxId:32509037cc4e4898a637b1de6a87eabd6d7504444bcf3f541a5234fb1ed4197a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726689867249474588,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-xhmzx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 16808919-56d0-40cd-b88e-28fb5a40b3a2,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f8cab8eef59352ca937c9b8e7056a419bb426b183a841f9e8a43366ecb1b283,PodSandboxId:16c38fe68d94e679d8aaead9d1be6f411a6984d3a367813f390d2ffc174a7047,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726689721940579425,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8zcqk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 644e8147-96e9-41a1-99b8-d2de17e4798c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26162985f4a281a161b88fc27b5cf0dbedef6117a63f0b64cef28f41346dbd3e,PodSandboxId:12355cb306ab12e321590a927e5f0cd88fa0f505aa7bd470359b1d9c47a7b425,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726689721876307714,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: b7dffb85-905b-4166-a680-34c77cf87d09,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b5c6773eef44d848fd26ebcf13ea82ebc3c1e1bd23618a944bf5f6d6a0e7bb8,PodSandboxId:b0c496c53b4c9e8442d0d8a6bcef8269a3baad2995c3647784331bd1b03a34df,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726689721549289388,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-w97kk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70428cd6-05
23-44c8-89f3-62837b52ca80,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52ae20a53e17beece735496b8e65537bbebfef5da53e8a2e7a74820fee56cd63,PodSandboxId:e5053f7183e292c9c0d41caef5020bdc6b59ce14f38a3c98750b9667d61f9d14,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17266897
09419057741,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7fl5w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c3a9d82-3815-4aa1-8d04-14be25394dcf,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9aa80c6b1f558ba9ef5b7e70df3690995e271e9296b0cc3e71ade739843f53a,PodSandboxId:e7fdb7e540529f305bf2eb00c376fe4bdf92019e975ef0252046f5f4a250d965,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726689709057454293,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4wm6h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6904231-6f64-4447-9932-0cd5d692978b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f40b55a2539761c632fe964601680ae492c74cf2cad4ef1130393c06f576f943,PodSandboxId:db3221d8284579089e2d011bb197abfc5cd3d1fbb8db7264f21262b09bca210e,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726689699933969237,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-091565,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfccef71d5ed662cfd9075de2c23ae11,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c435dbd5b540cdb87d1f79fc2aadca19db9c46aff42e67d78c5d9c8eee1b6de,PodSandboxId:01b7098c9837574b56ca73c576ce61c56e6190d34e8f2c4814bae7a1f5952e5c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726689697296263323,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-091565,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42a715c9fe466585ee83dba1966182d5,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4358e16fe123bb31210776fce1969072fb954b1f7cc3b51c459cd155992c1be5,PodSandboxId:ae412aa32e14f8d37184d672500203218827aa56a04162cf4a4e1bde7a1c9833,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726689697193494552,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-
ha-091565,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed5bb72bdd2d49ac86ea107effb85714,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f141188bda32520e26d392de6e6e49a863d49aacb461b7f3d5649d68557e96d3,PodSandboxId:bfb245c345b6c6bef07d02ba22a1fa12793ef353b2c4551e8ccd42337869e4b3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726689697212123129,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-091565,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: fb44b1ca5e1af25a31cc20be38506f2d,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97b3f8978c2594307cdfef72e8b8aebac842152f9c1893bb34700b6e0932027e,PodSandboxId:0555602e8b34d796a1a62f9ff273a78ebf4b7a7f1459439beaa95a2d3c02577b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726689697128272787,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-091565,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7f1e1feaa8e654300c84052131dd12a,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4918fca6-b0a2-4b5d-b820-6aab927319de name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 20:08:14 ha-091565 crio[663]: time="2024-09-18 20:08:14.122064304Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9bca77ae-d50b-4eca-b6ea-e23ceea72c7c name=/runtime.v1.RuntimeService/Version
	Sep 18 20:08:14 ha-091565 crio[663]: time="2024-09-18 20:08:14.122140256Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9bca77ae-d50b-4eca-b6ea-e23ceea72c7c name=/runtime.v1.RuntimeService/Version
	Sep 18 20:08:14 ha-091565 crio[663]: time="2024-09-18 20:08:14.124123216Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b3ced346-daa2-4c3b-9e89-4b5abefbe155 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 20:08:14 ha-091565 crio[663]: time="2024-09-18 20:08:14.124852855Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726690094124815643,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b3ced346-daa2-4c3b-9e89-4b5abefbe155 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 20:08:14 ha-091565 crio[663]: time="2024-09-18 20:08:14.125513732Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f5845a0a-0d5a-4283-91fa-77342197e370 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 20:08:14 ha-091565 crio[663]: time="2024-09-18 20:08:14.125588599Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f5845a0a-0d5a-4283-91fa-77342197e370 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 20:08:14 ha-091565 crio[663]: time="2024-09-18 20:08:14.125838294Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7e40397db0622fd3967d20535016d957560de3976f7ca7e977fa2b186ff9f3ec,PodSandboxId:32509037cc4e4898a637b1de6a87eabd6d7504444bcf3f541a5234fb1ed4197a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726689867249474588,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-xhmzx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 16808919-56d0-40cd-b88e-28fb5a40b3a2,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f8cab8eef59352ca937c9b8e7056a419bb426b183a841f9e8a43366ecb1b283,PodSandboxId:16c38fe68d94e679d8aaead9d1be6f411a6984d3a367813f390d2ffc174a7047,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726689721940579425,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8zcqk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 644e8147-96e9-41a1-99b8-d2de17e4798c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26162985f4a281a161b88fc27b5cf0dbedef6117a63f0b64cef28f41346dbd3e,PodSandboxId:12355cb306ab12e321590a927e5f0cd88fa0f505aa7bd470359b1d9c47a7b425,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726689721876307714,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: b7dffb85-905b-4166-a680-34c77cf87d09,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b5c6773eef44d848fd26ebcf13ea82ebc3c1e1bd23618a944bf5f6d6a0e7bb8,PodSandboxId:b0c496c53b4c9e8442d0d8a6bcef8269a3baad2995c3647784331bd1b03a34df,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726689721549289388,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-w97kk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70428cd6-05
23-44c8-89f3-62837b52ca80,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52ae20a53e17beece735496b8e65537bbebfef5da53e8a2e7a74820fee56cd63,PodSandboxId:e5053f7183e292c9c0d41caef5020bdc6b59ce14f38a3c98750b9667d61f9d14,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17266897
09419057741,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7fl5w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c3a9d82-3815-4aa1-8d04-14be25394dcf,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9aa80c6b1f558ba9ef5b7e70df3690995e271e9296b0cc3e71ade739843f53a,PodSandboxId:e7fdb7e540529f305bf2eb00c376fe4bdf92019e975ef0252046f5f4a250d965,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726689709057454293,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4wm6h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6904231-6f64-4447-9932-0cd5d692978b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f40b55a2539761c632fe964601680ae492c74cf2cad4ef1130393c06f576f943,PodSandboxId:db3221d8284579089e2d011bb197abfc5cd3d1fbb8db7264f21262b09bca210e,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726689699933969237,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-091565,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfccef71d5ed662cfd9075de2c23ae11,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c435dbd5b540cdb87d1f79fc2aadca19db9c46aff42e67d78c5d9c8eee1b6de,PodSandboxId:01b7098c9837574b56ca73c576ce61c56e6190d34e8f2c4814bae7a1f5952e5c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726689697296263323,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-091565,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42a715c9fe466585ee83dba1966182d5,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4358e16fe123bb31210776fce1969072fb954b1f7cc3b51c459cd155992c1be5,PodSandboxId:ae412aa32e14f8d37184d672500203218827aa56a04162cf4a4e1bde7a1c9833,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726689697193494552,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-
ha-091565,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed5bb72bdd2d49ac86ea107effb85714,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f141188bda32520e26d392de6e6e49a863d49aacb461b7f3d5649d68557e96d3,PodSandboxId:bfb245c345b6c6bef07d02ba22a1fa12793ef353b2c4551e8ccd42337869e4b3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726689697212123129,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-091565,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: fb44b1ca5e1af25a31cc20be38506f2d,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97b3f8978c2594307cdfef72e8b8aebac842152f9c1893bb34700b6e0932027e,PodSandboxId:0555602e8b34d796a1a62f9ff273a78ebf4b7a7f1459439beaa95a2d3c02577b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726689697128272787,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-091565,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7f1e1feaa8e654300c84052131dd12a,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f5845a0a-0d5a-4283-91fa-77342197e370 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 20:08:14 ha-091565 crio[663]: time="2024-09-18 20:08:14.167913342Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=86118b46-3f2e-4990-ac94-dc239c51274b name=/runtime.v1.RuntimeService/Version
	Sep 18 20:08:14 ha-091565 crio[663]: time="2024-09-18 20:08:14.168009722Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=86118b46-3f2e-4990-ac94-dc239c51274b name=/runtime.v1.RuntimeService/Version
	Sep 18 20:08:14 ha-091565 crio[663]: time="2024-09-18 20:08:14.169077952Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=33f72fa8-0dda-403b-8c79-ccc8987685c4 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 20:08:14 ha-091565 crio[663]: time="2024-09-18 20:08:14.169506384Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726690094169483301,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=33f72fa8-0dda-403b-8c79-ccc8987685c4 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 20:08:14 ha-091565 crio[663]: time="2024-09-18 20:08:14.170070217Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=59dd6dcf-6b87-4824-9115-a0fa41d5e1d9 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 20:08:14 ha-091565 crio[663]: time="2024-09-18 20:08:14.170124107Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=59dd6dcf-6b87-4824-9115-a0fa41d5e1d9 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 20:08:14 ha-091565 crio[663]: time="2024-09-18 20:08:14.170368639Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7e40397db0622fd3967d20535016d957560de3976f7ca7e977fa2b186ff9f3ec,PodSandboxId:32509037cc4e4898a637b1de6a87eabd6d7504444bcf3f541a5234fb1ed4197a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726689867249474588,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-xhmzx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 16808919-56d0-40cd-b88e-28fb5a40b3a2,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f8cab8eef59352ca937c9b8e7056a419bb426b183a841f9e8a43366ecb1b283,PodSandboxId:16c38fe68d94e679d8aaead9d1be6f411a6984d3a367813f390d2ffc174a7047,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726689721940579425,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8zcqk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 644e8147-96e9-41a1-99b8-d2de17e4798c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26162985f4a281a161b88fc27b5cf0dbedef6117a63f0b64cef28f41346dbd3e,PodSandboxId:12355cb306ab12e321590a927e5f0cd88fa0f505aa7bd470359b1d9c47a7b425,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726689721876307714,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: b7dffb85-905b-4166-a680-34c77cf87d09,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b5c6773eef44d848fd26ebcf13ea82ebc3c1e1bd23618a944bf5f6d6a0e7bb8,PodSandboxId:b0c496c53b4c9e8442d0d8a6bcef8269a3baad2995c3647784331bd1b03a34df,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726689721549289388,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-w97kk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70428cd6-05
23-44c8-89f3-62837b52ca80,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52ae20a53e17beece735496b8e65537bbebfef5da53e8a2e7a74820fee56cd63,PodSandboxId:e5053f7183e292c9c0d41caef5020bdc6b59ce14f38a3c98750b9667d61f9d14,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17266897
09419057741,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7fl5w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c3a9d82-3815-4aa1-8d04-14be25394dcf,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9aa80c6b1f558ba9ef5b7e70df3690995e271e9296b0cc3e71ade739843f53a,PodSandboxId:e7fdb7e540529f305bf2eb00c376fe4bdf92019e975ef0252046f5f4a250d965,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726689709057454293,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4wm6h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6904231-6f64-4447-9932-0cd5d692978b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f40b55a2539761c632fe964601680ae492c74cf2cad4ef1130393c06f576f943,PodSandboxId:db3221d8284579089e2d011bb197abfc5cd3d1fbb8db7264f21262b09bca210e,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726689699933969237,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-091565,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfccef71d5ed662cfd9075de2c23ae11,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c435dbd5b540cdb87d1f79fc2aadca19db9c46aff42e67d78c5d9c8eee1b6de,PodSandboxId:01b7098c9837574b56ca73c576ce61c56e6190d34e8f2c4814bae7a1f5952e5c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726689697296263323,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-091565,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42a715c9fe466585ee83dba1966182d5,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4358e16fe123bb31210776fce1969072fb954b1f7cc3b51c459cd155992c1be5,PodSandboxId:ae412aa32e14f8d37184d672500203218827aa56a04162cf4a4e1bde7a1c9833,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726689697193494552,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-
ha-091565,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed5bb72bdd2d49ac86ea107effb85714,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f141188bda32520e26d392de6e6e49a863d49aacb461b7f3d5649d68557e96d3,PodSandboxId:bfb245c345b6c6bef07d02ba22a1fa12793ef353b2c4551e8ccd42337869e4b3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726689697212123129,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-091565,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: fb44b1ca5e1af25a31cc20be38506f2d,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97b3f8978c2594307cdfef72e8b8aebac842152f9c1893bb34700b6e0932027e,PodSandboxId:0555602e8b34d796a1a62f9ff273a78ebf4b7a7f1459439beaa95a2d3c02577b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726689697128272787,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-091565,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7f1e1feaa8e654300c84052131dd12a,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=59dd6dcf-6b87-4824-9115-a0fa41d5e1d9 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 20:08:14 ha-091565 crio[663]: time="2024-09-18 20:08:14.208572526Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=880f1b38-4bc9-4882-a984-e4371ee86701 name=/runtime.v1.RuntimeService/Version
	Sep 18 20:08:14 ha-091565 crio[663]: time="2024-09-18 20:08:14.208681419Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=880f1b38-4bc9-4882-a984-e4371ee86701 name=/runtime.v1.RuntimeService/Version
	Sep 18 20:08:14 ha-091565 crio[663]: time="2024-09-18 20:08:14.210477381Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a4fb0f12-61e0-4203-8346-43bd1f226496 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 20:08:14 ha-091565 crio[663]: time="2024-09-18 20:08:14.211060416Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726690094211029663,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a4fb0f12-61e0-4203-8346-43bd1f226496 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 20:08:14 ha-091565 crio[663]: time="2024-09-18 20:08:14.213988601Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=22618009-a020-446d-b8cb-17c852e6d011 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 20:08:14 ha-091565 crio[663]: time="2024-09-18 20:08:14.214045640Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=22618009-a020-446d-b8cb-17c852e6d011 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 20:08:14 ha-091565 crio[663]: time="2024-09-18 20:08:14.214288502Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7e40397db0622fd3967d20535016d957560de3976f7ca7e977fa2b186ff9f3ec,PodSandboxId:32509037cc4e4898a637b1de6a87eabd6d7504444bcf3f541a5234fb1ed4197a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726689867249474588,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-xhmzx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 16808919-56d0-40cd-b88e-28fb5a40b3a2,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f8cab8eef59352ca937c9b8e7056a419bb426b183a841f9e8a43366ecb1b283,PodSandboxId:16c38fe68d94e679d8aaead9d1be6f411a6984d3a367813f390d2ffc174a7047,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726689721940579425,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8zcqk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 644e8147-96e9-41a1-99b8-d2de17e4798c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26162985f4a281a161b88fc27b5cf0dbedef6117a63f0b64cef28f41346dbd3e,PodSandboxId:12355cb306ab12e321590a927e5f0cd88fa0f505aa7bd470359b1d9c47a7b425,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726689721876307714,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: b7dffb85-905b-4166-a680-34c77cf87d09,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b5c6773eef44d848fd26ebcf13ea82ebc3c1e1bd23618a944bf5f6d6a0e7bb8,PodSandboxId:b0c496c53b4c9e8442d0d8a6bcef8269a3baad2995c3647784331bd1b03a34df,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726689721549289388,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-w97kk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70428cd6-05
23-44c8-89f3-62837b52ca80,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52ae20a53e17beece735496b8e65537bbebfef5da53e8a2e7a74820fee56cd63,PodSandboxId:e5053f7183e292c9c0d41caef5020bdc6b59ce14f38a3c98750b9667d61f9d14,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17266897
09419057741,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7fl5w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c3a9d82-3815-4aa1-8d04-14be25394dcf,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9aa80c6b1f558ba9ef5b7e70df3690995e271e9296b0cc3e71ade739843f53a,PodSandboxId:e7fdb7e540529f305bf2eb00c376fe4bdf92019e975ef0252046f5f4a250d965,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726689709057454293,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4wm6h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6904231-6f64-4447-9932-0cd5d692978b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f40b55a2539761c632fe964601680ae492c74cf2cad4ef1130393c06f576f943,PodSandboxId:db3221d8284579089e2d011bb197abfc5cd3d1fbb8db7264f21262b09bca210e,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726689699933969237,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-091565,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfccef71d5ed662cfd9075de2c23ae11,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c435dbd5b540cdb87d1f79fc2aadca19db9c46aff42e67d78c5d9c8eee1b6de,PodSandboxId:01b7098c9837574b56ca73c576ce61c56e6190d34e8f2c4814bae7a1f5952e5c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726689697296263323,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-091565,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42a715c9fe466585ee83dba1966182d5,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4358e16fe123bb31210776fce1969072fb954b1f7cc3b51c459cd155992c1be5,PodSandboxId:ae412aa32e14f8d37184d672500203218827aa56a04162cf4a4e1bde7a1c9833,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726689697193494552,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-
ha-091565,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed5bb72bdd2d49ac86ea107effb85714,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f141188bda32520e26d392de6e6e49a863d49aacb461b7f3d5649d68557e96d3,PodSandboxId:bfb245c345b6c6bef07d02ba22a1fa12793ef353b2c4551e8ccd42337869e4b3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726689697212123129,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-091565,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: fb44b1ca5e1af25a31cc20be38506f2d,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97b3f8978c2594307cdfef72e8b8aebac842152f9c1893bb34700b6e0932027e,PodSandboxId:0555602e8b34d796a1a62f9ff273a78ebf4b7a7f1459439beaa95a2d3c02577b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726689697128272787,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-091565,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7f1e1feaa8e654300c84052131dd12a,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=22618009-a020-446d-b8cb-17c852e6d011 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	7e40397db0622       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   32509037cc4e4       busybox-7dff88458-xhmzx
	4f8cab8eef593       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   16c38fe68d94e       coredns-7c65d6cfc9-8zcqk
	26162985f4a28       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   12355cb306ab1       storage-provisioner
	9b5c6773eef44       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   b0c496c53b4c9       coredns-7c65d6cfc9-w97kk
	52ae20a53e17b       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      6 minutes ago       Running             kindnet-cni               0                   e5053f7183e29       kindnet-7fl5w
	c9aa80c6b1f55       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      6 minutes ago       Running             kube-proxy                0                   e7fdb7e540529       kube-proxy-4wm6h
	f40b55a253976       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     6 minutes ago       Running             kube-vip                  0                   db3221d828457       kube-vip-ha-091565
	8c435dbd5b540       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      6 minutes ago       Running             kube-scheduler            0                   01b7098c98375       kube-scheduler-ha-091565
	f141188bda325       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      6 minutes ago       Running             kube-apiserver            0                   bfb245c345b6c       kube-apiserver-ha-091565
	4358e16fe123b       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   ae412aa32e14f       etcd-ha-091565
	97b3f8978c259       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      6 minutes ago       Running             kube-controller-manager   0                   0555602e8b34d       kube-controller-manager-ha-091565
	
	
	==> coredns [4f8cab8eef59352ca937c9b8e7056a419bb426b183a841f9e8a43366ecb1b283] <==
	[INFO] 10.244.0.4:46368 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000070924s
	[INFO] 10.244.1.2:33610 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000192256s
	[INFO] 10.244.1.2:44224 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.004970814s
	[INFO] 10.244.1.2:38504 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000245166s
	[INFO] 10.244.1.2:33749 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000201604s
	[INFO] 10.244.1.2:44283 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003884102s
	[INFO] 10.244.1.2:32970 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000204769s
	[INFO] 10.244.1.2:52008 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000243831s
	[INFO] 10.244.2.2:50260 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000163913s
	[INFO] 10.244.2.2:55732 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001811166s
	[INFO] 10.244.2.2:39226 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00012772s
	[INFO] 10.244.2.2:53709 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0000925s
	[INFO] 10.244.2.2:41092 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000125187s
	[INFO] 10.244.0.4:40054 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000124612s
	[INFO] 10.244.0.4:38790 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001299276s
	[INFO] 10.244.0.4:59253 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000062856s
	[INFO] 10.244.0.4:38256 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000094015s
	[INFO] 10.244.1.2:44940 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000153669s
	[INFO] 10.244.1.2:48450 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000097947s
	[INFO] 10.244.0.4:38580 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000117553s
	[INFO] 10.244.2.2:59546 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000170402s
	[INFO] 10.244.2.2:49026 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000189642s
	[INFO] 10.244.2.2:45658 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000151371s
	[INFO] 10.244.0.4:51397 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000169114s
	[INFO] 10.244.0.4:47813 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000155527s
	
	
	==> coredns [9b5c6773eef44d848fd26ebcf13ea82ebc3c1e1bd23618a944bf5f6d6a0e7bb8] <==
	[INFO] 10.244.0.4:40496 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001977875s
	[INFO] 10.244.1.2:55891 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000166003s
	[INFO] 10.244.2.2:51576 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001523061s
	[INFO] 10.244.2.2:45932 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000147698s
	[INFO] 10.244.2.2:48639 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000087315s
	[INFO] 10.244.0.4:52361 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001834081s
	[INFO] 10.244.0.4:55907 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000221265s
	[INFO] 10.244.0.4:58409 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000117627s
	[INFO] 10.244.0.4:50242 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000115347s
	[INFO] 10.244.1.2:47046 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000136453s
	[INFO] 10.244.1.2:43799 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000196628s
	[INFO] 10.244.2.2:55965 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000123662s
	[INFO] 10.244.2.2:50433 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000098915s
	[INFO] 10.244.2.2:53589 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000068105s
	[INFO] 10.244.2.2:34234 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000074s
	[INFO] 10.244.0.4:45468 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000084304s
	[INFO] 10.244.0.4:51889 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000073683s
	[INFO] 10.244.0.4:50414 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000047051s
	[INFO] 10.244.1.2:45104 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000139109s
	[INFO] 10.244.1.2:42703 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00019857s
	[INFO] 10.244.1.2:45604 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000184516s
	[INFO] 10.244.1.2:54679 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00010429s
	[INFO] 10.244.2.2:37265 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000089491s
	[INFO] 10.244.0.4:58464 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000108633s
	[INFO] 10.244.0.4:60733 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0000682s
	
	
	==> describe nodes <==
	Name:               ha-091565
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-091565
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=85073601a832bd4bbda5d11fa91feafff6ec6b91
	                    minikube.k8s.io/name=ha-091565
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_18T20_01_44_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 18 Sep 2024 20:01:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-091565
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 18 Sep 2024 20:08:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 18 Sep 2024 20:04:47 +0000   Wed, 18 Sep 2024 20:01:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 18 Sep 2024 20:04:47 +0000   Wed, 18 Sep 2024 20:01:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 18 Sep 2024 20:04:47 +0000   Wed, 18 Sep 2024 20:01:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 18 Sep 2024 20:04:47 +0000   Wed, 18 Sep 2024 20:02:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.215
	  Hostname:    ha-091565
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a62ed2f9eda04eb9bbdd5bc2c8925018
	  System UUID:                a62ed2f9-eda0-4eb9-bbdd-5bc2c8925018
	  Boot ID:                    e0c4d56b-81dc-4d69-9fe6-35f1341e336d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-xhmzx              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m52s
	  kube-system                 coredns-7c65d6cfc9-8zcqk             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m27s
	  kube-system                 coredns-7c65d6cfc9-w97kk             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m27s
	  kube-system                 etcd-ha-091565                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m31s
	  kube-system                 kindnet-7fl5w                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m27s
	  kube-system                 kube-apiserver-ha-091565             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m31s
	  kube-system                 kube-controller-manager-ha-091565    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m32s
	  kube-system                 kube-proxy-4wm6h                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m27s
	  kube-system                 kube-scheduler-ha-091565             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m31s
	  kube-system                 kube-vip-ha-091565                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m33s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m25s  kube-proxy       
	  Normal  Starting                 6m31s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m31s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m31s  kubelet          Node ha-091565 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m31s  kubelet          Node ha-091565 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m31s  kubelet          Node ha-091565 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m28s  node-controller  Node ha-091565 event: Registered Node ha-091565 in Controller
	  Normal  NodeReady                6m13s  kubelet          Node ha-091565 status is now: NodeReady
	  Normal  RegisteredNode           5m29s  node-controller  Node ha-091565 event: Registered Node ha-091565 in Controller
	  Normal  RegisteredNode           4m12s  node-controller  Node ha-091565 event: Registered Node ha-091565 in Controller
	
	
	Name:               ha-091565-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-091565-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=85073601a832bd4bbda5d11fa91feafff6ec6b91
	                    minikube.k8s.io/name=ha-091565
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_18T20_02_40_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 18 Sep 2024 20:02:37 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-091565-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 18 Sep 2024 20:05:41 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 18 Sep 2024 20:04:41 +0000   Wed, 18 Sep 2024 20:06:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 18 Sep 2024 20:04:41 +0000   Wed, 18 Sep 2024 20:06:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 18 Sep 2024 20:04:41 +0000   Wed, 18 Sep 2024 20:06:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 18 Sep 2024 20:04:41 +0000   Wed, 18 Sep 2024 20:06:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.92
	  Hostname:    ha-091565-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 725aeac5e21d42d69ce571d302d9f7bc
	  System UUID:                725aeac5-e21d-42d6-9ce5-71d302d9f7bc
	  Boot ID:                    e1d66727-ad6e-4cce-aca1-07f5fd60d891
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-45phf                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m52s
	  kube-system                 etcd-ha-091565-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m35s
	  kube-system                 kindnet-bzsqr                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m37s
	  kube-system                 kube-apiserver-ha-091565-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m36s
	  kube-system                 kube-controller-manager-ha-091565-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m36s
	  kube-system                 kube-proxy-bxblp                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m37s
	  kube-system                 kube-scheduler-ha-091565-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m36s
	  kube-system                 kube-vip-ha-091565-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m32s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m32s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m37s (x8 over 5m37s)  kubelet          Node ha-091565-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m37s (x8 over 5m37s)  kubelet          Node ha-091565-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m37s (x7 over 5m37s)  kubelet          Node ha-091565-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m37s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m33s                  node-controller  Node ha-091565-m02 event: Registered Node ha-091565-m02 in Controller
	  Normal  RegisteredNode           5m29s                  node-controller  Node ha-091565-m02 event: Registered Node ha-091565-m02 in Controller
	  Normal  RegisteredNode           4m12s                  node-controller  Node ha-091565-m02 event: Registered Node ha-091565-m02 in Controller
	  Normal  NodeNotReady             113s                   node-controller  Node ha-091565-m02 status is now: NodeNotReady
	
	
	Name:               ha-091565-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-091565-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=85073601a832bd4bbda5d11fa91feafff6ec6b91
	                    minikube.k8s.io/name=ha-091565
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_18T20_03_57_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 18 Sep 2024 20:03:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-091565-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 18 Sep 2024 20:08:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 18 Sep 2024 20:04:55 +0000   Wed, 18 Sep 2024 20:03:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 18 Sep 2024 20:04:55 +0000   Wed, 18 Sep 2024 20:03:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 18 Sep 2024 20:04:55 +0000   Wed, 18 Sep 2024 20:03:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 18 Sep 2024 20:04:55 +0000   Wed, 18 Sep 2024 20:04:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.53
	  Hostname:    ha-091565-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d7cb71d27a4f4e8b92a5e72c1afd8865
	  System UUID:                d7cb71d2-7a4f-4e8b-92a5-e72c1afd8865
	  Boot ID:                    df33972c-453a-48d6-99c0-49951abc69d7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-jjr2n                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m52s
	  kube-system                 etcd-ha-091565-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m19s
	  kube-system                 kindnet-5rh2w                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m20s
	  kube-system                 kube-apiserver-ha-091565-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m19s
	  kube-system                 kube-controller-manager-ha-091565-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m19s
	  kube-system                 kube-proxy-4p8rj                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m20s
	  kube-system                 kube-scheduler-ha-091565-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m19s
	  kube-system                 kube-vip-ha-091565-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m15s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  4m21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m20s (x8 over 4m21s)  kubelet          Node ha-091565-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m20s (x8 over 4m21s)  kubelet          Node ha-091565-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m20s (x7 over 4m21s)  kubelet          Node ha-091565-m03 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m19s                  node-controller  Node ha-091565-m03 event: Registered Node ha-091565-m03 in Controller
	  Normal  RegisteredNode           4m18s                  node-controller  Node ha-091565-m03 event: Registered Node ha-091565-m03 in Controller
	  Normal  RegisteredNode           4m12s                  node-controller  Node ha-091565-m03 event: Registered Node ha-091565-m03 in Controller
	
	
	Name:               ha-091565-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-091565-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=85073601a832bd4bbda5d11fa91feafff6ec6b91
	                    minikube.k8s.io/name=ha-091565
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_18T20_05_02_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 18 Sep 2024 20:05:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-091565-m04
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 18 Sep 2024 20:08:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 18 Sep 2024 20:05:31 +0000   Wed, 18 Sep 2024 20:05:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 18 Sep 2024 20:05:31 +0000   Wed, 18 Sep 2024 20:05:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 18 Sep 2024 20:05:31 +0000   Wed, 18 Sep 2024 20:05:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 18 Sep 2024 20:05:31 +0000   Wed, 18 Sep 2024 20:05:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.9
	  Hostname:    ha-091565-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 cb0096492d0c441d8778e11eb51e77d3
	  System UUID:                cb009649-2d0c-441d-8778-e11eb51e77d3
	  Boot ID:                    c3da5972-b725-4116-9206-7ac2fefa29cb
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-4xtjm       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m13s
	  kube-system                 kube-proxy-8qkpk    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m13s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m8s                   kube-proxy       
	  Normal  NodeAllocatableEnforced  3m14s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m13s                  node-controller  Node ha-091565-m04 event: Registered Node ha-091565-m04 in Controller
	  Normal  NodeHasSufficientMemory  3m13s (x2 over 3m14s)  kubelet          Node ha-091565-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m13s (x2 over 3m14s)  kubelet          Node ha-091565-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m13s (x2 over 3m14s)  kubelet          Node ha-091565-m04 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3m12s                  node-controller  Node ha-091565-m04 event: Registered Node ha-091565-m04 in Controller
	  Normal  RegisteredNode           3m9s                   node-controller  Node ha-091565-m04 event: Registered Node ha-091565-m04 in Controller
	  Normal  NodeReady                2m53s                  kubelet          Node ha-091565-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Sep18 20:01] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051316] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039151] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.792349] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.893273] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.904226] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +13.896131] systemd-fstab-generator[585]: Ignoring "noauto" option for root device
	[  +0.067482] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.062052] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.180384] systemd-fstab-generator[611]: Ignoring "noauto" option for root device
	[  +0.116835] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.268512] systemd-fstab-generator[653]: Ignoring "noauto" option for root device
	[  +3.829963] systemd-fstab-generator[747]: Ignoring "noauto" option for root device
	[  +4.147936] systemd-fstab-generator[884]: Ignoring "noauto" option for root device
	[  +0.060572] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.397640] kauditd_printk_skb: 79 callbacks suppressed
	[  +0.774401] systemd-fstab-generator[1308]: Ignoring "noauto" option for root device
	[  +5.898362] kauditd_printk_skb: 15 callbacks suppressed
	[Sep18 20:02] kauditd_printk_skb: 38 callbacks suppressed
	[ +41.961999] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [4358e16fe123bb31210776fce1969072fb954b1f7cc3b51c459cd155992c1be5] <==
	{"level":"warn","ts":"2024-09-18T20:08:14.467684Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce9e8f286885b37e","from":"ce9e8f286885b37e","remote-peer-id":"7208e3715ec3d11b","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-18T20:08:14.471305Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce9e8f286885b37e","from":"ce9e8f286885b37e","remote-peer-id":"7208e3715ec3d11b","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-18T20:08:14.481457Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce9e8f286885b37e","from":"ce9e8f286885b37e","remote-peer-id":"7208e3715ec3d11b","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-18T20:08:14.489249Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce9e8f286885b37e","from":"ce9e8f286885b37e","remote-peer-id":"7208e3715ec3d11b","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-18T20:08:14.495369Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce9e8f286885b37e","from":"ce9e8f286885b37e","remote-peer-id":"7208e3715ec3d11b","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-18T20:08:14.500349Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce9e8f286885b37e","from":"ce9e8f286885b37e","remote-peer-id":"7208e3715ec3d11b","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-18T20:08:14.503652Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce9e8f286885b37e","from":"ce9e8f286885b37e","remote-peer-id":"7208e3715ec3d11b","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-18T20:08:14.511200Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce9e8f286885b37e","from":"ce9e8f286885b37e","remote-peer-id":"7208e3715ec3d11b","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-18T20:08:14.579695Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce9e8f286885b37e","from":"ce9e8f286885b37e","remote-peer-id":"7208e3715ec3d11b","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-18T20:08:14.585535Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce9e8f286885b37e","from":"ce9e8f286885b37e","remote-peer-id":"7208e3715ec3d11b","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-18T20:08:14.591532Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce9e8f286885b37e","from":"ce9e8f286885b37e","remote-peer-id":"7208e3715ec3d11b","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-18T20:08:14.595849Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce9e8f286885b37e","from":"ce9e8f286885b37e","remote-peer-id":"7208e3715ec3d11b","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-18T20:08:14.598851Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce9e8f286885b37e","from":"ce9e8f286885b37e","remote-peer-id":"7208e3715ec3d11b","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-18T20:08:14.604543Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce9e8f286885b37e","from":"ce9e8f286885b37e","remote-peer-id":"7208e3715ec3d11b","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-18T20:08:14.610085Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce9e8f286885b37e","from":"ce9e8f286885b37e","remote-peer-id":"7208e3715ec3d11b","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-18T20:08:14.611626Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce9e8f286885b37e","from":"ce9e8f286885b37e","remote-peer-id":"7208e3715ec3d11b","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-18T20:08:14.616569Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce9e8f286885b37e","from":"ce9e8f286885b37e","remote-peer-id":"7208e3715ec3d11b","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-18T20:08:14.622447Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce9e8f286885b37e","from":"ce9e8f286885b37e","remote-peer-id":"7208e3715ec3d11b","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-18T20:08:14.625771Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce9e8f286885b37e","from":"ce9e8f286885b37e","remote-peer-id":"7208e3715ec3d11b","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-18T20:08:14.630063Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce9e8f286885b37e","from":"ce9e8f286885b37e","remote-peer-id":"7208e3715ec3d11b","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-18T20:08:14.635911Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce9e8f286885b37e","from":"ce9e8f286885b37e","remote-peer-id":"7208e3715ec3d11b","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-18T20:08:14.642421Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce9e8f286885b37e","from":"ce9e8f286885b37e","remote-peer-id":"7208e3715ec3d11b","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-18T20:08:14.702487Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce9e8f286885b37e","from":"ce9e8f286885b37e","remote-peer-id":"7208e3715ec3d11b","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-18T20:08:14.704060Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce9e8f286885b37e","from":"ce9e8f286885b37e","remote-peer-id":"7208e3715ec3d11b","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-18T20:08:14.712034Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce9e8f286885b37e","from":"ce9e8f286885b37e","remote-peer-id":"7208e3715ec3d11b","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 20:08:14 up 7 min,  0 users,  load average: 0.29, 0.24, 0.12
	Linux ha-091565 5.10.207 #1 SMP Mon Sep 16 15:00:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [52ae20a53e17beece735496b8e65537bbebfef5da53e8a2e7a74820fee56cd63] <==
	I0918 20:07:40.564407       1 main.go:322] Node ha-091565-m02 has CIDR [10.244.1.0/24] 
	I0918 20:07:50.558115       1 main.go:295] Handling node with IPs: map[192.168.39.215:{}]
	I0918 20:07:50.558147       1 main.go:299] handling current node
	I0918 20:07:50.558160       1 main.go:295] Handling node with IPs: map[192.168.39.92:{}]
	I0918 20:07:50.558164       1 main.go:322] Node ha-091565-m02 has CIDR [10.244.1.0/24] 
	I0918 20:07:50.558360       1 main.go:295] Handling node with IPs: map[192.168.39.53:{}]
	I0918 20:07:50.558384       1 main.go:322] Node ha-091565-m03 has CIDR [10.244.2.0/24] 
	I0918 20:07:50.558429       1 main.go:295] Handling node with IPs: map[192.168.39.9:{}]
	I0918 20:07:50.558435       1 main.go:322] Node ha-091565-m04 has CIDR [10.244.3.0/24] 
	I0918 20:08:00.565020       1 main.go:295] Handling node with IPs: map[192.168.39.215:{}]
	I0918 20:08:00.565144       1 main.go:299] handling current node
	I0918 20:08:00.565175       1 main.go:295] Handling node with IPs: map[192.168.39.92:{}]
	I0918 20:08:00.565192       1 main.go:322] Node ha-091565-m02 has CIDR [10.244.1.0/24] 
	I0918 20:08:00.565349       1 main.go:295] Handling node with IPs: map[192.168.39.53:{}]
	I0918 20:08:00.565373       1 main.go:322] Node ha-091565-m03 has CIDR [10.244.2.0/24] 
	I0918 20:08:00.565427       1 main.go:295] Handling node with IPs: map[192.168.39.9:{}]
	I0918 20:08:00.565444       1 main.go:322] Node ha-091565-m04 has CIDR [10.244.3.0/24] 
	I0918 20:08:10.561012       1 main.go:295] Handling node with IPs: map[192.168.39.9:{}]
	I0918 20:08:10.561057       1 main.go:322] Node ha-091565-m04 has CIDR [10.244.3.0/24] 
	I0918 20:08:10.561191       1 main.go:295] Handling node with IPs: map[192.168.39.215:{}]
	I0918 20:08:10.561212       1 main.go:299] handling current node
	I0918 20:08:10.561230       1 main.go:295] Handling node with IPs: map[192.168.39.92:{}]
	I0918 20:08:10.561236       1 main.go:322] Node ha-091565-m02 has CIDR [10.244.1.0/24] 
	I0918 20:08:10.561278       1 main.go:295] Handling node with IPs: map[192.168.39.53:{}]
	I0918 20:08:10.561295       1 main.go:322] Node ha-091565-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [f141188bda32520e26d392de6e6e49a863d49aacb461b7f3d5649d68557e96d3] <==
	I0918 20:01:41.805351       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0918 20:01:41.812255       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.215]
	I0918 20:01:41.813303       1 controller.go:615] quota admission added evaluator for: endpoints
	I0918 20:01:41.817812       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0918 20:01:41.927112       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0918 20:01:43.444505       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0918 20:01:43.474356       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0918 20:01:43.499285       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0918 20:01:47.177380       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0918 20:01:47.677666       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0918 20:04:28.622821       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:38922: use of closed network connection
	E0918 20:04:28.826011       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:38948: use of closed network connection
	E0918 20:04:29.020534       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:38954: use of closed network connection
	E0918 20:04:29.215686       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:38960: use of closed network connection
	E0918 20:04:29.393565       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:38968: use of closed network connection
	E0918 20:04:29.590605       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:38998: use of closed network connection
	E0918 20:04:29.776838       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:39018: use of closed network connection
	E0918 20:04:29.951140       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:39034: use of closed network connection
	E0918 20:04:30.119473       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:39042: use of closed network connection
	E0918 20:04:30.426734       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:39086: use of closed network connection
	E0918 20:04:30.592391       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:39108: use of closed network connection
	E0918 20:04:30.769818       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:39130: use of closed network connection
	E0918 20:04:30.943725       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:39150: use of closed network connection
	E0918 20:04:31.126781       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:39162: use of closed network connection
	E0918 20:04:31.297785       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:39182: use of closed network connection
	
	
	==> kube-controller-manager [97b3f8978c2594307cdfef72e8b8aebac842152f9c1893bb34700b6e0932027e] <==
	I0918 20:05:01.138017       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-091565-m04" podCIDRs=["10.244.3.0/24"]
	I0918 20:05:01.138080       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-091565-m04"
	I0918 20:05:01.138115       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-091565-m04"
	I0918 20:05:01.151572       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-091565-m04"
	I0918 20:05:01.738364       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-091565-m04"
	I0918 20:05:01.838841       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-091565-m04"
	I0918 20:05:01.852257       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-091565-m04"
	I0918 20:05:02.344310       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-091565-m04"
	I0918 20:05:03.003621       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-091565-m04"
	I0918 20:05:03.051402       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-091565-m04"
	I0918 20:05:05.442431       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-091565-m04"
	I0918 20:05:05.579185       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-091565-m04"
	I0918 20:05:11.327273       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-091565-m04"
	I0918 20:05:21.548407       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-091565-m04"
	I0918 20:05:21.548588       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-091565-m04"
	I0918 20:05:21.567996       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-091565-m04"
	I0918 20:05:21.857696       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-091565-m04"
	I0918 20:05:31.710527       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-091565-m04"
	I0918 20:06:21.883753       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-091565-m04"
	I0918 20:06:21.884037       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-091565-m02"
	I0918 20:06:21.905558       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-091565-m02"
	I0918 20:06:21.987284       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="62.125575ms"
	I0918 20:06:21.987469       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="76.464µs"
	I0918 20:06:23.082191       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-091565-m02"
	I0918 20:06:27.127364       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-091565-m02"
	
	
	==> kube-proxy [c9aa80c6b1f558ba9ef5b7e70df3690995e271e9296b0cc3e71ade739843f53a] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0918 20:01:49.308011       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0918 20:01:49.335379       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.215"]
	E0918 20:01:49.335598       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0918 20:01:49.418096       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0918 20:01:49.418149       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0918 20:01:49.418183       1 server_linux.go:169] "Using iptables Proxier"
	I0918 20:01:49.424497       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0918 20:01:49.425362       1 server.go:483] "Version info" version="v1.31.1"
	I0918 20:01:49.425380       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0918 20:01:49.427370       1 config.go:199] "Starting service config controller"
	I0918 20:01:49.427801       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0918 20:01:49.427983       1 config.go:105] "Starting endpoint slice config controller"
	I0918 20:01:49.427991       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0918 20:01:49.431014       1 config.go:328] "Starting node config controller"
	I0918 20:01:49.431036       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0918 20:01:49.528624       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0918 20:01:49.528643       1 shared_informer.go:320] Caches are synced for service config
	I0918 20:01:49.531423       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [8c435dbd5b540cdb87d1f79fc2aadca19db9c46aff42e67d78c5d9c8eee1b6de] <==
	E0918 20:03:54.130068       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod d1fea214-55d3-4291-bc7b-cfa3d01a8ead(kube-system/kube-proxy-j766p) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-j766p"
	E0918 20:03:54.131984       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-j766p\": pod kube-proxy-j766p is already assigned to node \"ha-091565-m03\"" pod="kube-system/kube-proxy-j766p"
	I0918 20:03:54.132134       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-j766p" node="ha-091565-m03"
	E0918 20:03:54.204764       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-zdpnz\": pod kindnet-zdpnz is already assigned to node \"ha-091565-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-zdpnz" node="ha-091565-m03"
	E0918 20:03:54.204930       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod bf784ea9-bf66-4fa3-bb04-e893d228713d(kube-system/kindnet-zdpnz) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-zdpnz"
	E0918 20:03:54.205020       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-zdpnz\": pod kindnet-zdpnz is already assigned to node \"ha-091565-m03\"" pod="kube-system/kindnet-zdpnz"
	I0918 20:03:54.205131       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-zdpnz" node="ha-091565-m03"
	E0918 20:04:22.999076       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-45phf\": pod busybox-7dff88458-45phf is already assigned to node \"ha-091565-m02\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-45phf" node="ha-091565-m02"
	E0918 20:04:23.000005       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 8c26f72c-f562-47cb-bb92-9cc60a901f36(default/busybox-7dff88458-45phf) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-45phf"
	E0918 20:04:23.000126       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-45phf\": pod busybox-7dff88458-45phf is already assigned to node \"ha-091565-m02\"" pod="default/busybox-7dff88458-45phf"
	I0918 20:04:23.000204       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-45phf" node="ha-091565-m02"
	E0918 20:05:01.199076       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-4xtjm\": pod kindnet-4xtjm is already assigned to node \"ha-091565-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-4xtjm" node="ha-091565-m04"
	E0918 20:05:01.199468       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 74b52b58-c5d1-4de5-8a71-97a1e9263ee6(kube-system/kindnet-4xtjm) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-4xtjm"
	E0918 20:05:01.199594       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-4xtjm\": pod kindnet-4xtjm is already assigned to node \"ha-091565-m04\"" pod="kube-system/kindnet-4xtjm"
	I0918 20:05:01.199786       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-4xtjm" node="ha-091565-m04"
	E0918 20:05:01.220390       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-8qkpk\": pod kube-proxy-8qkpk is already assigned to node \"ha-091565-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-8qkpk" node="ha-091565-m04"
	E0918 20:05:01.223994       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 819d89b8-2f9d-4a41-ad66-7bfa5e99e840(kube-system/kube-proxy-8qkpk) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-8qkpk"
	E0918 20:05:01.224205       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-8qkpk\": pod kube-proxy-8qkpk is already assigned to node \"ha-091565-m04\"" pod="kube-system/kube-proxy-8qkpk"
	I0918 20:05:01.224300       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-8qkpk" node="ha-091565-m04"
	E0918 20:05:01.248133       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-zmf96\": pod kindnet-zmf96 is already assigned to node \"ha-091565-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-zmf96" node="ha-091565-m04"
	E0918 20:05:01.248459       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-zmf96\": pod kindnet-zmf96 is already assigned to node \"ha-091565-m04\"" pod="kube-system/kindnet-zmf96"
	I0918 20:05:01.248547       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-zmf96" node="ha-091565-m04"
	E0918 20:05:01.248362       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-t72tx\": pod kube-proxy-t72tx is already assigned to node \"ha-091565-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-t72tx" node="ha-091565-m04"
	E0918 20:05:01.249494       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-t72tx\": pod kube-proxy-t72tx is already assigned to node \"ha-091565-m04\"" pod="kube-system/kube-proxy-t72tx"
	I0918 20:05:01.249666       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-t72tx" node="ha-091565-m04"
	
	
	==> kubelet <==
	Sep 18 20:06:43 ha-091565 kubelet[1316]: E0918 20:06:43.476171    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726690003475792506,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 20:06:43 ha-091565 kubelet[1316]: E0918 20:06:43.476227    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726690003475792506,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 20:06:53 ha-091565 kubelet[1316]: E0918 20:06:53.477743    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726690013477221732,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 20:06:53 ha-091565 kubelet[1316]: E0918 20:06:53.477786    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726690013477221732,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 20:07:03 ha-091565 kubelet[1316]: E0918 20:07:03.479043    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726690023478737206,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 20:07:03 ha-091565 kubelet[1316]: E0918 20:07:03.479081    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726690023478737206,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 20:07:13 ha-091565 kubelet[1316]: E0918 20:07:13.481181    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726690033480916901,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 20:07:13 ha-091565 kubelet[1316]: E0918 20:07:13.481262    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726690033480916901,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 20:07:23 ha-091565 kubelet[1316]: E0918 20:07:23.483563    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726690043483012211,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 20:07:23 ha-091565 kubelet[1316]: E0918 20:07:23.483953    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726690043483012211,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 20:07:33 ha-091565 kubelet[1316]: E0918 20:07:33.488007    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726690053486820309,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 20:07:33 ha-091565 kubelet[1316]: E0918 20:07:33.488449    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726690053486820309,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 20:07:43 ha-091565 kubelet[1316]: E0918 20:07:43.398570    1316 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 18 20:07:43 ha-091565 kubelet[1316]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 18 20:07:43 ha-091565 kubelet[1316]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 18 20:07:43 ha-091565 kubelet[1316]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 18 20:07:43 ha-091565 kubelet[1316]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 18 20:07:43 ha-091565 kubelet[1316]: E0918 20:07:43.490989    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726690063490690150,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 20:07:43 ha-091565 kubelet[1316]: E0918 20:07:43.491031    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726690063490690150,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 20:07:53 ha-091565 kubelet[1316]: E0918 20:07:53.492968    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726690073492462129,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 20:07:53 ha-091565 kubelet[1316]: E0918 20:07:53.493287    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726690073492462129,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 20:08:03 ha-091565 kubelet[1316]: E0918 20:08:03.495263    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726690083494829193,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 20:08:03 ha-091565 kubelet[1316]: E0918 20:08:03.495287    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726690083494829193,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 20:08:13 ha-091565 kubelet[1316]: E0918 20:08:13.497480    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726690093497106928,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 20:08:13 ha-091565 kubelet[1316]: E0918 20:08:13.497506    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726690093497106928,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-091565 -n ha-091565
helpers_test.go:261: (dbg) Run:  kubectl --context ha-091565 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (6.48s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (373.91s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-091565 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-091565 -v=7 --alsologtostderr
E0918 20:10:01.286680   14878 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/functional-790989/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-091565 -v=7 --alsologtostderr: exit status 82 (2m1.834566911s)

                                                
                                                
-- stdout --
	* Stopping node "ha-091565-m04"  ...
	* Stopping node "ha-091565-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0918 20:08:19.644823   32001 out.go:345] Setting OutFile to fd 1 ...
	I0918 20:08:19.644951   32001 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 20:08:19.644957   32001 out.go:358] Setting ErrFile to fd 2...
	I0918 20:08:19.644962   32001 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 20:08:19.645140   32001 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19667-7671/.minikube/bin
	I0918 20:08:19.645374   32001 out.go:352] Setting JSON to false
	I0918 20:08:19.645463   32001 mustload.go:65] Loading cluster: ha-091565
	I0918 20:08:19.645866   32001 config.go:182] Loaded profile config "ha-091565": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0918 20:08:19.645955   32001 profile.go:143] Saving config to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/config.json ...
	I0918 20:08:19.646138   32001 mustload.go:65] Loading cluster: ha-091565
	I0918 20:08:19.646317   32001 config.go:182] Loaded profile config "ha-091565": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0918 20:08:19.646385   32001 stop.go:39] StopHost: ha-091565-m04
	I0918 20:08:19.646838   32001 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 20:08:19.646892   32001 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 20:08:19.662525   32001 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38027
	I0918 20:08:19.662989   32001 main.go:141] libmachine: () Calling .GetVersion
	I0918 20:08:19.663653   32001 main.go:141] libmachine: Using API Version  1
	I0918 20:08:19.663681   32001 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 20:08:19.664077   32001 main.go:141] libmachine: () Calling .GetMachineName
	I0918 20:08:19.666852   32001 out.go:177] * Stopping node "ha-091565-m04"  ...
	I0918 20:08:19.668249   32001 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0918 20:08:19.668278   32001 main.go:141] libmachine: (ha-091565-m04) Calling .DriverName
	I0918 20:08:19.668502   32001 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0918 20:08:19.668530   32001 main.go:141] libmachine: (ha-091565-m04) Calling .GetSSHHostname
	I0918 20:08:19.671527   32001 main.go:141] libmachine: (ha-091565-m04) DBG | domain ha-091565-m04 has defined MAC address 52:54:00:70:90:59 in network mk-ha-091565
	I0918 20:08:19.671978   32001 main.go:141] libmachine: (ha-091565-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:90:59", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:04:46 +0000 UTC Type:0 Mac:52:54:00:70:90:59 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:ha-091565-m04 Clientid:01:52:54:00:70:90:59}
	I0918 20:08:19.672029   32001 main.go:141] libmachine: (ha-091565-m04) DBG | domain ha-091565-m04 has defined IP address 192.168.39.9 and MAC address 52:54:00:70:90:59 in network mk-ha-091565
	I0918 20:08:19.672199   32001 main.go:141] libmachine: (ha-091565-m04) Calling .GetSSHPort
	I0918 20:08:19.672402   32001 main.go:141] libmachine: (ha-091565-m04) Calling .GetSSHKeyPath
	I0918 20:08:19.672546   32001 main.go:141] libmachine: (ha-091565-m04) Calling .GetSSHUsername
	I0918 20:08:19.672700   32001 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565-m04/id_rsa Username:docker}
	I0918 20:08:19.769477   32001 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0918 20:08:19.823798   32001 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0918 20:08:19.877936   32001 main.go:141] libmachine: Stopping "ha-091565-m04"...
	I0918 20:08:19.877964   32001 main.go:141] libmachine: (ha-091565-m04) Calling .GetState
	I0918 20:08:19.880055   32001 main.go:141] libmachine: (ha-091565-m04) Calling .Stop
	I0918 20:08:19.883828   32001 main.go:141] libmachine: (ha-091565-m04) Waiting for machine to stop 0/120
	I0918 20:08:21.013199   32001 main.go:141] libmachine: (ha-091565-m04) Calling .GetState
	I0918 20:08:21.014514   32001 main.go:141] libmachine: Machine "ha-091565-m04" was stopped.
	I0918 20:08:21.014530   32001 stop.go:75] duration metric: took 1.346285458s to stop
	I0918 20:08:21.014561   32001 stop.go:39] StopHost: ha-091565-m03
	I0918 20:08:21.014898   32001 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 20:08:21.014943   32001 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 20:08:21.029823   32001 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38895
	I0918 20:08:21.030318   32001 main.go:141] libmachine: () Calling .GetVersion
	I0918 20:08:21.030784   32001 main.go:141] libmachine: Using API Version  1
	I0918 20:08:21.030803   32001 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 20:08:21.031081   32001 main.go:141] libmachine: () Calling .GetMachineName
	I0918 20:08:21.033008   32001 out.go:177] * Stopping node "ha-091565-m03"  ...
	I0918 20:08:21.034168   32001 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0918 20:08:21.034191   32001 main.go:141] libmachine: (ha-091565-m03) Calling .DriverName
	I0918 20:08:21.034415   32001 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0918 20:08:21.034440   32001 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHHostname
	I0918 20:08:21.037813   32001 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:08:21.038350   32001 main.go:141] libmachine: (ha-091565-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:50:95", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:03:17 +0000 UTC Type:0 Mac:52:54:00:7c:50:95 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-091565-m03 Clientid:01:52:54:00:7c:50:95}
	I0918 20:08:21.038376   32001 main.go:141] libmachine: (ha-091565-m03) DBG | domain ha-091565-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:7c:50:95 in network mk-ha-091565
	I0918 20:08:21.038529   32001 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHPort
	I0918 20:08:21.038722   32001 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHKeyPath
	I0918 20:08:21.038850   32001 main.go:141] libmachine: (ha-091565-m03) Calling .GetSSHUsername
	I0918 20:08:21.038974   32001 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565-m03/id_rsa Username:docker}
	I0918 20:08:21.123973   32001 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0918 20:08:21.177908   32001 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0918 20:08:21.231065   32001 main.go:141] libmachine: Stopping "ha-091565-m03"...
	I0918 20:08:21.231099   32001 main.go:141] libmachine: (ha-091565-m03) Calling .GetState
	I0918 20:08:21.232813   32001 main.go:141] libmachine: (ha-091565-m03) Calling .Stop
	I0918 20:08:21.236488   32001 main.go:141] libmachine: (ha-091565-m03) Waiting for machine to stop 0/120
	I0918 20:08:22.238189   32001 main.go:141] libmachine: (ha-091565-m03) Waiting for machine to stop 1/120
	I0918 20:08:23.239670   32001 main.go:141] libmachine: (ha-091565-m03) Waiting for machine to stop 2/120
	I0918 20:08:24.241522   32001 main.go:141] libmachine: (ha-091565-m03) Waiting for machine to stop 3/120
	I0918 20:08:25.243054   32001 main.go:141] libmachine: (ha-091565-m03) Waiting for machine to stop 4/120
	I0918 20:08:26.245056   32001 main.go:141] libmachine: (ha-091565-m03) Waiting for machine to stop 5/120
	I0918 20:08:27.246692   32001 main.go:141] libmachine: (ha-091565-m03) Waiting for machine to stop 6/120
	I0918 20:08:28.248418   32001 main.go:141] libmachine: (ha-091565-m03) Waiting for machine to stop 7/120
	I0918 20:08:29.250263   32001 main.go:141] libmachine: (ha-091565-m03) Waiting for machine to stop 8/120
	I0918 20:08:30.252414   32001 main.go:141] libmachine: (ha-091565-m03) Waiting for machine to stop 9/120
	I0918 20:08:31.253999   32001 main.go:141] libmachine: (ha-091565-m03) Waiting for machine to stop 10/120
	I0918 20:08:32.255608   32001 main.go:141] libmachine: (ha-091565-m03) Waiting for machine to stop 11/120
	I0918 20:08:33.257422   32001 main.go:141] libmachine: (ha-091565-m03) Waiting for machine to stop 12/120
	I0918 20:08:34.258848   32001 main.go:141] libmachine: (ha-091565-m03) Waiting for machine to stop 13/120
	I0918 20:08:35.260465   32001 main.go:141] libmachine: (ha-091565-m03) Waiting for machine to stop 14/120
	I0918 20:08:36.262521   32001 main.go:141] libmachine: (ha-091565-m03) Waiting for machine to stop 15/120
	I0918 20:08:37.263954   32001 main.go:141] libmachine: (ha-091565-m03) Waiting for machine to stop 16/120
	I0918 20:08:38.265406   32001 main.go:141] libmachine: (ha-091565-m03) Waiting for machine to stop 17/120
	I0918 20:08:39.267059   32001 main.go:141] libmachine: (ha-091565-m03) Waiting for machine to stop 18/120
	I0918 20:08:40.268420   32001 main.go:141] libmachine: (ha-091565-m03) Waiting for machine to stop 19/120
	I0918 20:08:41.270270   32001 main.go:141] libmachine: (ha-091565-m03) Waiting for machine to stop 20/120
	I0918 20:08:42.271913   32001 main.go:141] libmachine: (ha-091565-m03) Waiting for machine to stop 21/120
	I0918 20:08:43.274408   32001 main.go:141] libmachine: (ha-091565-m03) Waiting for machine to stop 22/120
	I0918 20:08:44.276039   32001 main.go:141] libmachine: (ha-091565-m03) Waiting for machine to stop 23/120
	I0918 20:08:45.277842   32001 main.go:141] libmachine: (ha-091565-m03) Waiting for machine to stop 24/120
	I0918 20:08:46.280261   32001 main.go:141] libmachine: (ha-091565-m03) Waiting for machine to stop 25/120
	I0918 20:08:47.281772   32001 main.go:141] libmachine: (ha-091565-m03) Waiting for machine to stop 26/120
	I0918 20:08:48.283190   32001 main.go:141] libmachine: (ha-091565-m03) Waiting for machine to stop 27/120
	I0918 20:08:49.284750   32001 main.go:141] libmachine: (ha-091565-m03) Waiting for machine to stop 28/120
	I0918 20:08:50.286265   32001 main.go:141] libmachine: (ha-091565-m03) Waiting for machine to stop 29/120
	I0918 20:08:51.288144   32001 main.go:141] libmachine: (ha-091565-m03) Waiting for machine to stop 30/120
	I0918 20:08:52.289599   32001 main.go:141] libmachine: (ha-091565-m03) Waiting for machine to stop 31/120
	I0918 20:08:53.291197   32001 main.go:141] libmachine: (ha-091565-m03) Waiting for machine to stop 32/120
	I0918 20:08:54.292853   32001 main.go:141] libmachine: (ha-091565-m03) Waiting for machine to stop 33/120
	I0918 20:08:55.294393   32001 main.go:141] libmachine: (ha-091565-m03) Waiting for machine to stop 34/120
	I0918 20:08:56.296309   32001 main.go:141] libmachine: (ha-091565-m03) Waiting for machine to stop 35/120
	I0918 20:08:57.297719   32001 main.go:141] libmachine: (ha-091565-m03) Waiting for machine to stop 36/120
	I0918 20:08:58.299302   32001 main.go:141] libmachine: (ha-091565-m03) Waiting for machine to stop 37/120
	I0918 20:08:59.300822   32001 main.go:141] libmachine: (ha-091565-m03) Waiting for machine to stop 38/120
	I0918 20:09:00.302581   32001 main.go:141] libmachine: (ha-091565-m03) Waiting for machine to stop 39/120
	I0918 20:09:01.304673   32001 main.go:141] libmachine: (ha-091565-m03) Waiting for machine to stop 40/120
	I0918 20:09:02.305951   32001 main.go:141] libmachine: (ha-091565-m03) Waiting for machine to stop 41/120
	I0918 20:09:03.307350   32001 main.go:141] libmachine: (ha-091565-m03) Waiting for machine to stop 42/120
	I0918 20:09:04.308658   32001 main.go:141] libmachine: (ha-091565-m03) Waiting for machine to stop 43/120
	I0918 20:09:05.310098   32001 main.go:141] libmachine: (ha-091565-m03) Waiting for machine to stop 44/120
	I0918 20:09:06.311951   32001 main.go:141] libmachine: (ha-091565-m03) Waiting for machine to stop 45/120
	I0918 20:09:07.313280   32001 main.go:141] libmachine: (ha-091565-m03) Waiting for machine to stop 46/120
	I0918 20:09:08.314709   32001 main.go:141] libmachine: (ha-091565-m03) Waiting for machine to stop 47/120
	I0918 20:09:09.315823   32001 main.go:141] libmachine: (ha-091565-m03) Waiting for machine to stop 48/120
	I0918 20:09:10.317290   32001 main.go:141] libmachine: (ha-091565-m03) Waiting for machine to stop 49/120
	I0918 20:09:11.319479   32001 main.go:141] libmachine: (ha-091565-m03) Waiting for machine to stop 50/120
	I0918 20:09:12.320985   32001 main.go:141] libmachine: (ha-091565-m03) Waiting for machine to stop 51/120
	I0918 20:09:13.322283   32001 main.go:141] libmachine: (ha-091565-m03) Waiting for machine to stop 52/120
	I0918 20:09:14.323928   32001 main.go:141] libmachine: (ha-091565-m03) Waiting for machine to stop 53/120
	I0918 20:09:15.325324   32001 main.go:141] libmachine: (ha-091565-m03) Waiting for machine to stop 54/120
	I0918 20:09:16.327105   32001 main.go:141] libmachine: (ha-091565-m03) Waiting for machine to stop 55/120
	I0918 20:09:17.328370   32001 main.go:141] libmachine: (ha-091565-m03) Waiting for machine to stop 56/120
	I0918 20:09:18.329863   32001 main.go:141] libmachine: (ha-091565-m03) Waiting for machine to stop 57/120
	I0918 20:09:19.331192   32001 main.go:141] libmachine: (ha-091565-m03) Waiting for machine to stop 58/120
	I0918 20:09:20.332723   32001 main.go:141] libmachine: (ha-091565-m03) Waiting for machine to stop 59/120
	I0918 20:09:21.334073   32001 main.go:141] libmachine: (ha-091565-m03) Waiting for machine to stop 60/120
	I0918 20:09:22.335444   32001 main.go:141] libmachine: (ha-091565-m03) Waiting for machine to stop 61/120
	I0918 20:09:23.337034   32001 main.go:141] libmachine: (ha-091565-m03) Waiting for machine to stop 62/120
	I0918 20:09:24.338565   32001 main.go:141] libmachine: (ha-091565-m03) Waiting for machine to stop 63/120
	I0918 20:09:25.340749   32001 main.go:141] libmachine: (ha-091565-m03) Waiting for machine to stop 64/120
	I0918 20:09:26.342375   32001 main.go:141] libmachine: (ha-091565-m03) Waiting for machine to stop 65/120
	I0918 20:09:27.343699   32001 main.go:141] libmachine: (ha-091565-m03) Waiting for machine to stop 66/120
	I0918 20:09:28.344972   32001 main.go:141] libmachine: (ha-091565-m03) Waiting for machine to stop 67/120
	I0918 20:09:29.346373   32001 main.go:141] libmachine: (ha-091565-m03) Waiting for machine to stop 68/120
	I0918 20:09:30.347666   32001 main.go:141] libmachine: (ha-091565-m03) Waiting for machine to stop 69/120
	I0918 20:09:31.349303   32001 main.go:141] libmachine: (ha-091565-m03) Waiting for machine to stop 70/120
	I0918 20:09:32.351249   32001 main.go:141] libmachine: (ha-091565-m03) Waiting for machine to stop 71/120
	I0918 20:09:33.352584   32001 main.go:141] libmachine: (ha-091565-m03) Waiting for machine to stop 72/120
	I0918 20:09:34.354399   32001 main.go:141] libmachine: (ha-091565-m03) Waiting for machine to stop 73/120
	I0918 20:09:35.355845   32001 main.go:141] libmachine: (ha-091565-m03) Waiting for machine to stop 74/120
	I0918 20:09:36.357741   32001 main.go:141] libmachine: (ha-091565-m03) Waiting for machine to stop 75/120
	I0918 20:09:37.359330   32001 main.go:141] libmachine: (ha-091565-m03) Waiting for machine to stop 76/120
	I0918 20:09:38.360764   32001 main.go:141] libmachine: (ha-091565-m03) Waiting for machine to stop 77/120
	I0918 20:09:39.362089   32001 main.go:141] libmachine: (ha-091565-m03) Waiting for machine to stop 78/120
	I0918 20:09:40.363604   32001 main.go:141] libmachine: (ha-091565-m03) Waiting for machine to stop 79/120
	I0918 20:09:41.365506   32001 main.go:141] libmachine: (ha-091565-m03) Waiting for machine to stop 80/120
	I0918 20:09:42.367140   32001 main.go:141] libmachine: (ha-091565-m03) Waiting for machine to stop 81/120
	I0918 20:09:43.368648   32001 main.go:141] libmachine: (ha-091565-m03) Waiting for machine to stop 82/120
	I0918 20:09:44.371211   32001 main.go:141] libmachine: (ha-091565-m03) Waiting for machine to stop 83/120
	I0918 20:09:45.372482   32001 main.go:141] libmachine: (ha-091565-m03) Waiting for machine to stop 84/120
	I0918 20:09:46.374428   32001 main.go:141] libmachine: (ha-091565-m03) Waiting for machine to stop 85/120
	I0918 20:09:47.375874   32001 main.go:141] libmachine: (ha-091565-m03) Waiting for machine to stop 86/120
	I0918 20:09:48.377344   32001 main.go:141] libmachine: (ha-091565-m03) Waiting for machine to stop 87/120
	I0918 20:09:49.378624   32001 main.go:141] libmachine: (ha-091565-m03) Waiting for machine to stop 88/120
	I0918 20:09:50.380169   32001 main.go:141] libmachine: (ha-091565-m03) Waiting for machine to stop 89/120
	I0918 20:09:51.381907   32001 main.go:141] libmachine: (ha-091565-m03) Waiting for machine to stop 90/120
	I0918 20:09:52.383558   32001 main.go:141] libmachine: (ha-091565-m03) Waiting for machine to stop 91/120
	I0918 20:09:53.384899   32001 main.go:141] libmachine: (ha-091565-m03) Waiting for machine to stop 92/120
	I0918 20:09:54.386243   32001 main.go:141] libmachine: (ha-091565-m03) Waiting for machine to stop 93/120
	I0918 20:09:55.387606   32001 main.go:141] libmachine: (ha-091565-m03) Waiting for machine to stop 94/120
	I0918 20:09:56.389792   32001 main.go:141] libmachine: (ha-091565-m03) Waiting for machine to stop 95/120
	I0918 20:09:57.391249   32001 main.go:141] libmachine: (ha-091565-m03) Waiting for machine to stop 96/120
	I0918 20:09:58.392637   32001 main.go:141] libmachine: (ha-091565-m03) Waiting for machine to stop 97/120
	I0918 20:09:59.394019   32001 main.go:141] libmachine: (ha-091565-m03) Waiting for machine to stop 98/120
	I0918 20:10:00.395427   32001 main.go:141] libmachine: (ha-091565-m03) Waiting for machine to stop 99/120
	I0918 20:10:01.397391   32001 main.go:141] libmachine: (ha-091565-m03) Waiting for machine to stop 100/120
	I0918 20:10:02.398815   32001 main.go:141] libmachine: (ha-091565-m03) Waiting for machine to stop 101/120
	I0918 20:10:03.400321   32001 main.go:141] libmachine: (ha-091565-m03) Waiting for machine to stop 102/120
	I0918 20:10:04.401944   32001 main.go:141] libmachine: (ha-091565-m03) Waiting for machine to stop 103/120
	I0918 20:10:05.403575   32001 main.go:141] libmachine: (ha-091565-m03) Waiting for machine to stop 104/120
	I0918 20:10:06.405552   32001 main.go:141] libmachine: (ha-091565-m03) Waiting for machine to stop 105/120
	I0918 20:10:07.406957   32001 main.go:141] libmachine: (ha-091565-m03) Waiting for machine to stop 106/120
	I0918 20:10:08.408235   32001 main.go:141] libmachine: (ha-091565-m03) Waiting for machine to stop 107/120
	I0918 20:10:09.409601   32001 main.go:141] libmachine: (ha-091565-m03) Waiting for machine to stop 108/120
	I0918 20:10:10.411057   32001 main.go:141] libmachine: (ha-091565-m03) Waiting for machine to stop 109/120
	I0918 20:10:11.412800   32001 main.go:141] libmachine: (ha-091565-m03) Waiting for machine to stop 110/120
	I0918 20:10:12.414071   32001 main.go:141] libmachine: (ha-091565-m03) Waiting for machine to stop 111/120
	I0918 20:10:13.415606   32001 main.go:141] libmachine: (ha-091565-m03) Waiting for machine to stop 112/120
	I0918 20:10:14.417021   32001 main.go:141] libmachine: (ha-091565-m03) Waiting for machine to stop 113/120
	I0918 20:10:15.418311   32001 main.go:141] libmachine: (ha-091565-m03) Waiting for machine to stop 114/120
	I0918 20:10:16.419997   32001 main.go:141] libmachine: (ha-091565-m03) Waiting for machine to stop 115/120
	I0918 20:10:17.421362   32001 main.go:141] libmachine: (ha-091565-m03) Waiting for machine to stop 116/120
	I0918 20:10:18.422852   32001 main.go:141] libmachine: (ha-091565-m03) Waiting for machine to stop 117/120
	I0918 20:10:19.424186   32001 main.go:141] libmachine: (ha-091565-m03) Waiting for machine to stop 118/120
	I0918 20:10:20.425701   32001 main.go:141] libmachine: (ha-091565-m03) Waiting for machine to stop 119/120
	I0918 20:10:21.426718   32001 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0918 20:10:21.426785   32001 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0918 20:10:21.429307   32001 out.go:201] 
	W0918 20:10:21.430796   32001 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0918 20:10:21.430822   32001 out.go:270] * 
	* 
	W0918 20:10:21.433232   32001 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0918 20:10:21.435395   32001 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:464: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-091565 -v=7 --alsologtostderr" : exit status 82
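The ha_test.go:464 failure above is the GUEST_STOP_TIMEOUT from the stderr block: the kvm2 driver waited through all 120 retries without seeing the ha-091565-m03 domain power off, so `minikube stop` exited with status 82. As a minimal sketch (not part of the test output), the stuck libvirt domain could be inspected and force-stopped by hand on the CI host, assuming the virsh client from libvirt is available there; the domain name below is taken from the log above:

	# list all libvirt domains and their states (the kvm2 driver names domains after the minikube node)
	virsh list --all
	# check the state of the node that refused to stop
	virsh domstate ha-091565-m03
	# request a graceful ACPI shutdown first
	virsh shutdown ha-091565-m03
	# if the domain is still running after a grace period, force power-off
	virsh destroy ha-091565-m03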
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-091565 --wait=true -v=7 --alsologtostderr
E0918 20:10:28.990952   14878 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/functional-790989/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:11:12.175643   14878 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/addons-815929/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-091565 --wait=true -v=7 --alsologtostderr: (4m9.182918644s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-091565
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-091565 -n ha-091565
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-091565 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-091565 logs -n 25: (2.088260504s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-091565 cp ha-091565-m03:/home/docker/cp-test.txt                              | ha-091565 | jenkins | v1.34.0 | 18 Sep 24 20:05 UTC | 18 Sep 24 20:05 UTC |
	|         | ha-091565-m02:/home/docker/cp-test_ha-091565-m03_ha-091565-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-091565 ssh -n                                                                 | ha-091565 | jenkins | v1.34.0 | 18 Sep 24 20:05 UTC | 18 Sep 24 20:05 UTC |
	|         | ha-091565-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-091565 ssh -n ha-091565-m02 sudo cat                                          | ha-091565 | jenkins | v1.34.0 | 18 Sep 24 20:05 UTC | 18 Sep 24 20:05 UTC |
	|         | /home/docker/cp-test_ha-091565-m03_ha-091565-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-091565 cp ha-091565-m03:/home/docker/cp-test.txt                              | ha-091565 | jenkins | v1.34.0 | 18 Sep 24 20:05 UTC | 18 Sep 24 20:05 UTC |
	|         | ha-091565-m04:/home/docker/cp-test_ha-091565-m03_ha-091565-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-091565 ssh -n                                                                 | ha-091565 | jenkins | v1.34.0 | 18 Sep 24 20:05 UTC | 18 Sep 24 20:05 UTC |
	|         | ha-091565-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-091565 ssh -n ha-091565-m04 sudo cat                                          | ha-091565 | jenkins | v1.34.0 | 18 Sep 24 20:05 UTC | 18 Sep 24 20:05 UTC |
	|         | /home/docker/cp-test_ha-091565-m03_ha-091565-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-091565 cp testdata/cp-test.txt                                                | ha-091565 | jenkins | v1.34.0 | 18 Sep 24 20:05 UTC | 18 Sep 24 20:05 UTC |
	|         | ha-091565-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-091565 ssh -n                                                                 | ha-091565 | jenkins | v1.34.0 | 18 Sep 24 20:05 UTC | 18 Sep 24 20:05 UTC |
	|         | ha-091565-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-091565 cp ha-091565-m04:/home/docker/cp-test.txt                              | ha-091565 | jenkins | v1.34.0 | 18 Sep 24 20:05 UTC | 18 Sep 24 20:05 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3131576438/001/cp-test_ha-091565-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-091565 ssh -n                                                                 | ha-091565 | jenkins | v1.34.0 | 18 Sep 24 20:05 UTC | 18 Sep 24 20:05 UTC |
	|         | ha-091565-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-091565 cp ha-091565-m04:/home/docker/cp-test.txt                              | ha-091565 | jenkins | v1.34.0 | 18 Sep 24 20:05 UTC | 18 Sep 24 20:05 UTC |
	|         | ha-091565:/home/docker/cp-test_ha-091565-m04_ha-091565.txt                       |           |         |         |                     |                     |
	| ssh     | ha-091565 ssh -n                                                                 | ha-091565 | jenkins | v1.34.0 | 18 Sep 24 20:05 UTC | 18 Sep 24 20:05 UTC |
	|         | ha-091565-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-091565 ssh -n ha-091565 sudo cat                                              | ha-091565 | jenkins | v1.34.0 | 18 Sep 24 20:05 UTC | 18 Sep 24 20:05 UTC |
	|         | /home/docker/cp-test_ha-091565-m04_ha-091565.txt                                 |           |         |         |                     |                     |
	| cp      | ha-091565 cp ha-091565-m04:/home/docker/cp-test.txt                              | ha-091565 | jenkins | v1.34.0 | 18 Sep 24 20:05 UTC | 18 Sep 24 20:05 UTC |
	|         | ha-091565-m02:/home/docker/cp-test_ha-091565-m04_ha-091565-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-091565 ssh -n                                                                 | ha-091565 | jenkins | v1.34.0 | 18 Sep 24 20:05 UTC | 18 Sep 24 20:05 UTC |
	|         | ha-091565-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-091565 ssh -n ha-091565-m02 sudo cat                                          | ha-091565 | jenkins | v1.34.0 | 18 Sep 24 20:05 UTC | 18 Sep 24 20:05 UTC |
	|         | /home/docker/cp-test_ha-091565-m04_ha-091565-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-091565 cp ha-091565-m04:/home/docker/cp-test.txt                              | ha-091565 | jenkins | v1.34.0 | 18 Sep 24 20:05 UTC | 18 Sep 24 20:05 UTC |
	|         | ha-091565-m03:/home/docker/cp-test_ha-091565-m04_ha-091565-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-091565 ssh -n                                                                 | ha-091565 | jenkins | v1.34.0 | 18 Sep 24 20:05 UTC | 18 Sep 24 20:05 UTC |
	|         | ha-091565-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-091565 ssh -n ha-091565-m03 sudo cat                                          | ha-091565 | jenkins | v1.34.0 | 18 Sep 24 20:05 UTC | 18 Sep 24 20:05 UTC |
	|         | /home/docker/cp-test_ha-091565-m04_ha-091565-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-091565 node stop m02 -v=7                                                     | ha-091565 | jenkins | v1.34.0 | 18 Sep 24 20:05 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-091565 node start m02 -v=7                                                    | ha-091565 | jenkins | v1.34.0 | 18 Sep 24 20:08 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-091565 -v=7                                                           | ha-091565 | jenkins | v1.34.0 | 18 Sep 24 20:08 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-091565 -v=7                                                                | ha-091565 | jenkins | v1.34.0 | 18 Sep 24 20:08 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-091565 --wait=true -v=7                                                    | ha-091565 | jenkins | v1.34.0 | 18 Sep 24 20:10 UTC | 18 Sep 24 20:14 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-091565                                                                | ha-091565 | jenkins | v1.34.0 | 18 Sep 24 20:14 UTC |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/18 20:10:21
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0918 20:10:21.481921   32455 out.go:345] Setting OutFile to fd 1 ...
	I0918 20:10:21.482185   32455 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 20:10:21.482196   32455 out.go:358] Setting ErrFile to fd 2...
	I0918 20:10:21.482202   32455 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 20:10:21.482431   32455 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19667-7671/.minikube/bin
	I0918 20:10:21.482990   32455 out.go:352] Setting JSON to false
	I0918 20:10:21.483887   32455 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3165,"bootTime":1726687056,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0918 20:10:21.483987   32455 start.go:139] virtualization: kvm guest
	I0918 20:10:21.486482   32455 out.go:177] * [ha-091565] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0918 20:10:21.487917   32455 out.go:177]   - MINIKUBE_LOCATION=19667
	I0918 20:10:21.487906   32455 notify.go:220] Checking for updates...
	I0918 20:10:21.489679   32455 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 20:10:21.491004   32455 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19667-7671/kubeconfig
	I0918 20:10:21.492533   32455 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19667-7671/.minikube
	I0918 20:10:21.493829   32455 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0918 20:10:21.495121   32455 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0918 20:10:21.497006   32455 config.go:182] Loaded profile config "ha-091565": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0918 20:10:21.497147   32455 driver.go:394] Setting default libvirt URI to qemu:///system
	I0918 20:10:21.497582   32455 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 20:10:21.497629   32455 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 20:10:21.512693   32455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39699
	I0918 20:10:21.513187   32455 main.go:141] libmachine: () Calling .GetVersion
	I0918 20:10:21.513781   32455 main.go:141] libmachine: Using API Version  1
	I0918 20:10:21.513810   32455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 20:10:21.514148   32455 main.go:141] libmachine: () Calling .GetMachineName
	I0918 20:10:21.514295   32455 main.go:141] libmachine: (ha-091565) Calling .DriverName
	I0918 20:10:21.551072   32455 out.go:177] * Using the kvm2 driver based on existing profile
	I0918 20:10:21.552619   32455 start.go:297] selected driver: kvm2
	I0918 20:10:21.552649   32455 start.go:901] validating driver "kvm2" against &{Name:ha-091565 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.31.1 ClusterName:ha-091565 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.215 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.92 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.53 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.9 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:f
alse freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p20
00.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 20:10:21.552853   32455 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 20:10:21.553185   32455 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 20:10:21.553252   32455 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19667-7671/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0918 20:10:21.569199   32455 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0918 20:10:21.569914   32455 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0918 20:10:21.569958   32455 cni.go:84] Creating CNI manager for ""
	I0918 20:10:21.570008   32455 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0918 20:10:21.570079   32455 start.go:340] cluster config:
	{Name:ha-091565 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-091565 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.215 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.92 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.53 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.9 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller
:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:
0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 20:10:21.570212   32455 iso.go:125] acquiring lock: {Name:mkbcf4d84f3091567a39802cd5e773cb10b8c545 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 20:10:21.572456   32455 out.go:177] * Starting "ha-091565" primary control-plane node in "ha-091565" cluster
	I0918 20:10:21.574020   32455 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0918 20:10:21.574096   32455 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0918 20:10:21.574109   32455 cache.go:56] Caching tarball of preloaded images
	I0918 20:10:21.574206   32455 preload.go:172] Found /home/jenkins/minikube-integration/19667-7671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0918 20:10:21.574218   32455 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0918 20:10:21.574331   32455 profile.go:143] Saving config to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/config.json ...
	I0918 20:10:21.574532   32455 start.go:360] acquireMachinesLock for ha-091565: {Name:mk1b0d68a0b7d9d4bc8204d5d81cb9eb77222526 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 20:10:21.574573   32455 start.go:364] duration metric: took 21.936µs to acquireMachinesLock for "ha-091565"
	I0918 20:10:21.574590   32455 start.go:96] Skipping create...Using existing machine configuration
	I0918 20:10:21.574614   32455 fix.go:54] fixHost starting: 
	I0918 20:10:21.574862   32455 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 20:10:21.574893   32455 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 20:10:21.590607   32455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44505
	I0918 20:10:21.591037   32455 main.go:141] libmachine: () Calling .GetVersion
	I0918 20:10:21.591654   32455 main.go:141] libmachine: Using API Version  1
	I0918 20:10:21.591681   32455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 20:10:21.592033   32455 main.go:141] libmachine: () Calling .GetMachineName
	I0918 20:10:21.592216   32455 main.go:141] libmachine: (ha-091565) Calling .DriverName
	I0918 20:10:21.592370   32455 main.go:141] libmachine: (ha-091565) Calling .GetState
	I0918 20:10:21.594179   32455 fix.go:112] recreateIfNeeded on ha-091565: state=Running err=<nil>
	W0918 20:10:21.594208   32455 fix.go:138] unexpected machine state, will restart: <nil>
	I0918 20:10:21.596463   32455 out.go:177] * Updating the running kvm2 "ha-091565" VM ...
	I0918 20:10:21.598032   32455 machine.go:93] provisionDockerMachine start ...
	I0918 20:10:21.598057   32455 main.go:141] libmachine: (ha-091565) Calling .DriverName
	I0918 20:10:21.598305   32455 main.go:141] libmachine: (ha-091565) Calling .GetSSHHostname
	I0918 20:10:21.600831   32455 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:10:21.601344   32455 main.go:141] libmachine: (ha-091565) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:13:d8", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:01:12 +0000 UTC Type:0 Mac:52:54:00:2a:13:d8 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-091565 Clientid:01:52:54:00:2a:13:d8}
	I0918 20:10:21.601368   32455 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined IP address 192.168.39.215 and MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:10:21.601609   32455 main.go:141] libmachine: (ha-091565) Calling .GetSSHPort
	I0918 20:10:21.601830   32455 main.go:141] libmachine: (ha-091565) Calling .GetSSHKeyPath
	I0918 20:10:21.601979   32455 main.go:141] libmachine: (ha-091565) Calling .GetSSHKeyPath
	I0918 20:10:21.602115   32455 main.go:141] libmachine: (ha-091565) Calling .GetSSHUsername
	I0918 20:10:21.602284   32455 main.go:141] libmachine: Using SSH client type: native
	I0918 20:10:21.602489   32455 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I0918 20:10:21.602500   32455 main.go:141] libmachine: About to run SSH command:
	hostname
	I0918 20:10:21.720989   32455 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-091565
	
	I0918 20:10:21.721024   32455 main.go:141] libmachine: (ha-091565) Calling .GetMachineName
	I0918 20:10:21.721299   32455 buildroot.go:166] provisioning hostname "ha-091565"
	I0918 20:10:21.721327   32455 main.go:141] libmachine: (ha-091565) Calling .GetMachineName
	I0918 20:10:21.721529   32455 main.go:141] libmachine: (ha-091565) Calling .GetSSHHostname
	I0918 20:10:21.724505   32455 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:10:21.724849   32455 main.go:141] libmachine: (ha-091565) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:13:d8", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:01:12 +0000 UTC Type:0 Mac:52:54:00:2a:13:d8 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-091565 Clientid:01:52:54:00:2a:13:d8}
	I0918 20:10:21.724878   32455 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined IP address 192.168.39.215 and MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:10:21.725080   32455 main.go:141] libmachine: (ha-091565) Calling .GetSSHPort
	I0918 20:10:21.725276   32455 main.go:141] libmachine: (ha-091565) Calling .GetSSHKeyPath
	I0918 20:10:21.725445   32455 main.go:141] libmachine: (ha-091565) Calling .GetSSHKeyPath
	I0918 20:10:21.725599   32455 main.go:141] libmachine: (ha-091565) Calling .GetSSHUsername
	I0918 20:10:21.725814   32455 main.go:141] libmachine: Using SSH client type: native
	I0918 20:10:21.726027   32455 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I0918 20:10:21.726046   32455 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-091565 && echo "ha-091565" | sudo tee /etc/hostname
	I0918 20:10:21.855963   32455 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-091565
	
	I0918 20:10:21.855996   32455 main.go:141] libmachine: (ha-091565) Calling .GetSSHHostname
	I0918 20:10:21.858764   32455 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:10:21.859181   32455 main.go:141] libmachine: (ha-091565) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:13:d8", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:01:12 +0000 UTC Type:0 Mac:52:54:00:2a:13:d8 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-091565 Clientid:01:52:54:00:2a:13:d8}
	I0918 20:10:21.859204   32455 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined IP address 192.168.39.215 and MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:10:21.859418   32455 main.go:141] libmachine: (ha-091565) Calling .GetSSHPort
	I0918 20:10:21.859676   32455 main.go:141] libmachine: (ha-091565) Calling .GetSSHKeyPath
	I0918 20:10:21.859851   32455 main.go:141] libmachine: (ha-091565) Calling .GetSSHKeyPath
	I0918 20:10:21.859957   32455 main.go:141] libmachine: (ha-091565) Calling .GetSSHUsername
	I0918 20:10:21.860107   32455 main.go:141] libmachine: Using SSH client type: native
	I0918 20:10:21.860281   32455 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I0918 20:10:21.860296   32455 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-091565' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-091565/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-091565' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0918 20:10:21.977116   32455 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0918 20:10:21.977154   32455 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19667-7671/.minikube CaCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19667-7671/.minikube}
	I0918 20:10:21.977184   32455 buildroot.go:174] setting up certificates
	I0918 20:10:21.977195   32455 provision.go:84] configureAuth start
	I0918 20:10:21.977208   32455 main.go:141] libmachine: (ha-091565) Calling .GetMachineName
	I0918 20:10:21.977486   32455 main.go:141] libmachine: (ha-091565) Calling .GetIP
	I0918 20:10:21.979778   32455 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:10:21.980131   32455 main.go:141] libmachine: (ha-091565) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:13:d8", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:01:12 +0000 UTC Type:0 Mac:52:54:00:2a:13:d8 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-091565 Clientid:01:52:54:00:2a:13:d8}
	I0918 20:10:21.980165   32455 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined IP address 192.168.39.215 and MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:10:21.980353   32455 main.go:141] libmachine: (ha-091565) Calling .GetSSHHostname
	I0918 20:10:21.982901   32455 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:10:21.983298   32455 main.go:141] libmachine: (ha-091565) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:13:d8", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:01:12 +0000 UTC Type:0 Mac:52:54:00:2a:13:d8 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-091565 Clientid:01:52:54:00:2a:13:d8}
	I0918 20:10:21.983323   32455 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined IP address 192.168.39.215 and MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:10:21.983446   32455 provision.go:143] copyHostCerts
	I0918 20:10:21.983475   32455 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem
	I0918 20:10:21.983511   32455 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem, removing ...
	I0918 20:10:21.983518   32455 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem
	I0918 20:10:21.983600   32455 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem (1082 bytes)
	I0918 20:10:21.983698   32455 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem
	I0918 20:10:21.983723   32455 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem, removing ...
	I0918 20:10:21.983733   32455 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem
	I0918 20:10:21.983771   32455 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem (1123 bytes)
	I0918 20:10:21.983828   32455 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem
	I0918 20:10:21.983852   32455 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem, removing ...
	I0918 20:10:21.983861   32455 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem
	I0918 20:10:21.983893   32455 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem (1679 bytes)
	I0918 20:10:21.983958   32455 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem org=jenkins.ha-091565 san=[127.0.0.1 192.168.39.215 ha-091565 localhost minikube]
	I0918 20:10:22.062750   32455 provision.go:177] copyRemoteCerts
	I0918 20:10:22.062812   32455 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0918 20:10:22.062834   32455 main.go:141] libmachine: (ha-091565) Calling .GetSSHHostname
	I0918 20:10:22.065869   32455 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:10:22.066235   32455 main.go:141] libmachine: (ha-091565) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:13:d8", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:01:12 +0000 UTC Type:0 Mac:52:54:00:2a:13:d8 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-091565 Clientid:01:52:54:00:2a:13:d8}
	I0918 20:10:22.066265   32455 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined IP address 192.168.39.215 and MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:10:22.066465   32455 main.go:141] libmachine: (ha-091565) Calling .GetSSHPort
	I0918 20:10:22.066661   32455 main.go:141] libmachine: (ha-091565) Calling .GetSSHKeyPath
	I0918 20:10:22.066853   32455 main.go:141] libmachine: (ha-091565) Calling .GetSSHUsername
	I0918 20:10:22.066948   32455 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565/id_rsa Username:docker}
	I0918 20:10:22.154548   32455 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0918 20:10:22.154645   32455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0918 20:10:22.181982   32455 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0918 20:10:22.182060   32455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0918 20:10:22.209319   32455 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0918 20:10:22.209408   32455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0918 20:10:22.235852   32455 provision.go:87] duration metric: took 258.644873ms to configureAuth
	I0918 20:10:22.235880   32455 buildroot.go:189] setting minikube options for container-runtime
	I0918 20:10:22.236179   32455 config.go:182] Loaded profile config "ha-091565": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0918 20:10:22.236274   32455 main.go:141] libmachine: (ha-091565) Calling .GetSSHHostname
	I0918 20:10:22.238668   32455 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:10:22.238999   32455 main.go:141] libmachine: (ha-091565) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:13:d8", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:01:12 +0000 UTC Type:0 Mac:52:54:00:2a:13:d8 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-091565 Clientid:01:52:54:00:2a:13:d8}
	I0918 20:10:22.239017   32455 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined IP address 192.168.39.215 and MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:10:22.239210   32455 main.go:141] libmachine: (ha-091565) Calling .GetSSHPort
	I0918 20:10:22.239408   32455 main.go:141] libmachine: (ha-091565) Calling .GetSSHKeyPath
	I0918 20:10:22.239537   32455 main.go:141] libmachine: (ha-091565) Calling .GetSSHKeyPath
	I0918 20:10:22.239650   32455 main.go:141] libmachine: (ha-091565) Calling .GetSSHUsername
	I0918 20:10:22.239799   32455 main.go:141] libmachine: Using SSH client type: native
	I0918 20:10:22.240008   32455 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I0918 20:10:22.240046   32455 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0918 20:11:53.113384   32455 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0918 20:11:53.113412   32455 machine.go:96] duration metric: took 1m31.515364109s to provisionDockerMachine
	I0918 20:11:53.113422   32455 start.go:293] postStartSetup for "ha-091565" (driver="kvm2")
	I0918 20:11:53.113432   32455 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0918 20:11:53.113447   32455 main.go:141] libmachine: (ha-091565) Calling .DriverName
	I0918 20:11:53.113763   32455 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0918 20:11:53.113791   32455 main.go:141] libmachine: (ha-091565) Calling .GetSSHHostname
	I0918 20:11:53.116790   32455 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:11:53.117170   32455 main.go:141] libmachine: (ha-091565) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:13:d8", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:01:12 +0000 UTC Type:0 Mac:52:54:00:2a:13:d8 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-091565 Clientid:01:52:54:00:2a:13:d8}
	I0918 20:11:53.117201   32455 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined IP address 192.168.39.215 and MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:11:53.117343   32455 main.go:141] libmachine: (ha-091565) Calling .GetSSHPort
	I0918 20:11:53.117540   32455 main.go:141] libmachine: (ha-091565) Calling .GetSSHKeyPath
	I0918 20:11:53.117794   32455 main.go:141] libmachine: (ha-091565) Calling .GetSSHUsername
	I0918 20:11:53.117929   32455 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565/id_rsa Username:docker}
	I0918 20:11:53.203998   32455 ssh_runner.go:195] Run: cat /etc/os-release
	I0918 20:11:53.208185   32455 info.go:137] Remote host: Buildroot 2023.02.9
	I0918 20:11:53.208209   32455 filesync.go:126] Scanning /home/jenkins/minikube-integration/19667-7671/.minikube/addons for local assets ...
	I0918 20:11:53.208267   32455 filesync.go:126] Scanning /home/jenkins/minikube-integration/19667-7671/.minikube/files for local assets ...
	I0918 20:11:53.208345   32455 filesync.go:149] local asset: /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem -> 148782.pem in /etc/ssl/certs
	I0918 20:11:53.208358   32455 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem -> /etc/ssl/certs/148782.pem
	I0918 20:11:53.208461   32455 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0918 20:11:53.217739   32455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem --> /etc/ssl/certs/148782.pem (1708 bytes)
	I0918 20:11:53.242008   32455 start.go:296] duration metric: took 128.571381ms for postStartSetup
	I0918 20:11:53.242077   32455 main.go:141] libmachine: (ha-091565) Calling .DriverName
	I0918 20:11:53.242349   32455 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0918 20:11:53.242459   32455 main.go:141] libmachine: (ha-091565) Calling .GetSSHHostname
	I0918 20:11:53.245241   32455 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:11:53.245696   32455 main.go:141] libmachine: (ha-091565) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:13:d8", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:01:12 +0000 UTC Type:0 Mac:52:54:00:2a:13:d8 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-091565 Clientid:01:52:54:00:2a:13:d8}
	I0918 20:11:53.245724   32455 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined IP address 192.168.39.215 and MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:11:53.245849   32455 main.go:141] libmachine: (ha-091565) Calling .GetSSHPort
	I0918 20:11:53.246040   32455 main.go:141] libmachine: (ha-091565) Calling .GetSSHKeyPath
	I0918 20:11:53.246184   32455 main.go:141] libmachine: (ha-091565) Calling .GetSSHUsername
	I0918 20:11:53.246314   32455 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565/id_rsa Username:docker}
	W0918 20:11:53.330601   32455 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0918 20:11:53.330629   32455 fix.go:56] duration metric: took 1m31.756024995s for fixHost
	I0918 20:11:53.330649   32455 main.go:141] libmachine: (ha-091565) Calling .GetSSHHostname
	I0918 20:11:53.332906   32455 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:11:53.333200   32455 main.go:141] libmachine: (ha-091565) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:13:d8", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:01:12 +0000 UTC Type:0 Mac:52:54:00:2a:13:d8 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-091565 Clientid:01:52:54:00:2a:13:d8}
	I0918 20:11:53.333224   32455 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined IP address 192.168.39.215 and MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:11:53.333399   32455 main.go:141] libmachine: (ha-091565) Calling .GetSSHPort
	I0918 20:11:53.333583   32455 main.go:141] libmachine: (ha-091565) Calling .GetSSHKeyPath
	I0918 20:11:53.333727   32455 main.go:141] libmachine: (ha-091565) Calling .GetSSHKeyPath
	I0918 20:11:53.333867   32455 main.go:141] libmachine: (ha-091565) Calling .GetSSHUsername
	I0918 20:11:53.334017   32455 main.go:141] libmachine: Using SSH client type: native
	I0918 20:11:53.334209   32455 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I0918 20:11:53.334222   32455 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0918 20:11:53.448740   32455 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726690313.409802921
	
	I0918 20:11:53.448766   32455 fix.go:216] guest clock: 1726690313.409802921
	I0918 20:11:53.448774   32455 fix.go:229] Guest: 2024-09-18 20:11:53.409802921 +0000 UTC Remote: 2024-09-18 20:11:53.330635796 +0000 UTC m=+91.884985084 (delta=79.167125ms)
	I0918 20:11:53.448798   32455 fix.go:200] guest clock delta is within tolerance: 79.167125ms
	I0918 20:11:53.448803   32455 start.go:83] releasing machines lock for "ha-091565", held for 1m31.874221941s
	I0918 20:11:53.448825   32455 main.go:141] libmachine: (ha-091565) Calling .DriverName
	I0918 20:11:53.449079   32455 main.go:141] libmachine: (ha-091565) Calling .GetIP
	I0918 20:11:53.451803   32455 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:11:53.452169   32455 main.go:141] libmachine: (ha-091565) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:13:d8", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:01:12 +0000 UTC Type:0 Mac:52:54:00:2a:13:d8 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-091565 Clientid:01:52:54:00:2a:13:d8}
	I0918 20:11:53.452193   32455 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined IP address 192.168.39.215 and MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:11:53.452344   32455 main.go:141] libmachine: (ha-091565) Calling .DriverName
	I0918 20:11:53.452808   32455 main.go:141] libmachine: (ha-091565) Calling .DriverName
	I0918 20:11:53.452970   32455 main.go:141] libmachine: (ha-091565) Calling .DriverName
	I0918 20:11:53.453048   32455 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0918 20:11:53.453082   32455 main.go:141] libmachine: (ha-091565) Calling .GetSSHHostname
	I0918 20:11:53.453159   32455 ssh_runner.go:195] Run: cat /version.json
	I0918 20:11:53.453178   32455 main.go:141] libmachine: (ha-091565) Calling .GetSSHHostname
	I0918 20:11:53.455562   32455 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:11:53.455911   32455 main.go:141] libmachine: (ha-091565) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:13:d8", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:01:12 +0000 UTC Type:0 Mac:52:54:00:2a:13:d8 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-091565 Clientid:01:52:54:00:2a:13:d8}
	I0918 20:11:53.455938   32455 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined IP address 192.168.39.215 and MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:11:53.456043   32455 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:11:53.456083   32455 main.go:141] libmachine: (ha-091565) Calling .GetSSHPort
	I0918 20:11:53.456277   32455 main.go:141] libmachine: (ha-091565) Calling .GetSSHKeyPath
	I0918 20:11:53.456465   32455 main.go:141] libmachine: (ha-091565) Calling .GetSSHUsername
	I0918 20:11:53.456518   32455 main.go:141] libmachine: (ha-091565) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:13:d8", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:01:12 +0000 UTC Type:0 Mac:52:54:00:2a:13:d8 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-091565 Clientid:01:52:54:00:2a:13:d8}
	I0918 20:11:53.456553   32455 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined IP address 192.168.39.215 and MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:11:53.456579   32455 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565/id_rsa Username:docker}
	I0918 20:11:53.456708   32455 main.go:141] libmachine: (ha-091565) Calling .GetSSHPort
	I0918 20:11:53.456830   32455 main.go:141] libmachine: (ha-091565) Calling .GetSSHKeyPath
	I0918 20:11:53.456953   32455 main.go:141] libmachine: (ha-091565) Calling .GetSSHUsername
	I0918 20:11:53.457079   32455 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565/id_rsa Username:docker}
	I0918 20:11:53.537826   32455 ssh_runner.go:195] Run: systemctl --version
	I0918 20:11:53.575920   32455 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0918 20:11:53.735535   32455 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0918 20:11:53.742218   32455 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0918 20:11:53.742292   32455 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0918 20:11:53.751629   32455 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0918 20:11:53.751651   32455 start.go:495] detecting cgroup driver to use...
	I0918 20:11:53.751704   32455 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0918 20:11:53.771495   32455 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0918 20:11:53.786858   32455 docker.go:217] disabling cri-docker service (if available) ...
	I0918 20:11:53.786912   32455 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0918 20:11:53.801820   32455 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0918 20:11:53.817023   32455 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0918 20:11:53.989412   32455 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0918 20:11:54.151681   32455 docker.go:233] disabling docker service ...
	I0918 20:11:54.151754   32455 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0918 20:11:54.167751   32455 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0918 20:11:54.181807   32455 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0918 20:11:54.331455   32455 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0918 20:11:54.479073   32455 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0918 20:11:54.494246   32455 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0918 20:11:54.514069   32455 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0918 20:11:54.514139   32455 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 20:11:54.524478   32455 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0918 20:11:54.524555   32455 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 20:11:54.534711   32455 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 20:11:54.544229   32455 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 20:11:54.554255   32455 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0918 20:11:54.565072   32455 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 20:11:54.575438   32455 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 20:11:54.587587   32455 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 20:11:54.598966   32455 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0918 20:11:54.608188   32455 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0918 20:11:54.617777   32455 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 20:11:54.765099   32455 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0918 20:12:03.790354   32455 ssh_runner.go:235] Completed: sudo systemctl restart crio: (9.025214168s)
	I0918 20:12:03.790385   32455 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0918 20:12:03.790475   32455 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0918 20:12:03.796261   32455 start.go:563] Will wait 60s for crictl version
	I0918 20:12:03.796336   32455 ssh_runner.go:195] Run: which crictl
	I0918 20:12:03.800042   32455 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0918 20:12:03.843037   32455 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0918 20:12:03.843108   32455 ssh_runner.go:195] Run: crio --version
	I0918 20:12:03.871985   32455 ssh_runner.go:195] Run: crio --version
	I0918 20:12:03.901566   32455 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0918 20:12:03.902558   32455 main.go:141] libmachine: (ha-091565) Calling .GetIP
	I0918 20:12:03.905119   32455 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:12:03.905425   32455 main.go:141] libmachine: (ha-091565) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:13:d8", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:01:12 +0000 UTC Type:0 Mac:52:54:00:2a:13:d8 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-091565 Clientid:01:52:54:00:2a:13:d8}
	I0918 20:12:03.905455   32455 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined IP address 192.168.39.215 and MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:12:03.905699   32455 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0918 20:12:03.910325   32455 kubeadm.go:883] updating cluster {Name:ha-091565 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cl
usterName:ha-091565 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.215 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.92 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.53 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.9 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshp
od:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountG
ID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0918 20:12:03.910457   32455 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0918 20:12:03.910508   32455 ssh_runner.go:195] Run: sudo crictl images --output json
	I0918 20:12:03.951827   32455 crio.go:514] all images are preloaded for cri-o runtime.
	I0918 20:12:03.951848   32455 crio.go:433] Images already preloaded, skipping extraction
	I0918 20:12:03.951893   32455 ssh_runner.go:195] Run: sudo crictl images --output json
	I0918 20:12:03.993512   32455 crio.go:514] all images are preloaded for cri-o runtime.
	I0918 20:12:03.993576   32455 cache_images.go:84] Images are preloaded, skipping loading
	I0918 20:12:03.993597   32455 kubeadm.go:934] updating node { 192.168.39.215 8443 v1.31.1 crio true true} ...
	I0918 20:12:03.993718   32455 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-091565 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.215
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-091565 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0918 20:12:03.993835   32455 ssh_runner.go:195] Run: crio config
	I0918 20:12:04.043625   32455 cni.go:84] Creating CNI manager for ""
	I0918 20:12:04.043646   32455 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0918 20:12:04.043654   32455 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0918 20:12:04.043672   32455 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.215 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-091565 NodeName:ha-091565 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.215"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.215 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0918 20:12:04.043797   32455 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.215
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-091565"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.215
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.215"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0918 20:12:04.043823   32455 kube-vip.go:115] generating kube-vip config ...
	I0918 20:12:04.043867   32455 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0918 20:12:04.055518   32455 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0918 20:12:04.055641   32455 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0918 20:12:04.055708   32455 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0918 20:12:04.065667   32455 binaries.go:44] Found k8s binaries, skipping transfer
	I0918 20:12:04.065763   32455 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0918 20:12:04.075450   32455 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0918 20:12:04.092630   32455 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0918 20:12:04.108555   32455 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0918 20:12:04.124623   32455 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0918 20:12:04.141409   32455 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0918 20:12:04.145785   32455 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 20:12:04.307857   32455 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0918 20:12:04.324737   32455 certs.go:68] Setting up /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565 for IP: 192.168.39.215
	I0918 20:12:04.324773   32455 certs.go:194] generating shared ca certs ...
	I0918 20:12:04.324789   32455 certs.go:226] acquiring lock for ca certs: {Name:mkf54e0e71ed884c4372f9d3d4cb308ea4600185 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 20:12:04.324986   32455 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.key
	I0918 20:12:04.325053   32455 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.key
	I0918 20:12:04.325069   32455 certs.go:256] generating profile certs ...
	I0918 20:12:04.325185   32455 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/client.key
	I0918 20:12:04.325226   32455 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.key.c0a0a625
	I0918 20:12:04.325256   32455 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.crt.c0a0a625 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.215 192.168.39.92 192.168.39.53 192.168.39.254]
	I0918 20:12:04.445574   32455 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.crt.c0a0a625 ...
	I0918 20:12:04.445613   32455 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.crt.c0a0a625: {Name:mk5247af31881f8e5c986030d6d12b4e48e9acab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 20:12:04.445801   32455 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.key.c0a0a625 ...
	I0918 20:12:04.445832   32455 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.key.c0a0a625: {Name:mkef42b095c922258fa0861a13f6b4883289befd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 20:12:04.445913   32455 certs.go:381] copying /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.crt.c0a0a625 -> /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.crt
	I0918 20:12:04.446062   32455 certs.go:385] copying /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.key.c0a0a625 -> /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.key
	I0918 20:12:04.446192   32455 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/proxy-client.key
	I0918 20:12:04.446207   32455 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0918 20:12:04.446220   32455 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0918 20:12:04.446232   32455 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0918 20:12:04.446242   32455 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0918 20:12:04.446255   32455 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0918 20:12:04.446265   32455 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0918 20:12:04.446277   32455 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0918 20:12:04.446287   32455 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0918 20:12:04.446348   32455 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878.pem (1338 bytes)
	W0918 20:12:04.446376   32455 certs.go:480] ignoring /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878_empty.pem, impossibly tiny 0 bytes
	I0918 20:12:04.446383   32455 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem (1675 bytes)
	I0918 20:12:04.446404   32455 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem (1082 bytes)
	I0918 20:12:04.446427   32455 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem (1123 bytes)
	I0918 20:12:04.446448   32455 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem (1679 bytes)
	I0918 20:12:04.446483   32455 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem (1708 bytes)
	I0918 20:12:04.446508   32455 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0918 20:12:04.446522   32455 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878.pem -> /usr/share/ca-certificates/14878.pem
	I0918 20:12:04.446534   32455 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem -> /usr/share/ca-certificates/148782.pem
	I0918 20:12:04.447155   32455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0918 20:12:04.472997   32455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0918 20:12:04.498649   32455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0918 20:12:04.523077   32455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0918 20:12:04.548688   32455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0918 20:12:04.573643   32455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0918 20:12:04.599072   32455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0918 20:12:04.624269   32455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0918 20:12:04.650005   32455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0918 20:12:04.676083   32455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878.pem --> /usr/share/ca-certificates/14878.pem (1338 bytes)
	I0918 20:12:04.702552   32455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem --> /usr/share/ca-certificates/148782.pem (1708 bytes)
	I0918 20:12:04.727499   32455 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0918 20:12:04.745602   32455 ssh_runner.go:195] Run: openssl version
	I0918 20:12:04.752166   32455 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14878.pem && ln -fs /usr/share/ca-certificates/14878.pem /etc/ssl/certs/14878.pem"
	I0918 20:12:04.764693   32455 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14878.pem
	I0918 20:12:04.770506   32455 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 18 19:57 /usr/share/ca-certificates/14878.pem
	I0918 20:12:04.770593   32455 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14878.pem
	I0918 20:12:04.777041   32455 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14878.pem /etc/ssl/certs/51391683.0"
	I0918 20:12:04.787475   32455 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148782.pem && ln -fs /usr/share/ca-certificates/148782.pem /etc/ssl/certs/148782.pem"
	I0918 20:12:04.800141   32455 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148782.pem
	I0918 20:12:04.804982   32455 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 18 19:57 /usr/share/ca-certificates/148782.pem
	I0918 20:12:04.805052   32455 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148782.pem
	I0918 20:12:04.811226   32455 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148782.pem /etc/ssl/certs/3ec20f2e.0"
	I0918 20:12:04.822504   32455 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0918 20:12:04.834670   32455 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0918 20:12:04.839387   32455 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 18 19:39 /usr/share/ca-certificates/minikubeCA.pem
	I0918 20:12:04.839440   32455 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0918 20:12:04.845389   32455 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0918 20:12:04.857297   32455 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0918 20:12:04.862368   32455 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0918 20:12:04.868685   32455 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0918 20:12:04.875055   32455 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0918 20:12:04.881248   32455 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0918 20:12:04.887784   32455 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0918 20:12:04.894760   32455 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0918 20:12:04.901019   32455 kubeadm.go:392] StartCluster: {Name:ha-091565 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Clust
erName:ha-091565 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.215 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.92 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.53 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.9 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:
false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:
docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 20:12:04.901132   32455 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0918 20:12:04.901185   32455 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0918 20:12:04.942096   32455 cri.go:89] found id: "dbecd227f3ec46402e8caa90011eda748aa22f0504b6d888270ba095a12c9b89"
	I0918 20:12:04.942124   32455 cri.go:89] found id: "07e1934aefd631296f5de1012cce3d05a901f2aae648317c3b359efd462f870b"
	I0918 20:12:04.942128   32455 cri.go:89] found id: "566b1afb1702d39bc1691911941f883e32db2b47959f624be519dbf4fbc79f71"
	I0918 20:12:04.942131   32455 cri.go:89] found id: "4f8cab8eef59352ca937c9b8e7056a419bb426b183a841f9e8a43366ecb1b283"
	I0918 20:12:04.942134   32455 cri.go:89] found id: "26162985f4a281a161b88fc27b5cf0dbedef6117a63f0b64cef28f41346dbd3e"
	I0918 20:12:04.942136   32455 cri.go:89] found id: "9b5c6773eef44d848fd26ebcf13ea82ebc3c1e1bd23618a944bf5f6d6a0e7bb8"
	I0918 20:12:04.942139   32455 cri.go:89] found id: "52ae20a53e17beece735496b8e65537bbebfef5da53e8a2e7a74820fee56cd63"
	I0918 20:12:04.942142   32455 cri.go:89] found id: "c9aa80c6b1f558ba9ef5b7e70df3690995e271e9296b0cc3e71ade739843f53a"
	I0918 20:12:04.942144   32455 cri.go:89] found id: "f40b55a2539761c632fe964601680ae492c74cf2cad4ef1130393c06f576f943"
	I0918 20:12:04.942151   32455 cri.go:89] found id: "8c435dbd5b540cdb87d1f79fc2aadca19db9c46aff42e67d78c5d9c8eee1b6de"
	I0918 20:12:04.942154   32455 cri.go:89] found id: "f141188bda32520e26d392de6e6e49a863d49aacb461b7f3d5649d68557e96d3"
	I0918 20:12:04.942156   32455 cri.go:89] found id: "4358e16fe123bb31210776fce1969072fb954b1f7cc3b51c459cd155992c1be5"
	I0918 20:12:04.942159   32455 cri.go:89] found id: "97b3f8978c2594307cdfef72e8b8aebac842152f9c1893bb34700b6e0932027e"
	I0918 20:12:04.942161   32455 cri.go:89] found id: ""
	I0918 20:12:04.942224   32455 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Sep 18 20:14:31 ha-091565 crio[3611]: time="2024-09-18 20:14:31.372181094Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:a0b5345aa00cc911cf3ebf6d550cae0e8fae6aa83bba446cedb5885f19562a72,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-xhmzx,Uid:16808919-56d0-40cd-b88e-28fb5a40b3a2,Namespace:default,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726690364573352454,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-xhmzx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 16808919-56d0-40cd-b88e-28fb5a40b3a2,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-18T20:04:23.052244539Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e542a3cfe8082184e2eed6dece495abd78e95812c484bea2401d760729c49c81,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-091565,Uid:f3d3d9af35a10992383331d3d45eaca9,Namespace:kube-system,Attempt:0,},State:SANDBOX_RE
ADY,CreatedAt:1726690344290583124,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-091565,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3d3d9af35a10992383331d3d45eaca9,},Annotations:map[string]string{kubernetes.io/config.hash: f3d3d9af35a10992383331d3d45eaca9,kubernetes.io/config.seen: 2024-09-18T20:12:04.103185328Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:82ed01bc2dfae627a150f347d6f669f4e189caea575a71d9e6837201ef98ace7,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-8zcqk,Uid:644e8147-96e9-41a1-99b8-d2de17e4798c,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726690330882168473,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-8zcqk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 644e8147-96e9-41a1-99b8-d2de17e4798c,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09
-18T20:02:01.087730570Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:daf9a948e855e0b3418e648b0e08e5abe75783b75678d6d8e1ad1d450bf2aa04,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-w97kk,Uid:70428cd6-0523-44c8-89f3-62837b52ca80,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726690330848795044,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-w97kk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70428cd6-0523-44c8-89f3-62837b52ca80,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-18T20:02:01.092778303Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9259670422d45498b8f4e45909da18fa355ece853960532d620a5ca1f2e21efb,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:b7dffb85-905b-4166-a680-34c77cf87d09,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726690330836741824,Labels:map[string]string
{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7dffb85-905b-4166-a680-34c77cf87d09,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/confi
g.seen: 2024-09-18T20:02:01.093617871Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:68e450d1a9d796861547badef070a688c9eca3e51393bcf3d8d4b48c0af80e45,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-091565,Uid:fb44b1ca5e1af25a31cc20be38506f2d,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726690330820455007,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-091565,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb44b1ca5e1af25a31cc20be38506f2d,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.215:8443,kubernetes.io/config.hash: fb44b1ca5e1af25a31cc20be38506f2d,kubernetes.io/config.seen: 2024-09-18T20:01:43.320227600Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:6a98de6d0c34be9c61571054c6ee0cfa8df30c161e6da077b0fd3dce43a7c629,Metadata:&PodSandboxMetadata{Name:kube-controller-ma
nager-ha-091565,Uid:d7f1e1feaa8e654300c84052131dd12a,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726690330806706688,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-091565,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7f1e1feaa8e654300c84052131dd12a,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: d7f1e1feaa8e654300c84052131dd12a,kubernetes.io/config.seen: 2024-09-18T20:01:43.320228482Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:efa9968c0e0eeaf14440259fb0869ec7455118c74385b559462409719c49d5e7,Metadata:&PodSandboxMetadata{Name:etcd-ha-091565,Uid:ed5bb72bdd2d49ac86ea107effb85714,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726690330806233622,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-091565,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: ed5bb72bdd2d49ac86ea107effb85714,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.215:2379,kubernetes.io/config.hash: ed5bb72bdd2d49ac86ea107effb85714,kubernetes.io/config.seen: 2024-09-18T20:01:43.320226497Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:53c9d8623b3d890b0f48875319a05766114a9fdc3dadedeaf2f7011ca1bb054c,Metadata:&PodSandboxMetadata{Name:kube-proxy-4wm6h,Uid:d6904231-6f64-4447-9932-0cd5d692978b,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726690330803648160,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-4wm6h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6904231-6f64-4447-9932-0cd5d692978b,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-18T20:01:47.729062033Z,kubernetes.io/config.source: api,},RuntimeHa
ndler:,},&PodSandbox{Id:0bc3148ce646e5567d6d3af8ff4813aa443127b853f2271f576167d9d4614c54,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-091565,Uid:42a715c9fe466585ee83dba1966182d5,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726690330800333527,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-091565,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42a715c9fe466585ee83dba1966182d5,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 42a715c9fe466585ee83dba1966182d5,kubernetes.io/config.seen: 2024-09-18T20:01:43.320220730Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:91eccec72bcc687b4484e04e667e2beac75982ded0203c1cc0d0f5e1fabb6a64,Metadata:&PodSandboxMetadata{Name:kindnet-7fl5w,Uid:5c3a9d82-3815-4aa1-8d04-14be25394dcf,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726690330791779877,Labels:map[string]string{app: kindnet,controlle
r-revision-hash: 65cbdfc95f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-7fl5w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c3a9d82-3815-4aa1-8d04-14be25394dcf,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-18T20:01:47.741569150Z,kubernetes.io/config.source: api,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=8934031c-19e3-448b-850e-94b0afff45b2 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 18 20:14:31 ha-091565 crio[3611]: time="2024-09-18 20:14:31.373331314Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=15441daa-db41-4b41-a59a-07638b316494 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 20:14:31 ha-091565 crio[3611]: time="2024-09-18 20:14:31.373407327Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=15441daa-db41-4b41-a59a-07638b316494 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 20:14:31 ha-091565 crio[3611]: time="2024-09-18 20:14:31.374125259Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6489130b5601ef43045b9ef5a8e330f80dae1c20724da7d02b8b61500bc9beaa,PodSandboxId:9259670422d45498b8f4e45909da18fa355ece853960532d620a5ca1f2e21efb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726690419383021088,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7dffb85-905b-4166-a680-34c77cf87d09,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e894eebbedc0aac646e7c24ccd43c7ae2e1a00a0275213aa3eaaf78ad5fddb8a,PodSandboxId:68e450d1a9d796861547badef070a688c9eca3e51393bcf3d8d4b48c0af80e45,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726690374390976616,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-091565,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb44b1ca5e1af25a31cc20be38506f2d,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3cbacf6046aded98d17f163f6e437ef37200f71fd333470b3f7b074463c80ca,PodSandboxId:6a98de6d0c34be9c61571054c6ee0cfa8df30c161e6da077b0fd3dce43a7c629,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726690374385528525,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-091565,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7f1e1feaa8e654300c84052131dd12a,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7deb697baa4b118571cf9f455fd13305bc42377ddcb5b0aaa75763ec49cd7955,PodSandboxId:a0b5345aa00cc911cf3ebf6d550cae0e8fae6aa83bba446cedb5885f19562a72,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726690364731464824,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-xhmzx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 16808919-56d0-40cd-b88e-28fb5a40b3a2,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:315019425fff0ed52e2f663404258df3b545845e85b4f1a0ea2251dccd135b0a,PodSandboxId:e542a3cfe8082184e2eed6dece495abd78e95812c484bea2401d760729c49c81,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726690344387758381,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-091565,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3d3d9af35a10992383331d3d45eaca9,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb5776304b68a91c9d294a1efc96b919d7969634a4d19019f4055a97a75eedaa,PodSandboxId:91eccec72bcc687b4484e04e667e2beac75982ded0203c1cc0d0f5e1fabb6a64,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726690331422271469,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7fl5w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c3a9d82-3815-4aa1-8d04-14be25394dcf,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:d23d190c3f7a24c8d76129882123229cdd01c0a6d63bdeca54dd4158a36f52f7,PodSandboxId:53c9d8623b3d890b0f48875319a05766114a9fdc3dadedeaf2f7011ca1bb054c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726690331516445402,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4wm6h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6904231-6f64-4447-9932-0cd5d692978b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Containe
r{Id:fbdc184d26af7fa4411e4c2027e6de01f00fda961f753663405491da3f2ab52f,PodSandboxId:daf9a948e855e0b3418e648b0e08e5abe75783b75678d6d8e1ad1d450bf2aa04,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726690331363002835,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-w97kk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70428cd6-0523-44c8-89f3-62837b52ca80,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.res
tartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb8d0cf0ea1843751c86efea123f97cd6f5a08fcb1cee02f21311efe6ca1e526,PodSandboxId:0bc3148ce646e5567d6d3af8ff4813aa443127b853f2271f576167d9d4614c54,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726690331193449994,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-091565,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42a715c9fe466585ee83dba1966182d5,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f589735092f230bc5c0db779e10f8b0980c4e646dc42967ce96f9d511fd27e4,PodSandboxId:82ed01bc2dfae627a150f347d6f669f4e189caea575a71d9e6837201ef98ace7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726690331432050099,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8zcqk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 644e8147-96e9-41a1-99b8-d2de17e4798c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protoc
ol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c820a119e934b2a93019ba436b2a4247a959cffd8b253b6758d96276a65aeaa7,PodSandboxId:efa9968c0e0eeaf14440259fb0869ec7455118c74385b559462409719c49d5e7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726690331217170862,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-091565,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed5bb72b
dd2d49ac86ea107effb85714,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=15441daa-db41-4b41-a59a-07638b316494 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 20:14:31 ha-091565 crio[3611]: time="2024-09-18 20:14:31.412629414Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6bf1e3fd-7438-41e0-a303-6724f68c8c3b name=/runtime.v1.RuntimeService/Version
	Sep 18 20:14:31 ha-091565 crio[3611]: time="2024-09-18 20:14:31.412770744Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6bf1e3fd-7438-41e0-a303-6724f68c8c3b name=/runtime.v1.RuntimeService/Version
	Sep 18 20:14:31 ha-091565 crio[3611]: time="2024-09-18 20:14:31.414074435Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6aa50000-6c37-473c-a3e5-ba07650a2976 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 20:14:31 ha-091565 crio[3611]: time="2024-09-18 20:14:31.414959449Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726690471414917464,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6aa50000-6c37-473c-a3e5-ba07650a2976 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 20:14:31 ha-091565 crio[3611]: time="2024-09-18 20:14:31.415702286Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4f4b7202-575c-4548-bd92-4a44357ccde5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 20:14:31 ha-091565 crio[3611]: time="2024-09-18 20:14:31.415776831Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4f4b7202-575c-4548-bd92-4a44357ccde5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 20:14:31 ha-091565 crio[3611]: time="2024-09-18 20:14:31.416237329Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6489130b5601ef43045b9ef5a8e330f80dae1c20724da7d02b8b61500bc9beaa,PodSandboxId:9259670422d45498b8f4e45909da18fa355ece853960532d620a5ca1f2e21efb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726690419383021088,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7dffb85-905b-4166-a680-34c77cf87d09,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e894eebbedc0aac646e7c24ccd43c7ae2e1a00a0275213aa3eaaf78ad5fddb8a,PodSandboxId:68e450d1a9d796861547badef070a688c9eca3e51393bcf3d8d4b48c0af80e45,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726690374390976616,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-091565,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb44b1ca5e1af25a31cc20be38506f2d,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3cbacf6046aded98d17f163f6e437ef37200f71fd333470b3f7b074463c80ca,PodSandboxId:6a98de6d0c34be9c61571054c6ee0cfa8df30c161e6da077b0fd3dce43a7c629,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726690374385528525,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-091565,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7f1e1feaa8e654300c84052131dd12a,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0025f965c449fa8005990c9d16e4bd975337d00b859f495b752751bf93bf4d29,PodSandboxId:9259670422d45498b8f4e45909da18fa355ece853960532d620a5ca1f2e21efb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726690365382779674,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7dffb85-905b-4166-a680-34c77cf87d09,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7deb697baa4b118571cf9f455fd13305bc42377ddcb5b0aaa75763ec49cd7955,PodSandboxId:a0b5345aa00cc911cf3ebf6d550cae0e8fae6aa83bba446cedb5885f19562a72,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726690364731464824,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-xhmzx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 16808919-56d0-40cd-b88e-28fb5a40b3a2,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:315019425fff0ed52e2f663404258df3b545845e85b4f1a0ea2251dccd135b0a,PodSandboxId:e542a3cfe8082184e2eed6dece495abd78e95812c484bea2401d760729c49c81,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726690344387758381,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-091565,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3d3d9af35a10992383331d3d45eaca9,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb5776304b68a91c9d294a1efc96b919d7969634a4d19019f4055a97a75eedaa,PodSandboxId:91eccec72bcc687b4484e04e667e2beac75982ded0203c1cc0d0f5e1fabb6a64,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726690331422271469,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7fl5w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c3a9d82-3815-4aa1-8d04-14be25394dcf,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:d23d190c3f7a24c8d76129882123229cdd01c0a6d63bdeca54dd4158a36f52f7,PodSandboxId:53c9d8623b3d890b0f48875319a05766114a9fdc3dadedeaf2f7011ca1bb054c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726690331516445402,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4wm6h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6904231-6f64-4447-9932-0cd5d692978b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbdc184d
26af7fa4411e4c2027e6de01f00fda961f753663405491da3f2ab52f,PodSandboxId:daf9a948e855e0b3418e648b0e08e5abe75783b75678d6d8e1ad1d450bf2aa04,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726690331363002835,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-w97kk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70428cd6-0523-44c8-89f3-62837b52ca80,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd37bc6079dc18794fdcec6cf524e952c17ed75bceccddac6dc5ef5026a7b0d3,PodSandboxId:68e450d1a9d796861547badef070a688c9eca3e51393bcf3d8d4b48c0af80e45,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726690331348615832,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-091565,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb44b1ca5e1af25a31cc20be38506f2d,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb8d0cf0ea1843751c86efea123f97cd6f5a08fcb1cee02f21311efe6ca1e526,PodSandboxId:0bc3148ce646e5567d6d3af8ff4813aa443127b853f2271f576167d9d4614c54,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726690331193449994,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-091565,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42a715c9fe466585ee83dba1966182d5,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f589735092f230bc5c0db779e10f8b0980c4e646dc42967ce96f9d511fd27e4,PodSandboxId:82ed01bc2dfae627a150f347d6f669f4e189caea575a71d9e6837201ef98ace7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726690331432050099,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8zcqk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 644e8147-96e9-41a1-99b8-d2de17e4798c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d455b7b8c960ee88939b22efd881e9063e3cf96fec30f9b6cbdfe3c5670a8b1d,PodSandboxId:6a98de6d0c34be9c61571054c6ee0cfa8df30c161e6da077b0fd3dce43a7c629,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726690331288188210,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-091565,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: d7f1e1feaa8e654300c84052131dd12a,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c820a119e934b2a93019ba436b2a4247a959cffd8b253b6758d96276a65aeaa7,PodSandboxId:efa9968c0e0eeaf14440259fb0869ec7455118c74385b559462409719c49d5e7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726690331217170862,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-091565,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed5bb72bdd2d49ac86ea107effb85714,},Ann
otations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e40397db0622fd3967d20535016d957560de3976f7ca7e977fa2b186ff9f3ec,PodSandboxId:32509037cc4e4898a637b1de6a87eabd6d7504444bcf3f541a5234fb1ed4197a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726689867249576018,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-xhmzx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 16808919-56d0-40cd-b88e-28fb5a40b3a2,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f8cab8eef59352ca937c9b8e7056a419bb426b183a841f9e8a43366ecb1b283,PodSandboxId:16c38fe68d94e679d8aaead9d1be6f411a6984d3a367813f390d2ffc174a7047,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726689721940654598,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8zcqk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 644e8147-96e9-41a1-99b8-d2de17e4798c,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b5c6773eef44d848fd26ebcf13ea82ebc3c1e1bd23618a944bf5f6d6a0e7bb8,PodSandboxId:b0c496c53b4c9e8442d0d8a6bcef8269a3baad2995c3647784331bd1b03a34df,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726689721549482980,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-w97kk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70428cd6-0523-44c8-89f3-62837b52ca80,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52ae20a53e17beece735496b8e65537bbebfef5da53e8a2e7a74820fee56cd63,PodSandboxId:e5053f7183e292c9c0d41caef5020bdc6b59ce14f38a3c98750b9667d61f9d14,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726689709419367332,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7fl5w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c3a9d82-3815-4aa1-8d04-14be25394dcf,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9aa80c6b1f558ba9ef5b7e70df3690995e271e9296b0cc3e71ade739843f53a,PodSandboxId:e7fdb7e540529f305bf2eb00c376fe4bdf92019e975ef0252046f5f4a250d965,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726689709057469326,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4wm6h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6904231-6f64-4447-9932-0cd5d692978b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c435dbd5b540cdb87d1f79fc2aadca19db9c46aff42e67d78c5d9c8eee1b6de,PodSandboxId:01b7098c9837574b56ca73c576ce61c56e6190d34e8f2c4814bae7a1f5952e5c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe
954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726689697296369486,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-091565,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42a715c9fe466585ee83dba1966182d5,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4358e16fe123bb31210776fce1969072fb954b1f7cc3b51c459cd155992c1be5,PodSandboxId:ae412aa32e14f8d37184d672500203218827aa56a04162cf4a4e1bde7a1c9833,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTA
INER_EXITED,CreatedAt:1726689697193701986,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-091565,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed5bb72bdd2d49ac86ea107effb85714,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4f4b7202-575c-4548-bd92-4a44357ccde5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 20:14:31 ha-091565 crio[3611]: time="2024-09-18 20:14:31.461849241Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d33ade20-3a35-4d37-97fb-6231c2fda2b8 name=/runtime.v1.RuntimeService/Version
	Sep 18 20:14:31 ha-091565 crio[3611]: time="2024-09-18 20:14:31.461986871Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d33ade20-3a35-4d37-97fb-6231c2fda2b8 name=/runtime.v1.RuntimeService/Version
	Sep 18 20:14:31 ha-091565 crio[3611]: time="2024-09-18 20:14:31.463204061Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2b9401de-4794-4425-b92a-1c9c776fa754 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 20:14:31 ha-091565 crio[3611]: time="2024-09-18 20:14:31.463818894Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726690471463783102,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2b9401de-4794-4425-b92a-1c9c776fa754 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 20:14:31 ha-091565 crio[3611]: time="2024-09-18 20:14:31.464465648Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7c56d012-36f3-4644-8601-f89b113bdfcc name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 20:14:31 ha-091565 crio[3611]: time="2024-09-18 20:14:31.464522595Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7c56d012-36f3-4644-8601-f89b113bdfcc name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 20:14:31 ha-091565 crio[3611]: time="2024-09-18 20:14:31.465288130Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6489130b5601ef43045b9ef5a8e330f80dae1c20724da7d02b8b61500bc9beaa,PodSandboxId:9259670422d45498b8f4e45909da18fa355ece853960532d620a5ca1f2e21efb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726690419383021088,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7dffb85-905b-4166-a680-34c77cf87d09,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e894eebbedc0aac646e7c24ccd43c7ae2e1a00a0275213aa3eaaf78ad5fddb8a,PodSandboxId:68e450d1a9d796861547badef070a688c9eca3e51393bcf3d8d4b48c0af80e45,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726690374390976616,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-091565,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb44b1ca5e1af25a31cc20be38506f2d,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3cbacf6046aded98d17f163f6e437ef37200f71fd333470b3f7b074463c80ca,PodSandboxId:6a98de6d0c34be9c61571054c6ee0cfa8df30c161e6da077b0fd3dce43a7c629,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726690374385528525,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-091565,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7f1e1feaa8e654300c84052131dd12a,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0025f965c449fa8005990c9d16e4bd975337d00b859f495b752751bf93bf4d29,PodSandboxId:9259670422d45498b8f4e45909da18fa355ece853960532d620a5ca1f2e21efb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726690365382779674,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7dffb85-905b-4166-a680-34c77cf87d09,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7deb697baa4b118571cf9f455fd13305bc42377ddcb5b0aaa75763ec49cd7955,PodSandboxId:a0b5345aa00cc911cf3ebf6d550cae0e8fae6aa83bba446cedb5885f19562a72,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726690364731464824,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-xhmzx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 16808919-56d0-40cd-b88e-28fb5a40b3a2,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:315019425fff0ed52e2f663404258df3b545845e85b4f1a0ea2251dccd135b0a,PodSandboxId:e542a3cfe8082184e2eed6dece495abd78e95812c484bea2401d760729c49c81,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726690344387758381,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-091565,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3d3d9af35a10992383331d3d45eaca9,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb5776304b68a91c9d294a1efc96b919d7969634a4d19019f4055a97a75eedaa,PodSandboxId:91eccec72bcc687b4484e04e667e2beac75982ded0203c1cc0d0f5e1fabb6a64,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726690331422271469,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7fl5w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c3a9d82-3815-4aa1-8d04-14be25394dcf,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:d23d190c3f7a24c8d76129882123229cdd01c0a6d63bdeca54dd4158a36f52f7,PodSandboxId:53c9d8623b3d890b0f48875319a05766114a9fdc3dadedeaf2f7011ca1bb054c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726690331516445402,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4wm6h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6904231-6f64-4447-9932-0cd5d692978b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbdc184d
26af7fa4411e4c2027e6de01f00fda961f753663405491da3f2ab52f,PodSandboxId:daf9a948e855e0b3418e648b0e08e5abe75783b75678d6d8e1ad1d450bf2aa04,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726690331363002835,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-w97kk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70428cd6-0523-44c8-89f3-62837b52ca80,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd37bc6079dc18794fdcec6cf524e952c17ed75bceccddac6dc5ef5026a7b0d3,PodSandboxId:68e450d1a9d796861547badef070a688c9eca3e51393bcf3d8d4b48c0af80e45,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726690331348615832,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-091565,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb44b1ca5e1af25a31cc20be38506f2d,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb8d0cf0ea1843751c86efea123f97cd6f5a08fcb1cee02f21311efe6ca1e526,PodSandboxId:0bc3148ce646e5567d6d3af8ff4813aa443127b853f2271f576167d9d4614c54,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726690331193449994,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-091565,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42a715c9fe466585ee83dba1966182d5,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f589735092f230bc5c0db779e10f8b0980c4e646dc42967ce96f9d511fd27e4,PodSandboxId:82ed01bc2dfae627a150f347d6f669f4e189caea575a71d9e6837201ef98ace7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726690331432050099,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8zcqk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 644e8147-96e9-41a1-99b8-d2de17e4798c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d455b7b8c960ee88939b22efd881e9063e3cf96fec30f9b6cbdfe3c5670a8b1d,PodSandboxId:6a98de6d0c34be9c61571054c6ee0cfa8df30c161e6da077b0fd3dce43a7c629,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726690331288188210,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-091565,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: d7f1e1feaa8e654300c84052131dd12a,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c820a119e934b2a93019ba436b2a4247a959cffd8b253b6758d96276a65aeaa7,PodSandboxId:efa9968c0e0eeaf14440259fb0869ec7455118c74385b559462409719c49d5e7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726690331217170862,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-091565,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed5bb72bdd2d49ac86ea107effb85714,},Ann
otations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e40397db0622fd3967d20535016d957560de3976f7ca7e977fa2b186ff9f3ec,PodSandboxId:32509037cc4e4898a637b1de6a87eabd6d7504444bcf3f541a5234fb1ed4197a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726689867249576018,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-xhmzx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 16808919-56d0-40cd-b88e-28fb5a40b3a2,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f8cab8eef59352ca937c9b8e7056a419bb426b183a841f9e8a43366ecb1b283,PodSandboxId:16c38fe68d94e679d8aaead9d1be6f411a6984d3a367813f390d2ffc174a7047,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726689721940654598,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8zcqk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 644e8147-96e9-41a1-99b8-d2de17e4798c,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b5c6773eef44d848fd26ebcf13ea82ebc3c1e1bd23618a944bf5f6d6a0e7bb8,PodSandboxId:b0c496c53b4c9e8442d0d8a6bcef8269a3baad2995c3647784331bd1b03a34df,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726689721549482980,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-w97kk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70428cd6-0523-44c8-89f3-62837b52ca80,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52ae20a53e17beece735496b8e65537bbebfef5da53e8a2e7a74820fee56cd63,PodSandboxId:e5053f7183e292c9c0d41caef5020bdc6b59ce14f38a3c98750b9667d61f9d14,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726689709419367332,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7fl5w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c3a9d82-3815-4aa1-8d04-14be25394dcf,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9aa80c6b1f558ba9ef5b7e70df3690995e271e9296b0cc3e71ade739843f53a,PodSandboxId:e7fdb7e540529f305bf2eb00c376fe4bdf92019e975ef0252046f5f4a250d965,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726689709057469326,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4wm6h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6904231-6f64-4447-9932-0cd5d692978b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c435dbd5b540cdb87d1f79fc2aadca19db9c46aff42e67d78c5d9c8eee1b6de,PodSandboxId:01b7098c9837574b56ca73c576ce61c56e6190d34e8f2c4814bae7a1f5952e5c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe
954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726689697296369486,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-091565,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42a715c9fe466585ee83dba1966182d5,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4358e16fe123bb31210776fce1969072fb954b1f7cc3b51c459cd155992c1be5,PodSandboxId:ae412aa32e14f8d37184d672500203218827aa56a04162cf4a4e1bde7a1c9833,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTA
INER_EXITED,CreatedAt:1726689697193701986,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-091565,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed5bb72bdd2d49ac86ea107effb85714,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7c56d012-36f3-4644-8601-f89b113bdfcc name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 20:14:31 ha-091565 crio[3611]: time="2024-09-18 20:14:31.516491665Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f4e76e0b-0e51-4f62-8aba-fb6a580e324a name=/runtime.v1.RuntimeService/Version
	Sep 18 20:14:31 ha-091565 crio[3611]: time="2024-09-18 20:14:31.516566876Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f4e76e0b-0e51-4f62-8aba-fb6a580e324a name=/runtime.v1.RuntimeService/Version
	Sep 18 20:14:31 ha-091565 crio[3611]: time="2024-09-18 20:14:31.518051129Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bd0358d2-751d-48b3-b071-8e1a0f956c38 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 20:14:31 ha-091565 crio[3611]: time="2024-09-18 20:14:31.518528654Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726690471518498217,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bd0358d2-751d-48b3-b071-8e1a0f956c38 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 20:14:31 ha-091565 crio[3611]: time="2024-09-18 20:14:31.519301779Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9f8ffd58-f432-4455-b602-b9f2d28bc6dc name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 20:14:31 ha-091565 crio[3611]: time="2024-09-18 20:14:31.519363133Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9f8ffd58-f432-4455-b602-b9f2d28bc6dc name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 20:14:31 ha-091565 crio[3611]: time="2024-09-18 20:14:31.519765018Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6489130b5601ef43045b9ef5a8e330f80dae1c20724da7d02b8b61500bc9beaa,PodSandboxId:9259670422d45498b8f4e45909da18fa355ece853960532d620a5ca1f2e21efb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726690419383021088,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7dffb85-905b-4166-a680-34c77cf87d09,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e894eebbedc0aac646e7c24ccd43c7ae2e1a00a0275213aa3eaaf78ad5fddb8a,PodSandboxId:68e450d1a9d796861547badef070a688c9eca3e51393bcf3d8d4b48c0af80e45,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726690374390976616,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-091565,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb44b1ca5e1af25a31cc20be38506f2d,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3cbacf6046aded98d17f163f6e437ef37200f71fd333470b3f7b074463c80ca,PodSandboxId:6a98de6d0c34be9c61571054c6ee0cfa8df30c161e6da077b0fd3dce43a7c629,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726690374385528525,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-091565,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7f1e1feaa8e654300c84052131dd12a,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0025f965c449fa8005990c9d16e4bd975337d00b859f495b752751bf93bf4d29,PodSandboxId:9259670422d45498b8f4e45909da18fa355ece853960532d620a5ca1f2e21efb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726690365382779674,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7dffb85-905b-4166-a680-34c77cf87d09,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7deb697baa4b118571cf9f455fd13305bc42377ddcb5b0aaa75763ec49cd7955,PodSandboxId:a0b5345aa00cc911cf3ebf6d550cae0e8fae6aa83bba446cedb5885f19562a72,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726690364731464824,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-xhmzx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 16808919-56d0-40cd-b88e-28fb5a40b3a2,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:315019425fff0ed52e2f663404258df3b545845e85b4f1a0ea2251dccd135b0a,PodSandboxId:e542a3cfe8082184e2eed6dece495abd78e95812c484bea2401d760729c49c81,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726690344387758381,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-091565,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3d3d9af35a10992383331d3d45eaca9,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb5776304b68a91c9d294a1efc96b919d7969634a4d19019f4055a97a75eedaa,PodSandboxId:91eccec72bcc687b4484e04e667e2beac75982ded0203c1cc0d0f5e1fabb6a64,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726690331422271469,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7fl5w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c3a9d82-3815-4aa1-8d04-14be25394dcf,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:d23d190c3f7a24c8d76129882123229cdd01c0a6d63bdeca54dd4158a36f52f7,PodSandboxId:53c9d8623b3d890b0f48875319a05766114a9fdc3dadedeaf2f7011ca1bb054c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726690331516445402,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4wm6h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6904231-6f64-4447-9932-0cd5d692978b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbdc184d
26af7fa4411e4c2027e6de01f00fda961f753663405491da3f2ab52f,PodSandboxId:daf9a948e855e0b3418e648b0e08e5abe75783b75678d6d8e1ad1d450bf2aa04,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726690331363002835,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-w97kk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70428cd6-0523-44c8-89f3-62837b52ca80,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd37bc6079dc18794fdcec6cf524e952c17ed75bceccddac6dc5ef5026a7b0d3,PodSandboxId:68e450d1a9d796861547badef070a688c9eca3e51393bcf3d8d4b48c0af80e45,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726690331348615832,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-091565,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb44b1ca5e1af25a31cc20be38506f2d,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb8d0cf0ea1843751c86efea123f97cd6f5a08fcb1cee02f21311efe6ca1e526,PodSandboxId:0bc3148ce646e5567d6d3af8ff4813aa443127b853f2271f576167d9d4614c54,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726690331193449994,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-091565,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42a715c9fe466585ee83dba1966182d5,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f589735092f230bc5c0db779e10f8b0980c4e646dc42967ce96f9d511fd27e4,PodSandboxId:82ed01bc2dfae627a150f347d6f669f4e189caea575a71d9e6837201ef98ace7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726690331432050099,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8zcqk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 644e8147-96e9-41a1-99b8-d2de17e4798c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d455b7b8c960ee88939b22efd881e9063e3cf96fec30f9b6cbdfe3c5670a8b1d,PodSandboxId:6a98de6d0c34be9c61571054c6ee0cfa8df30c161e6da077b0fd3dce43a7c629,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726690331288188210,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-091565,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: d7f1e1feaa8e654300c84052131dd12a,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c820a119e934b2a93019ba436b2a4247a959cffd8b253b6758d96276a65aeaa7,PodSandboxId:efa9968c0e0eeaf14440259fb0869ec7455118c74385b559462409719c49d5e7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726690331217170862,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-091565,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed5bb72bdd2d49ac86ea107effb85714,},Ann
otations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e40397db0622fd3967d20535016d957560de3976f7ca7e977fa2b186ff9f3ec,PodSandboxId:32509037cc4e4898a637b1de6a87eabd6d7504444bcf3f541a5234fb1ed4197a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726689867249576018,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-xhmzx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 16808919-56d0-40cd-b88e-28fb5a40b3a2,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f8cab8eef59352ca937c9b8e7056a419bb426b183a841f9e8a43366ecb1b283,PodSandboxId:16c38fe68d94e679d8aaead9d1be6f411a6984d3a367813f390d2ffc174a7047,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726689721940654598,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8zcqk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 644e8147-96e9-41a1-99b8-d2de17e4798c,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b5c6773eef44d848fd26ebcf13ea82ebc3c1e1bd23618a944bf5f6d6a0e7bb8,PodSandboxId:b0c496c53b4c9e8442d0d8a6bcef8269a3baad2995c3647784331bd1b03a34df,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726689721549482980,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-w97kk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70428cd6-0523-44c8-89f3-62837b52ca80,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52ae20a53e17beece735496b8e65537bbebfef5da53e8a2e7a74820fee56cd63,PodSandboxId:e5053f7183e292c9c0d41caef5020bdc6b59ce14f38a3c98750b9667d61f9d14,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726689709419367332,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7fl5w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c3a9d82-3815-4aa1-8d04-14be25394dcf,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9aa80c6b1f558ba9ef5b7e70df3690995e271e9296b0cc3e71ade739843f53a,PodSandboxId:e7fdb7e540529f305bf2eb00c376fe4bdf92019e975ef0252046f5f4a250d965,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726689709057469326,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4wm6h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6904231-6f64-4447-9932-0cd5d692978b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c435dbd5b540cdb87d1f79fc2aadca19db9c46aff42e67d78c5d9c8eee1b6de,PodSandboxId:01b7098c9837574b56ca73c576ce61c56e6190d34e8f2c4814bae7a1f5952e5c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe
954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726689697296369486,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-091565,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42a715c9fe466585ee83dba1966182d5,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4358e16fe123bb31210776fce1969072fb954b1f7cc3b51c459cd155992c1be5,PodSandboxId:ae412aa32e14f8d37184d672500203218827aa56a04162cf4a4e1bde7a1c9833,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTA
INER_EXITED,CreatedAt:1726689697193701986,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-091565,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed5bb72bdd2d49ac86ea107effb85714,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9f8ffd58-f432-4455-b602-b9f2d28bc6dc name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	6489130b5601e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      52 seconds ago       Running             storage-provisioner       4                   9259670422d45       storage-provisioner
	e894eebbedc0a       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      About a minute ago   Running             kube-apiserver            3                   68e450d1a9d79       kube-apiserver-ha-091565
	c3cbacf6046ad       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      About a minute ago   Running             kube-controller-manager   2                   6a98de6d0c34b       kube-controller-manager-ha-091565
	0025f965c449f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Exited              storage-provisioner       3                   9259670422d45       storage-provisioner
	7deb697baa4b1       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   a0b5345aa00cc       busybox-7dff88458-xhmzx
	315019425fff0       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      2 minutes ago        Running             kube-vip                  0                   e542a3cfe8082       kube-vip-ha-091565
	d23d190c3f7a2       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      2 minutes ago        Running             kube-proxy                1                   53c9d8623b3d8       kube-proxy-4wm6h
	9f589735092f2       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      2 minutes ago        Running             coredns                   1                   82ed01bc2dfae       coredns-7c65d6cfc9-8zcqk
	bb5776304b68a       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      2 minutes ago        Running             kindnet-cni               1                   91eccec72bcc6       kindnet-7fl5w
	fbdc184d26af7       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      2 minutes ago        Running             coredns                   1                   daf9a948e855e       coredns-7c65d6cfc9-w97kk
	cd37bc6079dc1       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      2 minutes ago        Exited              kube-apiserver            2                   68e450d1a9d79       kube-apiserver-ha-091565
	d455b7b8c960e       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      2 minutes ago        Exited              kube-controller-manager   1                   6a98de6d0c34b       kube-controller-manager-ha-091565
	c820a119e934b       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      2 minutes ago        Running             etcd                      1                   efa9968c0e0ee       etcd-ha-091565
	bb8d0cf0ea184       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      2 minutes ago        Running             kube-scheduler            1                   0bc3148ce646e       kube-scheduler-ha-091565
	7e40397db0622       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   10 minutes ago       Exited              busybox                   0                   32509037cc4e4       busybox-7dff88458-xhmzx
	4f8cab8eef593       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      12 minutes ago       Exited              coredns                   0                   16c38fe68d94e       coredns-7c65d6cfc9-8zcqk
	9b5c6773eef44       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      12 minutes ago       Exited              coredns                   0                   b0c496c53b4c9       coredns-7c65d6cfc9-w97kk
	52ae20a53e17b       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      12 minutes ago       Exited              kindnet-cni               0                   e5053f7183e29       kindnet-7fl5w
	c9aa80c6b1f55       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      12 minutes ago       Exited              kube-proxy                0                   e7fdb7e540529       kube-proxy-4wm6h
	8c435dbd5b540       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      12 minutes ago       Exited              kube-scheduler            0                   01b7098c98375       kube-scheduler-ha-091565
	4358e16fe123b       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      12 minutes ago       Exited              etcd                      0                   ae412aa32e14f       etcd-ha-091565
	
	
	==> coredns [4f8cab8eef59352ca937c9b8e7056a419bb426b183a841f9e8a43366ecb1b283] <==
	[INFO] 10.244.1.2:44283 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003884102s
	[INFO] 10.244.1.2:32970 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000204769s
	[INFO] 10.244.1.2:52008 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000243831s
	[INFO] 10.244.2.2:50260 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000163913s
	[INFO] 10.244.2.2:55732 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001811166s
	[INFO] 10.244.2.2:39226 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00012772s
	[INFO] 10.244.2.2:53709 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0000925s
	[INFO] 10.244.2.2:41092 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000125187s
	[INFO] 10.244.0.4:40054 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000124612s
	[INFO] 10.244.0.4:38790 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001299276s
	[INFO] 10.244.0.4:59253 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000062856s
	[INFO] 10.244.0.4:38256 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000094015s
	[INFO] 10.244.1.2:44940 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000153669s
	[INFO] 10.244.1.2:48450 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000097947s
	[INFO] 10.244.0.4:38580 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000117553s
	[INFO] 10.244.2.2:59546 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000170402s
	[INFO] 10.244.2.2:49026 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000189642s
	[INFO] 10.244.2.2:45658 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000151371s
	[INFO] 10.244.0.4:51397 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000169114s
	[INFO] 10.244.0.4:47813 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000155527s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=1769&timeout=8m38s&timeoutSeconds=518&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1779&timeout=6m44s&timeoutSeconds=404&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=1730&timeout=8m56s&timeoutSeconds=536&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [9b5c6773eef44d848fd26ebcf13ea82ebc3c1e1bd23618a944bf5f6d6a0e7bb8] <==
	[INFO] 10.244.2.2:48639 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000087315s
	[INFO] 10.244.0.4:52361 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001834081s
	[INFO] 10.244.0.4:55907 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000221265s
	[INFO] 10.244.0.4:58409 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000117627s
	[INFO] 10.244.0.4:50242 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000115347s
	[INFO] 10.244.1.2:47046 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000136453s
	[INFO] 10.244.1.2:43799 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000196628s
	[INFO] 10.244.2.2:55965 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000123662s
	[INFO] 10.244.2.2:50433 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000098915s
	[INFO] 10.244.2.2:53589 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000068105s
	[INFO] 10.244.2.2:34234 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000074s
	[INFO] 10.244.0.4:45468 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000084304s
	[INFO] 10.244.0.4:51889 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000073683s
	[INFO] 10.244.0.4:50414 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000047051s
	[INFO] 10.244.1.2:45104 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000139109s
	[INFO] 10.244.1.2:42703 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00019857s
	[INFO] 10.244.1.2:45604 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000184516s
	[INFO] 10.244.1.2:54679 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00010429s
	[INFO] 10.244.2.2:37265 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000089491s
	[INFO] 10.244.0.4:58464 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000108633s
	[INFO] 10.244.0.4:60733 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0000682s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=1735&timeout=5m14s&timeoutSeconds=314&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=1730&timeout=7m45s&timeoutSeconds=465&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [9f589735092f230bc5c0db779e10f8b0980c4e646dc42967ce96f9d511fd27e4] <==
	Trace[1422487581]: [10.001552156s] [10.001552156s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1518205786]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (18-Sep-2024 20:12:21.208) (total time: 10000ms):
	Trace[1518205786]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (20:12:31.208)
	Trace[1518205786]: [10.000776946s] [10.000776946s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:52278->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:52278->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [fbdc184d26af7fa4411e4c2027e6de01f00fda961f753663405491da3f2ab52f] <==
	Trace[924636306]: [13.426694172s] [13.426694172s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:40782->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:40770->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[72597827]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (18-Sep-2024 20:12:22.905) (total time: 13729ms):
	Trace[72597827]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:40770->10.96.0.1:443: read: connection reset by peer 13728ms (20:12:36.633)
	Trace[72597827]: [13.729060125s] [13.729060125s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:40770->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:41538->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:41538->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-091565
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-091565
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=85073601a832bd4bbda5d11fa91feafff6ec6b91
	                    minikube.k8s.io/name=ha-091565
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_18T20_01_44_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 18 Sep 2024 20:01:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-091565
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 18 Sep 2024 20:14:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 18 Sep 2024 20:12:52 +0000   Wed, 18 Sep 2024 20:01:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 18 Sep 2024 20:12:52 +0000   Wed, 18 Sep 2024 20:01:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 18 Sep 2024 20:12:52 +0000   Wed, 18 Sep 2024 20:01:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 18 Sep 2024 20:12:52 +0000   Wed, 18 Sep 2024 20:02:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.215
	  Hostname:    ha-091565
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a62ed2f9eda04eb9bbdd5bc2c8925018
	  System UUID:                a62ed2f9-eda0-4eb9-bbdd-5bc2c8925018
	  Boot ID:                    e0c4d56b-81dc-4d69-9fe6-35f1341e336d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-xhmzx              0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-7c65d6cfc9-8zcqk             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     12m
	  kube-system                 coredns-7c65d6cfc9-w97kk             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     12m
	  kube-system                 etcd-ha-091565                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system                 kindnet-7fl5w                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-apiserver-ha-091565             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-091565    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-4wm6h                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-091565             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-091565                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         64s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 97s                    kube-proxy       
	  Normal   Starting                 12m                    kube-proxy       
	  Normal   NodeHasSufficientMemory  12m                    kubelet          Node ha-091565 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     12m                    kubelet          Node ha-091565 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    12m                    kubelet          Node ha-091565 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 12m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           12m                    node-controller  Node ha-091565 event: Registered Node ha-091565 in Controller
	  Normal   NodeReady                12m                    kubelet          Node ha-091565 status is now: NodeReady
	  Normal   RegisteredNode           11m                    node-controller  Node ha-091565 event: Registered Node ha-091565 in Controller
	  Normal   RegisteredNode           10m                    node-controller  Node ha-091565 event: Registered Node ha-091565 in Controller
	  Warning  ContainerGCFailed        2m49s (x2 over 3m49s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   NodeNotReady             2m35s (x3 over 3m24s)  kubelet          Node ha-091565 status is now: NodeNotReady
	  Normal   RegisteredNode           101s                   node-controller  Node ha-091565 event: Registered Node ha-091565 in Controller
	  Normal   RegisteredNode           92s                    node-controller  Node ha-091565 event: Registered Node ha-091565 in Controller
	  Normal   RegisteredNode           35s                    node-controller  Node ha-091565 event: Registered Node ha-091565 in Controller
	
	
	Name:               ha-091565-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-091565-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=85073601a832bd4bbda5d11fa91feafff6ec6b91
	                    minikube.k8s.io/name=ha-091565
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_18T20_02_40_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 18 Sep 2024 20:02:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-091565-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 18 Sep 2024 20:14:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 18 Sep 2024 20:14:05 +0000   Wed, 18 Sep 2024 20:12:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 18 Sep 2024 20:14:05 +0000   Wed, 18 Sep 2024 20:12:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 18 Sep 2024 20:14:05 +0000   Wed, 18 Sep 2024 20:12:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 18 Sep 2024 20:14:05 +0000   Wed, 18 Sep 2024 20:13:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.92
	  Hostname:    ha-091565-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 725aeac5e21d42d69ce571d302d9f7bc
	  System UUID:                725aeac5-e21d-42d6-9ce5-71d302d9f7bc
	  Boot ID:                    2d038098-44cb-4374-8eb7-a46ab596f517
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-45phf                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-091565-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-bzsqr                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-apiserver-ha-091565-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-091565-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-bxblp                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-091565-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-091565-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 72s                  kube-proxy       
	  Normal  Starting                 11m                  kube-proxy       
	  Normal  NodeAllocatableEnforced  11m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  11m (x8 over 11m)    kubelet          Node ha-091565-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m (x8 over 11m)    kubelet          Node ha-091565-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m (x7 over 11m)    kubelet          Node ha-091565-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           11m                  node-controller  Node ha-091565-m02 event: Registered Node ha-091565-m02 in Controller
	  Normal  RegisteredNode           11m                  node-controller  Node ha-091565-m02 event: Registered Node ha-091565-m02 in Controller
	  Normal  RegisteredNode           10m                  node-controller  Node ha-091565-m02 event: Registered Node ha-091565-m02 in Controller
	  Normal  NodeNotReady             8m11s                node-controller  Node ha-091565-m02 status is now: NodeNotReady
	  Normal  Starting                 2m6s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m6s (x8 over 2m6s)  kubelet          Node ha-091565-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m6s (x8 over 2m6s)  kubelet          Node ha-091565-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m6s (x7 over 2m6s)  kubelet          Node ha-091565-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m6s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           101s                 node-controller  Node ha-091565-m02 event: Registered Node ha-091565-m02 in Controller
	  Normal  RegisteredNode           92s                  node-controller  Node ha-091565-m02 event: Registered Node ha-091565-m02 in Controller
	  Normal  RegisteredNode           35s                  node-controller  Node ha-091565-m02 event: Registered Node ha-091565-m02 in Controller
	
	
	Name:               ha-091565-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-091565-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=85073601a832bd4bbda5d11fa91feafff6ec6b91
	                    minikube.k8s.io/name=ha-091565
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_18T20_03_57_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 18 Sep 2024 20:03:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-091565-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 18 Sep 2024 20:14:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 18 Sep 2024 20:14:10 +0000   Wed, 18 Sep 2024 20:13:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 18 Sep 2024 20:14:10 +0000   Wed, 18 Sep 2024 20:13:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 18 Sep 2024 20:14:10 +0000   Wed, 18 Sep 2024 20:13:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 18 Sep 2024 20:14:10 +0000   Wed, 18 Sep 2024 20:13:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.53
	  Hostname:    ha-091565-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d7cb71d27a4f4e8b92a5e72c1afd8865
	  System UUID:                d7cb71d2-7a4f-4e8b-92a5-e72c1afd8865
	  Boot ID:                    6e526f6e-9e0f-46de-ac51-c8b6d12756ff
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-jjr2n                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-091565-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kindnet-5rh2w                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-apiserver-ha-091565-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-ha-091565-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-4p8rj                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-ha-091565-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-vip-ha-091565-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 10m                kube-proxy       
	  Normal   Starting                 35s                kube-proxy       
	  Normal   NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node ha-091565-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node ha-091565-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x7 over 10m)  kubelet          Node ha-091565-m03 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           10m                node-controller  Node ha-091565-m03 event: Registered Node ha-091565-m03 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-091565-m03 event: Registered Node ha-091565-m03 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-091565-m03 event: Registered Node ha-091565-m03 in Controller
	  Normal   RegisteredNode           101s               node-controller  Node ha-091565-m03 event: Registered Node ha-091565-m03 in Controller
	  Normal   RegisteredNode           92s                node-controller  Node ha-091565-m03 event: Registered Node ha-091565-m03 in Controller
	  Normal   NodeNotReady             61s                node-controller  Node ha-091565-m03 status is now: NodeNotReady
	  Normal   Starting                 53s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  52s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  52s (x3 over 52s)  kubelet          Node ha-091565-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    52s (x3 over 52s)  kubelet          Node ha-091565-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     52s (x3 over 52s)  kubelet          Node ha-091565-m03 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 52s (x2 over 52s)  kubelet          Node ha-091565-m03 has been rebooted, boot id: 6e526f6e-9e0f-46de-ac51-c8b6d12756ff
	  Normal   NodeReady                52s (x2 over 52s)  kubelet          Node ha-091565-m03 status is now: NodeReady
	  Normal   RegisteredNode           35s                node-controller  Node ha-091565-m03 event: Registered Node ha-091565-m03 in Controller
	
	
	Name:               ha-091565-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-091565-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=85073601a832bd4bbda5d11fa91feafff6ec6b91
	                    minikube.k8s.io/name=ha-091565
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_18T20_05_02_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 18 Sep 2024 20:05:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-091565-m04
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 18 Sep 2024 20:14:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 18 Sep 2024 20:14:24 +0000   Wed, 18 Sep 2024 20:14:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 18 Sep 2024 20:14:24 +0000   Wed, 18 Sep 2024 20:14:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 18 Sep 2024 20:14:24 +0000   Wed, 18 Sep 2024 20:14:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 18 Sep 2024 20:14:24 +0000   Wed, 18 Sep 2024 20:14:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.9
	  Hostname:    ha-091565-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 cb0096492d0c441d8778e11eb51e77d3
	  System UUID:                cb009649-2d0c-441d-8778-e11eb51e77d3
	  Boot ID:                    2ebc0a85-e8f4-451d-ac54-eddc05c67c88
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-4xtjm       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m31s
	  kube-system                 kube-proxy-8qkpk    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m31s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 4s                     kube-proxy       
	  Normal   Starting                 9m25s                  kube-proxy       
	  Normal   NodeAllocatableEnforced  9m32s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           9m31s                  node-controller  Node ha-091565-m04 event: Registered Node ha-091565-m04 in Controller
	  Normal   NodeHasSufficientMemory  9m31s (x2 over 9m32s)  kubelet          Node ha-091565-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9m31s (x2 over 9m32s)  kubelet          Node ha-091565-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9m31s (x2 over 9m32s)  kubelet          Node ha-091565-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           9m30s                  node-controller  Node ha-091565-m04 event: Registered Node ha-091565-m04 in Controller
	  Normal   RegisteredNode           9m27s                  node-controller  Node ha-091565-m04 event: Registered Node ha-091565-m04 in Controller
	  Normal   NodeReady                9m11s                  kubelet          Node ha-091565-m04 status is now: NodeReady
	  Normal   RegisteredNode           101s                   node-controller  Node ha-091565-m04 event: Registered Node ha-091565-m04 in Controller
	  Normal   RegisteredNode           92s                    node-controller  Node ha-091565-m04 event: Registered Node ha-091565-m04 in Controller
	  Normal   NodeNotReady             61s                    node-controller  Node ha-091565-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           35s                    node-controller  Node ha-091565-m04 event: Registered Node ha-091565-m04 in Controller
	  Normal   Starting                 8s                     kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  8s                     kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 8s                     kubelet          Node ha-091565-m04 has been rebooted, boot id: 2ebc0a85-e8f4-451d-ac54-eddc05c67c88
	  Normal   NodeHasSufficientMemory  8s (x2 over 8s)        kubelet          Node ha-091565-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8s (x2 over 8s)        kubelet          Node ha-091565-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8s (x2 over 8s)        kubelet          Node ha-091565-m04 status is now: NodeHasSufficientPID
	  Normal   NodeReady                8s                     kubelet          Node ha-091565-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[ +13.896131] systemd-fstab-generator[585]: Ignoring "noauto" option for root device
	[  +0.067482] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.062052] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.180384] systemd-fstab-generator[611]: Ignoring "noauto" option for root device
	[  +0.116835] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.268512] systemd-fstab-generator[653]: Ignoring "noauto" option for root device
	[  +3.829963] systemd-fstab-generator[747]: Ignoring "noauto" option for root device
	[  +4.147936] systemd-fstab-generator[884]: Ignoring "noauto" option for root device
	[  +0.060572] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.397640] kauditd_printk_skb: 79 callbacks suppressed
	[  +0.774401] systemd-fstab-generator[1308]: Ignoring "noauto" option for root device
	[  +5.898362] kauditd_printk_skb: 15 callbacks suppressed
	[Sep18 20:02] kauditd_printk_skb: 38 callbacks suppressed
	[ +41.961999] kauditd_printk_skb: 26 callbacks suppressed
	[Sep18 20:11] systemd-fstab-generator[3535]: Ignoring "noauto" option for root device
	[  +0.178179] systemd-fstab-generator[3547]: Ignoring "noauto" option for root device
	[  +0.174179] systemd-fstab-generator[3561]: Ignoring "noauto" option for root device
	[  +0.155562] systemd-fstab-generator[3573]: Ignoring "noauto" option for root device
	[  +0.282508] systemd-fstab-generator[3601]: Ignoring "noauto" option for root device
	[Sep18 20:12] systemd-fstab-generator[3695]: Ignoring "noauto" option for root device
	[  +0.102931] kauditd_printk_skb: 100 callbacks suppressed
	[  +6.532330] kauditd_printk_skb: 12 callbacks suppressed
	[  +8.090357] kauditd_printk_skb: 85 callbacks suppressed
	[ +30.933245] kauditd_printk_skb: 8 callbacks suppressed
	[  +7.159614] kauditd_printk_skb: 3 callbacks suppressed
	
	
	==> etcd [4358e16fe123bb31210776fce1969072fb954b1f7cc3b51c459cd155992c1be5] <==
	2024/09/18 20:10:22 WARNING: [core] [Server #5] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"info","ts":"2024-09-18T20:10:22.393664Z","caller":"traceutil/trace.go:171","msg":"trace[739996942] range","detail":"{range_begin:/registry/leases/; range_end:/registry/leases0; }","duration":"838.83649ms","start":"2024-09-18T20:10:21.554825Z","end":"2024-09-18T20:10:22.393661Z","steps":["trace[739996942] 'agreement among raft nodes before linearized reading'  (duration: 835.746199ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-18T20:10:22.400072Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-18T20:10:21.554814Z","time spent":"845.204609ms","remote":"127.0.0.1:38200","response type":"/etcdserverpb.KV/Range","request count":0,"request size":41,"response count":0,"response size":0,"request content":"key:\"/registry/leases/\" range_end:\"/registry/leases0\" limit:10000 "}
	2024/09/18 20:10:22 WARNING: [core] [Server #5] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-09-18T20:10:22.456322Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.215:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-18T20:10:22.456481Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.215:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-18T20:10:22.456952Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"ce9e8f286885b37e","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-09-18T20:10:22.457188Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"7208e3715ec3d11b"}
	{"level":"info","ts":"2024-09-18T20:10:22.457285Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"7208e3715ec3d11b"}
	{"level":"info","ts":"2024-09-18T20:10:22.457378Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"7208e3715ec3d11b"}
	{"level":"info","ts":"2024-09-18T20:10:22.457565Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"ce9e8f286885b37e","remote-peer-id":"7208e3715ec3d11b"}
	{"level":"info","ts":"2024-09-18T20:10:22.457657Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"ce9e8f286885b37e","remote-peer-id":"7208e3715ec3d11b"}
	{"level":"info","ts":"2024-09-18T20:10:22.457731Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"ce9e8f286885b37e","remote-peer-id":"7208e3715ec3d11b"}
	{"level":"info","ts":"2024-09-18T20:10:22.457745Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"7208e3715ec3d11b"}
	{"level":"info","ts":"2024-09-18T20:10:22.457751Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"2408d04abbdc115f"}
	{"level":"info","ts":"2024-09-18T20:10:22.457764Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"2408d04abbdc115f"}
	{"level":"info","ts":"2024-09-18T20:10:22.457795Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"2408d04abbdc115f"}
	{"level":"info","ts":"2024-09-18T20:10:22.457844Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"ce9e8f286885b37e","remote-peer-id":"2408d04abbdc115f"}
	{"level":"info","ts":"2024-09-18T20:10:22.457931Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"ce9e8f286885b37e","remote-peer-id":"2408d04abbdc115f"}
	{"level":"info","ts":"2024-09-18T20:10:22.457983Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"ce9e8f286885b37e","remote-peer-id":"2408d04abbdc115f"}
	{"level":"info","ts":"2024-09-18T20:10:22.458008Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"2408d04abbdc115f"}
	{"level":"info","ts":"2024-09-18T20:10:22.461452Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.215:2380"}
	{"level":"info","ts":"2024-09-18T20:10:22.461609Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.215:2380"}
	{"level":"info","ts":"2024-09-18T20:10:22.461633Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-091565","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.215:2380"],"advertise-client-urls":["https://192.168.39.215:2379"]}
	{"level":"warn","ts":"2024-09-18T20:10:22.461619Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"8.934967457s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"etcdserver: server stopped"}
	
	
	==> etcd [c820a119e934b2a93019ba436b2a4247a959cffd8b253b6758d96276a65aeaa7] <==
	{"level":"warn","ts":"2024-09-18T20:13:34.621539Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce9e8f286885b37e","from":"ce9e8f286885b37e","remote-peer-id":"2408d04abbdc115f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-18T20:13:34.722550Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce9e8f286885b37e","from":"ce9e8f286885b37e","remote-peer-id":"2408d04abbdc115f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-18T20:13:34.731402Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce9e8f286885b37e","from":"ce9e8f286885b37e","remote-peer-id":"2408d04abbdc115f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-18T20:13:34.822079Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce9e8f286885b37e","from":"ce9e8f286885b37e","remote-peer-id":"2408d04abbdc115f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-18T20:13:34.873182Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce9e8f286885b37e","from":"ce9e8f286885b37e","remote-peer-id":"2408d04abbdc115f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-18T20:13:34.921553Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce9e8f286885b37e","from":"ce9e8f286885b37e","remote-peer-id":"2408d04abbdc115f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-18T20:13:35.932180Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.53:2380/version","remote-member-id":"2408d04abbdc115f","error":"Get \"https://192.168.39.53:2380/version\": dial tcp 192.168.39.53:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-18T20:13:35.932244Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"2408d04abbdc115f","error":"Get \"https://192.168.39.53:2380/version\": dial tcp 192.168.39.53:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-18T20:13:37.282707Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"2408d04abbdc115f","rtt":"0s","error":"dial tcp 192.168.39.53:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-18T20:13:37.282756Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"2408d04abbdc115f","rtt":"0s","error":"dial tcp 192.168.39.53:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-18T20:13:39.935090Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.53:2380/version","remote-member-id":"2408d04abbdc115f","error":"Get \"https://192.168.39.53:2380/version\": dial tcp 192.168.39.53:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-18T20:13:39.935254Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"2408d04abbdc115f","error":"Get \"https://192.168.39.53:2380/version\": dial tcp 192.168.39.53:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-18T20:13:42.283242Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"2408d04abbdc115f","rtt":"0s","error":"dial tcp 192.168.39.53:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-18T20:13:42.283399Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"2408d04abbdc115f","rtt":"0s","error":"dial tcp 192.168.39.53:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-18T20:13:43.937971Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.53:2380/version","remote-member-id":"2408d04abbdc115f","error":"Get \"https://192.168.39.53:2380/version\": dial tcp 192.168.39.53:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-18T20:13:43.938055Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"2408d04abbdc115f","error":"Get \"https://192.168.39.53:2380/version\": dial tcp 192.168.39.53:2380: connect: connection refused"}
	{"level":"info","ts":"2024-09-18T20:13:45.022496Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"2408d04abbdc115f"}
	{"level":"info","ts":"2024-09-18T20:13:45.030200Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"ce9e8f286885b37e","to":"2408d04abbdc115f","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-09-18T20:13:45.030371Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"ce9e8f286885b37e","remote-peer-id":"2408d04abbdc115f"}
	{"level":"info","ts":"2024-09-18T20:13:45.031492Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"ce9e8f286885b37e","remote-peer-id":"2408d04abbdc115f"}
	{"level":"info","ts":"2024-09-18T20:13:45.031761Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"ce9e8f286885b37e","remote-peer-id":"2408d04abbdc115f"}
	{"level":"info","ts":"2024-09-18T20:13:45.037997Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"ce9e8f286885b37e","to":"2408d04abbdc115f","stream-type":"stream Message"}
	{"level":"info","ts":"2024-09-18T20:13:45.038067Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"ce9e8f286885b37e","remote-peer-id":"2408d04abbdc115f"}
	{"level":"warn","ts":"2024-09-18T20:13:47.284439Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"2408d04abbdc115f","rtt":"0s","error":"dial tcp 192.168.39.53:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-18T20:13:47.284489Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"2408d04abbdc115f","rtt":"0s","error":"dial tcp 192.168.39.53:2380: connect: connection refused"}
	
	
	==> kernel <==
	 20:14:32 up 13 min,  0 users,  load average: 0.93, 0.91, 0.48
	Linux ha-091565 5.10.207 #1 SMP Mon Sep 16 15:00:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [52ae20a53e17beece735496b8e65537bbebfef5da53e8a2e7a74820fee56cd63] <==
	I0918 20:10:00.558185       1 main.go:295] Handling node with IPs: map[192.168.39.92:{}]
	I0918 20:10:00.558226       1 main.go:322] Node ha-091565-m02 has CIDR [10.244.1.0/24] 
	I0918 20:10:00.558366       1 main.go:295] Handling node with IPs: map[192.168.39.53:{}]
	I0918 20:10:00.558423       1 main.go:322] Node ha-091565-m03 has CIDR [10.244.2.0/24] 
	I0918 20:10:00.558483       1 main.go:295] Handling node with IPs: map[192.168.39.9:{}]
	I0918 20:10:00.558501       1 main.go:322] Node ha-091565-m04 has CIDR [10.244.3.0/24] 
	I0918 20:10:00.558556       1 main.go:295] Handling node with IPs: map[192.168.39.215:{}]
	I0918 20:10:00.558576       1 main.go:299] handling current node
	E0918 20:10:09.434208       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=1779&timeout=5m47s&timeoutSeconds=347&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	I0918 20:10:10.558301       1 main.go:295] Handling node with IPs: map[192.168.39.215:{}]
	I0918 20:10:10.558471       1 main.go:299] handling current node
	I0918 20:10:10.558506       1 main.go:295] Handling node with IPs: map[192.168.39.92:{}]
	I0918 20:10:10.558578       1 main.go:322] Node ha-091565-m02 has CIDR [10.244.1.0/24] 
	I0918 20:10:10.558742       1 main.go:295] Handling node with IPs: map[192.168.39.53:{}]
	I0918 20:10:10.558767       1 main.go:322] Node ha-091565-m03 has CIDR [10.244.2.0/24] 
	I0918 20:10:10.558819       1 main.go:295] Handling node with IPs: map[192.168.39.9:{}]
	I0918 20:10:10.558836       1 main.go:322] Node ha-091565-m04 has CIDR [10.244.3.0/24] 
	I0918 20:10:20.567096       1 main.go:295] Handling node with IPs: map[192.168.39.9:{}]
	I0918 20:10:20.567248       1 main.go:322] Node ha-091565-m04 has CIDR [10.244.3.0/24] 
	I0918 20:10:20.567439       1 main.go:295] Handling node with IPs: map[192.168.39.215:{}]
	I0918 20:10:20.567529       1 main.go:299] handling current node
	I0918 20:10:20.567564       1 main.go:295] Handling node with IPs: map[192.168.39.92:{}]
	I0918 20:10:20.567582       1 main.go:322] Node ha-091565-m02 has CIDR [10.244.1.0/24] 
	I0918 20:10:20.567674       1 main.go:295] Handling node with IPs: map[192.168.39.53:{}]
	I0918 20:10:20.567738       1 main.go:322] Node ha-091565-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kindnet [bb5776304b68a91c9d294a1efc96b919d7969634a4d19019f4055a97a75eedaa] <==
	I0918 20:14:02.605943       1 main.go:322] Node ha-091565-m02 has CIDR [10.244.1.0/24] 
	I0918 20:14:12.602952       1 main.go:295] Handling node with IPs: map[192.168.39.53:{}]
	I0918 20:14:12.603086       1 main.go:322] Node ha-091565-m03 has CIDR [10.244.2.0/24] 
	I0918 20:14:12.603232       1 main.go:295] Handling node with IPs: map[192.168.39.9:{}]
	I0918 20:14:12.603260       1 main.go:322] Node ha-091565-m04 has CIDR [10.244.3.0/24] 
	I0918 20:14:12.603337       1 main.go:295] Handling node with IPs: map[192.168.39.215:{}]
	I0918 20:14:12.603366       1 main.go:299] handling current node
	I0918 20:14:12.603388       1 main.go:295] Handling node with IPs: map[192.168.39.92:{}]
	I0918 20:14:12.603404       1 main.go:322] Node ha-091565-m02 has CIDR [10.244.1.0/24] 
	I0918 20:14:22.611726       1 main.go:295] Handling node with IPs: map[192.168.39.215:{}]
	I0918 20:14:22.611989       1 main.go:299] handling current node
	I0918 20:14:22.612043       1 main.go:295] Handling node with IPs: map[192.168.39.92:{}]
	I0918 20:14:22.612065       1 main.go:322] Node ha-091565-m02 has CIDR [10.244.1.0/24] 
	I0918 20:14:22.612220       1 main.go:295] Handling node with IPs: map[192.168.39.53:{}]
	I0918 20:14:22.612243       1 main.go:322] Node ha-091565-m03 has CIDR [10.244.2.0/24] 
	I0918 20:14:22.612320       1 main.go:295] Handling node with IPs: map[192.168.39.9:{}]
	I0918 20:14:22.612340       1 main.go:322] Node ha-091565-m04 has CIDR [10.244.3.0/24] 
	I0918 20:14:32.608095       1 main.go:295] Handling node with IPs: map[192.168.39.215:{}]
	I0918 20:14:32.608147       1 main.go:299] handling current node
	I0918 20:14:32.608167       1 main.go:295] Handling node with IPs: map[192.168.39.92:{}]
	I0918 20:14:32.608173       1 main.go:322] Node ha-091565-m02 has CIDR [10.244.1.0/24] 
	I0918 20:14:32.608332       1 main.go:295] Handling node with IPs: map[192.168.39.53:{}]
	I0918 20:14:32.608340       1 main.go:322] Node ha-091565-m03 has CIDR [10.244.2.0/24] 
	I0918 20:14:32.608397       1 main.go:295] Handling node with IPs: map[192.168.39.9:{}]
	I0918 20:14:32.608402       1 main.go:322] Node ha-091565-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [cd37bc6079dc18794fdcec6cf524e952c17ed75bceccddac6dc5ef5026a7b0d3] <==
	I0918 20:12:12.021839       1 options.go:228] external host was not specified, using 192.168.39.215
	I0918 20:12:12.024346       1 server.go:142] Version: v1.31.1
	I0918 20:12:12.024618       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0918 20:12:12.613292       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0918 20:12:12.619598       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0918 20:12:12.624122       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0918 20:12:12.624198       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0918 20:12:12.624464       1 instance.go:232] Using reconciler: lease
	W0918 20:12:32.610052       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0918 20:12:32.610148       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0918 20:12:32.625398       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [e894eebbedc0aac646e7c24ccd43c7ae2e1a00a0275213aa3eaaf78ad5fddb8a] <==
	I0918 20:12:56.799667       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0918 20:12:56.804042       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0918 20:12:56.888656       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0918 20:12:56.889175       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0918 20:12:56.889201       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0918 20:12:56.889402       1 aggregator.go:171] initial CRD sync complete...
	I0918 20:12:56.889435       1 autoregister_controller.go:144] Starting autoregister controller
	I0918 20:12:56.889446       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0918 20:12:56.889451       1 cache.go:39] Caches are synced for autoregister controller
	I0918 20:12:56.890245       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0918 20:12:56.890660       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0918 20:12:56.890781       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0918 20:12:56.896475       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0918 20:12:56.912098       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0918 20:12:56.912599       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0918 20:12:56.912635       1 policy_source.go:224] refreshing policies
	I0918 20:12:56.913425       1 shared_informer.go:320] Caches are synced for configmaps
	I0918 20:12:56.916113       1 cache.go:39] Caches are synced for RemoteAvailability controller
	W0918 20:12:56.928648       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.53 192.168.39.92]
	I0918 20:12:56.930502       1 controller.go:615] quota admission added evaluator for: endpoints
	I0918 20:12:56.945192       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0918 20:12:56.959182       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0918 20:12:56.982363       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0918 20:12:57.797743       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0918 20:12:58.273936       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.215 192.168.39.53 192.168.39.92]
	
	
	==> kube-controller-manager [c3cbacf6046aded98d17f163f6e437ef37200f71fd333470b3f7b074463c80ca] <==
	I0918 20:13:31.087361       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-091565-m04"
	I0918 20:13:31.093964       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-091565-m03"
	I0918 20:13:31.110375       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-091565-m04"
	I0918 20:13:31.113275       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-091565-m03"
	I0918 20:13:31.149814       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="21.607299ms"
	I0918 20:13:31.152273       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="343.433µs"
	I0918 20:13:34.816679       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-091565-m02"
	I0918 20:13:35.498526       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-091565-m03"
	I0918 20:13:36.352816       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-091565-m03"
	I0918 20:13:40.173682       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-091565-m03"
	I0918 20:13:40.199558       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-091565-m03"
	I0918 20:13:40.421153       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-091565-m03"
	I0918 20:13:40.972278       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="62.965µs"
	I0918 20:13:45.670571       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-091565-m04"
	I0918 20:13:46.440420       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-091565-m04"
	I0918 20:13:57.429053       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-091565-m04"
	I0918 20:13:57.521435       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-091565-m04"
	I0918 20:14:03.255554       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="13.294022ms"
	I0918 20:14:03.258062       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="96.956µs"
	I0918 20:14:05.541690       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-091565-m02"
	I0918 20:14:10.917380       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-091565-m03"
	I0918 20:14:24.363744       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-091565-m04"
	I0918 20:14:24.364504       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-091565-m04"
	I0918 20:14:24.381997       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-091565-m04"
	I0918 20:14:25.448500       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-091565-m04"
	
	
	==> kube-controller-manager [d455b7b8c960ee88939b22efd881e9063e3cf96fec30f9b6cbdfe3c5670a8b1d] <==
	I0918 20:12:12.783984       1 serving.go:386] Generated self-signed cert in-memory
	I0918 20:12:13.136684       1 controllermanager.go:197] "Starting" version="v1.31.1"
	I0918 20:12:13.136722       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0918 20:12:13.138650       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0918 20:12:13.139390       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0918 20:12:13.139535       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0918 20:12:13.139619       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0918 20:12:33.631173       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.215:8443/healthz\": dial tcp 192.168.39.215:8443: connect: connection refused"
	
	
	==> kube-proxy [c9aa80c6b1f558ba9ef5b7e70df3690995e271e9296b0cc3e71ade739843f53a] <==
	E0918 20:09:19.962278       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1699\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0918 20:09:19.962398       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-091565&resourceVersion=1700": dial tcp 192.168.39.254:8443: connect: no route to host
	E0918 20:09:19.962431       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-091565&resourceVersion=1700\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0918 20:09:19.962444       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1779": dial tcp 192.168.39.254:8443: connect: no route to host
	E0918 20:09:19.962525       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1779\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0918 20:09:23.034991       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1779": dial tcp 192.168.39.254:8443: connect: no route to host
	E0918 20:09:23.035513       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1779\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0918 20:09:26.106399       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1699": dial tcp 192.168.39.254:8443: connect: no route to host
	E0918 20:09:26.106511       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1699\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0918 20:09:26.106451       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-091565&resourceVersion=1700": dial tcp 192.168.39.254:8443: connect: no route to host
	E0918 20:09:26.106613       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-091565&resourceVersion=1700\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0918 20:09:29.178622       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1779": dial tcp 192.168.39.254:8443: connect: no route to host
	E0918 20:09:29.178760       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1779\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0918 20:09:38.393410       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1699": dial tcp 192.168.39.254:8443: connect: no route to host
	E0918 20:09:38.393487       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1699\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0918 20:09:38.393553       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-091565&resourceVersion=1700": dial tcp 192.168.39.254:8443: connect: no route to host
	E0918 20:09:38.393598       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-091565&resourceVersion=1700\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0918 20:09:38.393683       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1779": dial tcp 192.168.39.254:8443: connect: no route to host
	E0918 20:09:38.393729       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1779\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0918 20:09:53.753532       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1699": dial tcp 192.168.39.254:8443: connect: no route to host
	E0918 20:09:53.753848       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1699\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0918 20:09:56.827032       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1779": dial tcp 192.168.39.254:8443: connect: no route to host
	E0918 20:09:56.827722       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1779\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0918 20:09:59.898185       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-091565&resourceVersion=1700": dial tcp 192.168.39.254:8443: connect: no route to host
	E0918 20:09:59.898560       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-091565&resourceVersion=1700\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	
	
	==> kube-proxy [d23d190c3f7a24c8d76129882123229cdd01c0a6d63bdeca54dd4158a36f52f7] <==
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0918 20:12:15.066531       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-091565\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0918 20:12:18.137854       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-091565\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0918 20:12:21.209697       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-091565\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0918 20:12:27.354081       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-091565\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0918 20:12:36.569671       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-091565\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0918 20:12:54.239992       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.215"]
	E0918 20:12:54.240276       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0918 20:12:54.278949       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0918 20:12:54.279014       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0918 20:12:54.279067       1 server_linux.go:169] "Using iptables Proxier"
	I0918 20:12:54.281615       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0918 20:12:54.282265       1 server.go:483] "Version info" version="v1.31.1"
	I0918 20:12:54.282286       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0918 20:12:54.284918       1 config.go:199] "Starting service config controller"
	I0918 20:12:54.285030       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0918 20:12:54.285096       1 config.go:105] "Starting endpoint slice config controller"
	I0918 20:12:54.285124       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0918 20:12:54.285945       1 config.go:328] "Starting node config controller"
	I0918 20:12:54.286003       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0918 20:12:54.385995       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0918 20:12:54.386063       1 shared_informer.go:320] Caches are synced for node config
	I0918 20:12:54.386076       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [8c435dbd5b540cdb87d1f79fc2aadca19db9c46aff42e67d78c5d9c8eee1b6de] <==
	E0918 20:05:01.220390       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-8qkpk\": pod kube-proxy-8qkpk is already assigned to node \"ha-091565-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-8qkpk" node="ha-091565-m04"
	E0918 20:05:01.223994       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 819d89b8-2f9d-4a41-ad66-7bfa5e99e840(kube-system/kube-proxy-8qkpk) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-8qkpk"
	E0918 20:05:01.224205       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-8qkpk\": pod kube-proxy-8qkpk is already assigned to node \"ha-091565-m04\"" pod="kube-system/kube-proxy-8qkpk"
	I0918 20:05:01.224300       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-8qkpk" node="ha-091565-m04"
	E0918 20:05:01.248133       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-zmf96\": pod kindnet-zmf96 is already assigned to node \"ha-091565-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-zmf96" node="ha-091565-m04"
	E0918 20:05:01.248459       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-zmf96\": pod kindnet-zmf96 is already assigned to node \"ha-091565-m04\"" pod="kube-system/kindnet-zmf96"
	I0918 20:05:01.248547       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-zmf96" node="ha-091565-m04"
	E0918 20:05:01.248362       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-t72tx\": pod kube-proxy-t72tx is already assigned to node \"ha-091565-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-t72tx" node="ha-091565-m04"
	E0918 20:05:01.249494       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-t72tx\": pod kube-proxy-t72tx is already assigned to node \"ha-091565-m04\"" pod="kube-system/kube-proxy-t72tx"
	I0918 20:05:01.249666       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-t72tx" node="ha-091565-m04"
	E0918 20:10:13.126277       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io)" logger="UnhandledError"
	E0918 20:10:13.724453       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)" logger="UnhandledError"
	E0918 20:10:14.237081       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: unknown (get namespaces)" logger="UnhandledError"
	E0918 20:10:14.387558       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: unknown (get configmaps)" logger="UnhandledError"
	E0918 20:10:14.939450       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: unknown (get pods)" logger="UnhandledError"
	E0918 20:10:15.084096       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)" logger="UnhandledError"
	E0918 20:10:15.719553       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: unknown (get nodes)" logger="UnhandledError"
	E0918 20:10:16.704708       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)" logger="UnhandledError"
	E0918 20:10:17.552792       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)" logger="UnhandledError"
	E0918 20:10:20.155854       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)" logger="UnhandledError"
	E0918 20:10:21.586463       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)" logger="UnhandledError"
	E0918 20:10:21.996488       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)" logger="UnhandledError"
	E0918 20:10:22.157516       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)" logger="UnhandledError"
	E0918 20:10:22.232147       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: unknown (get services)" logger="UnhandledError"
	E0918 20:10:22.352901       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [bb8d0cf0ea1843751c86efea123f97cd6f5a08fcb1cee02f21311efe6ca1e526] <==
	W0918 20:12:50.139376       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.215:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.215:8443: connect: connection refused
	E0918 20:12:50.139445       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get \"https://192.168.39.215:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.39.215:8443: connect: connection refused" logger="UnhandledError"
	W0918 20:12:50.499370       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.215:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.215:8443: connect: connection refused
	E0918 20:12:50.499522       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://192.168.39.215:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.39.215:8443: connect: connection refused" logger="UnhandledError"
	W0918 20:12:51.406305       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.215:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.215:8443: connect: connection refused
	E0918 20:12:51.406429       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://192.168.39.215:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.39.215:8443: connect: connection refused" logger="UnhandledError"
	W0918 20:12:51.497916       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.215:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.215:8443: connect: connection refused
	E0918 20:12:51.497982       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://192.168.39.215:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.215:8443: connect: connection refused" logger="UnhandledError"
	W0918 20:12:52.583531       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.215:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.215:8443: connect: connection refused
	E0918 20:12:52.583656       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get \"https://192.168.39.215:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.39.215:8443: connect: connection refused" logger="UnhandledError"
	W0918 20:12:52.631454       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.215:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.215:8443: connect: connection refused
	E0918 20:12:52.631625       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://192.168.39.215:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.39.215:8443: connect: connection refused" logger="UnhandledError"
	W0918 20:12:52.731418       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.215:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.215:8443: connect: connection refused
	E0918 20:12:52.731508       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.168.39.215:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.39.215:8443: connect: connection refused" logger="UnhandledError"
	W0918 20:12:53.743251       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.215:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.215:8443: connect: connection refused
	E0918 20:12:53.743340       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get \"https://192.168.39.215:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.39.215:8443: connect: connection refused" logger="UnhandledError"
	W0918 20:12:53.920652       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.215:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.215:8443: connect: connection refused
	E0918 20:12:53.920753       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://192.168.39.215:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.215:8443: connect: connection refused" logger="UnhandledError"
	W0918 20:12:54.029713       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.215:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.215:8443: connect: connection refused
	E0918 20:12:54.029857       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.168.39.215:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.39.215:8443: connect: connection refused" logger="UnhandledError"
	W0918 20:12:56.817321       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0918 20:12:56.819046       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0918 20:12:56.825944       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0918 20:12:56.826066       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0918 20:13:21.244237       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 18 20:13:23 ha-091565 kubelet[1316]: E0918 20:13:23.551516    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726690403551198873,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 20:13:23 ha-091565 kubelet[1316]: E0918 20:13:23.551850    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726690403551198873,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 20:13:24 ha-091565 kubelet[1316]: I0918 20:13:24.370256    1316 scope.go:117] "RemoveContainer" containerID="0025f965c449fa8005990c9d16e4bd975337d00b859f495b752751bf93bf4d29"
	Sep 18 20:13:24 ha-091565 kubelet[1316]: E0918 20:13:24.370602    1316 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(b7dffb85-905b-4166-a680-34c77cf87d09)\"" pod="kube-system/storage-provisioner" podUID="b7dffb85-905b-4166-a680-34c77cf87d09"
	Sep 18 20:13:28 ha-091565 kubelet[1316]: I0918 20:13:28.370487    1316 kubelet.go:1895] "Trying to delete pod" pod="kube-system/kube-vip-ha-091565" podUID="b3fccd60-ac88-4dda-b8bb-b1b8c45cbfe5"
	Sep 18 20:13:28 ha-091565 kubelet[1316]: I0918 20:13:28.390578    1316 kubelet.go:1900] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-091565"
	Sep 18 20:13:33 ha-091565 kubelet[1316]: I0918 20:13:33.390996    1316 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-091565" podStartSLOduration=5.390962648 podStartE2EDuration="5.390962648s" podCreationTimestamp="2024-09-18 20:13:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-18 20:13:33.390505312 +0000 UTC m=+710.164869574" watchObservedRunningTime="2024-09-18 20:13:33.390962648 +0000 UTC m=+710.165326912"
	Sep 18 20:13:33 ha-091565 kubelet[1316]: E0918 20:13:33.554494    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726690413553438352,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 20:13:33 ha-091565 kubelet[1316]: E0918 20:13:33.554856    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726690413553438352,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 20:13:39 ha-091565 kubelet[1316]: I0918 20:13:39.370475    1316 scope.go:117] "RemoveContainer" containerID="0025f965c449fa8005990c9d16e4bd975337d00b859f495b752751bf93bf4d29"
	Sep 18 20:13:43 ha-091565 kubelet[1316]: E0918 20:13:43.400519    1316 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 18 20:13:43 ha-091565 kubelet[1316]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 18 20:13:43 ha-091565 kubelet[1316]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 18 20:13:43 ha-091565 kubelet[1316]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 18 20:13:43 ha-091565 kubelet[1316]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 18 20:13:43 ha-091565 kubelet[1316]: E0918 20:13:43.561154    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726690423558587740,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 20:13:43 ha-091565 kubelet[1316]: E0918 20:13:43.561251    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726690423558587740,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 20:13:53 ha-091565 kubelet[1316]: E0918 20:13:53.562527    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726690433562240128,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 20:13:53 ha-091565 kubelet[1316]: E0918 20:13:53.562556    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726690433562240128,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 20:14:03 ha-091565 kubelet[1316]: E0918 20:14:03.564375    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726690443564127297,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 20:14:03 ha-091565 kubelet[1316]: E0918 20:14:03.564412    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726690443564127297,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 20:14:13 ha-091565 kubelet[1316]: E0918 20:14:13.566475    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726690453566180570,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 20:14:13 ha-091565 kubelet[1316]: E0918 20:14:13.566501    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726690453566180570,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 20:14:23 ha-091565 kubelet[1316]: E0918 20:14:23.569522    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726690463568688816,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 20:14:23 ha-091565 kubelet[1316]: E0918 20:14:23.570745    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726690463568688816,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0918 20:14:31.045144   33812 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19667-7671/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-091565 -n ha-091565
helpers_test.go:261: (dbg) Run:  kubectl --context ha-091565 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (373.91s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (141.84s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-091565 stop -v=7 --alsologtostderr
E0918 20:15:01.286698   14878 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/functional-790989/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:16:12.176179   14878 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/addons-815929/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:531: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-091565 stop -v=7 --alsologtostderr: exit status 82 (2m0.465219534s)

                                                
                                                
-- stdout --
	* Stopping node "ha-091565-m04"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0918 20:14:51.071243   34250 out.go:345] Setting OutFile to fd 1 ...
	I0918 20:14:51.071370   34250 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 20:14:51.071378   34250 out.go:358] Setting ErrFile to fd 2...
	I0918 20:14:51.071383   34250 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 20:14:51.071598   34250 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19667-7671/.minikube/bin
	I0918 20:14:51.071829   34250 out.go:352] Setting JSON to false
	I0918 20:14:51.071908   34250 mustload.go:65] Loading cluster: ha-091565
	I0918 20:14:51.072319   34250 config.go:182] Loaded profile config "ha-091565": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0918 20:14:51.072408   34250 profile.go:143] Saving config to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/config.json ...
	I0918 20:14:51.072611   34250 mustload.go:65] Loading cluster: ha-091565
	I0918 20:14:51.072742   34250 config.go:182] Loaded profile config "ha-091565": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0918 20:14:51.072767   34250 stop.go:39] StopHost: ha-091565-m04
	I0918 20:14:51.073160   34250 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 20:14:51.073196   34250 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 20:14:51.088057   34250 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44039
	I0918 20:14:51.088563   34250 main.go:141] libmachine: () Calling .GetVersion
	I0918 20:14:51.089208   34250 main.go:141] libmachine: Using API Version  1
	I0918 20:14:51.089234   34250 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 20:14:51.089528   34250 main.go:141] libmachine: () Calling .GetMachineName
	I0918 20:14:51.091927   34250 out.go:177] * Stopping node "ha-091565-m04"  ...
	I0918 20:14:51.093029   34250 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0918 20:14:51.093057   34250 main.go:141] libmachine: (ha-091565-m04) Calling .DriverName
	I0918 20:14:51.093290   34250 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0918 20:14:51.093311   34250 main.go:141] libmachine: (ha-091565-m04) Calling .GetSSHHostname
	I0918 20:14:51.095976   34250 main.go:141] libmachine: (ha-091565-m04) DBG | domain ha-091565-m04 has defined MAC address 52:54:00:70:90:59 in network mk-ha-091565
	I0918 20:14:51.096363   34250 main.go:141] libmachine: (ha-091565-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:90:59", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:14:18 +0000 UTC Type:0 Mac:52:54:00:70:90:59 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:ha-091565-m04 Clientid:01:52:54:00:70:90:59}
	I0918 20:14:51.096386   34250 main.go:141] libmachine: (ha-091565-m04) DBG | domain ha-091565-m04 has defined IP address 192.168.39.9 and MAC address 52:54:00:70:90:59 in network mk-ha-091565
	I0918 20:14:51.096578   34250 main.go:141] libmachine: (ha-091565-m04) Calling .GetSSHPort
	I0918 20:14:51.096739   34250 main.go:141] libmachine: (ha-091565-m04) Calling .GetSSHKeyPath
	I0918 20:14:51.096881   34250 main.go:141] libmachine: (ha-091565-m04) Calling .GetSSHUsername
	I0918 20:14:51.097018   34250 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565-m04/id_rsa Username:docker}
	I0918 20:14:51.174514   34250 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0918 20:14:51.226914   34250 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0918 20:14:51.279106   34250 main.go:141] libmachine: Stopping "ha-091565-m04"...
	I0918 20:14:51.279144   34250 main.go:141] libmachine: (ha-091565-m04) Calling .GetState
	I0918 20:14:51.280691   34250 main.go:141] libmachine: (ha-091565-m04) Calling .Stop
	I0918 20:14:51.283940   34250 main.go:141] libmachine: (ha-091565-m04) Waiting for machine to stop 0/120
	I0918 20:14:52.285735   34250 main.go:141] libmachine: (ha-091565-m04) Waiting for machine to stop 1/120
	I0918 20:14:53.287110   34250 main.go:141] libmachine: (ha-091565-m04) Waiting for machine to stop 2/120
	I0918 20:14:54.288760   34250 main.go:141] libmachine: (ha-091565-m04) Waiting for machine to stop 3/120
	I0918 20:14:55.291028   34250 main.go:141] libmachine: (ha-091565-m04) Waiting for machine to stop 4/120
	I0918 20:14:56.292995   34250 main.go:141] libmachine: (ha-091565-m04) Waiting for machine to stop 5/120
	I0918 20:14:57.294387   34250 main.go:141] libmachine: (ha-091565-m04) Waiting for machine to stop 6/120
	I0918 20:14:58.295664   34250 main.go:141] libmachine: (ha-091565-m04) Waiting for machine to stop 7/120
	I0918 20:14:59.297023   34250 main.go:141] libmachine: (ha-091565-m04) Waiting for machine to stop 8/120
	I0918 20:15:00.298424   34250 main.go:141] libmachine: (ha-091565-m04) Waiting for machine to stop 9/120
	I0918 20:15:01.300488   34250 main.go:141] libmachine: (ha-091565-m04) Waiting for machine to stop 10/120
	I0918 20:15:02.301709   34250 main.go:141] libmachine: (ha-091565-m04) Waiting for machine to stop 11/120
	I0918 20:15:03.303002   34250 main.go:141] libmachine: (ha-091565-m04) Waiting for machine to stop 12/120
	I0918 20:15:04.304384   34250 main.go:141] libmachine: (ha-091565-m04) Waiting for machine to stop 13/120
	I0918 20:15:05.305747   34250 main.go:141] libmachine: (ha-091565-m04) Waiting for machine to stop 14/120
	I0918 20:15:06.307181   34250 main.go:141] libmachine: (ha-091565-m04) Waiting for machine to stop 15/120
	I0918 20:15:07.308485   34250 main.go:141] libmachine: (ha-091565-m04) Waiting for machine to stop 16/120
	I0918 20:15:08.310013   34250 main.go:141] libmachine: (ha-091565-m04) Waiting for machine to stop 17/120
	I0918 20:15:09.311333   34250 main.go:141] libmachine: (ha-091565-m04) Waiting for machine to stop 18/120
	I0918 20:15:10.312717   34250 main.go:141] libmachine: (ha-091565-m04) Waiting for machine to stop 19/120
	I0918 20:15:11.314984   34250 main.go:141] libmachine: (ha-091565-m04) Waiting for machine to stop 20/120
	I0918 20:15:12.316549   34250 main.go:141] libmachine: (ha-091565-m04) Waiting for machine to stop 21/120
	I0918 20:15:13.318068   34250 main.go:141] libmachine: (ha-091565-m04) Waiting for machine to stop 22/120
	I0918 20:15:14.320232   34250 main.go:141] libmachine: (ha-091565-m04) Waiting for machine to stop 23/120
	I0918 20:15:15.322560   34250 main.go:141] libmachine: (ha-091565-m04) Waiting for machine to stop 24/120
	I0918 20:15:16.324283   34250 main.go:141] libmachine: (ha-091565-m04) Waiting for machine to stop 25/120
	I0918 20:15:17.326396   34250 main.go:141] libmachine: (ha-091565-m04) Waiting for machine to stop 26/120
	I0918 20:15:18.328286   34250 main.go:141] libmachine: (ha-091565-m04) Waiting for machine to stop 27/120
	I0918 20:15:19.330458   34250 main.go:141] libmachine: (ha-091565-m04) Waiting for machine to stop 28/120
	I0918 20:15:20.332416   34250 main.go:141] libmachine: (ha-091565-m04) Waiting for machine to stop 29/120
	I0918 20:15:21.334663   34250 main.go:141] libmachine: (ha-091565-m04) Waiting for machine to stop 30/120
	I0918 20:15:22.336628   34250 main.go:141] libmachine: (ha-091565-m04) Waiting for machine to stop 31/120
	I0918 20:15:23.338514   34250 main.go:141] libmachine: (ha-091565-m04) Waiting for machine to stop 32/120
	I0918 20:15:24.340260   34250 main.go:141] libmachine: (ha-091565-m04) Waiting for machine to stop 33/120
	I0918 20:15:25.341837   34250 main.go:141] libmachine: (ha-091565-m04) Waiting for machine to stop 34/120
	I0918 20:15:26.343866   34250 main.go:141] libmachine: (ha-091565-m04) Waiting for machine to stop 35/120
	I0918 20:15:27.345486   34250 main.go:141] libmachine: (ha-091565-m04) Waiting for machine to stop 36/120
	I0918 20:15:28.347553   34250 main.go:141] libmachine: (ha-091565-m04) Waiting for machine to stop 37/120
	I0918 20:15:29.349770   34250 main.go:141] libmachine: (ha-091565-m04) Waiting for machine to stop 38/120
	I0918 20:15:30.351253   34250 main.go:141] libmachine: (ha-091565-m04) Waiting for machine to stop 39/120
	I0918 20:15:31.353500   34250 main.go:141] libmachine: (ha-091565-m04) Waiting for machine to stop 40/120
	I0918 20:15:32.354893   34250 main.go:141] libmachine: (ha-091565-m04) Waiting for machine to stop 41/120
	I0918 20:15:33.356188   34250 main.go:141] libmachine: (ha-091565-m04) Waiting for machine to stop 42/120
	I0918 20:15:34.357601   34250 main.go:141] libmachine: (ha-091565-m04) Waiting for machine to stop 43/120
	I0918 20:15:35.358967   34250 main.go:141] libmachine: (ha-091565-m04) Waiting for machine to stop 44/120
	I0918 20:15:36.360940   34250 main.go:141] libmachine: (ha-091565-m04) Waiting for machine to stop 45/120
	I0918 20:15:37.362550   34250 main.go:141] libmachine: (ha-091565-m04) Waiting for machine to stop 46/120
	I0918 20:15:38.364257   34250 main.go:141] libmachine: (ha-091565-m04) Waiting for machine to stop 47/120
	I0918 20:15:39.365522   34250 main.go:141] libmachine: (ha-091565-m04) Waiting for machine to stop 48/120
	I0918 20:15:40.367113   34250 main.go:141] libmachine: (ha-091565-m04) Waiting for machine to stop 49/120
	I0918 20:15:41.369209   34250 main.go:141] libmachine: (ha-091565-m04) Waiting for machine to stop 50/120
	I0918 20:15:42.370655   34250 main.go:141] libmachine: (ha-091565-m04) Waiting for machine to stop 51/120
	I0918 20:15:43.372131   34250 main.go:141] libmachine: (ha-091565-m04) Waiting for machine to stop 52/120
	I0918 20:15:44.374527   34250 main.go:141] libmachine: (ha-091565-m04) Waiting for machine to stop 53/120
	I0918 20:15:45.375990   34250 main.go:141] libmachine: (ha-091565-m04) Waiting for machine to stop 54/120
	I0918 20:15:46.377356   34250 main.go:141] libmachine: (ha-091565-m04) Waiting for machine to stop 55/120
	I0918 20:15:47.378913   34250 main.go:141] libmachine: (ha-091565-m04) Waiting for machine to stop 56/120
	I0918 20:15:48.380756   34250 main.go:141] libmachine: (ha-091565-m04) Waiting for machine to stop 57/120
	I0918 20:15:49.382517   34250 main.go:141] libmachine: (ha-091565-m04) Waiting for machine to stop 58/120
	I0918 20:15:50.384102   34250 main.go:141] libmachine: (ha-091565-m04) Waiting for machine to stop 59/120
	I0918 20:15:51.386333   34250 main.go:141] libmachine: (ha-091565-m04) Waiting for machine to stop 60/120
	I0918 20:15:52.387766   34250 main.go:141] libmachine: (ha-091565-m04) Waiting for machine to stop 61/120
	I0918 20:15:53.389308   34250 main.go:141] libmachine: (ha-091565-m04) Waiting for machine to stop 62/120
	I0918 20:15:54.390782   34250 main.go:141] libmachine: (ha-091565-m04) Waiting for machine to stop 63/120
	I0918 20:15:55.392257   34250 main.go:141] libmachine: (ha-091565-m04) Waiting for machine to stop 64/120
	I0918 20:15:56.394481   34250 main.go:141] libmachine: (ha-091565-m04) Waiting for machine to stop 65/120
	I0918 20:15:57.395989   34250 main.go:141] libmachine: (ha-091565-m04) Waiting for machine to stop 66/120
	I0918 20:15:58.397843   34250 main.go:141] libmachine: (ha-091565-m04) Waiting for machine to stop 67/120
	I0918 20:15:59.399344   34250 main.go:141] libmachine: (ha-091565-m04) Waiting for machine to stop 68/120
	I0918 20:16:00.400944   34250 main.go:141] libmachine: (ha-091565-m04) Waiting for machine to stop 69/120
	I0918 20:16:01.403313   34250 main.go:141] libmachine: (ha-091565-m04) Waiting for machine to stop 70/120
	I0918 20:16:02.404682   34250 main.go:141] libmachine: (ha-091565-m04) Waiting for machine to stop 71/120
	I0918 20:16:03.406854   34250 main.go:141] libmachine: (ha-091565-m04) Waiting for machine to stop 72/120
	I0918 20:16:04.408180   34250 main.go:141] libmachine: (ha-091565-m04) Waiting for machine to stop 73/120
	I0918 20:16:05.409633   34250 main.go:141] libmachine: (ha-091565-m04) Waiting for machine to stop 74/120
	I0918 20:16:06.411520   34250 main.go:141] libmachine: (ha-091565-m04) Waiting for machine to stop 75/120
	I0918 20:16:07.413111   34250 main.go:141] libmachine: (ha-091565-m04) Waiting for machine to stop 76/120
	I0918 20:16:08.414933   34250 main.go:141] libmachine: (ha-091565-m04) Waiting for machine to stop 77/120
	I0918 20:16:09.416254   34250 main.go:141] libmachine: (ha-091565-m04) Waiting for machine to stop 78/120
	I0918 20:16:10.418360   34250 main.go:141] libmachine: (ha-091565-m04) Waiting for machine to stop 79/120
	I0918 20:16:11.421057   34250 main.go:141] libmachine: (ha-091565-m04) Waiting for machine to stop 80/120
	I0918 20:16:12.423068   34250 main.go:141] libmachine: (ha-091565-m04) Waiting for machine to stop 81/120
	I0918 20:16:13.424544   34250 main.go:141] libmachine: (ha-091565-m04) Waiting for machine to stop 82/120
	I0918 20:16:14.425814   34250 main.go:141] libmachine: (ha-091565-m04) Waiting for machine to stop 83/120
	I0918 20:16:15.427150   34250 main.go:141] libmachine: (ha-091565-m04) Waiting for machine to stop 84/120
	I0918 20:16:16.428941   34250 main.go:141] libmachine: (ha-091565-m04) Waiting for machine to stop 85/120
	I0918 20:16:17.430750   34250 main.go:141] libmachine: (ha-091565-m04) Waiting for machine to stop 86/120
	I0918 20:16:18.432241   34250 main.go:141] libmachine: (ha-091565-m04) Waiting for machine to stop 87/120
	I0918 20:16:19.434679   34250 main.go:141] libmachine: (ha-091565-m04) Waiting for machine to stop 88/120
	I0918 20:16:20.436764   34250 main.go:141] libmachine: (ha-091565-m04) Waiting for machine to stop 89/120
	I0918 20:16:21.438722   34250 main.go:141] libmachine: (ha-091565-m04) Waiting for machine to stop 90/120
	I0918 20:16:22.440035   34250 main.go:141] libmachine: (ha-091565-m04) Waiting for machine to stop 91/120
	I0918 20:16:23.441397   34250 main.go:141] libmachine: (ha-091565-m04) Waiting for machine to stop 92/120
	I0918 20:16:24.442720   34250 main.go:141] libmachine: (ha-091565-m04) Waiting for machine to stop 93/120
	I0918 20:16:25.444054   34250 main.go:141] libmachine: (ha-091565-m04) Waiting for machine to stop 94/120
	I0918 20:16:26.445462   34250 main.go:141] libmachine: (ha-091565-m04) Waiting for machine to stop 95/120
	I0918 20:16:27.447026   34250 main.go:141] libmachine: (ha-091565-m04) Waiting for machine to stop 96/120
	I0918 20:16:28.448483   34250 main.go:141] libmachine: (ha-091565-m04) Waiting for machine to stop 97/120
	I0918 20:16:29.450572   34250 main.go:141] libmachine: (ha-091565-m04) Waiting for machine to stop 98/120
	I0918 20:16:30.452701   34250 main.go:141] libmachine: (ha-091565-m04) Waiting for machine to stop 99/120
	I0918 20:16:31.454614   34250 main.go:141] libmachine: (ha-091565-m04) Waiting for machine to stop 100/120
	I0918 20:16:32.456091   34250 main.go:141] libmachine: (ha-091565-m04) Waiting for machine to stop 101/120
	I0918 20:16:33.457460   34250 main.go:141] libmachine: (ha-091565-m04) Waiting for machine to stop 102/120
	I0918 20:16:34.458723   34250 main.go:141] libmachine: (ha-091565-m04) Waiting for machine to stop 103/120
	I0918 20:16:35.460319   34250 main.go:141] libmachine: (ha-091565-m04) Waiting for machine to stop 104/120
	I0918 20:16:36.462417   34250 main.go:141] libmachine: (ha-091565-m04) Waiting for machine to stop 105/120
	I0918 20:16:37.464265   34250 main.go:141] libmachine: (ha-091565-m04) Waiting for machine to stop 106/120
	I0918 20:16:38.466432   34250 main.go:141] libmachine: (ha-091565-m04) Waiting for machine to stop 107/120
	I0918 20:16:39.467758   34250 main.go:141] libmachine: (ha-091565-m04) Waiting for machine to stop 108/120
	I0918 20:16:40.469220   34250 main.go:141] libmachine: (ha-091565-m04) Waiting for machine to stop 109/120
	I0918 20:16:41.471425   34250 main.go:141] libmachine: (ha-091565-m04) Waiting for machine to stop 110/120
	I0918 20:16:42.472740   34250 main.go:141] libmachine: (ha-091565-m04) Waiting for machine to stop 111/120
	I0918 20:16:43.474482   34250 main.go:141] libmachine: (ha-091565-m04) Waiting for machine to stop 112/120
	I0918 20:16:44.475691   34250 main.go:141] libmachine: (ha-091565-m04) Waiting for machine to stop 113/120
	I0918 20:16:45.477092   34250 main.go:141] libmachine: (ha-091565-m04) Waiting for machine to stop 114/120
	I0918 20:16:46.479204   34250 main.go:141] libmachine: (ha-091565-m04) Waiting for machine to stop 115/120
	I0918 20:16:47.480496   34250 main.go:141] libmachine: (ha-091565-m04) Waiting for machine to stop 116/120
	I0918 20:16:48.481955   34250 main.go:141] libmachine: (ha-091565-m04) Waiting for machine to stop 117/120
	I0918 20:16:49.483426   34250 main.go:141] libmachine: (ha-091565-m04) Waiting for machine to stop 118/120
	I0918 20:16:50.485051   34250 main.go:141] libmachine: (ha-091565-m04) Waiting for machine to stop 119/120
	I0918 20:16:51.485612   34250 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0918 20:16:51.485662   34250 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0918 20:16:51.487680   34250 out.go:201] 
	W0918 20:16:51.489049   34250 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0918 20:16:51.489073   34250 out.go:270] * 
	* 
	W0918 20:16:51.491439   34250 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0918 20:16:51.492688   34250 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:533: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-091565 stop -v=7 --alsologtostderr": exit status 82
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-091565 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Done: out/minikube-linux-amd64 -p ha-091565 status -v=7 --alsologtostderr: (19.062161107s)
ha_test.go:543: status says not two control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-091565 status -v=7 --alsologtostderr": 
ha_test.go:549: status says not three kubelets are stopped: args "out/minikube-linux-amd64 -p ha-091565 status -v=7 --alsologtostderr": 
ha_test.go:552: status says not two apiservers are stopped: args "out/minikube-linux-amd64 -p ha-091565 status -v=7 --alsologtostderr": 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-091565 -n ha-091565
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-091565 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-091565 logs -n 25: (1.664922113s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-091565 ssh -n ha-091565-m02 sudo cat                                          | ha-091565 | jenkins | v1.34.0 | 18 Sep 24 20:05 UTC | 18 Sep 24 20:05 UTC |
	|         | /home/docker/cp-test_ha-091565-m03_ha-091565-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-091565 cp ha-091565-m03:/home/docker/cp-test.txt                              | ha-091565 | jenkins | v1.34.0 | 18 Sep 24 20:05 UTC | 18 Sep 24 20:05 UTC |
	|         | ha-091565-m04:/home/docker/cp-test_ha-091565-m03_ha-091565-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-091565 ssh -n                                                                 | ha-091565 | jenkins | v1.34.0 | 18 Sep 24 20:05 UTC | 18 Sep 24 20:05 UTC |
	|         | ha-091565-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-091565 ssh -n ha-091565-m04 sudo cat                                          | ha-091565 | jenkins | v1.34.0 | 18 Sep 24 20:05 UTC | 18 Sep 24 20:05 UTC |
	|         | /home/docker/cp-test_ha-091565-m03_ha-091565-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-091565 cp testdata/cp-test.txt                                                | ha-091565 | jenkins | v1.34.0 | 18 Sep 24 20:05 UTC | 18 Sep 24 20:05 UTC |
	|         | ha-091565-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-091565 ssh -n                                                                 | ha-091565 | jenkins | v1.34.0 | 18 Sep 24 20:05 UTC | 18 Sep 24 20:05 UTC |
	|         | ha-091565-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-091565 cp ha-091565-m04:/home/docker/cp-test.txt                              | ha-091565 | jenkins | v1.34.0 | 18 Sep 24 20:05 UTC | 18 Sep 24 20:05 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3131576438/001/cp-test_ha-091565-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-091565 ssh -n                                                                 | ha-091565 | jenkins | v1.34.0 | 18 Sep 24 20:05 UTC | 18 Sep 24 20:05 UTC |
	|         | ha-091565-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-091565 cp ha-091565-m04:/home/docker/cp-test.txt                              | ha-091565 | jenkins | v1.34.0 | 18 Sep 24 20:05 UTC | 18 Sep 24 20:05 UTC |
	|         | ha-091565:/home/docker/cp-test_ha-091565-m04_ha-091565.txt                       |           |         |         |                     |                     |
	| ssh     | ha-091565 ssh -n                                                                 | ha-091565 | jenkins | v1.34.0 | 18 Sep 24 20:05 UTC | 18 Sep 24 20:05 UTC |
	|         | ha-091565-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-091565 ssh -n ha-091565 sudo cat                                              | ha-091565 | jenkins | v1.34.0 | 18 Sep 24 20:05 UTC | 18 Sep 24 20:05 UTC |
	|         | /home/docker/cp-test_ha-091565-m04_ha-091565.txt                                 |           |         |         |                     |                     |
	| cp      | ha-091565 cp ha-091565-m04:/home/docker/cp-test.txt                              | ha-091565 | jenkins | v1.34.0 | 18 Sep 24 20:05 UTC | 18 Sep 24 20:05 UTC |
	|         | ha-091565-m02:/home/docker/cp-test_ha-091565-m04_ha-091565-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-091565 ssh -n                                                                 | ha-091565 | jenkins | v1.34.0 | 18 Sep 24 20:05 UTC | 18 Sep 24 20:05 UTC |
	|         | ha-091565-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-091565 ssh -n ha-091565-m02 sudo cat                                          | ha-091565 | jenkins | v1.34.0 | 18 Sep 24 20:05 UTC | 18 Sep 24 20:05 UTC |
	|         | /home/docker/cp-test_ha-091565-m04_ha-091565-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-091565 cp ha-091565-m04:/home/docker/cp-test.txt                              | ha-091565 | jenkins | v1.34.0 | 18 Sep 24 20:05 UTC | 18 Sep 24 20:05 UTC |
	|         | ha-091565-m03:/home/docker/cp-test_ha-091565-m04_ha-091565-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-091565 ssh -n                                                                 | ha-091565 | jenkins | v1.34.0 | 18 Sep 24 20:05 UTC | 18 Sep 24 20:05 UTC |
	|         | ha-091565-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-091565 ssh -n ha-091565-m03 sudo cat                                          | ha-091565 | jenkins | v1.34.0 | 18 Sep 24 20:05 UTC | 18 Sep 24 20:05 UTC |
	|         | /home/docker/cp-test_ha-091565-m04_ha-091565-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-091565 node stop m02 -v=7                                                     | ha-091565 | jenkins | v1.34.0 | 18 Sep 24 20:05 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-091565 node start m02 -v=7                                                    | ha-091565 | jenkins | v1.34.0 | 18 Sep 24 20:08 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-091565 -v=7                                                           | ha-091565 | jenkins | v1.34.0 | 18 Sep 24 20:08 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-091565 -v=7                                                                | ha-091565 | jenkins | v1.34.0 | 18 Sep 24 20:08 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-091565 --wait=true -v=7                                                    | ha-091565 | jenkins | v1.34.0 | 18 Sep 24 20:10 UTC | 18 Sep 24 20:14 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-091565                                                                | ha-091565 | jenkins | v1.34.0 | 18 Sep 24 20:14 UTC |                     |
	| node    | ha-091565 node delete m03 -v=7                                                   | ha-091565 | jenkins | v1.34.0 | 18 Sep 24 20:14 UTC | 18 Sep 24 20:14 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-091565 stop -v=7                                                              | ha-091565 | jenkins | v1.34.0 | 18 Sep 24 20:14 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/18 20:10:21
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0918 20:10:21.481921   32455 out.go:345] Setting OutFile to fd 1 ...
	I0918 20:10:21.482185   32455 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 20:10:21.482196   32455 out.go:358] Setting ErrFile to fd 2...
	I0918 20:10:21.482202   32455 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 20:10:21.482431   32455 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19667-7671/.minikube/bin
	I0918 20:10:21.482990   32455 out.go:352] Setting JSON to false
	I0918 20:10:21.483887   32455 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3165,"bootTime":1726687056,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0918 20:10:21.483987   32455 start.go:139] virtualization: kvm guest
	I0918 20:10:21.486482   32455 out.go:177] * [ha-091565] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0918 20:10:21.487917   32455 out.go:177]   - MINIKUBE_LOCATION=19667
	I0918 20:10:21.487906   32455 notify.go:220] Checking for updates...
	I0918 20:10:21.489679   32455 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 20:10:21.491004   32455 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19667-7671/kubeconfig
	I0918 20:10:21.492533   32455 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19667-7671/.minikube
	I0918 20:10:21.493829   32455 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0918 20:10:21.495121   32455 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0918 20:10:21.497006   32455 config.go:182] Loaded profile config "ha-091565": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0918 20:10:21.497147   32455 driver.go:394] Setting default libvirt URI to qemu:///system
	I0918 20:10:21.497582   32455 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 20:10:21.497629   32455 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 20:10:21.512693   32455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39699
	I0918 20:10:21.513187   32455 main.go:141] libmachine: () Calling .GetVersion
	I0918 20:10:21.513781   32455 main.go:141] libmachine: Using API Version  1
	I0918 20:10:21.513810   32455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 20:10:21.514148   32455 main.go:141] libmachine: () Calling .GetMachineName
	I0918 20:10:21.514295   32455 main.go:141] libmachine: (ha-091565) Calling .DriverName
	I0918 20:10:21.551072   32455 out.go:177] * Using the kvm2 driver based on existing profile
	I0918 20:10:21.552619   32455 start.go:297] selected driver: kvm2
	I0918 20:10:21.552649   32455 start.go:901] validating driver "kvm2" against &{Name:ha-091565 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-091565 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.215 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.92 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.53 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.9 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 20:10:21.552853   32455 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 20:10:21.553185   32455 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 20:10:21.553252   32455 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19667-7671/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0918 20:10:21.569199   32455 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0918 20:10:21.569914   32455 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0918 20:10:21.569958   32455 cni.go:84] Creating CNI manager for ""
	I0918 20:10:21.570008   32455 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0918 20:10:21.570079   32455 start.go:340] cluster config:
	{Name:ha-091565 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-091565 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.215 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.92 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.53 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.9 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 20:10:21.570212   32455 iso.go:125] acquiring lock: {Name:mkbcf4d84f3091567a39802cd5e773cb10b8c545 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 20:10:21.572456   32455 out.go:177] * Starting "ha-091565" primary control-plane node in "ha-091565" cluster
	I0918 20:10:21.574020   32455 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0918 20:10:21.574096   32455 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0918 20:10:21.574109   32455 cache.go:56] Caching tarball of preloaded images
	I0918 20:10:21.574206   32455 preload.go:172] Found /home/jenkins/minikube-integration/19667-7671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0918 20:10:21.574218   32455 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0918 20:10:21.574331   32455 profile.go:143] Saving config to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/config.json ...
	I0918 20:10:21.574532   32455 start.go:360] acquireMachinesLock for ha-091565: {Name:mk1b0d68a0b7d9d4bc8204d5d81cb9eb77222526 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 20:10:21.574573   32455 start.go:364] duration metric: took 21.936µs to acquireMachinesLock for "ha-091565"
	I0918 20:10:21.574590   32455 start.go:96] Skipping create...Using existing machine configuration
	I0918 20:10:21.574614   32455 fix.go:54] fixHost starting: 
	I0918 20:10:21.574862   32455 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 20:10:21.574893   32455 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 20:10:21.590607   32455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44505
	I0918 20:10:21.591037   32455 main.go:141] libmachine: () Calling .GetVersion
	I0918 20:10:21.591654   32455 main.go:141] libmachine: Using API Version  1
	I0918 20:10:21.591681   32455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 20:10:21.592033   32455 main.go:141] libmachine: () Calling .GetMachineName
	I0918 20:10:21.592216   32455 main.go:141] libmachine: (ha-091565) Calling .DriverName
	I0918 20:10:21.592370   32455 main.go:141] libmachine: (ha-091565) Calling .GetState
	I0918 20:10:21.594179   32455 fix.go:112] recreateIfNeeded on ha-091565: state=Running err=<nil>
	W0918 20:10:21.594208   32455 fix.go:138] unexpected machine state, will restart: <nil>
	I0918 20:10:21.596463   32455 out.go:177] * Updating the running kvm2 "ha-091565" VM ...
	I0918 20:10:21.598032   32455 machine.go:93] provisionDockerMachine start ...
	I0918 20:10:21.598057   32455 main.go:141] libmachine: (ha-091565) Calling .DriverName
	I0918 20:10:21.598305   32455 main.go:141] libmachine: (ha-091565) Calling .GetSSHHostname
	I0918 20:10:21.600831   32455 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:10:21.601344   32455 main.go:141] libmachine: (ha-091565) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:13:d8", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:01:12 +0000 UTC Type:0 Mac:52:54:00:2a:13:d8 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-091565 Clientid:01:52:54:00:2a:13:d8}
	I0918 20:10:21.601368   32455 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined IP address 192.168.39.215 and MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:10:21.601609   32455 main.go:141] libmachine: (ha-091565) Calling .GetSSHPort
	I0918 20:10:21.601830   32455 main.go:141] libmachine: (ha-091565) Calling .GetSSHKeyPath
	I0918 20:10:21.601979   32455 main.go:141] libmachine: (ha-091565) Calling .GetSSHKeyPath
	I0918 20:10:21.602115   32455 main.go:141] libmachine: (ha-091565) Calling .GetSSHUsername
	I0918 20:10:21.602284   32455 main.go:141] libmachine: Using SSH client type: native
	I0918 20:10:21.602489   32455 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I0918 20:10:21.602500   32455 main.go:141] libmachine: About to run SSH command:
	hostname
	I0918 20:10:21.720989   32455 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-091565
	
	I0918 20:10:21.721024   32455 main.go:141] libmachine: (ha-091565) Calling .GetMachineName
	I0918 20:10:21.721299   32455 buildroot.go:166] provisioning hostname "ha-091565"
	I0918 20:10:21.721327   32455 main.go:141] libmachine: (ha-091565) Calling .GetMachineName
	I0918 20:10:21.721529   32455 main.go:141] libmachine: (ha-091565) Calling .GetSSHHostname
	I0918 20:10:21.724505   32455 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:10:21.724849   32455 main.go:141] libmachine: (ha-091565) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:13:d8", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:01:12 +0000 UTC Type:0 Mac:52:54:00:2a:13:d8 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-091565 Clientid:01:52:54:00:2a:13:d8}
	I0918 20:10:21.724878   32455 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined IP address 192.168.39.215 and MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:10:21.725080   32455 main.go:141] libmachine: (ha-091565) Calling .GetSSHPort
	I0918 20:10:21.725276   32455 main.go:141] libmachine: (ha-091565) Calling .GetSSHKeyPath
	I0918 20:10:21.725445   32455 main.go:141] libmachine: (ha-091565) Calling .GetSSHKeyPath
	I0918 20:10:21.725599   32455 main.go:141] libmachine: (ha-091565) Calling .GetSSHUsername
	I0918 20:10:21.725814   32455 main.go:141] libmachine: Using SSH client type: native
	I0918 20:10:21.726027   32455 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I0918 20:10:21.726046   32455 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-091565 && echo "ha-091565" | sudo tee /etc/hostname
	I0918 20:10:21.855963   32455 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-091565
	
	I0918 20:10:21.855996   32455 main.go:141] libmachine: (ha-091565) Calling .GetSSHHostname
	I0918 20:10:21.858764   32455 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:10:21.859181   32455 main.go:141] libmachine: (ha-091565) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:13:d8", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:01:12 +0000 UTC Type:0 Mac:52:54:00:2a:13:d8 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-091565 Clientid:01:52:54:00:2a:13:d8}
	I0918 20:10:21.859204   32455 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined IP address 192.168.39.215 and MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:10:21.859418   32455 main.go:141] libmachine: (ha-091565) Calling .GetSSHPort
	I0918 20:10:21.859676   32455 main.go:141] libmachine: (ha-091565) Calling .GetSSHKeyPath
	I0918 20:10:21.859851   32455 main.go:141] libmachine: (ha-091565) Calling .GetSSHKeyPath
	I0918 20:10:21.859957   32455 main.go:141] libmachine: (ha-091565) Calling .GetSSHUsername
	I0918 20:10:21.860107   32455 main.go:141] libmachine: Using SSH client type: native
	I0918 20:10:21.860281   32455 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I0918 20:10:21.860296   32455 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-091565' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-091565/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-091565' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0918 20:10:21.977116   32455 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0918 20:10:21.977154   32455 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19667-7671/.minikube CaCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19667-7671/.minikube}
	I0918 20:10:21.977184   32455 buildroot.go:174] setting up certificates
	I0918 20:10:21.977195   32455 provision.go:84] configureAuth start
	I0918 20:10:21.977208   32455 main.go:141] libmachine: (ha-091565) Calling .GetMachineName
	I0918 20:10:21.977486   32455 main.go:141] libmachine: (ha-091565) Calling .GetIP
	I0918 20:10:21.979778   32455 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:10:21.980131   32455 main.go:141] libmachine: (ha-091565) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:13:d8", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:01:12 +0000 UTC Type:0 Mac:52:54:00:2a:13:d8 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-091565 Clientid:01:52:54:00:2a:13:d8}
	I0918 20:10:21.980165   32455 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined IP address 192.168.39.215 and MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:10:21.980353   32455 main.go:141] libmachine: (ha-091565) Calling .GetSSHHostname
	I0918 20:10:21.982901   32455 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:10:21.983298   32455 main.go:141] libmachine: (ha-091565) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:13:d8", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:01:12 +0000 UTC Type:0 Mac:52:54:00:2a:13:d8 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-091565 Clientid:01:52:54:00:2a:13:d8}
	I0918 20:10:21.983323   32455 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined IP address 192.168.39.215 and MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:10:21.983446   32455 provision.go:143] copyHostCerts
	I0918 20:10:21.983475   32455 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem
	I0918 20:10:21.983511   32455 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem, removing ...
	I0918 20:10:21.983518   32455 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem
	I0918 20:10:21.983600   32455 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem (1082 bytes)
	I0918 20:10:21.983698   32455 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem
	I0918 20:10:21.983723   32455 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem, removing ...
	I0918 20:10:21.983733   32455 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem
	I0918 20:10:21.983771   32455 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem (1123 bytes)
	I0918 20:10:21.983828   32455 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem
	I0918 20:10:21.983852   32455 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem, removing ...
	I0918 20:10:21.983861   32455 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem
	I0918 20:10:21.983893   32455 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem (1679 bytes)
	I0918 20:10:21.983958   32455 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem org=jenkins.ha-091565 san=[127.0.0.1 192.168.39.215 ha-091565 localhost minikube]
	I0918 20:10:22.062750   32455 provision.go:177] copyRemoteCerts
	I0918 20:10:22.062812   32455 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0918 20:10:22.062834   32455 main.go:141] libmachine: (ha-091565) Calling .GetSSHHostname
	I0918 20:10:22.065869   32455 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:10:22.066235   32455 main.go:141] libmachine: (ha-091565) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:13:d8", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:01:12 +0000 UTC Type:0 Mac:52:54:00:2a:13:d8 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-091565 Clientid:01:52:54:00:2a:13:d8}
	I0918 20:10:22.066265   32455 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined IP address 192.168.39.215 and MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:10:22.066465   32455 main.go:141] libmachine: (ha-091565) Calling .GetSSHPort
	I0918 20:10:22.066661   32455 main.go:141] libmachine: (ha-091565) Calling .GetSSHKeyPath
	I0918 20:10:22.066853   32455 main.go:141] libmachine: (ha-091565) Calling .GetSSHUsername
	I0918 20:10:22.066948   32455 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565/id_rsa Username:docker}
	I0918 20:10:22.154548   32455 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0918 20:10:22.154645   32455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0918 20:10:22.181982   32455 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0918 20:10:22.182060   32455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0918 20:10:22.209319   32455 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0918 20:10:22.209408   32455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0918 20:10:22.235852   32455 provision.go:87] duration metric: took 258.644873ms to configureAuth
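The configureAuth step above regenerates the docker-machine server certificate with the SAN set logged earlier (127.0.0.1, 192.168.39.215, ha-091565, localhost, minikube) and copies server.pem, server-key.pem and ca.pem into /etc/docker on the node. A hedged spot-check of the deployed certificate, run over SSH on the node, would be:

  $ sudo openssl x509 -noout -text -in /etc/docker/server.pem | grep -A1 'Subject Alternative Name'
  # expect the IP and DNS entries listed in the san=[...] line above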
	I0918 20:10:22.235880   32455 buildroot.go:189] setting minikube options for container-runtime
	I0918 20:10:22.236179   32455 config.go:182] Loaded profile config "ha-091565": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0918 20:10:22.236274   32455 main.go:141] libmachine: (ha-091565) Calling .GetSSHHostname
	I0918 20:10:22.238668   32455 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:10:22.238999   32455 main.go:141] libmachine: (ha-091565) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:13:d8", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:01:12 +0000 UTC Type:0 Mac:52:54:00:2a:13:d8 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-091565 Clientid:01:52:54:00:2a:13:d8}
	I0918 20:10:22.239017   32455 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined IP address 192.168.39.215 and MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:10:22.239210   32455 main.go:141] libmachine: (ha-091565) Calling .GetSSHPort
	I0918 20:10:22.239408   32455 main.go:141] libmachine: (ha-091565) Calling .GetSSHKeyPath
	I0918 20:10:22.239537   32455 main.go:141] libmachine: (ha-091565) Calling .GetSSHKeyPath
	I0918 20:10:22.239650   32455 main.go:141] libmachine: (ha-091565) Calling .GetSSHUsername
	I0918 20:10:22.239799   32455 main.go:141] libmachine: Using SSH client type: native
	I0918 20:10:22.240008   32455 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I0918 20:10:22.240046   32455 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0918 20:11:53.113384   32455 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0918 20:11:53.113412   32455 machine.go:96] duration metric: took 1m31.515364109s to provisionDockerMachine
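Note on the 1m31.5s provisioning time reported above: the SSH command that writes /etc/sysconfig/crio.minikube and then runs systemctl restart crio is issued at 20:10:22 and only returns at 20:11:53, so that single restart accounts for nearly all of the duration. A hedged way to inspect the restart on the node, if this needs triaging, is:

  $ sudo journalctl -u crio --no-pager -n 100   # recent crio unit logs around the restart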
	I0918 20:11:53.113422   32455 start.go:293] postStartSetup for "ha-091565" (driver="kvm2")
	I0918 20:11:53.113432   32455 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0918 20:11:53.113447   32455 main.go:141] libmachine: (ha-091565) Calling .DriverName
	I0918 20:11:53.113763   32455 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0918 20:11:53.113791   32455 main.go:141] libmachine: (ha-091565) Calling .GetSSHHostname
	I0918 20:11:53.116790   32455 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:11:53.117170   32455 main.go:141] libmachine: (ha-091565) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:13:d8", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:01:12 +0000 UTC Type:0 Mac:52:54:00:2a:13:d8 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-091565 Clientid:01:52:54:00:2a:13:d8}
	I0918 20:11:53.117201   32455 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined IP address 192.168.39.215 and MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:11:53.117343   32455 main.go:141] libmachine: (ha-091565) Calling .GetSSHPort
	I0918 20:11:53.117540   32455 main.go:141] libmachine: (ha-091565) Calling .GetSSHKeyPath
	I0918 20:11:53.117794   32455 main.go:141] libmachine: (ha-091565) Calling .GetSSHUsername
	I0918 20:11:53.117929   32455 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565/id_rsa Username:docker}
	I0918 20:11:53.203998   32455 ssh_runner.go:195] Run: cat /etc/os-release
	I0918 20:11:53.208185   32455 info.go:137] Remote host: Buildroot 2023.02.9
	I0918 20:11:53.208209   32455 filesync.go:126] Scanning /home/jenkins/minikube-integration/19667-7671/.minikube/addons for local assets ...
	I0918 20:11:53.208267   32455 filesync.go:126] Scanning /home/jenkins/minikube-integration/19667-7671/.minikube/files for local assets ...
	I0918 20:11:53.208345   32455 filesync.go:149] local asset: /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem -> 148782.pem in /etc/ssl/certs
	I0918 20:11:53.208358   32455 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem -> /etc/ssl/certs/148782.pem
	I0918 20:11:53.208461   32455 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0918 20:11:53.217739   32455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem --> /etc/ssl/certs/148782.pem (1708 bytes)
	I0918 20:11:53.242008   32455 start.go:296] duration metric: took 128.571381ms for postStartSetup
	I0918 20:11:53.242077   32455 main.go:141] libmachine: (ha-091565) Calling .DriverName
	I0918 20:11:53.242349   32455 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0918 20:11:53.242459   32455 main.go:141] libmachine: (ha-091565) Calling .GetSSHHostname
	I0918 20:11:53.245241   32455 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:11:53.245696   32455 main.go:141] libmachine: (ha-091565) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:13:d8", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:01:12 +0000 UTC Type:0 Mac:52:54:00:2a:13:d8 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-091565 Clientid:01:52:54:00:2a:13:d8}
	I0918 20:11:53.245724   32455 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined IP address 192.168.39.215 and MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:11:53.245849   32455 main.go:141] libmachine: (ha-091565) Calling .GetSSHPort
	I0918 20:11:53.246040   32455 main.go:141] libmachine: (ha-091565) Calling .GetSSHKeyPath
	I0918 20:11:53.246184   32455 main.go:141] libmachine: (ha-091565) Calling .GetSSHUsername
	I0918 20:11:53.246314   32455 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565/id_rsa Username:docker}
	W0918 20:11:53.330601   32455 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0918 20:11:53.330629   32455 fix.go:56] duration metric: took 1m31.756024995s for fixHost
	I0918 20:11:53.330649   32455 main.go:141] libmachine: (ha-091565) Calling .GetSSHHostname
	I0918 20:11:53.332906   32455 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:11:53.333200   32455 main.go:141] libmachine: (ha-091565) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:13:d8", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:01:12 +0000 UTC Type:0 Mac:52:54:00:2a:13:d8 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-091565 Clientid:01:52:54:00:2a:13:d8}
	I0918 20:11:53.333224   32455 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined IP address 192.168.39.215 and MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:11:53.333399   32455 main.go:141] libmachine: (ha-091565) Calling .GetSSHPort
	I0918 20:11:53.333583   32455 main.go:141] libmachine: (ha-091565) Calling .GetSSHKeyPath
	I0918 20:11:53.333727   32455 main.go:141] libmachine: (ha-091565) Calling .GetSSHKeyPath
	I0918 20:11:53.333867   32455 main.go:141] libmachine: (ha-091565) Calling .GetSSHUsername
	I0918 20:11:53.334017   32455 main.go:141] libmachine: Using SSH client type: native
	I0918 20:11:53.334209   32455 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I0918 20:11:53.334222   32455 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0918 20:11:53.448740   32455 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726690313.409802921
	
	I0918 20:11:53.448766   32455 fix.go:216] guest clock: 1726690313.409802921
	I0918 20:11:53.448774   32455 fix.go:229] Guest: 2024-09-18 20:11:53.409802921 +0000 UTC Remote: 2024-09-18 20:11:53.330635796 +0000 UTC m=+91.884985084 (delta=79.167125ms)
	I0918 20:11:53.448798   32455 fix.go:200] guest clock delta is within tolerance: 79.167125ms
	I0918 20:11:53.448803   32455 start.go:83] releasing machines lock for "ha-091565", held for 1m31.874221941s
	I0918 20:11:53.448825   32455 main.go:141] libmachine: (ha-091565) Calling .DriverName
	I0918 20:11:53.449079   32455 main.go:141] libmachine: (ha-091565) Calling .GetIP
	I0918 20:11:53.451803   32455 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:11:53.452169   32455 main.go:141] libmachine: (ha-091565) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:13:d8", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:01:12 +0000 UTC Type:0 Mac:52:54:00:2a:13:d8 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-091565 Clientid:01:52:54:00:2a:13:d8}
	I0918 20:11:53.452193   32455 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined IP address 192.168.39.215 and MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:11:53.452344   32455 main.go:141] libmachine: (ha-091565) Calling .DriverName
	I0918 20:11:53.452808   32455 main.go:141] libmachine: (ha-091565) Calling .DriverName
	I0918 20:11:53.452970   32455 main.go:141] libmachine: (ha-091565) Calling .DriverName
	I0918 20:11:53.453048   32455 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0918 20:11:53.453082   32455 main.go:141] libmachine: (ha-091565) Calling .GetSSHHostname
	I0918 20:11:53.453159   32455 ssh_runner.go:195] Run: cat /version.json
	I0918 20:11:53.453178   32455 main.go:141] libmachine: (ha-091565) Calling .GetSSHHostname
	I0918 20:11:53.455562   32455 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:11:53.455911   32455 main.go:141] libmachine: (ha-091565) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:13:d8", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:01:12 +0000 UTC Type:0 Mac:52:54:00:2a:13:d8 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-091565 Clientid:01:52:54:00:2a:13:d8}
	I0918 20:11:53.455938   32455 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined IP address 192.168.39.215 and MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:11:53.456043   32455 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:11:53.456083   32455 main.go:141] libmachine: (ha-091565) Calling .GetSSHPort
	I0918 20:11:53.456277   32455 main.go:141] libmachine: (ha-091565) Calling .GetSSHKeyPath
	I0918 20:11:53.456465   32455 main.go:141] libmachine: (ha-091565) Calling .GetSSHUsername
	I0918 20:11:53.456518   32455 main.go:141] libmachine: (ha-091565) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:13:d8", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:01:12 +0000 UTC Type:0 Mac:52:54:00:2a:13:d8 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-091565 Clientid:01:52:54:00:2a:13:d8}
	I0918 20:11:53.456553   32455 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined IP address 192.168.39.215 and MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:11:53.456579   32455 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565/id_rsa Username:docker}
	I0918 20:11:53.456708   32455 main.go:141] libmachine: (ha-091565) Calling .GetSSHPort
	I0918 20:11:53.456830   32455 main.go:141] libmachine: (ha-091565) Calling .GetSSHKeyPath
	I0918 20:11:53.456953   32455 main.go:141] libmachine: (ha-091565) Calling .GetSSHUsername
	I0918 20:11:53.457079   32455 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/ha-091565/id_rsa Username:docker}
	I0918 20:11:53.537826   32455 ssh_runner.go:195] Run: systemctl --version
	I0918 20:11:53.575920   32455 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0918 20:11:53.735535   32455 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0918 20:11:53.742218   32455 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0918 20:11:53.742292   32455 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0918 20:11:53.751629   32455 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0918 20:11:53.751651   32455 start.go:495] detecting cgroup driver to use...
	I0918 20:11:53.751704   32455 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0918 20:11:53.771495   32455 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0918 20:11:53.786858   32455 docker.go:217] disabling cri-docker service (if available) ...
	I0918 20:11:53.786912   32455 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0918 20:11:53.801820   32455 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0918 20:11:53.817023   32455 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0918 20:11:53.989412   32455 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0918 20:11:54.151681   32455 docker.go:233] disabling docker service ...
	I0918 20:11:54.151754   32455 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0918 20:11:54.167751   32455 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0918 20:11:54.181807   32455 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0918 20:11:54.331455   32455 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0918 20:11:54.479073   32455 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0918 20:11:54.494246   32455 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0918 20:11:54.514069   32455 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0918 20:11:54.514139   32455 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 20:11:54.524478   32455 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0918 20:11:54.524555   32455 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 20:11:54.534711   32455 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 20:11:54.544229   32455 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 20:11:54.554255   32455 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0918 20:11:54.565072   32455 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 20:11:54.575438   32455 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 20:11:54.587587   32455 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 20:11:54.598966   32455 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0918 20:11:54.608188   32455 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0918 20:11:54.617777   32455 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 20:11:54.765099   32455 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0918 20:12:03.790354   32455 ssh_runner.go:235] Completed: sudo systemctl restart crio: (9.025214168s)
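The sed invocations above rewrite the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf before the 9s restart: the pause image is pinned to registry.k8s.io/pause:3.10, cgroup_manager is set to cgroupfs, conmon_cgroup to "pod", and net.ipv4.ip_unprivileged_port_start=0 is injected into default_sysctls. The resulting file is not captured in the log; a hedged check on the node (assuming the stock minikube 02-crio.conf layout) would be:

  $ sudo grep -nE 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
  # expected values (assumption): pause_image = "registry.k8s.io/pause:3.10", cgroup_manager = "cgroupfs",
  #                               conmon_cgroup = "pod", "net.ipv4.ip_unprivileged_port_start=0" under default_sysctls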
	I0918 20:12:03.790385   32455 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0918 20:12:03.790475   32455 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0918 20:12:03.796261   32455 start.go:563] Will wait 60s for crictl version
	I0918 20:12:03.796336   32455 ssh_runner.go:195] Run: which crictl
	I0918 20:12:03.800042   32455 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0918 20:12:03.843037   32455 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0918 20:12:03.843108   32455 ssh_runner.go:195] Run: crio --version
	I0918 20:12:03.871985   32455 ssh_runner.go:195] Run: crio --version
	I0918 20:12:03.901566   32455 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0918 20:12:03.902558   32455 main.go:141] libmachine: (ha-091565) Calling .GetIP
	I0918 20:12:03.905119   32455 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:12:03.905425   32455 main.go:141] libmachine: (ha-091565) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:13:d8", ip: ""} in network mk-ha-091565: {Iface:virbr1 ExpiryTime:2024-09-18 21:01:12 +0000 UTC Type:0 Mac:52:54:00:2a:13:d8 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-091565 Clientid:01:52:54:00:2a:13:d8}
	I0918 20:12:03.905455   32455 main.go:141] libmachine: (ha-091565) DBG | domain ha-091565 has defined IP address 192.168.39.215 and MAC address 52:54:00:2a:13:d8 in network mk-ha-091565
	I0918 20:12:03.905699   32455 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0918 20:12:03.910325   32455 kubeadm.go:883] updating cluster {Name:ha-091565 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cl
usterName:ha-091565 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.215 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.92 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.53 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.9 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshp
od:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountG
ID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0918 20:12:03.910457   32455 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0918 20:12:03.910508   32455 ssh_runner.go:195] Run: sudo crictl images --output json
	I0918 20:12:03.951827   32455 crio.go:514] all images are preloaded for cri-o runtime.
	I0918 20:12:03.951848   32455 crio.go:433] Images already preloaded, skipping extraction
	I0918 20:12:03.951893   32455 ssh_runner.go:195] Run: sudo crictl images --output json
	I0918 20:12:03.993512   32455 crio.go:514] all images are preloaded for cri-o runtime.
	I0918 20:12:03.993576   32455 cache_images.go:84] Images are preloaded, skipping loading
	I0918 20:12:03.993597   32455 kubeadm.go:934] updating node { 192.168.39.215 8443 v1.31.1 crio true true} ...
	I0918 20:12:03.993718   32455 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-091565 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.215
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-091565 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
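The kubelet unit fragment above is delivered as a systemd drop-in (the 309-byte 10-kubeadm.conf scp'd further down in the log). A hedged way to view the merged unit on the node and confirm the ExecStart override took effect:

  $ systemctl cat kubelet   # shows kubelet.service plus /etc/systemd/system/kubelet.service.d/10-kubeadm.conf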
	I0918 20:12:03.993835   32455 ssh_runner.go:195] Run: crio config
	I0918 20:12:04.043625   32455 cni.go:84] Creating CNI manager for ""
	I0918 20:12:04.043646   32455 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0918 20:12:04.043654   32455 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0918 20:12:04.043672   32455 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.215 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-091565 NodeName:ha-091565 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.215"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.215 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0918 20:12:04.043797   32455 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.215
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-091565"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.215
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.215"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
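The InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration rendered above are written to /var/tmp/minikube/kubeadm.yaml.new further down in the log. If a config regression is suspected, a hedged manual validation on the node (kubeadm v1.31 ships the config validate subcommand) would be:

  $ sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new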
	
	I0918 20:12:04.043823   32455 kube-vip.go:115] generating kube-vip config ...
	I0918 20:12:04.043867   32455 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0918 20:12:04.055518   32455 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0918 20:12:04.055641   32455 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
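The kube-vip static pod above holds the control-plane VIP 192.168.39.254 on eth0 with leader election and load-balances port 8443 across the control planes. Two hedged manual checks from any control-plane node while triaging an HA failure:

  $ ip -4 addr show dev eth0 | grep 192.168.39.254        # the current kube-vip leader should hold the VIP
  $ curl -sk https://192.168.39.254:8443/healthz; echo     # typically returns ok once an apiserver is reachable via the VIP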
	I0918 20:12:04.055708   32455 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0918 20:12:04.065667   32455 binaries.go:44] Found k8s binaries, skipping transfer
	I0918 20:12:04.065763   32455 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0918 20:12:04.075450   32455 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0918 20:12:04.092630   32455 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0918 20:12:04.108555   32455 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0918 20:12:04.124623   32455 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0918 20:12:04.141409   32455 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0918 20:12:04.145785   32455 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 20:12:04.307857   32455 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0918 20:12:04.324737   32455 certs.go:68] Setting up /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565 for IP: 192.168.39.215
	I0918 20:12:04.324773   32455 certs.go:194] generating shared ca certs ...
	I0918 20:12:04.324789   32455 certs.go:226] acquiring lock for ca certs: {Name:mkf54e0e71ed884c4372f9d3d4cb308ea4600185 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 20:12:04.324986   32455 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.key
	I0918 20:12:04.325053   32455 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.key
	I0918 20:12:04.325069   32455 certs.go:256] generating profile certs ...
	I0918 20:12:04.325185   32455 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/client.key
	I0918 20:12:04.325226   32455 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.key.c0a0a625
	I0918 20:12:04.325256   32455 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.crt.c0a0a625 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.215 192.168.39.92 192.168.39.53 192.168.39.254]
	I0918 20:12:04.445574   32455 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.crt.c0a0a625 ...
	I0918 20:12:04.445613   32455 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.crt.c0a0a625: {Name:mk5247af31881f8e5c986030d6d12b4e48e9acab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 20:12:04.445801   32455 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.key.c0a0a625 ...
	I0918 20:12:04.445832   32455 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.key.c0a0a625: {Name:mkef42b095c922258fa0861a13f6b4883289befd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 20:12:04.445913   32455 certs.go:381] copying /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.crt.c0a0a625 -> /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.crt
	I0918 20:12:04.446062   32455 certs.go:385] copying /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.key.c0a0a625 -> /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.key
	I0918 20:12:04.446192   32455 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/proxy-client.key
	I0918 20:12:04.446207   32455 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0918 20:12:04.446220   32455 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0918 20:12:04.446232   32455 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0918 20:12:04.446242   32455 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0918 20:12:04.446255   32455 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0918 20:12:04.446265   32455 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0918 20:12:04.446277   32455 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0918 20:12:04.446287   32455 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0918 20:12:04.446348   32455 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878.pem (1338 bytes)
	W0918 20:12:04.446376   32455 certs.go:480] ignoring /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878_empty.pem, impossibly tiny 0 bytes
	I0918 20:12:04.446383   32455 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem (1675 bytes)
	I0918 20:12:04.446404   32455 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem (1082 bytes)
	I0918 20:12:04.446427   32455 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem (1123 bytes)
	I0918 20:12:04.446448   32455 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem (1679 bytes)
	I0918 20:12:04.446483   32455 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem (1708 bytes)
	I0918 20:12:04.446508   32455 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0918 20:12:04.446522   32455 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878.pem -> /usr/share/ca-certificates/14878.pem
	I0918 20:12:04.446534   32455 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem -> /usr/share/ca-certificates/148782.pem
	I0918 20:12:04.447155   32455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0918 20:12:04.472997   32455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0918 20:12:04.498649   32455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0918 20:12:04.523077   32455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0918 20:12:04.548688   32455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0918 20:12:04.573643   32455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0918 20:12:04.599072   32455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0918 20:12:04.624269   32455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/ha-091565/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0918 20:12:04.650005   32455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0918 20:12:04.676083   32455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878.pem --> /usr/share/ca-certificates/14878.pem (1338 bytes)
	I0918 20:12:04.702552   32455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem --> /usr/share/ca-certificates/148782.pem (1708 bytes)
	I0918 20:12:04.727499   32455 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0918 20:12:04.745602   32455 ssh_runner.go:195] Run: openssl version
	I0918 20:12:04.752166   32455 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14878.pem && ln -fs /usr/share/ca-certificates/14878.pem /etc/ssl/certs/14878.pem"
	I0918 20:12:04.764693   32455 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14878.pem
	I0918 20:12:04.770506   32455 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 18 19:57 /usr/share/ca-certificates/14878.pem
	I0918 20:12:04.770593   32455 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14878.pem
	I0918 20:12:04.777041   32455 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14878.pem /etc/ssl/certs/51391683.0"
	I0918 20:12:04.787475   32455 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148782.pem && ln -fs /usr/share/ca-certificates/148782.pem /etc/ssl/certs/148782.pem"
	I0918 20:12:04.800141   32455 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148782.pem
	I0918 20:12:04.804982   32455 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 18 19:57 /usr/share/ca-certificates/148782.pem
	I0918 20:12:04.805052   32455 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148782.pem
	I0918 20:12:04.811226   32455 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148782.pem /etc/ssl/certs/3ec20f2e.0"
	I0918 20:12:04.822504   32455 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0918 20:12:04.834670   32455 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0918 20:12:04.839387   32455 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 18 19:39 /usr/share/ca-certificates/minikubeCA.pem
	I0918 20:12:04.839440   32455 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0918 20:12:04.845389   32455 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0918 20:12:04.857297   32455 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0918 20:12:04.862368   32455 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0918 20:12:04.868685   32455 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0918 20:12:04.875055   32455 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0918 20:12:04.881248   32455 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0918 20:12:04.887784   32455 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0918 20:12:04.894760   32455 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
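
	The checks above run `openssl x509 -noout -in <cert> -checkend 86400` against each control-plane certificate, i.e. they ask whether the certificate will still be valid 24 hours from now before it is reused. A minimal Go sketch of an equivalent check follows; the path and helper name are illustrative, not minikube's actual code.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// certExpiresWithin reports whether the PEM certificate at path expires
// inside the given window (the condition `-checkend <seconds>` tests).
func certExpiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	// Hypothetical path, mirroring one of the certificates checked in the log above.
	expiring, err := certExpiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", expiring)
}
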
	I0918 20:12:04.901019   32455 kubeadm.go:392] StartCluster: {Name:ha-091565 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Clust
erName:ha-091565 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.215 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.92 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.53 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.9 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:
false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:
docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 20:12:04.901132   32455 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0918 20:12:04.901185   32455 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0918 20:12:04.942096   32455 cri.go:89] found id: "dbecd227f3ec46402e8caa90011eda748aa22f0504b6d888270ba095a12c9b89"
	I0918 20:12:04.942124   32455 cri.go:89] found id: "07e1934aefd631296f5de1012cce3d05a901f2aae648317c3b359efd462f870b"
	I0918 20:12:04.942128   32455 cri.go:89] found id: "566b1afb1702d39bc1691911941f883e32db2b47959f624be519dbf4fbc79f71"
	I0918 20:12:04.942131   32455 cri.go:89] found id: "4f8cab8eef59352ca937c9b8e7056a419bb426b183a841f9e8a43366ecb1b283"
	I0918 20:12:04.942134   32455 cri.go:89] found id: "26162985f4a281a161b88fc27b5cf0dbedef6117a63f0b64cef28f41346dbd3e"
	I0918 20:12:04.942136   32455 cri.go:89] found id: "9b5c6773eef44d848fd26ebcf13ea82ebc3c1e1bd23618a944bf5f6d6a0e7bb8"
	I0918 20:12:04.942139   32455 cri.go:89] found id: "52ae20a53e17beece735496b8e65537bbebfef5da53e8a2e7a74820fee56cd63"
	I0918 20:12:04.942142   32455 cri.go:89] found id: "c9aa80c6b1f558ba9ef5b7e70df3690995e271e9296b0cc3e71ade739843f53a"
	I0918 20:12:04.942144   32455 cri.go:89] found id: "f40b55a2539761c632fe964601680ae492c74cf2cad4ef1130393c06f576f943"
	I0918 20:12:04.942151   32455 cri.go:89] found id: "8c435dbd5b540cdb87d1f79fc2aadca19db9c46aff42e67d78c5d9c8eee1b6de"
	I0918 20:12:04.942154   32455 cri.go:89] found id: "f141188bda32520e26d392de6e6e49a863d49aacb461b7f3d5649d68557e96d3"
	I0918 20:12:04.942156   32455 cri.go:89] found id: "4358e16fe123bb31210776fce1969072fb954b1f7cc3b51c459cd155992c1be5"
	I0918 20:12:04.942159   32455 cri.go:89] found id: "97b3f8978c2594307cdfef72e8b8aebac842152f9c1893bb34700b6e0932027e"
	I0918 20:12:04.942161   32455 cri.go:89] found id: ""
	I0918 20:12:04.942224   32455 ssh_runner.go:195] Run: sudo runc list -f json
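
	Before restarting the cluster, cri.go enumerates the existing kube-system containers by shelling out to `crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system` and recording each returned ID (the "found id" lines above). A minimal sketch of that pattern, assuming crictl and sudo are available on the node; the helper name is illustrative.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listKubeSystemContainers runs crictl with a namespace label filter and
// returns the container IDs it prints, one per line.
func listKubeSystemContainers() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(string(out), "\n") {
		if id := strings.TrimSpace(line); id != "" {
			ids = append(ids, id)
		}
	}
	return ids, nil
}

func main() {
	ids, err := listKubeSystemContainers()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	for _, id := range ids {
		fmt.Println("found id:", id)
	}
}
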
	
	
	==> CRI-O <==
	Sep 18 20:17:11 ha-091565 crio[3611]: time="2024-09-18 20:17:11.171528472Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726690631171504467,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5a13aa99-4485-421d-af61-e1ed023ca5b1 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 20:17:11 ha-091565 crio[3611]: time="2024-09-18 20:17:11.172159464Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=948c4945-f910-48c4-a832-77d2695f4d34 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 20:17:11 ha-091565 crio[3611]: time="2024-09-18 20:17:11.172221937Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=948c4945-f910-48c4-a832-77d2695f4d34 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 20:17:11 ha-091565 crio[3611]: time="2024-09-18 20:17:11.172608819Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6489130b5601ef43045b9ef5a8e330f80dae1c20724da7d02b8b61500bc9beaa,PodSandboxId:9259670422d45498b8f4e45909da18fa355ece853960532d620a5ca1f2e21efb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726690419383021088,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7dffb85-905b-4166-a680-34c77cf87d09,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e894eebbedc0aac646e7c24ccd43c7ae2e1a00a0275213aa3eaaf78ad5fddb8a,PodSandboxId:68e450d1a9d796861547badef070a688c9eca3e51393bcf3d8d4b48c0af80e45,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726690374390976616,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-091565,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb44b1ca5e1af25a31cc20be38506f2d,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3cbacf6046aded98d17f163f6e437ef37200f71fd333470b3f7b074463c80ca,PodSandboxId:6a98de6d0c34be9c61571054c6ee0cfa8df30c161e6da077b0fd3dce43a7c629,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726690374385528525,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-091565,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7f1e1feaa8e654300c84052131dd12a,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0025f965c449fa8005990c9d16e4bd975337d00b859f495b752751bf93bf4d29,PodSandboxId:9259670422d45498b8f4e45909da18fa355ece853960532d620a5ca1f2e21efb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726690365382779674,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7dffb85-905b-4166-a680-34c77cf87d09,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7deb697baa4b118571cf9f455fd13305bc42377ddcb5b0aaa75763ec49cd7955,PodSandboxId:a0b5345aa00cc911cf3ebf6d550cae0e8fae6aa83bba446cedb5885f19562a72,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726690364731464824,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-xhmzx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 16808919-56d0-40cd-b88e-28fb5a40b3a2,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:315019425fff0ed52e2f663404258df3b545845e85b4f1a0ea2251dccd135b0a,PodSandboxId:e542a3cfe8082184e2eed6dece495abd78e95812c484bea2401d760729c49c81,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726690344387758381,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-091565,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3d3d9af35a10992383331d3d45eaca9,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb5776304b68a91c9d294a1efc96b919d7969634a4d19019f4055a97a75eedaa,PodSandboxId:91eccec72bcc687b4484e04e667e2beac75982ded0203c1cc0d0f5e1fabb6a64,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726690331422271469,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7fl5w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c3a9d82-3815-4aa1-8d04-14be25394dcf,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:d23d190c3f7a24c8d76129882123229cdd01c0a6d63bdeca54dd4158a36f52f7,PodSandboxId:53c9d8623b3d890b0f48875319a05766114a9fdc3dadedeaf2f7011ca1bb054c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726690331516445402,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4wm6h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6904231-6f64-4447-9932-0cd5d692978b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbdc184d
26af7fa4411e4c2027e6de01f00fda961f753663405491da3f2ab52f,PodSandboxId:daf9a948e855e0b3418e648b0e08e5abe75783b75678d6d8e1ad1d450bf2aa04,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726690331363002835,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-w97kk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70428cd6-0523-44c8-89f3-62837b52ca80,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd37bc6079dc18794fdcec6cf524e952c17ed75bceccddac6dc5ef5026a7b0d3,PodSandboxId:68e450d1a9d796861547badef070a688c9eca3e51393bcf3d8d4b48c0af80e45,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726690331348615832,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-091565,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb44b1ca5e1af25a31cc20be38506f2d,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb8d0cf0ea1843751c86efea123f97cd6f5a08fcb1cee02f21311efe6ca1e526,PodSandboxId:0bc3148ce646e5567d6d3af8ff4813aa443127b853f2271f576167d9d4614c54,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726690331193449994,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-091565,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42a715c9fe466585ee83dba1966182d5,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f589735092f230bc5c0db779e10f8b0980c4e646dc42967ce96f9d511fd27e4,PodSandboxId:82ed01bc2dfae627a150f347d6f669f4e189caea575a71d9e6837201ef98ace7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726690331432050099,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8zcqk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 644e8147-96e9-41a1-99b8-d2de17e4798c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d455b7b8c960ee88939b22efd881e9063e3cf96fec30f9b6cbdfe3c5670a8b1d,PodSandboxId:6a98de6d0c34be9c61571054c6ee0cfa8df30c161e6da077b0fd3dce43a7c629,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726690331288188210,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-091565,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: d7f1e1feaa8e654300c84052131dd12a,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c820a119e934b2a93019ba436b2a4247a959cffd8b253b6758d96276a65aeaa7,PodSandboxId:efa9968c0e0eeaf14440259fb0869ec7455118c74385b559462409719c49d5e7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726690331217170862,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-091565,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed5bb72bdd2d49ac86ea107effb85714,},Ann
otations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e40397db0622fd3967d20535016d957560de3976f7ca7e977fa2b186ff9f3ec,PodSandboxId:32509037cc4e4898a637b1de6a87eabd6d7504444bcf3f541a5234fb1ed4197a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726689867249576018,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-xhmzx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 16808919-56d0-40cd-b88e-28fb5a40b3a2,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f8cab8eef59352ca937c9b8e7056a419bb426b183a841f9e8a43366ecb1b283,PodSandboxId:16c38fe68d94e679d8aaead9d1be6f411a6984d3a367813f390d2ffc174a7047,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726689721940654598,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8zcqk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 644e8147-96e9-41a1-99b8-d2de17e4798c,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b5c6773eef44d848fd26ebcf13ea82ebc3c1e1bd23618a944bf5f6d6a0e7bb8,PodSandboxId:b0c496c53b4c9e8442d0d8a6bcef8269a3baad2995c3647784331bd1b03a34df,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726689721549482980,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-w97kk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70428cd6-0523-44c8-89f3-62837b52ca80,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52ae20a53e17beece735496b8e65537bbebfef5da53e8a2e7a74820fee56cd63,PodSandboxId:e5053f7183e292c9c0d41caef5020bdc6b59ce14f38a3c98750b9667d61f9d14,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726689709419367332,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7fl5w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c3a9d82-3815-4aa1-8d04-14be25394dcf,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9aa80c6b1f558ba9ef5b7e70df3690995e271e9296b0cc3e71ade739843f53a,PodSandboxId:e7fdb7e540529f305bf2eb00c376fe4bdf92019e975ef0252046f5f4a250d965,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726689709057469326,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4wm6h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6904231-6f64-4447-9932-0cd5d692978b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c435dbd5b540cdb87d1f79fc2aadca19db9c46aff42e67d78c5d9c8eee1b6de,PodSandboxId:01b7098c9837574b56ca73c576ce61c56e6190d34e8f2c4814bae7a1f5952e5c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe
954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726689697296369486,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-091565,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42a715c9fe466585ee83dba1966182d5,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4358e16fe123bb31210776fce1969072fb954b1f7cc3b51c459cd155992c1be5,PodSandboxId:ae412aa32e14f8d37184d672500203218827aa56a04162cf4a4e1bde7a1c9833,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTA
INER_EXITED,CreatedAt:1726689697193701986,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-091565,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed5bb72bdd2d49ac86ea107effb85714,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=948c4945-f910-48c4-a832-77d2695f4d34 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 20:17:11 ha-091565 crio[3611]: time="2024-09-18 20:17:11.215789263Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d28b50bd-ee8c-479b-aa89-2b2b18b0f676 name=/runtime.v1.RuntimeService/Version
	Sep 18 20:17:11 ha-091565 crio[3611]: time="2024-09-18 20:17:11.215906438Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d28b50bd-ee8c-479b-aa89-2b2b18b0f676 name=/runtime.v1.RuntimeService/Version
	Sep 18 20:17:11 ha-091565 crio[3611]: time="2024-09-18 20:17:11.217070627Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=174b025e-543a-4ce3-abc4-ade6c8765e9f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 20:17:11 ha-091565 crio[3611]: time="2024-09-18 20:17:11.217506027Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726690631217481955,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=174b025e-543a-4ce3-abc4-ade6c8765e9f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 20:17:11 ha-091565 crio[3611]: time="2024-09-18 20:17:11.218176347Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bad39297-1e10-4681-8b06-b262f50a26d7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 20:17:11 ha-091565 crio[3611]: time="2024-09-18 20:17:11.218254809Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bad39297-1e10-4681-8b06-b262f50a26d7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 20:17:11 ha-091565 crio[3611]: time="2024-09-18 20:17:11.219191768Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6489130b5601ef43045b9ef5a8e330f80dae1c20724da7d02b8b61500bc9beaa,PodSandboxId:9259670422d45498b8f4e45909da18fa355ece853960532d620a5ca1f2e21efb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726690419383021088,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7dffb85-905b-4166-a680-34c77cf87d09,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e894eebbedc0aac646e7c24ccd43c7ae2e1a00a0275213aa3eaaf78ad5fddb8a,PodSandboxId:68e450d1a9d796861547badef070a688c9eca3e51393bcf3d8d4b48c0af80e45,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726690374390976616,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-091565,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb44b1ca5e1af25a31cc20be38506f2d,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3cbacf6046aded98d17f163f6e437ef37200f71fd333470b3f7b074463c80ca,PodSandboxId:6a98de6d0c34be9c61571054c6ee0cfa8df30c161e6da077b0fd3dce43a7c629,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726690374385528525,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-091565,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7f1e1feaa8e654300c84052131dd12a,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0025f965c449fa8005990c9d16e4bd975337d00b859f495b752751bf93bf4d29,PodSandboxId:9259670422d45498b8f4e45909da18fa355ece853960532d620a5ca1f2e21efb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726690365382779674,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7dffb85-905b-4166-a680-34c77cf87d09,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7deb697baa4b118571cf9f455fd13305bc42377ddcb5b0aaa75763ec49cd7955,PodSandboxId:a0b5345aa00cc911cf3ebf6d550cae0e8fae6aa83bba446cedb5885f19562a72,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726690364731464824,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-xhmzx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 16808919-56d0-40cd-b88e-28fb5a40b3a2,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:315019425fff0ed52e2f663404258df3b545845e85b4f1a0ea2251dccd135b0a,PodSandboxId:e542a3cfe8082184e2eed6dece495abd78e95812c484bea2401d760729c49c81,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726690344387758381,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-091565,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3d3d9af35a10992383331d3d45eaca9,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb5776304b68a91c9d294a1efc96b919d7969634a4d19019f4055a97a75eedaa,PodSandboxId:91eccec72bcc687b4484e04e667e2beac75982ded0203c1cc0d0f5e1fabb6a64,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726690331422271469,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7fl5w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c3a9d82-3815-4aa1-8d04-14be25394dcf,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:d23d190c3f7a24c8d76129882123229cdd01c0a6d63bdeca54dd4158a36f52f7,PodSandboxId:53c9d8623b3d890b0f48875319a05766114a9fdc3dadedeaf2f7011ca1bb054c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726690331516445402,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4wm6h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6904231-6f64-4447-9932-0cd5d692978b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbdc184d
26af7fa4411e4c2027e6de01f00fda961f753663405491da3f2ab52f,PodSandboxId:daf9a948e855e0b3418e648b0e08e5abe75783b75678d6d8e1ad1d450bf2aa04,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726690331363002835,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-w97kk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70428cd6-0523-44c8-89f3-62837b52ca80,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd37bc6079dc18794fdcec6cf524e952c17ed75bceccddac6dc5ef5026a7b0d3,PodSandboxId:68e450d1a9d796861547badef070a688c9eca3e51393bcf3d8d4b48c0af80e45,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726690331348615832,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-091565,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb44b1ca5e1af25a31cc20be38506f2d,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb8d0cf0ea1843751c86efea123f97cd6f5a08fcb1cee02f21311efe6ca1e526,PodSandboxId:0bc3148ce646e5567d6d3af8ff4813aa443127b853f2271f576167d9d4614c54,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726690331193449994,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-091565,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42a715c9fe466585ee83dba1966182d5,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f589735092f230bc5c0db779e10f8b0980c4e646dc42967ce96f9d511fd27e4,PodSandboxId:82ed01bc2dfae627a150f347d6f669f4e189caea575a71d9e6837201ef98ace7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726690331432050099,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8zcqk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 644e8147-96e9-41a1-99b8-d2de17e4798c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d455b7b8c960ee88939b22efd881e9063e3cf96fec30f9b6cbdfe3c5670a8b1d,PodSandboxId:6a98de6d0c34be9c61571054c6ee0cfa8df30c161e6da077b0fd3dce43a7c629,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726690331288188210,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-091565,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: d7f1e1feaa8e654300c84052131dd12a,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c820a119e934b2a93019ba436b2a4247a959cffd8b253b6758d96276a65aeaa7,PodSandboxId:efa9968c0e0eeaf14440259fb0869ec7455118c74385b559462409719c49d5e7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726690331217170862,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-091565,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed5bb72bdd2d49ac86ea107effb85714,},Ann
otations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e40397db0622fd3967d20535016d957560de3976f7ca7e977fa2b186ff9f3ec,PodSandboxId:32509037cc4e4898a637b1de6a87eabd6d7504444bcf3f541a5234fb1ed4197a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726689867249576018,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-xhmzx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 16808919-56d0-40cd-b88e-28fb5a40b3a2,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f8cab8eef59352ca937c9b8e7056a419bb426b183a841f9e8a43366ecb1b283,PodSandboxId:16c38fe68d94e679d8aaead9d1be6f411a6984d3a367813f390d2ffc174a7047,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726689721940654598,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8zcqk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 644e8147-96e9-41a1-99b8-d2de17e4798c,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b5c6773eef44d848fd26ebcf13ea82ebc3c1e1bd23618a944bf5f6d6a0e7bb8,PodSandboxId:b0c496c53b4c9e8442d0d8a6bcef8269a3baad2995c3647784331bd1b03a34df,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726689721549482980,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-w97kk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70428cd6-0523-44c8-89f3-62837b52ca80,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52ae20a53e17beece735496b8e65537bbebfef5da53e8a2e7a74820fee56cd63,PodSandboxId:e5053f7183e292c9c0d41caef5020bdc6b59ce14f38a3c98750b9667d61f9d14,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726689709419367332,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7fl5w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c3a9d82-3815-4aa1-8d04-14be25394dcf,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9aa80c6b1f558ba9ef5b7e70df3690995e271e9296b0cc3e71ade739843f53a,PodSandboxId:e7fdb7e540529f305bf2eb00c376fe4bdf92019e975ef0252046f5f4a250d965,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726689709057469326,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4wm6h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6904231-6f64-4447-9932-0cd5d692978b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c435dbd5b540cdb87d1f79fc2aadca19db9c46aff42e67d78c5d9c8eee1b6de,PodSandboxId:01b7098c9837574b56ca73c576ce61c56e6190d34e8f2c4814bae7a1f5952e5c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe
954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726689697296369486,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-091565,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42a715c9fe466585ee83dba1966182d5,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4358e16fe123bb31210776fce1969072fb954b1f7cc3b51c459cd155992c1be5,PodSandboxId:ae412aa32e14f8d37184d672500203218827aa56a04162cf4a4e1bde7a1c9833,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTA
INER_EXITED,CreatedAt:1726689697193701986,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-091565,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed5bb72bdd2d49ac86ea107effb85714,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bad39297-1e10-4681-8b06-b262f50a26d7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 20:17:11 ha-091565 crio[3611]: time="2024-09-18 20:17:11.265543694Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=907f5278-2048-4055-86f2-12306c066c73 name=/runtime.v1.RuntimeService/Version
	Sep 18 20:17:11 ha-091565 crio[3611]: time="2024-09-18 20:17:11.265676570Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=907f5278-2048-4055-86f2-12306c066c73 name=/runtime.v1.RuntimeService/Version
	Sep 18 20:17:11 ha-091565 crio[3611]: time="2024-09-18 20:17:11.267296824Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=da4bc77e-7f38-4c12-bee0-fd0678e99fed name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 20:17:11 ha-091565 crio[3611]: time="2024-09-18 20:17:11.268522352Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726690631268481947,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=da4bc77e-7f38-4c12-bee0-fd0678e99fed name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 20:17:11 ha-091565 crio[3611]: time="2024-09-18 20:17:11.269613556Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=01424677-0599-4e78-91bc-6131e697f567 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 20:17:11 ha-091565 crio[3611]: time="2024-09-18 20:17:11.269707368Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=01424677-0599-4e78-91bc-6131e697f567 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 20:17:11 ha-091565 crio[3611]: time="2024-09-18 20:17:11.270396049Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6489130b5601ef43045b9ef5a8e330f80dae1c20724da7d02b8b61500bc9beaa,PodSandboxId:9259670422d45498b8f4e45909da18fa355ece853960532d620a5ca1f2e21efb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726690419383021088,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7dffb85-905b-4166-a680-34c77cf87d09,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e894eebbedc0aac646e7c24ccd43c7ae2e1a00a0275213aa3eaaf78ad5fddb8a,PodSandboxId:68e450d1a9d796861547badef070a688c9eca3e51393bcf3d8d4b48c0af80e45,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726690374390976616,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-091565,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb44b1ca5e1af25a31cc20be38506f2d,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3cbacf6046aded98d17f163f6e437ef37200f71fd333470b3f7b074463c80ca,PodSandboxId:6a98de6d0c34be9c61571054c6ee0cfa8df30c161e6da077b0fd3dce43a7c629,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726690374385528525,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-091565,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7f1e1feaa8e654300c84052131dd12a,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0025f965c449fa8005990c9d16e4bd975337d00b859f495b752751bf93bf4d29,PodSandboxId:9259670422d45498b8f4e45909da18fa355ece853960532d620a5ca1f2e21efb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726690365382779674,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7dffb85-905b-4166-a680-34c77cf87d09,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7deb697baa4b118571cf9f455fd13305bc42377ddcb5b0aaa75763ec49cd7955,PodSandboxId:a0b5345aa00cc911cf3ebf6d550cae0e8fae6aa83bba446cedb5885f19562a72,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726690364731464824,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-xhmzx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 16808919-56d0-40cd-b88e-28fb5a40b3a2,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:315019425fff0ed52e2f663404258df3b545845e85b4f1a0ea2251dccd135b0a,PodSandboxId:e542a3cfe8082184e2eed6dece495abd78e95812c484bea2401d760729c49c81,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726690344387758381,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-091565,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3d3d9af35a10992383331d3d45eaca9,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb5776304b68a91c9d294a1efc96b919d7969634a4d19019f4055a97a75eedaa,PodSandboxId:91eccec72bcc687b4484e04e667e2beac75982ded0203c1cc0d0f5e1fabb6a64,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726690331422271469,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7fl5w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c3a9d82-3815-4aa1-8d04-14be25394dcf,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:d23d190c3f7a24c8d76129882123229cdd01c0a6d63bdeca54dd4158a36f52f7,PodSandboxId:53c9d8623b3d890b0f48875319a05766114a9fdc3dadedeaf2f7011ca1bb054c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726690331516445402,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4wm6h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6904231-6f64-4447-9932-0cd5d692978b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbdc184d
26af7fa4411e4c2027e6de01f00fda961f753663405491da3f2ab52f,PodSandboxId:daf9a948e855e0b3418e648b0e08e5abe75783b75678d6d8e1ad1d450bf2aa04,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726690331363002835,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-w97kk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70428cd6-0523-44c8-89f3-62837b52ca80,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd37bc6079dc18794fdcec6cf524e952c17ed75bceccddac6dc5ef5026a7b0d3,PodSandboxId:68e450d1a9d796861547badef070a688c9eca3e51393bcf3d8d4b48c0af80e45,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726690331348615832,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-091565,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb44b1ca5e1af25a31cc20be38506f2d,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb8d0cf0ea1843751c86efea123f97cd6f5a08fcb1cee02f21311efe6ca1e526,PodSandboxId:0bc3148ce646e5567d6d3af8ff4813aa443127b853f2271f576167d9d4614c54,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726690331193449994,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-091565,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42a715c9fe466585ee83dba1966182d5,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f589735092f230bc5c0db779e10f8b0980c4e646dc42967ce96f9d511fd27e4,PodSandboxId:82ed01bc2dfae627a150f347d6f669f4e189caea575a71d9e6837201ef98ace7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726690331432050099,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8zcqk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 644e8147-96e9-41a1-99b8-d2de17e4798c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d455b7b8c960ee88939b22efd881e9063e3cf96fec30f9b6cbdfe3c5670a8b1d,PodSandboxId:6a98de6d0c34be9c61571054c6ee0cfa8df30c161e6da077b0fd3dce43a7c629,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726690331288188210,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-091565,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: d7f1e1feaa8e654300c84052131dd12a,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c820a119e934b2a93019ba436b2a4247a959cffd8b253b6758d96276a65aeaa7,PodSandboxId:efa9968c0e0eeaf14440259fb0869ec7455118c74385b559462409719c49d5e7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726690331217170862,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-091565,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed5bb72bdd2d49ac86ea107effb85714,},Ann
otations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e40397db0622fd3967d20535016d957560de3976f7ca7e977fa2b186ff9f3ec,PodSandboxId:32509037cc4e4898a637b1de6a87eabd6d7504444bcf3f541a5234fb1ed4197a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726689867249576018,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-xhmzx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 16808919-56d0-40cd-b88e-28fb5a40b3a2,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f8cab8eef59352ca937c9b8e7056a419bb426b183a841f9e8a43366ecb1b283,PodSandboxId:16c38fe68d94e679d8aaead9d1be6f411a6984d3a367813f390d2ffc174a7047,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726689721940654598,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8zcqk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 644e8147-96e9-41a1-99b8-d2de17e4798c,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b5c6773eef44d848fd26ebcf13ea82ebc3c1e1bd23618a944bf5f6d6a0e7bb8,PodSandboxId:b0c496c53b4c9e8442d0d8a6bcef8269a3baad2995c3647784331bd1b03a34df,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726689721549482980,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-w97kk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70428cd6-0523-44c8-89f3-62837b52ca80,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52ae20a53e17beece735496b8e65537bbebfef5da53e8a2e7a74820fee56cd63,PodSandboxId:e5053f7183e292c9c0d41caef5020bdc6b59ce14f38a3c98750b9667d61f9d14,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726689709419367332,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7fl5w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c3a9d82-3815-4aa1-8d04-14be25394dcf,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9aa80c6b1f558ba9ef5b7e70df3690995e271e9296b0cc3e71ade739843f53a,PodSandboxId:e7fdb7e540529f305bf2eb00c376fe4bdf92019e975ef0252046f5f4a250d965,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726689709057469326,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4wm6h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6904231-6f64-4447-9932-0cd5d692978b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c435dbd5b540cdb87d1f79fc2aadca19db9c46aff42e67d78c5d9c8eee1b6de,PodSandboxId:01b7098c9837574b56ca73c576ce61c56e6190d34e8f2c4814bae7a1f5952e5c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe
954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726689697296369486,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-091565,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42a715c9fe466585ee83dba1966182d5,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4358e16fe123bb31210776fce1969072fb954b1f7cc3b51c459cd155992c1be5,PodSandboxId:ae412aa32e14f8d37184d672500203218827aa56a04162cf4a4e1bde7a1c9833,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTA
INER_EXITED,CreatedAt:1726689697193701986,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-091565,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed5bb72bdd2d49ac86ea107effb85714,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=01424677-0599-4e78-91bc-6131e697f567 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 20:17:11 ha-091565 crio[3611]: time="2024-09-18 20:17:11.314195953Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bfd4ce62-23e0-4487-b334-9650306e16a0 name=/runtime.v1.RuntimeService/Version
	Sep 18 20:17:11 ha-091565 crio[3611]: time="2024-09-18 20:17:11.314278435Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bfd4ce62-23e0-4487-b334-9650306e16a0 name=/runtime.v1.RuntimeService/Version
	Sep 18 20:17:11 ha-091565 crio[3611]: time="2024-09-18 20:17:11.315714873Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=58896447-cd92-42d4-b8d6-24306356b5dd name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 20:17:11 ha-091565 crio[3611]: time="2024-09-18 20:17:11.316207229Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726690631316178184,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=58896447-cd92-42d4-b8d6-24306356b5dd name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 20:17:11 ha-091565 crio[3611]: time="2024-09-18 20:17:11.316792435Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d21d52b7-4755-49cf-86d5-e21a8e8f0d8a name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 20:17:11 ha-091565 crio[3611]: time="2024-09-18 20:17:11.316859516Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d21d52b7-4755-49cf-86d5-e21a8e8f0d8a name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 20:17:11 ha-091565 crio[3611]: time="2024-09-18 20:17:11.317367584Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6489130b5601ef43045b9ef5a8e330f80dae1c20724da7d02b8b61500bc9beaa,PodSandboxId:9259670422d45498b8f4e45909da18fa355ece853960532d620a5ca1f2e21efb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726690419383021088,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7dffb85-905b-4166-a680-34c77cf87d09,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e894eebbedc0aac646e7c24ccd43c7ae2e1a00a0275213aa3eaaf78ad5fddb8a,PodSandboxId:68e450d1a9d796861547badef070a688c9eca3e51393bcf3d8d4b48c0af80e45,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726690374390976616,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-091565,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb44b1ca5e1af25a31cc20be38506f2d,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3cbacf6046aded98d17f163f6e437ef37200f71fd333470b3f7b074463c80ca,PodSandboxId:6a98de6d0c34be9c61571054c6ee0cfa8df30c161e6da077b0fd3dce43a7c629,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726690374385528525,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-091565,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7f1e1feaa8e654300c84052131dd12a,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0025f965c449fa8005990c9d16e4bd975337d00b859f495b752751bf93bf4d29,PodSandboxId:9259670422d45498b8f4e45909da18fa355ece853960532d620a5ca1f2e21efb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726690365382779674,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7dffb85-905b-4166-a680-34c77cf87d09,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7deb697baa4b118571cf9f455fd13305bc42377ddcb5b0aaa75763ec49cd7955,PodSandboxId:a0b5345aa00cc911cf3ebf6d550cae0e8fae6aa83bba446cedb5885f19562a72,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726690364731464824,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-xhmzx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 16808919-56d0-40cd-b88e-28fb5a40b3a2,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:315019425fff0ed52e2f663404258df3b545845e85b4f1a0ea2251dccd135b0a,PodSandboxId:e542a3cfe8082184e2eed6dece495abd78e95812c484bea2401d760729c49c81,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726690344387758381,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-091565,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3d3d9af35a10992383331d3d45eaca9,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb5776304b68a91c9d294a1efc96b919d7969634a4d19019f4055a97a75eedaa,PodSandboxId:91eccec72bcc687b4484e04e667e2beac75982ded0203c1cc0d0f5e1fabb6a64,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726690331422271469,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7fl5w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c3a9d82-3815-4aa1-8d04-14be25394dcf,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:d23d190c3f7a24c8d76129882123229cdd01c0a6d63bdeca54dd4158a36f52f7,PodSandboxId:53c9d8623b3d890b0f48875319a05766114a9fdc3dadedeaf2f7011ca1bb054c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726690331516445402,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4wm6h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6904231-6f64-4447-9932-0cd5d692978b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbdc184d
26af7fa4411e4c2027e6de01f00fda961f753663405491da3f2ab52f,PodSandboxId:daf9a948e855e0b3418e648b0e08e5abe75783b75678d6d8e1ad1d450bf2aa04,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726690331363002835,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-w97kk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70428cd6-0523-44c8-89f3-62837b52ca80,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd37bc6079dc18794fdcec6cf524e952c17ed75bceccddac6dc5ef5026a7b0d3,PodSandboxId:68e450d1a9d796861547badef070a688c9eca3e51393bcf3d8d4b48c0af80e45,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726690331348615832,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-091565,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb44b1ca5e1af25a31cc20be38506f2d,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb8d0cf0ea1843751c86efea123f97cd6f5a08fcb1cee02f21311efe6ca1e526,PodSandboxId:0bc3148ce646e5567d6d3af8ff4813aa443127b853f2271f576167d9d4614c54,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726690331193449994,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-091565,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42a715c9fe466585ee83dba1966182d5,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f589735092f230bc5c0db779e10f8b0980c4e646dc42967ce96f9d511fd27e4,PodSandboxId:82ed01bc2dfae627a150f347d6f669f4e189caea575a71d9e6837201ef98ace7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726690331432050099,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8zcqk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 644e8147-96e9-41a1-99b8-d2de17e4798c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d455b7b8c960ee88939b22efd881e9063e3cf96fec30f9b6cbdfe3c5670a8b1d,PodSandboxId:6a98de6d0c34be9c61571054c6ee0cfa8df30c161e6da077b0fd3dce43a7c629,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726690331288188210,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-091565,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: d7f1e1feaa8e654300c84052131dd12a,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c820a119e934b2a93019ba436b2a4247a959cffd8b253b6758d96276a65aeaa7,PodSandboxId:efa9968c0e0eeaf14440259fb0869ec7455118c74385b559462409719c49d5e7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726690331217170862,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-091565,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed5bb72bdd2d49ac86ea107effb85714,},Ann
otations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e40397db0622fd3967d20535016d957560de3976f7ca7e977fa2b186ff9f3ec,PodSandboxId:32509037cc4e4898a637b1de6a87eabd6d7504444bcf3f541a5234fb1ed4197a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726689867249576018,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-xhmzx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 16808919-56d0-40cd-b88e-28fb5a40b3a2,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f8cab8eef59352ca937c9b8e7056a419bb426b183a841f9e8a43366ecb1b283,PodSandboxId:16c38fe68d94e679d8aaead9d1be6f411a6984d3a367813f390d2ffc174a7047,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726689721940654598,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8zcqk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 644e8147-96e9-41a1-99b8-d2de17e4798c,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b5c6773eef44d848fd26ebcf13ea82ebc3c1e1bd23618a944bf5f6d6a0e7bb8,PodSandboxId:b0c496c53b4c9e8442d0d8a6bcef8269a3baad2995c3647784331bd1b03a34df,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726689721549482980,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-w97kk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70428cd6-0523-44c8-89f3-62837b52ca80,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52ae20a53e17beece735496b8e65537bbebfef5da53e8a2e7a74820fee56cd63,PodSandboxId:e5053f7183e292c9c0d41caef5020bdc6b59ce14f38a3c98750b9667d61f9d14,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726689709419367332,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7fl5w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c3a9d82-3815-4aa1-8d04-14be25394dcf,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9aa80c6b1f558ba9ef5b7e70df3690995e271e9296b0cc3e71ade739843f53a,PodSandboxId:e7fdb7e540529f305bf2eb00c376fe4bdf92019e975ef0252046f5f4a250d965,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726689709057469326,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4wm6h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6904231-6f64-4447-9932-0cd5d692978b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c435dbd5b540cdb87d1f79fc2aadca19db9c46aff42e67d78c5d9c8eee1b6de,PodSandboxId:01b7098c9837574b56ca73c576ce61c56e6190d34e8f2c4814bae7a1f5952e5c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe
954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726689697296369486,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-091565,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42a715c9fe466585ee83dba1966182d5,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4358e16fe123bb31210776fce1969072fb954b1f7cc3b51c459cd155992c1be5,PodSandboxId:ae412aa32e14f8d37184d672500203218827aa56a04162cf4a4e1bde7a1c9833,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTA
INER_EXITED,CreatedAt:1726689697193701986,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-091565,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed5bb72bdd2d49ac86ea107effb85714,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d21d52b7-4755-49cf-86d5-e21a8e8f0d8a name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	6489130b5601e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       4                   9259670422d45       storage-provisioner
	e894eebbedc0a       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      4 minutes ago       Running             kube-apiserver            3                   68e450d1a9d79       kube-apiserver-ha-091565
	c3cbacf6046ad       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      4 minutes ago       Running             kube-controller-manager   2                   6a98de6d0c34b       kube-controller-manager-ha-091565
	0025f965c449f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Exited              storage-provisioner       3                   9259670422d45       storage-provisioner
	7deb697baa4b1       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      4 minutes ago       Running             busybox                   1                   a0b5345aa00cc       busybox-7dff88458-xhmzx
	315019425fff0       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      4 minutes ago       Running             kube-vip                  0                   e542a3cfe8082       kube-vip-ha-091565
	d23d190c3f7a2       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      4 minutes ago       Running             kube-proxy                1                   53c9d8623b3d8       kube-proxy-4wm6h
	9f589735092f2       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      4 minutes ago       Running             coredns                   1                   82ed01bc2dfae       coredns-7c65d6cfc9-8zcqk
	bb5776304b68a       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      4 minutes ago       Running             kindnet-cni               1                   91eccec72bcc6       kindnet-7fl5w
	fbdc184d26af7       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   1                   daf9a948e855e       coredns-7c65d6cfc9-w97kk
	cd37bc6079dc1       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      5 minutes ago       Exited              kube-apiserver            2                   68e450d1a9d79       kube-apiserver-ha-091565
	d455b7b8c960e       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      5 minutes ago       Exited              kube-controller-manager   1                   6a98de6d0c34b       kube-controller-manager-ha-091565
	c820a119e934b       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      5 minutes ago       Running             etcd                      1                   efa9968c0e0ee       etcd-ha-091565
	bb8d0cf0ea184       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      5 minutes ago       Running             kube-scheduler            1                   0bc3148ce646e       kube-scheduler-ha-091565
	7e40397db0622       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   12 minutes ago      Exited              busybox                   0                   32509037cc4e4       busybox-7dff88458-xhmzx
	4f8cab8eef593       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      15 minutes ago      Exited              coredns                   0                   16c38fe68d94e       coredns-7c65d6cfc9-8zcqk
	9b5c6773eef44       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      15 minutes ago      Exited              coredns                   0                   b0c496c53b4c9       coredns-7c65d6cfc9-w97kk
	52ae20a53e17b       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      15 minutes ago      Exited              kindnet-cni               0                   e5053f7183e29       kindnet-7fl5w
	c9aa80c6b1f55       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      15 minutes ago      Exited              kube-proxy                0                   e7fdb7e540529       kube-proxy-4wm6h
	8c435dbd5b540       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      15 minutes ago      Exited              kube-scheduler            0                   01b7098c98375       kube-scheduler-ha-091565
	4358e16fe123b       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      15 minutes ago      Exited              etcd                      0                   ae412aa32e14f       etcd-ha-091565
	
	
	==> coredns [4f8cab8eef59352ca937c9b8e7056a419bb426b183a841f9e8a43366ecb1b283] <==
	[INFO] 10.244.1.2:44283 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003884102s
	[INFO] 10.244.1.2:32970 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000204769s
	[INFO] 10.244.1.2:52008 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000243831s
	[INFO] 10.244.2.2:50260 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000163913s
	[INFO] 10.244.2.2:55732 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001811166s
	[INFO] 10.244.2.2:39226 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00012772s
	[INFO] 10.244.2.2:53709 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0000925s
	[INFO] 10.244.2.2:41092 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000125187s
	[INFO] 10.244.0.4:40054 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000124612s
	[INFO] 10.244.0.4:38790 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001299276s
	[INFO] 10.244.0.4:59253 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000062856s
	[INFO] 10.244.0.4:38256 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000094015s
	[INFO] 10.244.1.2:44940 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000153669s
	[INFO] 10.244.1.2:48450 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000097947s
	[INFO] 10.244.0.4:38580 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000117553s
	[INFO] 10.244.2.2:59546 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000170402s
	[INFO] 10.244.2.2:49026 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000189642s
	[INFO] 10.244.2.2:45658 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000151371s
	[INFO] 10.244.0.4:51397 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000169114s
	[INFO] 10.244.0.4:47813 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000155527s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=1769&timeout=8m38s&timeoutSeconds=518&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1779&timeout=6m44s&timeoutSeconds=404&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=1730&timeout=8m56s&timeoutSeconds=536&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [9b5c6773eef44d848fd26ebcf13ea82ebc3c1e1bd23618a944bf5f6d6a0e7bb8] <==
	[INFO] 10.244.2.2:48639 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000087315s
	[INFO] 10.244.0.4:52361 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001834081s
	[INFO] 10.244.0.4:55907 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000221265s
	[INFO] 10.244.0.4:58409 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000117627s
	[INFO] 10.244.0.4:50242 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000115347s
	[INFO] 10.244.1.2:47046 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000136453s
	[INFO] 10.244.1.2:43799 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000196628s
	[INFO] 10.244.2.2:55965 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000123662s
	[INFO] 10.244.2.2:50433 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000098915s
	[INFO] 10.244.2.2:53589 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000068105s
	[INFO] 10.244.2.2:34234 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000074s
	[INFO] 10.244.0.4:45468 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000084304s
	[INFO] 10.244.0.4:51889 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000073683s
	[INFO] 10.244.0.4:50414 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000047051s
	[INFO] 10.244.1.2:45104 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000139109s
	[INFO] 10.244.1.2:42703 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00019857s
	[INFO] 10.244.1.2:45604 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000184516s
	[INFO] 10.244.1.2:54679 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00010429s
	[INFO] 10.244.2.2:37265 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000089491s
	[INFO] 10.244.0.4:58464 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000108633s
	[INFO] 10.244.0.4:60733 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0000682s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=1735&timeout=5m14s&timeoutSeconds=314&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=1730&timeout=7m45s&timeoutSeconds=465&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [9f589735092f230bc5c0db779e10f8b0980c4e646dc42967ce96f9d511fd27e4] <==
	Trace[1422487581]: [10.001552156s] [10.001552156s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1518205786]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (18-Sep-2024 20:12:21.208) (total time: 10000ms):
	Trace[1518205786]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (20:12:31.208)
	Trace[1518205786]: [10.000776946s] [10.000776946s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:52278->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:52278->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [fbdc184d26af7fa4411e4c2027e6de01f00fda961f753663405491da3f2ab52f] <==
	Trace[924636306]: [13.426694172s] [13.426694172s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:40782->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:40770->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[72597827]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (18-Sep-2024 20:12:22.905) (total time: 13729ms):
	Trace[72597827]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:40770->10.96.0.1:443: read: connection reset by peer 13728ms (20:12:36.633)
	Trace[72597827]: [13.729060125s] [13.729060125s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:40770->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:41538->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:41538->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-091565
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-091565
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=85073601a832bd4bbda5d11fa91feafff6ec6b91
	                    minikube.k8s.io/name=ha-091565
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_18T20_01_44_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 18 Sep 2024 20:01:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-091565
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 18 Sep 2024 20:17:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 18 Sep 2024 20:12:52 +0000   Wed, 18 Sep 2024 20:01:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 18 Sep 2024 20:12:52 +0000   Wed, 18 Sep 2024 20:01:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 18 Sep 2024 20:12:52 +0000   Wed, 18 Sep 2024 20:01:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 18 Sep 2024 20:12:52 +0000   Wed, 18 Sep 2024 20:02:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.215
	  Hostname:    ha-091565
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a62ed2f9eda04eb9bbdd5bc2c8925018
	  System UUID:                a62ed2f9-eda0-4eb9-bbdd-5bc2c8925018
	  Boot ID:                    e0c4d56b-81dc-4d69-9fe6-35f1341e336d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-xhmzx              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 coredns-7c65d6cfc9-8zcqk             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 coredns-7c65d6cfc9-w97kk             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 etcd-ha-091565                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kindnet-7fl5w                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-ha-091565             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-091565    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-4wm6h                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-091565             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-091565                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m43s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 4m17s                  kube-proxy       
	  Normal   Starting                 15m                    kube-proxy       
	  Normal   NodeHasSufficientMemory  15m                    kubelet          Node ha-091565 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     15m                    kubelet          Node ha-091565 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    15m                    kubelet          Node ha-091565 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 15m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  15m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           15m                    node-controller  Node ha-091565 event: Registered Node ha-091565 in Controller
	  Normal   NodeReady                15m                    kubelet          Node ha-091565 status is now: NodeReady
	  Normal   RegisteredNode           14m                    node-controller  Node ha-091565 event: Registered Node ha-091565 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-091565 event: Registered Node ha-091565 in Controller
	  Warning  ContainerGCFailed        5m28s (x2 over 6m28s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   NodeNotReady             5m14s (x3 over 6m3s)   kubelet          Node ha-091565 status is now: NodeNotReady
	  Normal   RegisteredNode           4m20s                  node-controller  Node ha-091565 event: Registered Node ha-091565 in Controller
	  Normal   RegisteredNode           4m11s                  node-controller  Node ha-091565 event: Registered Node ha-091565 in Controller
	  Normal   RegisteredNode           3m14s                  node-controller  Node ha-091565 event: Registered Node ha-091565 in Controller
	
	
	Name:               ha-091565-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-091565-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=85073601a832bd4bbda5d11fa91feafff6ec6b91
	                    minikube.k8s.io/name=ha-091565
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_18T20_02_40_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 18 Sep 2024 20:02:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-091565-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 18 Sep 2024 20:17:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 18 Sep 2024 20:14:05 +0000   Wed, 18 Sep 2024 20:12:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 18 Sep 2024 20:14:05 +0000   Wed, 18 Sep 2024 20:12:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 18 Sep 2024 20:14:05 +0000   Wed, 18 Sep 2024 20:12:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 18 Sep 2024 20:14:05 +0000   Wed, 18 Sep 2024 20:13:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.92
	  Hostname:    ha-091565-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 725aeac5e21d42d69ce571d302d9f7bc
	  System UUID:                725aeac5-e21d-42d6-9ce5-71d302d9f7bc
	  Boot ID:                    2d038098-44cb-4374-8eb7-a46ab596f517
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-45phf                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 etcd-ha-091565-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-bzsqr                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      14m
	  kube-system                 kube-apiserver-ha-091565-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-091565-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-bxblp                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-091565-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-091565-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m51s                  kube-proxy       
	  Normal  Starting                 14m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  14m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14m (x8 over 14m)      kubelet          Node ha-091565-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x8 over 14m)      kubelet          Node ha-091565-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m (x7 over 14m)      kubelet          Node ha-091565-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           14m                    node-controller  Node ha-091565-m02 event: Registered Node ha-091565-m02 in Controller
	  Normal  RegisteredNode           14m                    node-controller  Node ha-091565-m02 event: Registered Node ha-091565-m02 in Controller
	  Normal  RegisteredNode           13m                    node-controller  Node ha-091565-m02 event: Registered Node ha-091565-m02 in Controller
	  Normal  NodeNotReady             10m                    node-controller  Node ha-091565-m02 status is now: NodeNotReady
	  Normal  Starting                 4m45s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m45s (x8 over 4m45s)  kubelet          Node ha-091565-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m45s (x8 over 4m45s)  kubelet          Node ha-091565-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m45s (x7 over 4m45s)  kubelet          Node ha-091565-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m45s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m20s                  node-controller  Node ha-091565-m02 event: Registered Node ha-091565-m02 in Controller
	  Normal  RegisteredNode           4m11s                  node-controller  Node ha-091565-m02 event: Registered Node ha-091565-m02 in Controller
	  Normal  RegisteredNode           3m14s                  node-controller  Node ha-091565-m02 event: Registered Node ha-091565-m02 in Controller
	
	
	Name:               ha-091565-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-091565-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=85073601a832bd4bbda5d11fa91feafff6ec6b91
	                    minikube.k8s.io/name=ha-091565
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_18T20_05_02_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 18 Sep 2024 20:05:01 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-091565-m04
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 18 Sep 2024 20:14:44 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 18 Sep 2024 20:14:24 +0000   Wed, 18 Sep 2024 20:15:25 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 18 Sep 2024 20:14:24 +0000   Wed, 18 Sep 2024 20:15:25 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 18 Sep 2024 20:14:24 +0000   Wed, 18 Sep 2024 20:15:25 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 18 Sep 2024 20:14:24 +0000   Wed, 18 Sep 2024 20:15:25 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.9
	  Hostname:    ha-091565-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 cb0096492d0c441d8778e11eb51e77d3
	  System UUID:                cb009649-2d0c-441d-8778-e11eb51e77d3
	  Boot ID:                    2ebc0a85-e8f4-451d-ac54-eddc05c67c88
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-khq2c    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m37s
	  kube-system                 kindnet-4xtjm              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-proxy-8qkpk           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 12m                    kube-proxy       
	  Normal   Starting                 2m43s                  kube-proxy       
	  Normal   NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           12m                    node-controller  Node ha-091565-m04 event: Registered Node ha-091565-m04 in Controller
	  Normal   NodeHasSufficientMemory  12m (x2 over 12m)      kubelet          Node ha-091565-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x2 over 12m)      kubelet          Node ha-091565-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x2 over 12m)      kubelet          Node ha-091565-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12m                    node-controller  Node ha-091565-m04 event: Registered Node ha-091565-m04 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-091565-m04 event: Registered Node ha-091565-m04 in Controller
	  Normal   NodeReady                11m                    kubelet          Node ha-091565-m04 status is now: NodeReady
	  Normal   RegisteredNode           4m20s                  node-controller  Node ha-091565-m04 event: Registered Node ha-091565-m04 in Controller
	  Normal   RegisteredNode           4m11s                  node-controller  Node ha-091565-m04 event: Registered Node ha-091565-m04 in Controller
	  Normal   NodeNotReady             3m40s                  node-controller  Node ha-091565-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           3m14s                  node-controller  Node ha-091565-m04 event: Registered Node ha-091565-m04 in Controller
	  Normal   Starting                 2m47s                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  2m47s                  kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 2m47s                  kubelet          Node ha-091565-m04 has been rebooted, boot id: 2ebc0a85-e8f4-451d-ac54-eddc05c67c88
	  Normal   NodeHasSufficientMemory  2m47s (x2 over 2m47s)  kubelet          Node ha-091565-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m47s (x2 over 2m47s)  kubelet          Node ha-091565-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m47s (x2 over 2m47s)  kubelet          Node ha-091565-m04 status is now: NodeHasSufficientPID
	  Normal   NodeReady                2m47s                  kubelet          Node ha-091565-m04 status is now: NodeReady
	  Normal   NodeNotReady             106s                   node-controller  Node ha-091565-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[ +13.896131] systemd-fstab-generator[585]: Ignoring "noauto" option for root device
	[  +0.067482] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.062052] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.180384] systemd-fstab-generator[611]: Ignoring "noauto" option for root device
	[  +0.116835] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.268512] systemd-fstab-generator[653]: Ignoring "noauto" option for root device
	[  +3.829963] systemd-fstab-generator[747]: Ignoring "noauto" option for root device
	[  +4.147936] systemd-fstab-generator[884]: Ignoring "noauto" option for root device
	[  +0.060572] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.397640] kauditd_printk_skb: 79 callbacks suppressed
	[  +0.774401] systemd-fstab-generator[1308]: Ignoring "noauto" option for root device
	[  +5.898362] kauditd_printk_skb: 15 callbacks suppressed
	[Sep18 20:02] kauditd_printk_skb: 38 callbacks suppressed
	[ +41.961999] kauditd_printk_skb: 26 callbacks suppressed
	[Sep18 20:11] systemd-fstab-generator[3535]: Ignoring "noauto" option for root device
	[  +0.178179] systemd-fstab-generator[3547]: Ignoring "noauto" option for root device
	[  +0.174179] systemd-fstab-generator[3561]: Ignoring "noauto" option for root device
	[  +0.155562] systemd-fstab-generator[3573]: Ignoring "noauto" option for root device
	[  +0.282508] systemd-fstab-generator[3601]: Ignoring "noauto" option for root device
	[Sep18 20:12] systemd-fstab-generator[3695]: Ignoring "noauto" option for root device
	[  +0.102931] kauditd_printk_skb: 100 callbacks suppressed
	[  +6.532330] kauditd_printk_skb: 12 callbacks suppressed
	[  +8.090357] kauditd_printk_skb: 85 callbacks suppressed
	[ +30.933245] kauditd_printk_skb: 8 callbacks suppressed
	[  +7.159614] kauditd_printk_skb: 3 callbacks suppressed
	
	
	==> etcd [4358e16fe123bb31210776fce1969072fb954b1f7cc3b51c459cd155992c1be5] <==
	2024/09/18 20:10:22 WARNING: [core] [Server #5] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"info","ts":"2024-09-18T20:10:22.393664Z","caller":"traceutil/trace.go:171","msg":"trace[739996942] range","detail":"{range_begin:/registry/leases/; range_end:/registry/leases0; }","duration":"838.83649ms","start":"2024-09-18T20:10:21.554825Z","end":"2024-09-18T20:10:22.393661Z","steps":["trace[739996942] 'agreement among raft nodes before linearized reading'  (duration: 835.746199ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-18T20:10:22.400072Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-18T20:10:21.554814Z","time spent":"845.204609ms","remote":"127.0.0.1:38200","response type":"/etcdserverpb.KV/Range","request count":0,"request size":41,"response count":0,"response size":0,"request content":"key:\"/registry/leases/\" range_end:\"/registry/leases0\" limit:10000 "}
	2024/09/18 20:10:22 WARNING: [core] [Server #5] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-09-18T20:10:22.456322Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.215:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-18T20:10:22.456481Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.215:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-18T20:10:22.456952Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"ce9e8f286885b37e","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-09-18T20:10:22.457188Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"7208e3715ec3d11b"}
	{"level":"info","ts":"2024-09-18T20:10:22.457285Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"7208e3715ec3d11b"}
	{"level":"info","ts":"2024-09-18T20:10:22.457378Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"7208e3715ec3d11b"}
	{"level":"info","ts":"2024-09-18T20:10:22.457565Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"ce9e8f286885b37e","remote-peer-id":"7208e3715ec3d11b"}
	{"level":"info","ts":"2024-09-18T20:10:22.457657Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"ce9e8f286885b37e","remote-peer-id":"7208e3715ec3d11b"}
	{"level":"info","ts":"2024-09-18T20:10:22.457731Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"ce9e8f286885b37e","remote-peer-id":"7208e3715ec3d11b"}
	{"level":"info","ts":"2024-09-18T20:10:22.457745Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"7208e3715ec3d11b"}
	{"level":"info","ts":"2024-09-18T20:10:22.457751Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"2408d04abbdc115f"}
	{"level":"info","ts":"2024-09-18T20:10:22.457764Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"2408d04abbdc115f"}
	{"level":"info","ts":"2024-09-18T20:10:22.457795Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"2408d04abbdc115f"}
	{"level":"info","ts":"2024-09-18T20:10:22.457844Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"ce9e8f286885b37e","remote-peer-id":"2408d04abbdc115f"}
	{"level":"info","ts":"2024-09-18T20:10:22.457931Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"ce9e8f286885b37e","remote-peer-id":"2408d04abbdc115f"}
	{"level":"info","ts":"2024-09-18T20:10:22.457983Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"ce9e8f286885b37e","remote-peer-id":"2408d04abbdc115f"}
	{"level":"info","ts":"2024-09-18T20:10:22.458008Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"2408d04abbdc115f"}
	{"level":"info","ts":"2024-09-18T20:10:22.461452Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.215:2380"}
	{"level":"info","ts":"2024-09-18T20:10:22.461609Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.215:2380"}
	{"level":"info","ts":"2024-09-18T20:10:22.461633Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-091565","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.215:2380"],"advertise-client-urls":["https://192.168.39.215:2379"]}
	{"level":"warn","ts":"2024-09-18T20:10:22.461619Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"8.934967457s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"etcdserver: server stopped"}
	
	
	==> etcd [c820a119e934b2a93019ba436b2a4247a959cffd8b253b6758d96276a65aeaa7] <==
	{"level":"info","ts":"2024-09-18T20:13:45.030200Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"ce9e8f286885b37e","to":"2408d04abbdc115f","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-09-18T20:13:45.030371Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"ce9e8f286885b37e","remote-peer-id":"2408d04abbdc115f"}
	{"level":"info","ts":"2024-09-18T20:13:45.031492Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"ce9e8f286885b37e","remote-peer-id":"2408d04abbdc115f"}
	{"level":"info","ts":"2024-09-18T20:13:45.031761Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"ce9e8f286885b37e","remote-peer-id":"2408d04abbdc115f"}
	{"level":"info","ts":"2024-09-18T20:13:45.037997Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"ce9e8f286885b37e","to":"2408d04abbdc115f","stream-type":"stream Message"}
	{"level":"info","ts":"2024-09-18T20:13:45.038067Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"ce9e8f286885b37e","remote-peer-id":"2408d04abbdc115f"}
	{"level":"warn","ts":"2024-09-18T20:13:47.284439Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"2408d04abbdc115f","rtt":"0s","error":"dial tcp 192.168.39.53:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-18T20:13:47.284489Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"2408d04abbdc115f","rtt":"0s","error":"dial tcp 192.168.39.53:2380: connect: connection refused"}
	{"level":"info","ts":"2024-09-18T20:14:37.473583Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ce9e8f286885b37e switched to configuration voters=(8217067596198170907 14888494821848494974)"}
	{"level":"info","ts":"2024-09-18T20:14:37.475930Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"4cd5d1376c5e8c88","local-member-id":"ce9e8f286885b37e","removed-remote-peer-id":"2408d04abbdc115f","removed-remote-peer-urls":["https://192.168.39.53:2380"]}
	{"level":"info","ts":"2024-09-18T20:14:37.476078Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"2408d04abbdc115f"}
	{"level":"warn","ts":"2024-09-18T20:14:37.476432Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"2408d04abbdc115f"}
	{"level":"info","ts":"2024-09-18T20:14:37.476713Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"2408d04abbdc115f"}
	{"level":"warn","ts":"2024-09-18T20:14:37.478244Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"2408d04abbdc115f"}
	{"level":"info","ts":"2024-09-18T20:14:37.478331Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"2408d04abbdc115f"}
	{"level":"info","ts":"2024-09-18T20:14:37.478537Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"ce9e8f286885b37e","remote-peer-id":"2408d04abbdc115f"}
	{"level":"warn","ts":"2024-09-18T20:14:37.478724Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"ce9e8f286885b37e","remote-peer-id":"2408d04abbdc115f","error":"context canceled"}
	{"level":"warn","ts":"2024-09-18T20:14:37.478800Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"2408d04abbdc115f","error":"failed to read 2408d04abbdc115f on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-09-18T20:14:37.478859Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"ce9e8f286885b37e","remote-peer-id":"2408d04abbdc115f"}
	{"level":"warn","ts":"2024-09-18T20:14:37.479060Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"ce9e8f286885b37e","remote-peer-id":"2408d04abbdc115f","error":"context canceled"}
	{"level":"info","ts":"2024-09-18T20:14:37.479120Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"ce9e8f286885b37e","remote-peer-id":"2408d04abbdc115f"}
	{"level":"info","ts":"2024-09-18T20:14:37.479161Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"2408d04abbdc115f"}
	{"level":"info","ts":"2024-09-18T20:14:37.479206Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"ce9e8f286885b37e","removed-remote-peer-id":"2408d04abbdc115f"}
	{"level":"warn","ts":"2024-09-18T20:14:37.485958Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"ce9e8f286885b37e","remote-peer-id-stream-handler":"ce9e8f286885b37e","remote-peer-id-from":"2408d04abbdc115f"}
	{"level":"warn","ts":"2024-09-18T20:14:37.489625Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"ce9e8f286885b37e","remote-peer-id-stream-handler":"ce9e8f286885b37e","remote-peer-id-from":"2408d04abbdc115f"}
	
	
	==> kernel <==
	 20:17:11 up 16 min,  0 users,  load average: 0.29, 0.64, 0.44
	Linux ha-091565 5.10.207 #1 SMP Mon Sep 16 15:00:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [52ae20a53e17beece735496b8e65537bbebfef5da53e8a2e7a74820fee56cd63] <==
	I0918 20:10:00.558185       1 main.go:295] Handling node with IPs: map[192.168.39.92:{}]
	I0918 20:10:00.558226       1 main.go:322] Node ha-091565-m02 has CIDR [10.244.1.0/24] 
	I0918 20:10:00.558366       1 main.go:295] Handling node with IPs: map[192.168.39.53:{}]
	I0918 20:10:00.558423       1 main.go:322] Node ha-091565-m03 has CIDR [10.244.2.0/24] 
	I0918 20:10:00.558483       1 main.go:295] Handling node with IPs: map[192.168.39.9:{}]
	I0918 20:10:00.558501       1 main.go:322] Node ha-091565-m04 has CIDR [10.244.3.0/24] 
	I0918 20:10:00.558556       1 main.go:295] Handling node with IPs: map[192.168.39.215:{}]
	I0918 20:10:00.558576       1 main.go:299] handling current node
	E0918 20:10:09.434208       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=1779&timeout=5m47s&timeoutSeconds=347&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	I0918 20:10:10.558301       1 main.go:295] Handling node with IPs: map[192.168.39.215:{}]
	I0918 20:10:10.558471       1 main.go:299] handling current node
	I0918 20:10:10.558506       1 main.go:295] Handling node with IPs: map[192.168.39.92:{}]
	I0918 20:10:10.558578       1 main.go:322] Node ha-091565-m02 has CIDR [10.244.1.0/24] 
	I0918 20:10:10.558742       1 main.go:295] Handling node with IPs: map[192.168.39.53:{}]
	I0918 20:10:10.558767       1 main.go:322] Node ha-091565-m03 has CIDR [10.244.2.0/24] 
	I0918 20:10:10.558819       1 main.go:295] Handling node with IPs: map[192.168.39.9:{}]
	I0918 20:10:10.558836       1 main.go:322] Node ha-091565-m04 has CIDR [10.244.3.0/24] 
	I0918 20:10:20.567096       1 main.go:295] Handling node with IPs: map[192.168.39.9:{}]
	I0918 20:10:20.567248       1 main.go:322] Node ha-091565-m04 has CIDR [10.244.3.0/24] 
	I0918 20:10:20.567439       1 main.go:295] Handling node with IPs: map[192.168.39.215:{}]
	I0918 20:10:20.567529       1 main.go:299] handling current node
	I0918 20:10:20.567564       1 main.go:295] Handling node with IPs: map[192.168.39.92:{}]
	I0918 20:10:20.567582       1 main.go:322] Node ha-091565-m02 has CIDR [10.244.1.0/24] 
	I0918 20:10:20.567674       1 main.go:295] Handling node with IPs: map[192.168.39.53:{}]
	I0918 20:10:20.567738       1 main.go:322] Node ha-091565-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kindnet [bb5776304b68a91c9d294a1efc96b919d7969634a4d19019f4055a97a75eedaa] <==
	I0918 20:16:22.612986       1 main.go:299] handling current node
	I0918 20:16:32.603065       1 main.go:295] Handling node with IPs: map[192.168.39.9:{}]
	I0918 20:16:32.603193       1 main.go:322] Node ha-091565-m04 has CIDR [10.244.3.0/24] 
	I0918 20:16:32.603355       1 main.go:295] Handling node with IPs: map[192.168.39.215:{}]
	I0918 20:16:32.603378       1 main.go:299] handling current node
	I0918 20:16:32.603407       1 main.go:295] Handling node with IPs: map[192.168.39.92:{}]
	I0918 20:16:32.603426       1 main.go:322] Node ha-091565-m02 has CIDR [10.244.1.0/24] 
	I0918 20:16:42.602973       1 main.go:295] Handling node with IPs: map[192.168.39.9:{}]
	I0918 20:16:42.603179       1 main.go:322] Node ha-091565-m04 has CIDR [10.244.3.0/24] 
	I0918 20:16:42.603594       1 main.go:295] Handling node with IPs: map[192.168.39.215:{}]
	I0918 20:16:42.603642       1 main.go:299] handling current node
	I0918 20:16:42.603692       1 main.go:295] Handling node with IPs: map[192.168.39.92:{}]
	I0918 20:16:42.603709       1 main.go:322] Node ha-091565-m02 has CIDR [10.244.1.0/24] 
	I0918 20:16:52.610801       1 main.go:295] Handling node with IPs: map[192.168.39.215:{}]
	I0918 20:16:52.611336       1 main.go:299] handling current node
	I0918 20:16:52.611452       1 main.go:295] Handling node with IPs: map[192.168.39.92:{}]
	I0918 20:16:52.611484       1 main.go:322] Node ha-091565-m02 has CIDR [10.244.1.0/24] 
	I0918 20:16:52.611744       1 main.go:295] Handling node with IPs: map[192.168.39.9:{}]
	I0918 20:16:52.611768       1 main.go:322] Node ha-091565-m04 has CIDR [10.244.3.0/24] 
	I0918 20:17:02.608191       1 main.go:295] Handling node with IPs: map[192.168.39.215:{}]
	I0918 20:17:02.608249       1 main.go:299] handling current node
	I0918 20:17:02.608319       1 main.go:295] Handling node with IPs: map[192.168.39.92:{}]
	I0918 20:17:02.608325       1 main.go:322] Node ha-091565-m02 has CIDR [10.244.1.0/24] 
	I0918 20:17:02.608490       1 main.go:295] Handling node with IPs: map[192.168.39.9:{}]
	I0918 20:17:02.608511       1 main.go:322] Node ha-091565-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [cd37bc6079dc18794fdcec6cf524e952c17ed75bceccddac6dc5ef5026a7b0d3] <==
	I0918 20:12:12.021839       1 options.go:228] external host was not specified, using 192.168.39.215
	I0918 20:12:12.024346       1 server.go:142] Version: v1.31.1
	I0918 20:12:12.024618       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0918 20:12:12.613292       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0918 20:12:12.619598       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0918 20:12:12.624122       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0918 20:12:12.624198       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0918 20:12:12.624464       1 instance.go:232] Using reconciler: lease
	W0918 20:12:32.610052       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0918 20:12:32.610148       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0918 20:12:32.625398       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [e894eebbedc0aac646e7c24ccd43c7ae2e1a00a0275213aa3eaaf78ad5fddb8a] <==
	I0918 20:12:56.799667       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0918 20:12:56.804042       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0918 20:12:56.888656       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0918 20:12:56.889175       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0918 20:12:56.889201       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0918 20:12:56.889402       1 aggregator.go:171] initial CRD sync complete...
	I0918 20:12:56.889435       1 autoregister_controller.go:144] Starting autoregister controller
	I0918 20:12:56.889446       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0918 20:12:56.889451       1 cache.go:39] Caches are synced for autoregister controller
	I0918 20:12:56.890245       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0918 20:12:56.890660       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0918 20:12:56.890781       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0918 20:12:56.896475       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0918 20:12:56.912098       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0918 20:12:56.912599       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0918 20:12:56.912635       1 policy_source.go:224] refreshing policies
	I0918 20:12:56.913425       1 shared_informer.go:320] Caches are synced for configmaps
	I0918 20:12:56.916113       1 cache.go:39] Caches are synced for RemoteAvailability controller
	W0918 20:12:56.928648       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.53 192.168.39.92]
	I0918 20:12:56.930502       1 controller.go:615] quota admission added evaluator for: endpoints
	I0918 20:12:56.945192       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0918 20:12:56.959182       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0918 20:12:56.982363       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0918 20:12:57.797743       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0918 20:12:58.273936       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.215 192.168.39.53 192.168.39.92]
	
	
	==> kube-controller-manager [c3cbacf6046aded98d17f163f6e437ef37200f71fd333470b3f7b074463c80ca] <==
	I0918 20:14:34.417958       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="35.734481ms"
	I0918 20:14:34.418057       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="51.941µs"
	I0918 20:14:36.266497       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="283.838µs"
	I0918 20:14:36.627269       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="66.535µs"
	I0918 20:14:36.631797       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="44.417µs"
	I0918 20:14:38.583682       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="12.34546ms"
	I0918 20:14:38.584162       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="46.855µs"
	I0918 20:14:48.688270       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-091565-m04"
	I0918 20:14:48.688397       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-091565-m03"
	E0918 20:15:00.316013       1 gc_controller.go:151] "Failed to get node" err="node \"ha-091565-m03\" not found" logger="pod-garbage-collector-controller" node="ha-091565-m03"
	E0918 20:15:00.316146       1 gc_controller.go:151] "Failed to get node" err="node \"ha-091565-m03\" not found" logger="pod-garbage-collector-controller" node="ha-091565-m03"
	E0918 20:15:00.316176       1 gc_controller.go:151] "Failed to get node" err="node \"ha-091565-m03\" not found" logger="pod-garbage-collector-controller" node="ha-091565-m03"
	E0918 20:15:00.316203       1 gc_controller.go:151] "Failed to get node" err="node \"ha-091565-m03\" not found" logger="pod-garbage-collector-controller" node="ha-091565-m03"
	E0918 20:15:00.316242       1 gc_controller.go:151] "Failed to get node" err="node \"ha-091565-m03\" not found" logger="pod-garbage-collector-controller" node="ha-091565-m03"
	E0918 20:15:20.317146       1 gc_controller.go:151] "Failed to get node" err="node \"ha-091565-m03\" not found" logger="pod-garbage-collector-controller" node="ha-091565-m03"
	E0918 20:15:20.317192       1 gc_controller.go:151] "Failed to get node" err="node \"ha-091565-m03\" not found" logger="pod-garbage-collector-controller" node="ha-091565-m03"
	E0918 20:15:20.317202       1 gc_controller.go:151] "Failed to get node" err="node \"ha-091565-m03\" not found" logger="pod-garbage-collector-controller" node="ha-091565-m03"
	E0918 20:15:20.317207       1 gc_controller.go:151] "Failed to get node" err="node \"ha-091565-m03\" not found" logger="pod-garbage-collector-controller" node="ha-091565-m03"
	E0918 20:15:20.317213       1 gc_controller.go:151] "Failed to get node" err="node \"ha-091565-m03\" not found" logger="pod-garbage-collector-controller" node="ha-091565-m03"
	I0918 20:15:25.470599       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-091565-m04"
	I0918 20:15:25.493818       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-091565-m04"
	I0918 20:15:25.556996       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="69.484898ms"
	I0918 20:15:25.557505       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="228.663µs"
	I0918 20:15:26.424348       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-091565-m04"
	I0918 20:15:30.627003       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-091565-m04"
	
	
	==> kube-controller-manager [d455b7b8c960ee88939b22efd881e9063e3cf96fec30f9b6cbdfe3c5670a8b1d] <==
	I0918 20:12:12.783984       1 serving.go:386] Generated self-signed cert in-memory
	I0918 20:12:13.136684       1 controllermanager.go:197] "Starting" version="v1.31.1"
	I0918 20:12:13.136722       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0918 20:12:13.138650       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0918 20:12:13.139390       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0918 20:12:13.139535       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0918 20:12:13.139619       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0918 20:12:33.631173       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.215:8443/healthz\": dial tcp 192.168.39.215:8443: connect: connection refused"
	
	
	==> kube-proxy [c9aa80c6b1f558ba9ef5b7e70df3690995e271e9296b0cc3e71ade739843f53a] <==
	E0918 20:09:19.962278       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1699\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0918 20:09:19.962398       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-091565&resourceVersion=1700": dial tcp 192.168.39.254:8443: connect: no route to host
	E0918 20:09:19.962431       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-091565&resourceVersion=1700\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0918 20:09:19.962444       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1779": dial tcp 192.168.39.254:8443: connect: no route to host
	E0918 20:09:19.962525       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1779\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0918 20:09:23.034991       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1779": dial tcp 192.168.39.254:8443: connect: no route to host
	E0918 20:09:23.035513       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1779\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0918 20:09:26.106399       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1699": dial tcp 192.168.39.254:8443: connect: no route to host
	E0918 20:09:26.106511       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1699\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0918 20:09:26.106451       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-091565&resourceVersion=1700": dial tcp 192.168.39.254:8443: connect: no route to host
	E0918 20:09:26.106613       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-091565&resourceVersion=1700\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0918 20:09:29.178622       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1779": dial tcp 192.168.39.254:8443: connect: no route to host
	E0918 20:09:29.178760       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1779\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0918 20:09:38.393410       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1699": dial tcp 192.168.39.254:8443: connect: no route to host
	E0918 20:09:38.393487       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1699\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0918 20:09:38.393553       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-091565&resourceVersion=1700": dial tcp 192.168.39.254:8443: connect: no route to host
	E0918 20:09:38.393598       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-091565&resourceVersion=1700\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0918 20:09:38.393683       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1779": dial tcp 192.168.39.254:8443: connect: no route to host
	E0918 20:09:38.393729       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1779\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0918 20:09:53.753532       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1699": dial tcp 192.168.39.254:8443: connect: no route to host
	E0918 20:09:53.753848       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1699\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0918 20:09:56.827032       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1779": dial tcp 192.168.39.254:8443: connect: no route to host
	E0918 20:09:56.827722       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1779\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0918 20:09:59.898185       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-091565&resourceVersion=1700": dial tcp 192.168.39.254:8443: connect: no route to host
	E0918 20:09:59.898560       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-091565&resourceVersion=1700\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	
	
	==> kube-proxy [d23d190c3f7a24c8d76129882123229cdd01c0a6d63bdeca54dd4158a36f52f7] <==
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0918 20:12:15.066531       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-091565\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0918 20:12:18.137854       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-091565\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0918 20:12:21.209697       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-091565\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0918 20:12:27.354081       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-091565\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0918 20:12:36.569671       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-091565\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0918 20:12:54.239992       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.215"]
	E0918 20:12:54.240276       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0918 20:12:54.278949       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0918 20:12:54.279014       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0918 20:12:54.279067       1 server_linux.go:169] "Using iptables Proxier"
	I0918 20:12:54.281615       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0918 20:12:54.282265       1 server.go:483] "Version info" version="v1.31.1"
	I0918 20:12:54.282286       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0918 20:12:54.284918       1 config.go:199] "Starting service config controller"
	I0918 20:12:54.285030       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0918 20:12:54.285096       1 config.go:105] "Starting endpoint slice config controller"
	I0918 20:12:54.285124       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0918 20:12:54.285945       1 config.go:328] "Starting node config controller"
	I0918 20:12:54.286003       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0918 20:12:54.385995       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0918 20:12:54.386063       1 shared_informer.go:320] Caches are synced for node config
	I0918 20:12:54.386076       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [8c435dbd5b540cdb87d1f79fc2aadca19db9c46aff42e67d78c5d9c8eee1b6de] <==
	E0918 20:05:01.220390       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-8qkpk\": pod kube-proxy-8qkpk is already assigned to node \"ha-091565-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-8qkpk" node="ha-091565-m04"
	E0918 20:05:01.223994       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 819d89b8-2f9d-4a41-ad66-7bfa5e99e840(kube-system/kube-proxy-8qkpk) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-8qkpk"
	E0918 20:05:01.224205       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-8qkpk\": pod kube-proxy-8qkpk is already assigned to node \"ha-091565-m04\"" pod="kube-system/kube-proxy-8qkpk"
	I0918 20:05:01.224300       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-8qkpk" node="ha-091565-m04"
	E0918 20:05:01.248133       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-zmf96\": pod kindnet-zmf96 is already assigned to node \"ha-091565-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-zmf96" node="ha-091565-m04"
	E0918 20:05:01.248459       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-zmf96\": pod kindnet-zmf96 is already assigned to node \"ha-091565-m04\"" pod="kube-system/kindnet-zmf96"
	I0918 20:05:01.248547       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-zmf96" node="ha-091565-m04"
	E0918 20:05:01.248362       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-t72tx\": pod kube-proxy-t72tx is already assigned to node \"ha-091565-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-t72tx" node="ha-091565-m04"
	E0918 20:05:01.249494       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-t72tx\": pod kube-proxy-t72tx is already assigned to node \"ha-091565-m04\"" pod="kube-system/kube-proxy-t72tx"
	I0918 20:05:01.249666       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-t72tx" node="ha-091565-m04"
	E0918 20:10:13.126277       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io)" logger="UnhandledError"
	E0918 20:10:13.724453       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)" logger="UnhandledError"
	E0918 20:10:14.237081       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: unknown (get namespaces)" logger="UnhandledError"
	E0918 20:10:14.387558       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: unknown (get configmaps)" logger="UnhandledError"
	E0918 20:10:14.939450       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: unknown (get pods)" logger="UnhandledError"
	E0918 20:10:15.084096       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)" logger="UnhandledError"
	E0918 20:10:15.719553       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: unknown (get nodes)" logger="UnhandledError"
	E0918 20:10:16.704708       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)" logger="UnhandledError"
	E0918 20:10:17.552792       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)" logger="UnhandledError"
	E0918 20:10:20.155854       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)" logger="UnhandledError"
	E0918 20:10:21.586463       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)" logger="UnhandledError"
	E0918 20:10:21.996488       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)" logger="UnhandledError"
	E0918 20:10:22.157516       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)" logger="UnhandledError"
	E0918 20:10:22.232147       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: unknown (get services)" logger="UnhandledError"
	E0918 20:10:22.352901       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [bb8d0cf0ea1843751c86efea123f97cd6f5a08fcb1cee02f21311efe6ca1e526] <==
	W0918 20:12:50.139376       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.215:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.215:8443: connect: connection refused
	E0918 20:12:50.139445       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get \"https://192.168.39.215:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.39.215:8443: connect: connection refused" logger="UnhandledError"
	W0918 20:12:50.499370       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.215:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.215:8443: connect: connection refused
	E0918 20:12:50.499522       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://192.168.39.215:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.39.215:8443: connect: connection refused" logger="UnhandledError"
	W0918 20:12:51.406305       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.215:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.215:8443: connect: connection refused
	E0918 20:12:51.406429       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://192.168.39.215:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.39.215:8443: connect: connection refused" logger="UnhandledError"
	W0918 20:12:51.497916       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.215:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.215:8443: connect: connection refused
	E0918 20:12:51.497982       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://192.168.39.215:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.215:8443: connect: connection refused" logger="UnhandledError"
	W0918 20:12:52.583531       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.215:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.215:8443: connect: connection refused
	E0918 20:12:52.583656       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get \"https://192.168.39.215:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.39.215:8443: connect: connection refused" logger="UnhandledError"
	W0918 20:12:52.631454       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.215:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.215:8443: connect: connection refused
	E0918 20:12:52.631625       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://192.168.39.215:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.39.215:8443: connect: connection refused" logger="UnhandledError"
	W0918 20:12:52.731418       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.215:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.215:8443: connect: connection refused
	E0918 20:12:52.731508       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.168.39.215:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.39.215:8443: connect: connection refused" logger="UnhandledError"
	W0918 20:12:53.743251       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.215:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.215:8443: connect: connection refused
	E0918 20:12:53.743340       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get \"https://192.168.39.215:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.39.215:8443: connect: connection refused" logger="UnhandledError"
	W0918 20:12:53.920652       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.215:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.215:8443: connect: connection refused
	E0918 20:12:53.920753       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://192.168.39.215:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.215:8443: connect: connection refused" logger="UnhandledError"
	W0918 20:12:54.029713       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.215:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.215:8443: connect: connection refused
	E0918 20:12:54.029857       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.168.39.215:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.39.215:8443: connect: connection refused" logger="UnhandledError"
	W0918 20:12:56.817321       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0918 20:12:56.819046       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0918 20:12:56.825944       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0918 20:12:56.826066       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0918 20:13:21.244237       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 18 20:15:43 ha-091565 kubelet[1316]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 18 20:15:43 ha-091565 kubelet[1316]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 18 20:15:43 ha-091565 kubelet[1316]: E0918 20:15:43.602708    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726690543602497322,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 20:15:43 ha-091565 kubelet[1316]: E0918 20:15:43.602750    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726690543602497322,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 20:15:53 ha-091565 kubelet[1316]: E0918 20:15:53.604198    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726690553603966495,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 20:15:53 ha-091565 kubelet[1316]: E0918 20:15:53.604241    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726690553603966495,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 20:16:03 ha-091565 kubelet[1316]: E0918 20:16:03.605827    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726690563605444694,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 20:16:03 ha-091565 kubelet[1316]: E0918 20:16:03.606224    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726690563605444694,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 20:16:13 ha-091565 kubelet[1316]: E0918 20:16:13.609149    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726690573607940453,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 20:16:13 ha-091565 kubelet[1316]: E0918 20:16:13.609250    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726690573607940453,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 20:16:23 ha-091565 kubelet[1316]: E0918 20:16:23.610557    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726690583610279039,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 20:16:23 ha-091565 kubelet[1316]: E0918 20:16:23.610598    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726690583610279039,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 20:16:33 ha-091565 kubelet[1316]: E0918 20:16:33.612330    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726690593612047356,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 20:16:33 ha-091565 kubelet[1316]: E0918 20:16:33.612375    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726690593612047356,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 20:16:43 ha-091565 kubelet[1316]: E0918 20:16:43.397288    1316 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 18 20:16:43 ha-091565 kubelet[1316]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 18 20:16:43 ha-091565 kubelet[1316]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 18 20:16:43 ha-091565 kubelet[1316]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 18 20:16:43 ha-091565 kubelet[1316]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 18 20:16:43 ha-091565 kubelet[1316]: E0918 20:16:43.616695    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726690603615753116,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 20:16:43 ha-091565 kubelet[1316]: E0918 20:16:43.616768    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726690603615753116,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 20:16:53 ha-091565 kubelet[1316]: E0918 20:16:53.622423    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726690613618802134,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 20:16:53 ha-091565 kubelet[1316]: E0918 20:16:53.623674    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726690613618802134,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 20:17:03 ha-091565 kubelet[1316]: E0918 20:17:03.625694    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726690623625282046,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 20:17:03 ha-091565 kubelet[1316]: E0918 20:17:03.625751    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726690623625282046,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0918 20:17:10.880129   34839 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19667-7671/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
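The "bufio.Scanner: token too long" error above is the standard failure mode of Go's bufio.Scanner when a single line exceeds its buffer, which defaults to bufio.MaxScanTokenSize (64 KiB). The following is a minimal, self-contained sketch of how that error arises and how a caller can raise the limit; it is not minikube's actual implementation, and the file path is a hypothetical stand-in for .minikube/logs/lastStart.txt.

	package main

	import (
		"bufio"
		"fmt"
		"os"
	)

	func main() {
		// Hypothetical path standing in for .minikube/logs/lastStart.txt.
		f, err := os.Open("lastStart.txt")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// Without this call the per-line cap is bufio.MaxScanTokenSize (64 KiB),
		// and one longer line makes sc.Err() return bufio.ErrTooLong
		// ("bufio.Scanner: token too long"). Raising the cap to 1 MiB lets
		// very long log lines scan cleanly.
		sc.Buffer(make([]byte, 0, 64*1024), 1024*1024)
		for sc.Scan() {
			fmt.Println(sc.Text())
		}
		if err := sc.Err(); err != nil {
			fmt.Fprintln(os.Stderr, "failed to read file:", err)
		}
	}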
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-091565 -n ha-091565
helpers_test.go:261: (dbg) Run:  kubectl --context ha-091565 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopCluster (141.84s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (332.17s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-622675
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-622675
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-622675: exit status 82 (2m1.812679491s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-622675-m03"  ...
	* Stopping node "multinode-622675-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-622675" : exit status 82
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-622675 --wait=true -v=8 --alsologtostderr
E0918 20:34:15.244588   14878 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/addons-815929/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:35:01.289279   14878 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/functional-790989/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:36:12.175565   14878 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/addons-815929/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-622675 --wait=true -v=8 --alsologtostderr: (3m28.15152447s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-622675
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-622675 -n multinode-622675
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-622675 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-622675 logs -n 25: (1.459850657s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-622675 ssh -n                                                                 | multinode-622675 | jenkins | v1.34.0 | 18 Sep 24 20:31 UTC | 18 Sep 24 20:31 UTC |
	|         | multinode-622675-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-622675 cp multinode-622675-m02:/home/docker/cp-test.txt                       | multinode-622675 | jenkins | v1.34.0 | 18 Sep 24 20:31 UTC | 18 Sep 24 20:31 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile2019276691/001/cp-test_multinode-622675-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-622675 ssh -n                                                                 | multinode-622675 | jenkins | v1.34.0 | 18 Sep 24 20:31 UTC | 18 Sep 24 20:31 UTC |
	|         | multinode-622675-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-622675 cp multinode-622675-m02:/home/docker/cp-test.txt                       | multinode-622675 | jenkins | v1.34.0 | 18 Sep 24 20:31 UTC | 18 Sep 24 20:31 UTC |
	|         | multinode-622675:/home/docker/cp-test_multinode-622675-m02_multinode-622675.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-622675 ssh -n                                                                 | multinode-622675 | jenkins | v1.34.0 | 18 Sep 24 20:31 UTC | 18 Sep 24 20:31 UTC |
	|         | multinode-622675-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-622675 ssh -n multinode-622675 sudo cat                                       | multinode-622675 | jenkins | v1.34.0 | 18 Sep 24 20:31 UTC | 18 Sep 24 20:31 UTC |
	|         | /home/docker/cp-test_multinode-622675-m02_multinode-622675.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-622675 cp multinode-622675-m02:/home/docker/cp-test.txt                       | multinode-622675 | jenkins | v1.34.0 | 18 Sep 24 20:31 UTC | 18 Sep 24 20:31 UTC |
	|         | multinode-622675-m03:/home/docker/cp-test_multinode-622675-m02_multinode-622675-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-622675 ssh -n                                                                 | multinode-622675 | jenkins | v1.34.0 | 18 Sep 24 20:31 UTC | 18 Sep 24 20:31 UTC |
	|         | multinode-622675-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-622675 ssh -n multinode-622675-m03 sudo cat                                   | multinode-622675 | jenkins | v1.34.0 | 18 Sep 24 20:31 UTC | 18 Sep 24 20:31 UTC |
	|         | /home/docker/cp-test_multinode-622675-m02_multinode-622675-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-622675 cp testdata/cp-test.txt                                                | multinode-622675 | jenkins | v1.34.0 | 18 Sep 24 20:31 UTC | 18 Sep 24 20:31 UTC |
	|         | multinode-622675-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-622675 ssh -n                                                                 | multinode-622675 | jenkins | v1.34.0 | 18 Sep 24 20:31 UTC | 18 Sep 24 20:31 UTC |
	|         | multinode-622675-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-622675 cp multinode-622675-m03:/home/docker/cp-test.txt                       | multinode-622675 | jenkins | v1.34.0 | 18 Sep 24 20:31 UTC | 18 Sep 24 20:31 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile2019276691/001/cp-test_multinode-622675-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-622675 ssh -n                                                                 | multinode-622675 | jenkins | v1.34.0 | 18 Sep 24 20:31 UTC | 18 Sep 24 20:31 UTC |
	|         | multinode-622675-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-622675 cp multinode-622675-m03:/home/docker/cp-test.txt                       | multinode-622675 | jenkins | v1.34.0 | 18 Sep 24 20:31 UTC | 18 Sep 24 20:31 UTC |
	|         | multinode-622675:/home/docker/cp-test_multinode-622675-m03_multinode-622675.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-622675 ssh -n                                                                 | multinode-622675 | jenkins | v1.34.0 | 18 Sep 24 20:31 UTC | 18 Sep 24 20:31 UTC |
	|         | multinode-622675-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-622675 ssh -n multinode-622675 sudo cat                                       | multinode-622675 | jenkins | v1.34.0 | 18 Sep 24 20:31 UTC | 18 Sep 24 20:31 UTC |
	|         | /home/docker/cp-test_multinode-622675-m03_multinode-622675.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-622675 cp multinode-622675-m03:/home/docker/cp-test.txt                       | multinode-622675 | jenkins | v1.34.0 | 18 Sep 24 20:31 UTC | 18 Sep 24 20:31 UTC |
	|         | multinode-622675-m02:/home/docker/cp-test_multinode-622675-m03_multinode-622675-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-622675 ssh -n                                                                 | multinode-622675 | jenkins | v1.34.0 | 18 Sep 24 20:31 UTC | 18 Sep 24 20:31 UTC |
	|         | multinode-622675-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-622675 ssh -n multinode-622675-m02 sudo cat                                   | multinode-622675 | jenkins | v1.34.0 | 18 Sep 24 20:31 UTC | 18 Sep 24 20:31 UTC |
	|         | /home/docker/cp-test_multinode-622675-m03_multinode-622675-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-622675 node stop m03                                                          | multinode-622675 | jenkins | v1.34.0 | 18 Sep 24 20:31 UTC | 18 Sep 24 20:31 UTC |
	| node    | multinode-622675 node start                                                             | multinode-622675 | jenkins | v1.34.0 | 18 Sep 24 20:31 UTC | 18 Sep 24 20:32 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-622675                                                                | multinode-622675 | jenkins | v1.34.0 | 18 Sep 24 20:32 UTC |                     |
	| stop    | -p multinode-622675                                                                     | multinode-622675 | jenkins | v1.34.0 | 18 Sep 24 20:32 UTC |                     |
	| start   | -p multinode-622675                                                                     | multinode-622675 | jenkins | v1.34.0 | 18 Sep 24 20:34 UTC | 18 Sep 24 20:37 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-622675                                                                | multinode-622675 | jenkins | v1.34.0 | 18 Sep 24 20:37 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/18 20:34:04
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0918 20:34:04.479779   44697 out.go:345] Setting OutFile to fd 1 ...
	I0918 20:34:04.479912   44697 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 20:34:04.479918   44697 out.go:358] Setting ErrFile to fd 2...
	I0918 20:34:04.479922   44697 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 20:34:04.480129   44697 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19667-7671/.minikube/bin
	I0918 20:34:04.480736   44697 out.go:352] Setting JSON to false
	I0918 20:34:04.481651   44697 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4588,"bootTime":1726687056,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0918 20:34:04.481746   44697 start.go:139] virtualization: kvm guest
	I0918 20:34:04.484109   44697 out.go:177] * [multinode-622675] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0918 20:34:04.485685   44697 notify.go:220] Checking for updates...
	I0918 20:34:04.485732   44697 out.go:177]   - MINIKUBE_LOCATION=19667
	I0918 20:34:04.487384   44697 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 20:34:04.488980   44697 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19667-7671/kubeconfig
	I0918 20:34:04.490676   44697 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19667-7671/.minikube
	I0918 20:34:04.492253   44697 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0918 20:34:04.493779   44697 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0918 20:34:04.495586   44697 config.go:182] Loaded profile config "multinode-622675": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0918 20:34:04.495688   44697 driver.go:394] Setting default libvirt URI to qemu:///system
	I0918 20:34:04.496171   44697 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 20:34:04.496208   44697 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 20:34:04.512042   44697 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39609
	I0918 20:34:04.512552   44697 main.go:141] libmachine: () Calling .GetVersion
	I0918 20:34:04.513148   44697 main.go:141] libmachine: Using API Version  1
	I0918 20:34:04.513173   44697 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 20:34:04.513551   44697 main.go:141] libmachine: () Calling .GetMachineName
	I0918 20:34:04.513728   44697 main.go:141] libmachine: (multinode-622675) Calling .DriverName
	I0918 20:34:04.550936   44697 out.go:177] * Using the kvm2 driver based on existing profile
	I0918 20:34:04.552612   44697 start.go:297] selected driver: kvm2
	I0918 20:34:04.552633   44697 start.go:901] validating driver "kvm2" against &{Name:multinode-622675 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-622675 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.106 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.224 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.216 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 20:34:04.552768   44697 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 20:34:04.553080   44697 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 20:34:04.553163   44697 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19667-7671/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0918 20:34:04.569212   44697 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0918 20:34:04.570061   44697 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0918 20:34:04.570103   44697 cni.go:84] Creating CNI manager for ""
	I0918 20:34:04.570156   44697 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0918 20:34:04.570213   44697 start.go:340] cluster config:
	{Name:multinode-622675 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-622675 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.106 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.224 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.216 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 20:34:04.570341   44697 iso.go:125] acquiring lock: {Name:mkbcf4d84f3091567a39802cd5e773cb10b8c545 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 20:34:04.572467   44697 out.go:177] * Starting "multinode-622675" primary control-plane node in "multinode-622675" cluster
	I0918 20:34:04.573864   44697 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0918 20:34:04.573931   44697 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0918 20:34:04.573946   44697 cache.go:56] Caching tarball of preloaded images
	I0918 20:34:04.574056   44697 preload.go:172] Found /home/jenkins/minikube-integration/19667-7671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0918 20:34:04.574067   44697 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0918 20:34:04.574191   44697 profile.go:143] Saving config to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/multinode-622675/config.json ...
	I0918 20:34:04.574406   44697 start.go:360] acquireMachinesLock for multinode-622675: {Name:mk1b0d68a0b7d9d4bc8204d5d81cb9eb77222526 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 20:34:04.574450   44697 start.go:364] duration metric: took 25.038µs to acquireMachinesLock for "multinode-622675"
	I0918 20:34:04.574464   44697 start.go:96] Skipping create...Using existing machine configuration
	I0918 20:34:04.574469   44697 fix.go:54] fixHost starting: 
	I0918 20:34:04.574720   44697 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 20:34:04.574756   44697 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 20:34:04.590907   44697 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36875
	I0918 20:34:04.591348   44697 main.go:141] libmachine: () Calling .GetVersion
	I0918 20:34:04.591821   44697 main.go:141] libmachine: Using API Version  1
	I0918 20:34:04.591837   44697 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 20:34:04.592204   44697 main.go:141] libmachine: () Calling .GetMachineName
	I0918 20:34:04.592427   44697 main.go:141] libmachine: (multinode-622675) Calling .DriverName
	I0918 20:34:04.592586   44697 main.go:141] libmachine: (multinode-622675) Calling .GetState
	I0918 20:34:04.594108   44697 fix.go:112] recreateIfNeeded on multinode-622675: state=Running err=<nil>
	W0918 20:34:04.594130   44697 fix.go:138] unexpected machine state, will restart: <nil>
	I0918 20:34:04.597784   44697 out.go:177] * Updating the running kvm2 "multinode-622675" VM ...
	I0918 20:34:04.599106   44697 machine.go:93] provisionDockerMachine start ...
	I0918 20:34:04.599131   44697 main.go:141] libmachine: (multinode-622675) Calling .DriverName
	I0918 20:34:04.599389   44697 main.go:141] libmachine: (multinode-622675) Calling .GetSSHHostname
	I0918 20:34:04.602041   44697 main.go:141] libmachine: (multinode-622675) DBG | domain multinode-622675 has defined MAC address 52:54:00:b9:25:86 in network mk-multinode-622675
	I0918 20:34:04.602482   44697 main.go:141] libmachine: (multinode-622675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:25:86", ip: ""} in network mk-multinode-622675: {Iface:virbr1 ExpiryTime:2024-09-18 21:28:42 +0000 UTC Type:0 Mac:52:54:00:b9:25:86 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:multinode-622675 Clientid:01:52:54:00:b9:25:86}
	I0918 20:34:04.602506   44697 main.go:141] libmachine: (multinode-622675) DBG | domain multinode-622675 has defined IP address 192.168.39.106 and MAC address 52:54:00:b9:25:86 in network mk-multinode-622675
	I0918 20:34:04.602674   44697 main.go:141] libmachine: (multinode-622675) Calling .GetSSHPort
	I0918 20:34:04.602822   44697 main.go:141] libmachine: (multinode-622675) Calling .GetSSHKeyPath
	I0918 20:34:04.602970   44697 main.go:141] libmachine: (multinode-622675) Calling .GetSSHKeyPath
	I0918 20:34:04.603137   44697 main.go:141] libmachine: (multinode-622675) Calling .GetSSHUsername
	I0918 20:34:04.603313   44697 main.go:141] libmachine: Using SSH client type: native
	I0918 20:34:04.603510   44697 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.106 22 <nil> <nil>}
	I0918 20:34:04.603521   44697 main.go:141] libmachine: About to run SSH command:
	hostname
	I0918 20:34:04.718430   44697 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-622675
	
	I0918 20:34:04.718464   44697 main.go:141] libmachine: (multinode-622675) Calling .GetMachineName
	I0918 20:34:04.718766   44697 buildroot.go:166] provisioning hostname "multinode-622675"
	I0918 20:34:04.718796   44697 main.go:141] libmachine: (multinode-622675) Calling .GetMachineName
	I0918 20:34:04.718958   44697 main.go:141] libmachine: (multinode-622675) Calling .GetSSHHostname
	I0918 20:34:04.722121   44697 main.go:141] libmachine: (multinode-622675) DBG | domain multinode-622675 has defined MAC address 52:54:00:b9:25:86 in network mk-multinode-622675
	I0918 20:34:04.722521   44697 main.go:141] libmachine: (multinode-622675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:25:86", ip: ""} in network mk-multinode-622675: {Iface:virbr1 ExpiryTime:2024-09-18 21:28:42 +0000 UTC Type:0 Mac:52:54:00:b9:25:86 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:multinode-622675 Clientid:01:52:54:00:b9:25:86}
	I0918 20:34:04.722542   44697 main.go:141] libmachine: (multinode-622675) DBG | domain multinode-622675 has defined IP address 192.168.39.106 and MAC address 52:54:00:b9:25:86 in network mk-multinode-622675
	I0918 20:34:04.722735   44697 main.go:141] libmachine: (multinode-622675) Calling .GetSSHPort
	I0918 20:34:04.722926   44697 main.go:141] libmachine: (multinode-622675) Calling .GetSSHKeyPath
	I0918 20:34:04.723060   44697 main.go:141] libmachine: (multinode-622675) Calling .GetSSHKeyPath
	I0918 20:34:04.723197   44697 main.go:141] libmachine: (multinode-622675) Calling .GetSSHUsername
	I0918 20:34:04.723428   44697 main.go:141] libmachine: Using SSH client type: native
	I0918 20:34:04.723601   44697 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.106 22 <nil> <nil>}
	I0918 20:34:04.723612   44697 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-622675 && echo "multinode-622675" | sudo tee /etc/hostname
	I0918 20:34:04.844693   44697 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-622675
	
	I0918 20:34:04.844736   44697 main.go:141] libmachine: (multinode-622675) Calling .GetSSHHostname
	I0918 20:34:04.847872   44697 main.go:141] libmachine: (multinode-622675) DBG | domain multinode-622675 has defined MAC address 52:54:00:b9:25:86 in network mk-multinode-622675
	I0918 20:34:04.848323   44697 main.go:141] libmachine: (multinode-622675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:25:86", ip: ""} in network mk-multinode-622675: {Iface:virbr1 ExpiryTime:2024-09-18 21:28:42 +0000 UTC Type:0 Mac:52:54:00:b9:25:86 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:multinode-622675 Clientid:01:52:54:00:b9:25:86}
	I0918 20:34:04.848367   44697 main.go:141] libmachine: (multinode-622675) DBG | domain multinode-622675 has defined IP address 192.168.39.106 and MAC address 52:54:00:b9:25:86 in network mk-multinode-622675
	I0918 20:34:04.848661   44697 main.go:141] libmachine: (multinode-622675) Calling .GetSSHPort
	I0918 20:34:04.848921   44697 main.go:141] libmachine: (multinode-622675) Calling .GetSSHKeyPath
	I0918 20:34:04.849262   44697 main.go:141] libmachine: (multinode-622675) Calling .GetSSHKeyPath
	I0918 20:34:04.849470   44697 main.go:141] libmachine: (multinode-622675) Calling .GetSSHUsername
	I0918 20:34:04.849764   44697 main.go:141] libmachine: Using SSH client type: native
	I0918 20:34:04.849944   44697 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.106 22 <nil> <nil>}
	I0918 20:34:04.849961   44697 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-622675' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-622675/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-622675' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0918 20:34:04.957378   44697 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0918 20:34:04.957413   44697 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19667-7671/.minikube CaCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19667-7671/.minikube}
	I0918 20:34:04.957439   44697 buildroot.go:174] setting up certificates
	I0918 20:34:04.957452   44697 provision.go:84] configureAuth start
	I0918 20:34:04.957474   44697 main.go:141] libmachine: (multinode-622675) Calling .GetMachineName
	I0918 20:34:04.957738   44697 main.go:141] libmachine: (multinode-622675) Calling .GetIP
	I0918 20:34:04.960575   44697 main.go:141] libmachine: (multinode-622675) DBG | domain multinode-622675 has defined MAC address 52:54:00:b9:25:86 in network mk-multinode-622675
	I0918 20:34:04.960905   44697 main.go:141] libmachine: (multinode-622675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:25:86", ip: ""} in network mk-multinode-622675: {Iface:virbr1 ExpiryTime:2024-09-18 21:28:42 +0000 UTC Type:0 Mac:52:54:00:b9:25:86 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:multinode-622675 Clientid:01:52:54:00:b9:25:86}
	I0918 20:34:04.960936   44697 main.go:141] libmachine: (multinode-622675) DBG | domain multinode-622675 has defined IP address 192.168.39.106 and MAC address 52:54:00:b9:25:86 in network mk-multinode-622675
	I0918 20:34:04.961187   44697 main.go:141] libmachine: (multinode-622675) Calling .GetSSHHostname
	I0918 20:34:04.963675   44697 main.go:141] libmachine: (multinode-622675) DBG | domain multinode-622675 has defined MAC address 52:54:00:b9:25:86 in network mk-multinode-622675
	I0918 20:34:04.964168   44697 main.go:141] libmachine: (multinode-622675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:25:86", ip: ""} in network mk-multinode-622675: {Iface:virbr1 ExpiryTime:2024-09-18 21:28:42 +0000 UTC Type:0 Mac:52:54:00:b9:25:86 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:multinode-622675 Clientid:01:52:54:00:b9:25:86}
	I0918 20:34:04.964207   44697 main.go:141] libmachine: (multinode-622675) DBG | domain multinode-622675 has defined IP address 192.168.39.106 and MAC address 52:54:00:b9:25:86 in network mk-multinode-622675
	I0918 20:34:04.964384   44697 provision.go:143] copyHostCerts
	I0918 20:34:04.964420   44697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem
	I0918 20:34:04.964473   44697 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem, removing ...
	I0918 20:34:04.964491   44697 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem
	I0918 20:34:04.964569   44697 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem (1679 bytes)
	I0918 20:34:04.964685   44697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem
	I0918 20:34:04.964711   44697 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem, removing ...
	I0918 20:34:04.964718   44697 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem
	I0918 20:34:04.964766   44697 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem (1082 bytes)
	I0918 20:34:04.964850   44697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem
	I0918 20:34:04.964874   44697 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem, removing ...
	I0918 20:34:04.964889   44697 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem
	I0918 20:34:04.964929   44697 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem (1123 bytes)
	I0918 20:34:04.965012   44697 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem org=jenkins.multinode-622675 san=[127.0.0.1 192.168.39.106 localhost minikube multinode-622675]
	I0918 20:34:05.219307   44697 provision.go:177] copyRemoteCerts
	I0918 20:34:05.219380   44697 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0918 20:34:05.219403   44697 main.go:141] libmachine: (multinode-622675) Calling .GetSSHHostname
	I0918 20:34:05.222023   44697 main.go:141] libmachine: (multinode-622675) DBG | domain multinode-622675 has defined MAC address 52:54:00:b9:25:86 in network mk-multinode-622675
	I0918 20:34:05.222311   44697 main.go:141] libmachine: (multinode-622675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:25:86", ip: ""} in network mk-multinode-622675: {Iface:virbr1 ExpiryTime:2024-09-18 21:28:42 +0000 UTC Type:0 Mac:52:54:00:b9:25:86 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:multinode-622675 Clientid:01:52:54:00:b9:25:86}
	I0918 20:34:05.222337   44697 main.go:141] libmachine: (multinode-622675) DBG | domain multinode-622675 has defined IP address 192.168.39.106 and MAC address 52:54:00:b9:25:86 in network mk-multinode-622675
	I0918 20:34:05.222559   44697 main.go:141] libmachine: (multinode-622675) Calling .GetSSHPort
	I0918 20:34:05.222756   44697 main.go:141] libmachine: (multinode-622675) Calling .GetSSHKeyPath
	I0918 20:34:05.222916   44697 main.go:141] libmachine: (multinode-622675) Calling .GetSSHUsername
	I0918 20:34:05.223018   44697 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/multinode-622675/id_rsa Username:docker}
	I0918 20:34:05.306981   44697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0918 20:34:05.307056   44697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0918 20:34:05.334778   44697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0918 20:34:05.334854   44697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0918 20:34:05.359332   44697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0918 20:34:05.359431   44697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0918 20:34:05.385864   44697 provision.go:87] duration metric: took 428.39632ms to configureAuth
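	The configureAuth step above regenerates the machine server certificate with the SAN list logged at provision.go:117 (127.0.0.1, 192.168.39.106, localhost, minikube, multinode-622675). A quick, illustrative way to confirm what ended up in that certificate, not part of the automated run, is to inspect the generated file with openssl; the path is the one shown in the log:
	  openssl x509 -in /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem -noout -text | grep -A1 'Subject Alternative Name'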
	I0918 20:34:05.385894   44697 buildroot.go:189] setting minikube options for container-runtime
	I0918 20:34:05.386134   44697 config.go:182] Loaded profile config "multinode-622675": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0918 20:34:05.386235   44697 main.go:141] libmachine: (multinode-622675) Calling .GetSSHHostname
	I0918 20:34:05.388708   44697 main.go:141] libmachine: (multinode-622675) DBG | domain multinode-622675 has defined MAC address 52:54:00:b9:25:86 in network mk-multinode-622675
	I0918 20:34:05.389058   44697 main.go:141] libmachine: (multinode-622675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:25:86", ip: ""} in network mk-multinode-622675: {Iface:virbr1 ExpiryTime:2024-09-18 21:28:42 +0000 UTC Type:0 Mac:52:54:00:b9:25:86 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:multinode-622675 Clientid:01:52:54:00:b9:25:86}
	I0918 20:34:05.389092   44697 main.go:141] libmachine: (multinode-622675) DBG | domain multinode-622675 has defined IP address 192.168.39.106 and MAC address 52:54:00:b9:25:86 in network mk-multinode-622675
	I0918 20:34:05.389211   44697 main.go:141] libmachine: (multinode-622675) Calling .GetSSHPort
	I0918 20:34:05.389433   44697 main.go:141] libmachine: (multinode-622675) Calling .GetSSHKeyPath
	I0918 20:34:05.389571   44697 main.go:141] libmachine: (multinode-622675) Calling .GetSSHKeyPath
	I0918 20:34:05.389687   44697 main.go:141] libmachine: (multinode-622675) Calling .GetSSHUsername
	I0918 20:34:05.389810   44697 main.go:141] libmachine: Using SSH client type: native
	I0918 20:34:05.389970   44697 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.106 22 <nil> <nil>}
	I0918 20:34:05.389984   44697 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0918 20:35:36.211756   44697 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0918 20:35:36.211788   44697 machine.go:96] duration metric: took 1m31.612665437s to provisionDockerMachine
	I0918 20:35:36.211802   44697 start.go:293] postStartSetup for "multinode-622675" (driver="kvm2")
	I0918 20:35:36.211817   44697 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0918 20:35:36.211837   44697 main.go:141] libmachine: (multinode-622675) Calling .DriverName
	I0918 20:35:36.212131   44697 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0918 20:35:36.212158   44697 main.go:141] libmachine: (multinode-622675) Calling .GetSSHHostname
	I0918 20:35:36.215231   44697 main.go:141] libmachine: (multinode-622675) DBG | domain multinode-622675 has defined MAC address 52:54:00:b9:25:86 in network mk-multinode-622675
	I0918 20:35:36.215608   44697 main.go:141] libmachine: (multinode-622675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:25:86", ip: ""} in network mk-multinode-622675: {Iface:virbr1 ExpiryTime:2024-09-18 21:28:42 +0000 UTC Type:0 Mac:52:54:00:b9:25:86 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:multinode-622675 Clientid:01:52:54:00:b9:25:86}
	I0918 20:35:36.215631   44697 main.go:141] libmachine: (multinode-622675) DBG | domain multinode-622675 has defined IP address 192.168.39.106 and MAC address 52:54:00:b9:25:86 in network mk-multinode-622675
	I0918 20:35:36.215744   44697 main.go:141] libmachine: (multinode-622675) Calling .GetSSHPort
	I0918 20:35:36.215973   44697 main.go:141] libmachine: (multinode-622675) Calling .GetSSHKeyPath
	I0918 20:35:36.216143   44697 main.go:141] libmachine: (multinode-622675) Calling .GetSSHUsername
	I0918 20:35:36.216289   44697 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/multinode-622675/id_rsa Username:docker}
	I0918 20:35:36.299474   44697 ssh_runner.go:195] Run: cat /etc/os-release
	I0918 20:35:36.303628   44697 command_runner.go:130] > NAME=Buildroot
	I0918 20:35:36.303653   44697 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0918 20:35:36.303660   44697 command_runner.go:130] > ID=buildroot
	I0918 20:35:36.303668   44697 command_runner.go:130] > VERSION_ID=2023.02.9
	I0918 20:35:36.303676   44697 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0918 20:35:36.303729   44697 info.go:137] Remote host: Buildroot 2023.02.9
	I0918 20:35:36.303764   44697 filesync.go:126] Scanning /home/jenkins/minikube-integration/19667-7671/.minikube/addons for local assets ...
	I0918 20:35:36.303864   44697 filesync.go:126] Scanning /home/jenkins/minikube-integration/19667-7671/.minikube/files for local assets ...
	I0918 20:35:36.303978   44697 filesync.go:149] local asset: /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem -> 148782.pem in /etc/ssl/certs
	I0918 20:35:36.303991   44697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem -> /etc/ssl/certs/148782.pem
	I0918 20:35:36.304151   44697 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0918 20:35:36.313290   44697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem --> /etc/ssl/certs/148782.pem (1708 bytes)
	I0918 20:35:36.335638   44697 start.go:296] duration metric: took 123.820558ms for postStartSetup
	I0918 20:35:36.335680   44697 fix.go:56] duration metric: took 1m31.761210518s for fixHost
	I0918 20:35:36.335705   44697 main.go:141] libmachine: (multinode-622675) Calling .GetSSHHostname
	I0918 20:35:36.338542   44697 main.go:141] libmachine: (multinode-622675) DBG | domain multinode-622675 has defined MAC address 52:54:00:b9:25:86 in network mk-multinode-622675
	I0918 20:35:36.338980   44697 main.go:141] libmachine: (multinode-622675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:25:86", ip: ""} in network mk-multinode-622675: {Iface:virbr1 ExpiryTime:2024-09-18 21:28:42 +0000 UTC Type:0 Mac:52:54:00:b9:25:86 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:multinode-622675 Clientid:01:52:54:00:b9:25:86}
	I0918 20:35:36.339011   44697 main.go:141] libmachine: (multinode-622675) DBG | domain multinode-622675 has defined IP address 192.168.39.106 and MAC address 52:54:00:b9:25:86 in network mk-multinode-622675
	I0918 20:35:36.339182   44697 main.go:141] libmachine: (multinode-622675) Calling .GetSSHPort
	I0918 20:35:36.339381   44697 main.go:141] libmachine: (multinode-622675) Calling .GetSSHKeyPath
	I0918 20:35:36.339550   44697 main.go:141] libmachine: (multinode-622675) Calling .GetSSHKeyPath
	I0918 20:35:36.339704   44697 main.go:141] libmachine: (multinode-622675) Calling .GetSSHUsername
	I0918 20:35:36.339873   44697 main.go:141] libmachine: Using SSH client type: native
	I0918 20:35:36.340093   44697 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.106 22 <nil> <nil>}
	I0918 20:35:36.340107   44697 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0918 20:35:36.440496   44697 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726691736.415186423
	
	I0918 20:35:36.440527   44697 fix.go:216] guest clock: 1726691736.415186423
	I0918 20:35:36.440539   44697 fix.go:229] Guest: 2024-09-18 20:35:36.415186423 +0000 UTC Remote: 2024-09-18 20:35:36.335685926 +0000 UTC m=+91.892811149 (delta=79.500497ms)
	I0918 20:35:36.440615   44697 fix.go:200] guest clock delta is within tolerance: 79.500497ms
	I0918 20:35:36.440622   44697 start.go:83] releasing machines lock for "multinode-622675", held for 1m31.866163179s
	I0918 20:35:36.440647   44697 main.go:141] libmachine: (multinode-622675) Calling .DriverName
	I0918 20:35:36.440889   44697 main.go:141] libmachine: (multinode-622675) Calling .GetIP
	I0918 20:35:36.443691   44697 main.go:141] libmachine: (multinode-622675) DBG | domain multinode-622675 has defined MAC address 52:54:00:b9:25:86 in network mk-multinode-622675
	I0918 20:35:36.444123   44697 main.go:141] libmachine: (multinode-622675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:25:86", ip: ""} in network mk-multinode-622675: {Iface:virbr1 ExpiryTime:2024-09-18 21:28:42 +0000 UTC Type:0 Mac:52:54:00:b9:25:86 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:multinode-622675 Clientid:01:52:54:00:b9:25:86}
	I0918 20:35:36.444155   44697 main.go:141] libmachine: (multinode-622675) DBG | domain multinode-622675 has defined IP address 192.168.39.106 and MAC address 52:54:00:b9:25:86 in network mk-multinode-622675
	I0918 20:35:36.444325   44697 main.go:141] libmachine: (multinode-622675) Calling .DriverName
	I0918 20:35:36.444841   44697 main.go:141] libmachine: (multinode-622675) Calling .DriverName
	I0918 20:35:36.445014   44697 main.go:141] libmachine: (multinode-622675) Calling .DriverName
	I0918 20:35:36.445121   44697 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0918 20:35:36.445176   44697 main.go:141] libmachine: (multinode-622675) Calling .GetSSHHostname
	I0918 20:35:36.445222   44697 ssh_runner.go:195] Run: cat /version.json
	I0918 20:35:36.445246   44697 main.go:141] libmachine: (multinode-622675) Calling .GetSSHHostname
	I0918 20:35:36.447594   44697 main.go:141] libmachine: (multinode-622675) DBG | domain multinode-622675 has defined MAC address 52:54:00:b9:25:86 in network mk-multinode-622675
	I0918 20:35:36.447888   44697 main.go:141] libmachine: (multinode-622675) DBG | domain multinode-622675 has defined MAC address 52:54:00:b9:25:86 in network mk-multinode-622675
	I0918 20:35:36.447922   44697 main.go:141] libmachine: (multinode-622675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:25:86", ip: ""} in network mk-multinode-622675: {Iface:virbr1 ExpiryTime:2024-09-18 21:28:42 +0000 UTC Type:0 Mac:52:54:00:b9:25:86 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:multinode-622675 Clientid:01:52:54:00:b9:25:86}
	I0918 20:35:36.447943   44697 main.go:141] libmachine: (multinode-622675) DBG | domain multinode-622675 has defined IP address 192.168.39.106 and MAC address 52:54:00:b9:25:86 in network mk-multinode-622675
	I0918 20:35:36.448081   44697 main.go:141] libmachine: (multinode-622675) Calling .GetSSHPort
	I0918 20:35:36.448232   44697 main.go:141] libmachine: (multinode-622675) Calling .GetSSHKeyPath
	I0918 20:35:36.448373   44697 main.go:141] libmachine: (multinode-622675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:25:86", ip: ""} in network mk-multinode-622675: {Iface:virbr1 ExpiryTime:2024-09-18 21:28:42 +0000 UTC Type:0 Mac:52:54:00:b9:25:86 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:multinode-622675 Clientid:01:52:54:00:b9:25:86}
	I0918 20:35:36.448395   44697 main.go:141] libmachine: (multinode-622675) Calling .GetSSHUsername
	I0918 20:35:36.448397   44697 main.go:141] libmachine: (multinode-622675) DBG | domain multinode-622675 has defined IP address 192.168.39.106 and MAC address 52:54:00:b9:25:86 in network mk-multinode-622675
	I0918 20:35:36.448547   44697 main.go:141] libmachine: (multinode-622675) Calling .GetSSHPort
	I0918 20:35:36.448559   44697 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/multinode-622675/id_rsa Username:docker}
	I0918 20:35:36.448673   44697 main.go:141] libmachine: (multinode-622675) Calling .GetSSHKeyPath
	I0918 20:35:36.448802   44697 main.go:141] libmachine: (multinode-622675) Calling .GetSSHUsername
	I0918 20:35:36.448952   44697 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/multinode-622675/id_rsa Username:docker}
	I0918 20:35:36.561519   44697 command_runner.go:130] > {"iso_version": "v1.34.0-1726481713-19649", "kicbase_version": "v0.0.45-1726358845-19644", "minikube_version": "v1.34.0", "commit": "fcd4ba3dbb1ef408e3a4b79c864df2496ddd3848"}
	I0918 20:35:36.561616   44697 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0918 20:35:36.561699   44697 ssh_runner.go:195] Run: systemctl --version
	I0918 20:35:36.567794   44697 command_runner.go:130] > systemd 252 (252)
	I0918 20:35:36.567843   44697 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0918 20:35:36.567919   44697 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0918 20:35:36.726027   44697 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0918 20:35:36.733249   44697 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0918 20:35:36.733629   44697 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0918 20:35:36.733715   44697 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0918 20:35:36.743239   44697 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0918 20:35:36.743274   44697 start.go:495] detecting cgroup driver to use...
	I0918 20:35:36.743334   44697 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0918 20:35:36.760715   44697 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0918 20:35:36.774927   44697 docker.go:217] disabling cri-docker service (if available) ...
	I0918 20:35:36.775003   44697 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0918 20:35:36.789537   44697 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0918 20:35:36.803881   44697 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0918 20:35:36.948953   44697 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0918 20:35:37.092978   44697 docker.go:233] disabling docker service ...
	I0918 20:35:37.093047   44697 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0918 20:35:37.109546   44697 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0918 20:35:37.122495   44697 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0918 20:35:37.272657   44697 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0918 20:35:37.438662   44697 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0918 20:35:37.453461   44697 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0918 20:35:37.473257   44697 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0918 20:35:37.473303   44697 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0918 20:35:37.473355   44697 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 20:35:37.484093   44697 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0918 20:35:37.484163   44697 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 20:35:37.494635   44697 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 20:35:37.504677   44697 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 20:35:37.514562   44697 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0918 20:35:37.524980   44697 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 20:35:37.535106   44697 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 20:35:37.545930   44697 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 20:35:37.556040   44697 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0918 20:35:37.564982   44697 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0918 20:35:37.565052   44697 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0918 20:35:37.574443   44697 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 20:35:37.719364   44697 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0918 20:35:46.336004   44697 ssh_runner.go:235] Completed: sudo systemctl restart crio: (8.616602624s)
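	Taken together, the sed commands above rewrite the cri-o drop-in at /etc/crio/crio.conf.d/02-crio.conf before the restart that completes here. A minimal sketch of the resulting settings, assuming each edit applied cleanly (the [crio.image] and [crio.runtime] section headers are assumed; the log only shows the individual keys being edited):
	  [crio.image]
	  pause_image = "registry.k8s.io/pause:3.10"
	  [crio.runtime]
	  cgroup_manager = "cgroupfs"
	  conmon_cgroup = "pod"
	  default_sysctls = [
	    "net.ipv4.ip_unprivileged_port_start=0",
	  ]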
	I0918 20:35:46.336059   44697 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0918 20:35:46.336108   44697 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0918 20:35:46.340890   44697 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0918 20:35:46.340928   44697 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0918 20:35:46.340939   44697 command_runner.go:130] > Device: 0,22	Inode: 1297        Links: 1
	I0918 20:35:46.340951   44697 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0918 20:35:46.340960   44697 command_runner.go:130] > Access: 2024-09-18 20:35:46.201222832 +0000
	I0918 20:35:46.340970   44697 command_runner.go:130] > Modify: 2024-09-18 20:35:46.201222832 +0000
	I0918 20:35:46.340980   44697 command_runner.go:130] > Change: 2024-09-18 20:35:46.201222832 +0000
	I0918 20:35:46.340991   44697 command_runner.go:130] >  Birth: -
	I0918 20:35:46.341024   44697 start.go:563] Will wait 60s for crictl version
	I0918 20:35:46.341085   44697 ssh_runner.go:195] Run: which crictl
	I0918 20:35:46.344791   44697 command_runner.go:130] > /usr/bin/crictl
	I0918 20:35:46.344862   44697 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0918 20:35:46.379796   44697 command_runner.go:130] > Version:  0.1.0
	I0918 20:35:46.379819   44697 command_runner.go:130] > RuntimeName:  cri-o
	I0918 20:35:46.379823   44697 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0918 20:35:46.379829   44697 command_runner.go:130] > RuntimeApiVersion:  v1
	I0918 20:35:46.381215   44697 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0918 20:35:46.381319   44697 ssh_runner.go:195] Run: crio --version
	I0918 20:35:46.410014   44697 command_runner.go:130] > crio version 1.29.1
	I0918 20:35:46.410041   44697 command_runner.go:130] > Version:        1.29.1
	I0918 20:35:46.410047   44697 command_runner.go:130] > GitCommit:      unknown
	I0918 20:35:46.410052   44697 command_runner.go:130] > GitCommitDate:  unknown
	I0918 20:35:46.410056   44697 command_runner.go:130] > GitTreeState:   clean
	I0918 20:35:46.410062   44697 command_runner.go:130] > BuildDate:      2024-09-16T15:42:14Z
	I0918 20:35:46.410066   44697 command_runner.go:130] > GoVersion:      go1.21.6
	I0918 20:35:46.410070   44697 command_runner.go:130] > Compiler:       gc
	I0918 20:35:46.410074   44697 command_runner.go:130] > Platform:       linux/amd64
	I0918 20:35:46.410078   44697 command_runner.go:130] > Linkmode:       dynamic
	I0918 20:35:46.410082   44697 command_runner.go:130] > BuildTags:      
	I0918 20:35:46.410086   44697 command_runner.go:130] >   containers_image_ostree_stub
	I0918 20:35:46.410090   44697 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0918 20:35:46.410100   44697 command_runner.go:130] >   btrfs_noversion
	I0918 20:35:46.410105   44697 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0918 20:35:46.410109   44697 command_runner.go:130] >   libdm_no_deferred_remove
	I0918 20:35:46.410112   44697 command_runner.go:130] >   seccomp
	I0918 20:35:46.410117   44697 command_runner.go:130] > LDFlags:          unknown
	I0918 20:35:46.410121   44697 command_runner.go:130] > SeccompEnabled:   true
	I0918 20:35:46.410126   44697 command_runner.go:130] > AppArmorEnabled:  false
	I0918 20:35:46.411269   44697 ssh_runner.go:195] Run: crio --version
	I0918 20:35:46.439806   44697 command_runner.go:130] > crio version 1.29.1
	I0918 20:35:46.439830   44697 command_runner.go:130] > Version:        1.29.1
	I0918 20:35:46.439837   44697 command_runner.go:130] > GitCommit:      unknown
	I0918 20:35:46.439844   44697 command_runner.go:130] > GitCommitDate:  unknown
	I0918 20:35:46.439849   44697 command_runner.go:130] > GitTreeState:   clean
	I0918 20:35:46.439856   44697 command_runner.go:130] > BuildDate:      2024-09-16T15:42:14Z
	I0918 20:35:46.439861   44697 command_runner.go:130] > GoVersion:      go1.21.6
	I0918 20:35:46.439867   44697 command_runner.go:130] > Compiler:       gc
	I0918 20:35:46.439873   44697 command_runner.go:130] > Platform:       linux/amd64
	I0918 20:35:46.439880   44697 command_runner.go:130] > Linkmode:       dynamic
	I0918 20:35:46.439888   44697 command_runner.go:130] > BuildTags:      
	I0918 20:35:46.439895   44697 command_runner.go:130] >   containers_image_ostree_stub
	I0918 20:35:46.439905   44697 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0918 20:35:46.439912   44697 command_runner.go:130] >   btrfs_noversion
	I0918 20:35:46.439923   44697 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0918 20:35:46.439930   44697 command_runner.go:130] >   libdm_no_deferred_remove
	I0918 20:35:46.439940   44697 command_runner.go:130] >   seccomp
	I0918 20:35:46.439947   44697 command_runner.go:130] > LDFlags:          unknown
	I0918 20:35:46.439957   44697 command_runner.go:130] > SeccompEnabled:   true
	I0918 20:35:46.439964   44697 command_runner.go:130] > AppArmorEnabled:  false
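	The same runtime details can be pulled from the node by hand if needed; a hypothetical spot-check against this profile (not something the test itself runs) would be:
	  minikube ssh -p multinode-622675 -- sudo crictl version
	  minikube ssh -p multinode-622675 -- crio --version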
	I0918 20:35:46.442034   44697 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0918 20:35:46.443490   44697 main.go:141] libmachine: (multinode-622675) Calling .GetIP
	I0918 20:35:46.446480   44697 main.go:141] libmachine: (multinode-622675) DBG | domain multinode-622675 has defined MAC address 52:54:00:b9:25:86 in network mk-multinode-622675
	I0918 20:35:46.446818   44697 main.go:141] libmachine: (multinode-622675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:25:86", ip: ""} in network mk-multinode-622675: {Iface:virbr1 ExpiryTime:2024-09-18 21:28:42 +0000 UTC Type:0 Mac:52:54:00:b9:25:86 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:multinode-622675 Clientid:01:52:54:00:b9:25:86}
	I0918 20:35:46.446846   44697 main.go:141] libmachine: (multinode-622675) DBG | domain multinode-622675 has defined IP address 192.168.39.106 and MAC address 52:54:00:b9:25:86 in network mk-multinode-622675
	I0918 20:35:46.447028   44697 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0918 20:35:46.451277   44697 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0918 20:35:46.451398   44697 kubeadm.go:883] updating cluster {Name:multinode-622675 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-622675 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.106 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.224 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.216 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0918 20:35:46.451532   44697 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0918 20:35:46.451573   44697 ssh_runner.go:195] Run: sudo crictl images --output json
	I0918 20:35:46.495802   44697 command_runner.go:130] > {
	I0918 20:35:46.495834   44697 command_runner.go:130] >   "images": [
	I0918 20:35:46.495841   44697 command_runner.go:130] >     {
	I0918 20:35:46.495852   44697 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0918 20:35:46.495859   44697 command_runner.go:130] >       "repoTags": [
	I0918 20:35:46.495869   44697 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0918 20:35:46.495875   44697 command_runner.go:130] >       ],
	I0918 20:35:46.495882   44697 command_runner.go:130] >       "repoDigests": [
	I0918 20:35:46.495895   44697 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0918 20:35:46.495910   44697 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0918 20:35:46.495919   44697 command_runner.go:130] >       ],
	I0918 20:35:46.495926   44697 command_runner.go:130] >       "size": "87190579",
	I0918 20:35:46.495934   44697 command_runner.go:130] >       "uid": null,
	I0918 20:35:46.495941   44697 command_runner.go:130] >       "username": "",
	I0918 20:35:46.495958   44697 command_runner.go:130] >       "spec": null,
	I0918 20:35:46.495964   44697 command_runner.go:130] >       "pinned": false
	I0918 20:35:46.495967   44697 command_runner.go:130] >     },
	I0918 20:35:46.495972   44697 command_runner.go:130] >     {
	I0918 20:35:46.495978   44697 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0918 20:35:46.495984   44697 command_runner.go:130] >       "repoTags": [
	I0918 20:35:46.495990   44697 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0918 20:35:46.495995   44697 command_runner.go:130] >       ],
	I0918 20:35:46.496000   44697 command_runner.go:130] >       "repoDigests": [
	I0918 20:35:46.496006   44697 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0918 20:35:46.496030   44697 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0918 20:35:46.496038   44697 command_runner.go:130] >       ],
	I0918 20:35:46.496044   44697 command_runner.go:130] >       "size": "1363676",
	I0918 20:35:46.496054   44697 command_runner.go:130] >       "uid": null,
	I0918 20:35:46.496067   44697 command_runner.go:130] >       "username": "",
	I0918 20:35:46.496076   44697 command_runner.go:130] >       "spec": null,
	I0918 20:35:46.496086   44697 command_runner.go:130] >       "pinned": false
	I0918 20:35:46.496096   44697 command_runner.go:130] >     },
	I0918 20:35:46.496100   44697 command_runner.go:130] >     {
	I0918 20:35:46.496106   44697 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0918 20:35:46.496111   44697 command_runner.go:130] >       "repoTags": [
	I0918 20:35:46.496116   44697 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0918 20:35:46.496122   44697 command_runner.go:130] >       ],
	I0918 20:35:46.496126   44697 command_runner.go:130] >       "repoDigests": [
	I0918 20:35:46.496133   44697 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0918 20:35:46.496142   44697 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0918 20:35:46.496150   44697 command_runner.go:130] >       ],
	I0918 20:35:46.496155   44697 command_runner.go:130] >       "size": "31470524",
	I0918 20:35:46.496178   44697 command_runner.go:130] >       "uid": null,
	I0918 20:35:46.496188   44697 command_runner.go:130] >       "username": "",
	I0918 20:35:46.496192   44697 command_runner.go:130] >       "spec": null,
	I0918 20:35:46.496195   44697 command_runner.go:130] >       "pinned": false
	I0918 20:35:46.496199   44697 command_runner.go:130] >     },
	I0918 20:35:46.496202   44697 command_runner.go:130] >     {
	I0918 20:35:46.496209   44697 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0918 20:35:46.496215   44697 command_runner.go:130] >       "repoTags": [
	I0918 20:35:46.496220   44697 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0918 20:35:46.496225   44697 command_runner.go:130] >       ],
	I0918 20:35:46.496229   44697 command_runner.go:130] >       "repoDigests": [
	I0918 20:35:46.496235   44697 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I0918 20:35:46.496247   44697 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I0918 20:35:46.496253   44697 command_runner.go:130] >       ],
	I0918 20:35:46.496257   44697 command_runner.go:130] >       "size": "63273227",
	I0918 20:35:46.496263   44697 command_runner.go:130] >       "uid": null,
	I0918 20:35:46.496273   44697 command_runner.go:130] >       "username": "nonroot",
	I0918 20:35:46.496279   44697 command_runner.go:130] >       "spec": null,
	I0918 20:35:46.496283   44697 command_runner.go:130] >       "pinned": false
	I0918 20:35:46.496288   44697 command_runner.go:130] >     },
	I0918 20:35:46.496292   44697 command_runner.go:130] >     {
	I0918 20:35:46.496299   44697 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0918 20:35:46.496305   44697 command_runner.go:130] >       "repoTags": [
	I0918 20:35:46.496310   44697 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0918 20:35:46.496313   44697 command_runner.go:130] >       ],
	I0918 20:35:46.496318   44697 command_runner.go:130] >       "repoDigests": [
	I0918 20:35:46.496326   44697 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0918 20:35:46.496335   44697 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0918 20:35:46.496339   44697 command_runner.go:130] >       ],
	I0918 20:35:46.496343   44697 command_runner.go:130] >       "size": "149009664",
	I0918 20:35:46.496348   44697 command_runner.go:130] >       "uid": {
	I0918 20:35:46.496351   44697 command_runner.go:130] >         "value": "0"
	I0918 20:35:46.496355   44697 command_runner.go:130] >       },
	I0918 20:35:46.496361   44697 command_runner.go:130] >       "username": "",
	I0918 20:35:46.496364   44697 command_runner.go:130] >       "spec": null,
	I0918 20:35:46.496370   44697 command_runner.go:130] >       "pinned": false
	I0918 20:35:46.496376   44697 command_runner.go:130] >     },
	I0918 20:35:46.496379   44697 command_runner.go:130] >     {
	I0918 20:35:46.496385   44697 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I0918 20:35:46.496392   44697 command_runner.go:130] >       "repoTags": [
	I0918 20:35:46.496397   44697 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I0918 20:35:46.496400   44697 command_runner.go:130] >       ],
	I0918 20:35:46.496406   44697 command_runner.go:130] >       "repoDigests": [
	I0918 20:35:46.496413   44697 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I0918 20:35:46.496427   44697 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I0918 20:35:46.496435   44697 command_runner.go:130] >       ],
	I0918 20:35:46.496441   44697 command_runner.go:130] >       "size": "95237600",
	I0918 20:35:46.496449   44697 command_runner.go:130] >       "uid": {
	I0918 20:35:46.496455   44697 command_runner.go:130] >         "value": "0"
	I0918 20:35:46.496464   44697 command_runner.go:130] >       },
	I0918 20:35:46.496470   44697 command_runner.go:130] >       "username": "",
	I0918 20:35:46.496478   44697 command_runner.go:130] >       "spec": null,
	I0918 20:35:46.496484   44697 command_runner.go:130] >       "pinned": false
	I0918 20:35:46.496493   44697 command_runner.go:130] >     },
	I0918 20:35:46.496499   44697 command_runner.go:130] >     {
	I0918 20:35:46.496509   44697 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I0918 20:35:46.496513   44697 command_runner.go:130] >       "repoTags": [
	I0918 20:35:46.496522   44697 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I0918 20:35:46.496526   44697 command_runner.go:130] >       ],
	I0918 20:35:46.496530   44697 command_runner.go:130] >       "repoDigests": [
	I0918 20:35:46.496538   44697 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I0918 20:35:46.496550   44697 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I0918 20:35:46.496555   44697 command_runner.go:130] >       ],
	I0918 20:35:46.496560   44697 command_runner.go:130] >       "size": "89437508",
	I0918 20:35:46.496564   44697 command_runner.go:130] >       "uid": {
	I0918 20:35:46.496568   44697 command_runner.go:130] >         "value": "0"
	I0918 20:35:46.496574   44697 command_runner.go:130] >       },
	I0918 20:35:46.496578   44697 command_runner.go:130] >       "username": "",
	I0918 20:35:46.496581   44697 command_runner.go:130] >       "spec": null,
	I0918 20:35:46.496585   44697 command_runner.go:130] >       "pinned": false
	I0918 20:35:46.496595   44697 command_runner.go:130] >     },
	I0918 20:35:46.496598   44697 command_runner.go:130] >     {
	I0918 20:35:46.496604   44697 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I0918 20:35:46.496608   44697 command_runner.go:130] >       "repoTags": [
	I0918 20:35:46.496613   44697 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I0918 20:35:46.496616   44697 command_runner.go:130] >       ],
	I0918 20:35:46.496620   44697 command_runner.go:130] >       "repoDigests": [
	I0918 20:35:46.496633   44697 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I0918 20:35:46.496642   44697 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I0918 20:35:46.496645   44697 command_runner.go:130] >       ],
	I0918 20:35:46.496650   44697 command_runner.go:130] >       "size": "92733849",
	I0918 20:35:46.496654   44697 command_runner.go:130] >       "uid": null,
	I0918 20:35:46.496658   44697 command_runner.go:130] >       "username": "",
	I0918 20:35:46.496662   44697 command_runner.go:130] >       "spec": null,
	I0918 20:35:46.496665   44697 command_runner.go:130] >       "pinned": false
	I0918 20:35:46.496669   44697 command_runner.go:130] >     },
	I0918 20:35:46.496673   44697 command_runner.go:130] >     {
	I0918 20:35:46.496679   44697 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I0918 20:35:46.496683   44697 command_runner.go:130] >       "repoTags": [
	I0918 20:35:46.496687   44697 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I0918 20:35:46.496690   44697 command_runner.go:130] >       ],
	I0918 20:35:46.496695   44697 command_runner.go:130] >       "repoDigests": [
	I0918 20:35:46.496707   44697 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I0918 20:35:46.496717   44697 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I0918 20:35:46.496720   44697 command_runner.go:130] >       ],
	I0918 20:35:46.496724   44697 command_runner.go:130] >       "size": "68420934",
	I0918 20:35:46.496728   44697 command_runner.go:130] >       "uid": {
	I0918 20:35:46.496731   44697 command_runner.go:130] >         "value": "0"
	I0918 20:35:46.496735   44697 command_runner.go:130] >       },
	I0918 20:35:46.496739   44697 command_runner.go:130] >       "username": "",
	I0918 20:35:46.496743   44697 command_runner.go:130] >       "spec": null,
	I0918 20:35:46.496746   44697 command_runner.go:130] >       "pinned": false
	I0918 20:35:46.496750   44697 command_runner.go:130] >     },
	I0918 20:35:46.496754   44697 command_runner.go:130] >     {
	I0918 20:35:46.496760   44697 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0918 20:35:46.496766   44697 command_runner.go:130] >       "repoTags": [
	I0918 20:35:46.496770   44697 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0918 20:35:46.496773   44697 command_runner.go:130] >       ],
	I0918 20:35:46.496779   44697 command_runner.go:130] >       "repoDigests": [
	I0918 20:35:46.496791   44697 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0918 20:35:46.496805   44697 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0918 20:35:46.496813   44697 command_runner.go:130] >       ],
	I0918 20:35:46.496819   44697 command_runner.go:130] >       "size": "742080",
	I0918 20:35:46.496826   44697 command_runner.go:130] >       "uid": {
	I0918 20:35:46.496832   44697 command_runner.go:130] >         "value": "65535"
	I0918 20:35:46.496841   44697 command_runner.go:130] >       },
	I0918 20:35:46.496847   44697 command_runner.go:130] >       "username": "",
	I0918 20:35:46.496854   44697 command_runner.go:130] >       "spec": null,
	I0918 20:35:46.496858   44697 command_runner.go:130] >       "pinned": true
	I0918 20:35:46.496864   44697 command_runner.go:130] >     }
	I0918 20:35:46.496868   44697 command_runner.go:130] >   ]
	I0918 20:35:46.496871   44697 command_runner.go:130] > }
	I0918 20:35:46.497027   44697 crio.go:514] all images are preloaded for cri-o runtime.
	I0918 20:35:46.497038   44697 crio.go:433] Images already preloaded, skipping extraction
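Editor's note: the preload check above (crio.go:514 / crio.go:433) concludes from the `crictl images --output json` output that every required image is already present. As a rough illustration of that kind of check — a hedged sketch against the JSON shape shown above, not minikube's actual crio.go logic — one could decode the output and test for required repo tags:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// imageList mirrors the relevant part of `crictl images --output json`
// as seen in the log above; only repoTags is needed for this check.
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	// Hypothetical required set; minikube derives its own list per Kubernetes version.
	required := map[string]bool{
		"registry.k8s.io/kube-apiserver:v1.31.1": false,
		"registry.k8s.io/etcd:3.5.15-0":          false,
		"registry.k8s.io/pause:3.10":             false,
	}

	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		panic(err)
	}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			if _, ok := required[tag]; ok {
				required[tag] = true
			}
		}
	}
	for tag, found := range required {
		if !found {
			fmt.Println("missing:", tag)
		}
	}
}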
	I0918 20:35:46.497090   44697 ssh_runner.go:195] Run: sudo crictl images --output json
	I0918 20:35:46.531830   44697 command_runner.go:130] > {
	I0918 20:35:46.531856   44697 command_runner.go:130] >   "images": [
	I0918 20:35:46.531861   44697 command_runner.go:130] >     {
	I0918 20:35:46.531868   44697 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0918 20:35:46.531872   44697 command_runner.go:130] >       "repoTags": [
	I0918 20:35:46.531879   44697 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0918 20:35:46.531883   44697 command_runner.go:130] >       ],
	I0918 20:35:46.531886   44697 command_runner.go:130] >       "repoDigests": [
	I0918 20:35:46.531894   44697 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0918 20:35:46.531901   44697 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0918 20:35:46.531905   44697 command_runner.go:130] >       ],
	I0918 20:35:46.531913   44697 command_runner.go:130] >       "size": "87190579",
	I0918 20:35:46.531920   44697 command_runner.go:130] >       "uid": null,
	I0918 20:35:46.531925   44697 command_runner.go:130] >       "username": "",
	I0918 20:35:46.531956   44697 command_runner.go:130] >       "spec": null,
	I0918 20:35:46.531963   44697 command_runner.go:130] >       "pinned": false
	I0918 20:35:46.531969   44697 command_runner.go:130] >     },
	I0918 20:35:46.531976   44697 command_runner.go:130] >     {
	I0918 20:35:46.531985   44697 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0918 20:35:46.531993   44697 command_runner.go:130] >       "repoTags": [
	I0918 20:35:46.532001   44697 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0918 20:35:46.532010   44697 command_runner.go:130] >       ],
	I0918 20:35:46.532037   44697 command_runner.go:130] >       "repoDigests": [
	I0918 20:35:46.532045   44697 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0918 20:35:46.532055   44697 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0918 20:35:46.532058   44697 command_runner.go:130] >       ],
	I0918 20:35:46.532063   44697 command_runner.go:130] >       "size": "1363676",
	I0918 20:35:46.532066   44697 command_runner.go:130] >       "uid": null,
	I0918 20:35:46.532074   44697 command_runner.go:130] >       "username": "",
	I0918 20:35:46.532079   44697 command_runner.go:130] >       "spec": null,
	I0918 20:35:46.532083   44697 command_runner.go:130] >       "pinned": false
	I0918 20:35:46.532089   44697 command_runner.go:130] >     },
	I0918 20:35:46.532095   44697 command_runner.go:130] >     {
	I0918 20:35:46.532103   44697 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0918 20:35:46.532109   44697 command_runner.go:130] >       "repoTags": [
	I0918 20:35:46.532114   44697 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0918 20:35:46.532120   44697 command_runner.go:130] >       ],
	I0918 20:35:46.532124   44697 command_runner.go:130] >       "repoDigests": [
	I0918 20:35:46.532132   44697 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0918 20:35:46.532141   44697 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0918 20:35:46.532145   44697 command_runner.go:130] >       ],
	I0918 20:35:46.532149   44697 command_runner.go:130] >       "size": "31470524",
	I0918 20:35:46.532154   44697 command_runner.go:130] >       "uid": null,
	I0918 20:35:46.532158   44697 command_runner.go:130] >       "username": "",
	I0918 20:35:46.532163   44697 command_runner.go:130] >       "spec": null,
	I0918 20:35:46.532169   44697 command_runner.go:130] >       "pinned": false
	I0918 20:35:46.532172   44697 command_runner.go:130] >     },
	I0918 20:35:46.532175   44697 command_runner.go:130] >     {
	I0918 20:35:46.532183   44697 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0918 20:35:46.532187   44697 command_runner.go:130] >       "repoTags": [
	I0918 20:35:46.532194   44697 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0918 20:35:46.532198   44697 command_runner.go:130] >       ],
	I0918 20:35:46.532204   44697 command_runner.go:130] >       "repoDigests": [
	I0918 20:35:46.532212   44697 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I0918 20:35:46.532224   44697 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I0918 20:35:46.532229   44697 command_runner.go:130] >       ],
	I0918 20:35:46.532233   44697 command_runner.go:130] >       "size": "63273227",
	I0918 20:35:46.532237   44697 command_runner.go:130] >       "uid": null,
	I0918 20:35:46.532245   44697 command_runner.go:130] >       "username": "nonroot",
	I0918 20:35:46.532251   44697 command_runner.go:130] >       "spec": null,
	I0918 20:35:46.532256   44697 command_runner.go:130] >       "pinned": false
	I0918 20:35:46.532260   44697 command_runner.go:130] >     },
	I0918 20:35:46.532263   44697 command_runner.go:130] >     {
	I0918 20:35:46.532270   44697 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0918 20:35:46.532274   44697 command_runner.go:130] >       "repoTags": [
	I0918 20:35:46.532282   44697 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0918 20:35:46.532290   44697 command_runner.go:130] >       ],
	I0918 20:35:46.532294   44697 command_runner.go:130] >       "repoDigests": [
	I0918 20:35:46.532303   44697 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0918 20:35:46.532312   44697 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0918 20:35:46.532322   44697 command_runner.go:130] >       ],
	I0918 20:35:46.532327   44697 command_runner.go:130] >       "size": "149009664",
	I0918 20:35:46.532330   44697 command_runner.go:130] >       "uid": {
	I0918 20:35:46.532334   44697 command_runner.go:130] >         "value": "0"
	I0918 20:35:46.532338   44697 command_runner.go:130] >       },
	I0918 20:35:46.532342   44697 command_runner.go:130] >       "username": "",
	I0918 20:35:46.532346   44697 command_runner.go:130] >       "spec": null,
	I0918 20:35:46.532351   44697 command_runner.go:130] >       "pinned": false
	I0918 20:35:46.532354   44697 command_runner.go:130] >     },
	I0918 20:35:46.532357   44697 command_runner.go:130] >     {
	I0918 20:35:46.532363   44697 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I0918 20:35:46.532370   44697 command_runner.go:130] >       "repoTags": [
	I0918 20:35:46.532376   44697 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I0918 20:35:46.532380   44697 command_runner.go:130] >       ],
	I0918 20:35:46.532384   44697 command_runner.go:130] >       "repoDigests": [
	I0918 20:35:46.532393   44697 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I0918 20:35:46.532400   44697 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I0918 20:35:46.532406   44697 command_runner.go:130] >       ],
	I0918 20:35:46.532410   44697 command_runner.go:130] >       "size": "95237600",
	I0918 20:35:46.532417   44697 command_runner.go:130] >       "uid": {
	I0918 20:35:46.532423   44697 command_runner.go:130] >         "value": "0"
	I0918 20:35:46.532429   44697 command_runner.go:130] >       },
	I0918 20:35:46.532435   44697 command_runner.go:130] >       "username": "",
	I0918 20:35:46.532444   44697 command_runner.go:130] >       "spec": null,
	I0918 20:35:46.532451   44697 command_runner.go:130] >       "pinned": false
	I0918 20:35:46.532458   44697 command_runner.go:130] >     },
	I0918 20:35:46.532463   44697 command_runner.go:130] >     {
	I0918 20:35:46.532475   44697 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I0918 20:35:46.532484   44697 command_runner.go:130] >       "repoTags": [
	I0918 20:35:46.532495   44697 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I0918 20:35:46.532504   44697 command_runner.go:130] >       ],
	I0918 20:35:46.532515   44697 command_runner.go:130] >       "repoDigests": [
	I0918 20:35:46.532524   44697 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I0918 20:35:46.532532   44697 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I0918 20:35:46.532541   44697 command_runner.go:130] >       ],
	I0918 20:35:46.532545   44697 command_runner.go:130] >       "size": "89437508",
	I0918 20:35:46.532548   44697 command_runner.go:130] >       "uid": {
	I0918 20:35:46.532552   44697 command_runner.go:130] >         "value": "0"
	I0918 20:35:46.532558   44697 command_runner.go:130] >       },
	I0918 20:35:46.532562   44697 command_runner.go:130] >       "username": "",
	I0918 20:35:46.532569   44697 command_runner.go:130] >       "spec": null,
	I0918 20:35:46.532576   44697 command_runner.go:130] >       "pinned": false
	I0918 20:35:46.532580   44697 command_runner.go:130] >     },
	I0918 20:35:46.532583   44697 command_runner.go:130] >     {
	I0918 20:35:46.532589   44697 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I0918 20:35:46.532596   44697 command_runner.go:130] >       "repoTags": [
	I0918 20:35:46.532601   44697 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I0918 20:35:46.532604   44697 command_runner.go:130] >       ],
	I0918 20:35:46.532609   44697 command_runner.go:130] >       "repoDigests": [
	I0918 20:35:46.532624   44697 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I0918 20:35:46.532634   44697 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I0918 20:35:46.532637   44697 command_runner.go:130] >       ],
	I0918 20:35:46.532641   44697 command_runner.go:130] >       "size": "92733849",
	I0918 20:35:46.532647   44697 command_runner.go:130] >       "uid": null,
	I0918 20:35:46.532651   44697 command_runner.go:130] >       "username": "",
	I0918 20:35:46.532658   44697 command_runner.go:130] >       "spec": null,
	I0918 20:35:46.532662   44697 command_runner.go:130] >       "pinned": false
	I0918 20:35:46.532665   44697 command_runner.go:130] >     },
	I0918 20:35:46.532669   44697 command_runner.go:130] >     {
	I0918 20:35:46.532676   44697 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I0918 20:35:46.532682   44697 command_runner.go:130] >       "repoTags": [
	I0918 20:35:46.532686   44697 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I0918 20:35:46.532690   44697 command_runner.go:130] >       ],
	I0918 20:35:46.532694   44697 command_runner.go:130] >       "repoDigests": [
	I0918 20:35:46.532702   44697 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I0918 20:35:46.532711   44697 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I0918 20:35:46.532714   44697 command_runner.go:130] >       ],
	I0918 20:35:46.532718   44697 command_runner.go:130] >       "size": "68420934",
	I0918 20:35:46.532724   44697 command_runner.go:130] >       "uid": {
	I0918 20:35:46.532727   44697 command_runner.go:130] >         "value": "0"
	I0918 20:35:46.532731   44697 command_runner.go:130] >       },
	I0918 20:35:46.532735   44697 command_runner.go:130] >       "username": "",
	I0918 20:35:46.532739   44697 command_runner.go:130] >       "spec": null,
	I0918 20:35:46.532744   44697 command_runner.go:130] >       "pinned": false
	I0918 20:35:46.532747   44697 command_runner.go:130] >     },
	I0918 20:35:46.532751   44697 command_runner.go:130] >     {
	I0918 20:35:46.532757   44697 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0918 20:35:46.532763   44697 command_runner.go:130] >       "repoTags": [
	I0918 20:35:46.532767   44697 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0918 20:35:46.532770   44697 command_runner.go:130] >       ],
	I0918 20:35:46.532775   44697 command_runner.go:130] >       "repoDigests": [
	I0918 20:35:46.532781   44697 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0918 20:35:46.532793   44697 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0918 20:35:46.532796   44697 command_runner.go:130] >       ],
	I0918 20:35:46.532800   44697 command_runner.go:130] >       "size": "742080",
	I0918 20:35:46.532804   44697 command_runner.go:130] >       "uid": {
	I0918 20:35:46.532808   44697 command_runner.go:130] >         "value": "65535"
	I0918 20:35:46.532811   44697 command_runner.go:130] >       },
	I0918 20:35:46.532815   44697 command_runner.go:130] >       "username": "",
	I0918 20:35:46.532821   44697 command_runner.go:130] >       "spec": null,
	I0918 20:35:46.532825   44697 command_runner.go:130] >       "pinned": true
	I0918 20:35:46.532828   44697 command_runner.go:130] >     }
	I0918 20:35:46.532834   44697 command_runner.go:130] >   ]
	I0918 20:35:46.532837   44697 command_runner.go:130] > }
	I0918 20:35:46.532948   44697 crio.go:514] all images are preloaded for cri-o runtime.
	I0918 20:35:46.532960   44697 cache_images.go:84] Images are preloaded, skipping loading
	I0918 20:35:46.532967   44697 kubeadm.go:934] updating node { 192.168.39.106 8443 v1.31.1 crio true true} ...
	I0918 20:35:46.533060   44697 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-622675 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.106
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:multinode-622675 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
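Editor's note: the kubelet unit fragment above (kubeadm.go:946) is rendered from the node and cluster config and written to the machine as a systemd drop-in. A minimal sketch of that kind of templating with Go's text/template — the template text and the kubeletOpts type here are illustrative, not minikube's actual kubeadm template:

package main

import (
	"os"
	"text/template"
)

// kubeletOpts holds the values substituted into the drop-in; an
// illustrative type, not minikube's own.
type kubeletOpts struct {
	KubeletPath string
	Hostname    string
	NodeIP      string
}

const dropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart={{.KubeletPath}} --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.Hostname}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	tmpl := template.Must(template.New("kubelet").Parse(dropIn))
	opts := kubeletOpts{
		KubeletPath: "/var/lib/minikube/binaries/v1.31.1/kubelet",
		Hostname:    "multinode-622675",
		NodeIP:      "192.168.39.106",
	}
	// Render the drop-in to stdout; minikube writes it onto the node instead.
	if err := tmpl.Execute(os.Stdout, opts); err != nil {
		panic(err)
	}
}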
	I0918 20:35:46.533120   44697 ssh_runner.go:195] Run: crio config
	I0918 20:35:46.570896   44697 command_runner.go:130] ! time="2024-09-18 20:35:46.544989453Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0918 20:35:46.576645   44697 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0918 20:35:46.590735   44697 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0918 20:35:46.590758   44697 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0918 20:35:46.590767   44697 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0918 20:35:46.590771   44697 command_runner.go:130] > #
	I0918 20:35:46.590778   44697 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0918 20:35:46.590783   44697 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0918 20:35:46.590790   44697 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0918 20:35:46.590820   44697 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0918 20:35:46.590830   44697 command_runner.go:130] > # reload'.
	I0918 20:35:46.590838   44697 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0918 20:35:46.590847   44697 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0918 20:35:46.590858   44697 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0918 20:35:46.590865   44697 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0918 20:35:46.590873   44697 command_runner.go:130] > [crio]
	I0918 20:35:46.590879   44697 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0918 20:35:46.590886   44697 command_runner.go:130] > # containers images, in this directory.
	I0918 20:35:46.590891   44697 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0918 20:35:46.590900   44697 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0918 20:35:46.590906   44697 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0918 20:35:46.590914   44697 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0918 20:35:46.590923   44697 command_runner.go:130] > # imagestore = ""
	I0918 20:35:46.590935   44697 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0918 20:35:46.590947   44697 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0918 20:35:46.590956   44697 command_runner.go:130] > storage_driver = "overlay"
	I0918 20:35:46.590964   44697 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0918 20:35:46.590972   44697 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0918 20:35:46.590976   44697 command_runner.go:130] > storage_option = [
	I0918 20:35:46.590980   44697 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0918 20:35:46.590983   44697 command_runner.go:130] > ]
	I0918 20:35:46.590990   44697 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0918 20:35:46.590997   44697 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0918 20:35:46.591001   44697 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0918 20:35:46.591009   44697 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0918 20:35:46.591015   44697 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0918 20:35:46.591021   44697 command_runner.go:130] > # always happen on a node reboot
	I0918 20:35:46.591026   44697 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0918 20:35:46.591037   44697 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0918 20:35:46.591043   44697 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0918 20:35:46.591048   44697 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0918 20:35:46.591053   44697 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0918 20:35:46.591063   44697 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0918 20:35:46.591070   44697 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0918 20:35:46.591076   44697 command_runner.go:130] > # internal_wipe = true
	I0918 20:35:46.591084   44697 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0918 20:35:46.591091   44697 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0918 20:35:46.591095   44697 command_runner.go:130] > # internal_repair = false
	I0918 20:35:46.591102   44697 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0918 20:35:46.591108   44697 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0918 20:35:46.591116   44697 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0918 20:35:46.591121   44697 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0918 20:35:46.591132   44697 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0918 20:35:46.591135   44697 command_runner.go:130] > [crio.api]
	I0918 20:35:46.591140   44697 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0918 20:35:46.591144   44697 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0918 20:35:46.591151   44697 command_runner.go:130] > # IP address on which the stream server will listen.
	I0918 20:35:46.591155   44697 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0918 20:35:46.591161   44697 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0918 20:35:46.591167   44697 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0918 20:35:46.591171   44697 command_runner.go:130] > # stream_port = "0"
	I0918 20:35:46.591177   44697 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0918 20:35:46.591181   44697 command_runner.go:130] > # stream_enable_tls = false
	I0918 20:35:46.591187   44697 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0918 20:35:46.591191   44697 command_runner.go:130] > # stream_idle_timeout = ""
	I0918 20:35:46.591198   44697 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0918 20:35:46.591206   44697 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0918 20:35:46.591210   44697 command_runner.go:130] > # minutes.
	I0918 20:35:46.591215   44697 command_runner.go:130] > # stream_tls_cert = ""
	I0918 20:35:46.591221   44697 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0918 20:35:46.591229   44697 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0918 20:35:46.591233   44697 command_runner.go:130] > # stream_tls_key = ""
	I0918 20:35:46.591241   44697 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0918 20:35:46.591247   44697 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0918 20:35:46.591264   44697 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0918 20:35:46.591282   44697 command_runner.go:130] > # stream_tls_ca = ""
	I0918 20:35:46.591291   44697 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0918 20:35:46.591297   44697 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0918 20:35:46.591307   44697 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0918 20:35:46.591313   44697 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0918 20:35:46.591319   44697 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0918 20:35:46.591326   44697 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0918 20:35:46.591330   44697 command_runner.go:130] > [crio.runtime]
	I0918 20:35:46.591336   44697 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0918 20:35:46.591343   44697 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0918 20:35:46.591347   44697 command_runner.go:130] > # "nofile=1024:2048"
	I0918 20:35:46.591353   44697 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0918 20:35:46.591359   44697 command_runner.go:130] > # default_ulimits = [
	I0918 20:35:46.591362   44697 command_runner.go:130] > # ]
	I0918 20:35:46.591368   44697 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0918 20:35:46.591373   44697 command_runner.go:130] > # no_pivot = false
	I0918 20:35:46.591381   44697 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0918 20:35:46.591389   44697 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0918 20:35:46.591394   44697 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0918 20:35:46.591402   44697 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0918 20:35:46.591407   44697 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0918 20:35:46.591415   44697 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0918 20:35:46.591419   44697 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0918 20:35:46.591425   44697 command_runner.go:130] > # Cgroup setting for conmon
	I0918 20:35:46.591432   44697 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0918 20:35:46.591436   44697 command_runner.go:130] > conmon_cgroup = "pod"
	I0918 20:35:46.591441   44697 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0918 20:35:46.591449   44697 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0918 20:35:46.591454   44697 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0918 20:35:46.591459   44697 command_runner.go:130] > conmon_env = [
	I0918 20:35:46.591464   44697 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0918 20:35:46.591468   44697 command_runner.go:130] > ]
	I0918 20:35:46.591473   44697 command_runner.go:130] > # Additional environment variables to set for all the
	I0918 20:35:46.591480   44697 command_runner.go:130] > # containers. These are overridden if set in the
	I0918 20:35:46.591486   44697 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0918 20:35:46.591492   44697 command_runner.go:130] > # default_env = [
	I0918 20:35:46.591495   44697 command_runner.go:130] > # ]
	I0918 20:35:46.591500   44697 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0918 20:35:46.591509   44697 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0918 20:35:46.591513   44697 command_runner.go:130] > # selinux = false
	I0918 20:35:46.591530   44697 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0918 20:35:46.591542   44697 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0918 20:35:46.591551   44697 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0918 20:35:46.591558   44697 command_runner.go:130] > # seccomp_profile = ""
	I0918 20:35:46.591564   44697 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0918 20:35:46.591572   44697 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0918 20:35:46.591578   44697 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0918 20:35:46.591585   44697 command_runner.go:130] > # which might increase security.
	I0918 20:35:46.591589   44697 command_runner.go:130] > # This option is currently deprecated,
	I0918 20:35:46.591599   44697 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0918 20:35:46.591603   44697 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0918 20:35:46.591609   44697 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0918 20:35:46.591618   44697 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0918 20:35:46.591634   44697 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0918 20:35:46.591648   44697 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0918 20:35:46.591656   44697 command_runner.go:130] > # This option supports live configuration reload.
	I0918 20:35:46.591660   44697 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0918 20:35:46.591667   44697 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0918 20:35:46.591671   44697 command_runner.go:130] > # the cgroup blockio controller.
	I0918 20:35:46.591677   44697 command_runner.go:130] > # blockio_config_file = ""
	I0918 20:35:46.591684   44697 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0918 20:35:46.591690   44697 command_runner.go:130] > # blockio parameters.
	I0918 20:35:46.591697   44697 command_runner.go:130] > # blockio_reload = false
	I0918 20:35:46.591709   44697 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0918 20:35:46.591719   44697 command_runner.go:130] > # irqbalance daemon.
	I0918 20:35:46.591729   44697 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0918 20:35:46.591742   44697 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0918 20:35:46.591755   44697 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0918 20:35:46.591764   44697 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0918 20:35:46.591772   44697 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0918 20:35:46.591779   44697 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0918 20:35:46.591787   44697 command_runner.go:130] > # This option supports live configuration reload.
	I0918 20:35:46.591790   44697 command_runner.go:130] > # rdt_config_file = ""
	I0918 20:35:46.591795   44697 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0918 20:35:46.591807   44697 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0918 20:35:46.591827   44697 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0918 20:35:46.591837   44697 command_runner.go:130] > # separate_pull_cgroup = ""
	I0918 20:35:46.591847   44697 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0918 20:35:46.591860   44697 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0918 20:35:46.591867   44697 command_runner.go:130] > # will be added.
	I0918 20:35:46.591876   44697 command_runner.go:130] > # default_capabilities = [
	I0918 20:35:46.591882   44697 command_runner.go:130] > # 	"CHOWN",
	I0918 20:35:46.591891   44697 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0918 20:35:46.591897   44697 command_runner.go:130] > # 	"FSETID",
	I0918 20:35:46.591901   44697 command_runner.go:130] > # 	"FOWNER",
	I0918 20:35:46.591905   44697 command_runner.go:130] > # 	"SETGID",
	I0918 20:35:46.591910   44697 command_runner.go:130] > # 	"SETUID",
	I0918 20:35:46.591914   44697 command_runner.go:130] > # 	"SETPCAP",
	I0918 20:35:46.591921   44697 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0918 20:35:46.591928   44697 command_runner.go:130] > # 	"KILL",
	I0918 20:35:46.591937   44697 command_runner.go:130] > # ]
	I0918 20:35:46.591949   44697 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0918 20:35:46.591962   44697 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0918 20:35:46.591973   44697 command_runner.go:130] > # add_inheritable_capabilities = false
	I0918 20:35:46.591986   44697 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0918 20:35:46.591998   44697 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0918 20:35:46.592005   44697 command_runner.go:130] > default_sysctls = [
	I0918 20:35:46.592011   44697 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0918 20:35:46.592032   44697 command_runner.go:130] > ]
	I0918 20:35:46.592042   44697 command_runner.go:130] > # List of devices on the host that a
	I0918 20:35:46.592055   44697 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0918 20:35:46.592064   44697 command_runner.go:130] > # allowed_devices = [
	I0918 20:35:46.592071   44697 command_runner.go:130] > # 	"/dev/fuse",
	I0918 20:35:46.592084   44697 command_runner.go:130] > # ]
	I0918 20:35:46.592094   44697 command_runner.go:130] > # List of additional devices. specified as
	I0918 20:35:46.592109   44697 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0918 20:35:46.592118   44697 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0918 20:35:46.592127   44697 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0918 20:35:46.592137   44697 command_runner.go:130] > # additional_devices = [
	I0918 20:35:46.592145   44697 command_runner.go:130] > # ]
	I0918 20:35:46.592157   44697 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0918 20:35:46.592164   44697 command_runner.go:130] > # cdi_spec_dirs = [
	I0918 20:35:46.592173   44697 command_runner.go:130] > # 	"/etc/cdi",
	I0918 20:35:46.592179   44697 command_runner.go:130] > # 	"/var/run/cdi",
	I0918 20:35:46.592187   44697 command_runner.go:130] > # ]
	I0918 20:35:46.592198   44697 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0918 20:35:46.592208   44697 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0918 20:35:46.592212   44697 command_runner.go:130] > # Defaults to false.
	I0918 20:35:46.592217   44697 command_runner.go:130] > # device_ownership_from_security_context = false
	I0918 20:35:46.592231   44697 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0918 20:35:46.592245   44697 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0918 20:35:46.592250   44697 command_runner.go:130] > # hooks_dir = [
	I0918 20:35:46.592258   44697 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0918 20:35:46.592270   44697 command_runner.go:130] > # ]
	I0918 20:35:46.592282   44697 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0918 20:35:46.592295   44697 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0918 20:35:46.592307   44697 command_runner.go:130] > # its default mounts from the following two files:
	I0918 20:35:46.592314   44697 command_runner.go:130] > #
	I0918 20:35:46.592320   44697 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0918 20:35:46.592333   44697 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0918 20:35:46.592345   44697 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0918 20:35:46.592351   44697 command_runner.go:130] > #
	I0918 20:35:46.592365   44697 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0918 20:35:46.592378   44697 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0918 20:35:46.592394   44697 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0918 20:35:46.592405   44697 command_runner.go:130] > #      only add mounts it finds in this file.
	I0918 20:35:46.592411   44697 command_runner.go:130] > #
	I0918 20:35:46.592416   44697 command_runner.go:130] > # default_mounts_file = ""
	I0918 20:35:46.592424   44697 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0918 20:35:46.592433   44697 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0918 20:35:46.592443   44697 command_runner.go:130] > pids_limit = 1024
	I0918 20:35:46.592454   44697 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0918 20:35:46.592466   44697 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0918 20:35:46.592477   44697 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0918 20:35:46.592492   44697 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0918 20:35:46.592503   44697 command_runner.go:130] > # log_size_max = -1
	I0918 20:35:46.592513   44697 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0918 20:35:46.592519   44697 command_runner.go:130] > # log_to_journald = false
	I0918 20:35:46.592531   44697 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0918 20:35:46.592543   44697 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0918 20:35:46.592551   44697 command_runner.go:130] > # Path to directory for container attach sockets.
	I0918 20:35:46.592563   44697 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0918 20:35:46.592574   44697 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0918 20:35:46.592583   44697 command_runner.go:130] > # bind_mount_prefix = ""
	I0918 20:35:46.592591   44697 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0918 20:35:46.592600   44697 command_runner.go:130] > # read_only = false
	I0918 20:35:46.592609   44697 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0918 20:35:46.592618   44697 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0918 20:35:46.592624   44697 command_runner.go:130] > # live configuration reload.
	I0918 20:35:46.592633   44697 command_runner.go:130] > # log_level = "info"
	I0918 20:35:46.592642   44697 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0918 20:35:46.592654   44697 command_runner.go:130] > # This option supports live configuration reload.
	I0918 20:35:46.592661   44697 command_runner.go:130] > # log_filter = ""
	I0918 20:35:46.592674   44697 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0918 20:35:46.592687   44697 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0918 20:35:46.592697   44697 command_runner.go:130] > # separated by comma.
	I0918 20:35:46.592708   44697 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0918 20:35:46.592716   44697 command_runner.go:130] > # uid_mappings = ""
	I0918 20:35:46.592726   44697 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0918 20:35:46.592739   44697 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0918 20:35:46.592745   44697 command_runner.go:130] > # separated by comma.
	I0918 20:35:46.592760   44697 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0918 20:35:46.592770   44697 command_runner.go:130] > # gid_mappings = ""
	I0918 20:35:46.592783   44697 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0918 20:35:46.592798   44697 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0918 20:35:46.592808   44697 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0918 20:35:46.592817   44697 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0918 20:35:46.592827   44697 command_runner.go:130] > # minimum_mappable_uid = -1
	I0918 20:35:46.592837   44697 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0918 20:35:46.592851   44697 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0918 20:35:46.592862   44697 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0918 20:35:46.592879   44697 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0918 20:35:46.592889   44697 command_runner.go:130] > # minimum_mappable_gid = -1
	I0918 20:35:46.592899   44697 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0918 20:35:46.592907   44697 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0918 20:35:46.592915   44697 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0918 20:35:46.592926   44697 command_runner.go:130] > # ctr_stop_timeout = 30
	I0918 20:35:46.592936   44697 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0918 20:35:46.592948   44697 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0918 20:35:46.592955   44697 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0918 20:35:46.592972   44697 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0918 20:35:46.592982   44697 command_runner.go:130] > drop_infra_ctr = false
	I0918 20:35:46.592991   44697 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0918 20:35:46.593000   44697 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0918 20:35:46.593016   44697 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0918 20:35:46.593026   44697 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0918 20:35:46.593037   44697 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0918 20:35:46.593049   44697 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0918 20:35:46.593062   44697 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0918 20:35:46.593074   44697 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0918 20:35:46.593083   44697 command_runner.go:130] > # shared_cpuset = ""
	I0918 20:35:46.593092   44697 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0918 20:35:46.593100   44697 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0918 20:35:46.593106   44697 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0918 20:35:46.593126   44697 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0918 20:35:46.593134   44697 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0918 20:35:46.593143   44697 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0918 20:35:46.593156   44697 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0918 20:35:46.593166   44697 command_runner.go:130] > # enable_criu_support = false
	I0918 20:35:46.593174   44697 command_runner.go:130] > # Enable/disable the generation of the container,
	I0918 20:35:46.593187   44697 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0918 20:35:46.593194   44697 command_runner.go:130] > # enable_pod_events = false
	I0918 20:35:46.593207   44697 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0918 20:35:46.593226   44697 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0918 20:35:46.593234   44697 command_runner.go:130] > # default_runtime = "runc"
	I0918 20:35:46.593246   44697 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0918 20:35:46.593258   44697 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0918 20:35:46.593278   44697 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0918 20:35:46.593289   44697 command_runner.go:130] > # creation as a file is not desired either.
	I0918 20:35:46.593305   44697 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0918 20:35:46.593314   44697 command_runner.go:130] > # the hostname is being managed dynamically.
	I0918 20:35:46.593318   44697 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0918 20:35:46.593322   44697 command_runner.go:130] > # ]
	I0918 20:35:46.593331   44697 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0918 20:35:46.593344   44697 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0918 20:35:46.593354   44697 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0918 20:35:46.593366   44697 command_runner.go:130] > # Each entry in the table should follow the format:
	I0918 20:35:46.593374   44697 command_runner.go:130] > #
	I0918 20:35:46.593382   44697 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0918 20:35:46.593392   44697 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0918 20:35:46.593418   44697 command_runner.go:130] > # runtime_type = "oci"
	I0918 20:35:46.593428   44697 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0918 20:35:46.593436   44697 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0918 20:35:46.593446   44697 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0918 20:35:46.593454   44697 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0918 20:35:46.593462   44697 command_runner.go:130] > # monitor_env = []
	I0918 20:35:46.593472   44697 command_runner.go:130] > # privileged_without_host_devices = false
	I0918 20:35:46.593482   44697 command_runner.go:130] > # allowed_annotations = []
	I0918 20:35:46.593493   44697 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0918 20:35:46.593499   44697 command_runner.go:130] > # Where:
	I0918 20:35:46.593505   44697 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0918 20:35:46.593517   44697 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0918 20:35:46.593530   44697 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0918 20:35:46.593544   44697 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0918 20:35:46.593553   44697 command_runner.go:130] > #   in $PATH.
	I0918 20:35:46.593563   44697 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0918 20:35:46.593574   44697 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0918 20:35:46.593586   44697 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0918 20:35:46.593594   44697 command_runner.go:130] > #   state.
	I0918 20:35:46.593604   44697 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0918 20:35:46.593614   44697 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0918 20:35:46.593624   44697 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0918 20:35:46.593637   44697 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0918 20:35:46.593647   44697 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0918 20:35:46.593661   44697 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0918 20:35:46.593671   44697 command_runner.go:130] > #   The currently recognized values are:
	I0918 20:35:46.593692   44697 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0918 20:35:46.593705   44697 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0918 20:35:46.593713   44697 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0918 20:35:46.593722   44697 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0918 20:35:46.593738   44697 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0918 20:35:46.593748   44697 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0918 20:35:46.593764   44697 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0918 20:35:46.593779   44697 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0918 20:35:46.593789   44697 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0918 20:35:46.593801   44697 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0918 20:35:46.593806   44697 command_runner.go:130] > #   deprecated option "conmon".
	I0918 20:35:46.593814   44697 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0918 20:35:46.593824   44697 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0918 20:35:46.593836   44697 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0918 20:35:46.593847   44697 command_runner.go:130] > #   should be moved to the container's cgroup
	I0918 20:35:46.593857   44697 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0918 20:35:46.593868   44697 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0918 20:35:46.593880   44697 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0918 20:35:46.593889   44697 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0918 20:35:46.593892   44697 command_runner.go:130] > #
	I0918 20:35:46.593898   44697 command_runner.go:130] > # Using the seccomp notifier feature:
	I0918 20:35:46.593908   44697 command_runner.go:130] > #
	I0918 20:35:46.593922   44697 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0918 20:35:46.593935   44697 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0918 20:35:46.593943   44697 command_runner.go:130] > #
	I0918 20:35:46.593954   44697 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0918 20:35:46.593966   44697 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0918 20:35:46.593974   44697 command_runner.go:130] > #
	I0918 20:35:46.593984   44697 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0918 20:35:46.593990   44697 command_runner.go:130] > # feature.
	I0918 20:35:46.593993   44697 command_runner.go:130] > #
	I0918 20:35:46.594002   44697 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0918 20:35:46.594015   44697 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0918 20:35:46.594030   44697 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0918 20:35:46.594043   44697 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0918 20:35:46.594054   44697 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0918 20:35:46.594063   44697 command_runner.go:130] > #
	I0918 20:35:46.594073   44697 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0918 20:35:46.594082   44697 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0918 20:35:46.594085   44697 command_runner.go:130] > #
	I0918 20:35:46.594098   44697 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0918 20:35:46.594111   44697 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0918 20:35:46.594117   44697 command_runner.go:130] > #
	I0918 20:35:46.594127   44697 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0918 20:35:46.594139   44697 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0918 20:35:46.594147   44697 command_runner.go:130] > # limitation.
	I0918 20:35:46.594156   44697 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0918 20:35:46.594165   44697 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0918 20:35:46.594173   44697 command_runner.go:130] > runtime_type = "oci"
	I0918 20:35:46.594182   44697 command_runner.go:130] > runtime_root = "/run/runc"
	I0918 20:35:46.594189   44697 command_runner.go:130] > runtime_config_path = ""
	I0918 20:35:46.594195   44697 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0918 20:35:46.594204   44697 command_runner.go:130] > monitor_cgroup = "pod"
	I0918 20:35:46.594211   44697 command_runner.go:130] > monitor_exec_cgroup = ""
	I0918 20:35:46.594221   44697 command_runner.go:130] > monitor_env = [
	I0918 20:35:46.594230   44697 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0918 20:35:46.594238   44697 command_runner.go:130] > ]
	I0918 20:35:46.594246   44697 command_runner.go:130] > privileged_without_host_devices = false
	I0918 20:35:46.594258   44697 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0918 20:35:46.594271   44697 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0918 20:35:46.594281   44697 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0918 20:35:46.594293   44697 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0918 20:35:46.594311   44697 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0918 20:35:46.594323   44697 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0918 20:35:46.594340   44697 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0918 20:35:46.594354   44697 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0918 20:35:46.594366   44697 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0918 20:35:46.594376   44697 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0918 20:35:46.594381   44697 command_runner.go:130] > # Example:
	I0918 20:35:46.594390   44697 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0918 20:35:46.594402   44697 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0918 20:35:46.594410   44697 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0918 20:35:46.594422   44697 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0918 20:35:46.594432   44697 command_runner.go:130] > # cpuset = 0
	I0918 20:35:46.594442   44697 command_runner.go:130] > # cpushares = "0-1"
	I0918 20:35:46.594450   44697 command_runner.go:130] > # Where:
	I0918 20:35:46.594458   44697 command_runner.go:130] > # The workload name is workload-type.
	I0918 20:35:46.594470   44697 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0918 20:35:46.594478   44697 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0918 20:35:46.594486   44697 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0918 20:35:46.594500   44697 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0918 20:35:46.594515   44697 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0918 20:35:46.594526   44697 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0918 20:35:46.594540   44697 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0918 20:35:46.594549   44697 command_runner.go:130] > # Default value is set to true
	I0918 20:35:46.594557   44697 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0918 20:35:46.594567   44697 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0918 20:35:46.594574   44697 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0918 20:35:46.594598   44697 command_runner.go:130] > # Default value is set to 'false'
	I0918 20:35:46.594616   44697 command_runner.go:130] > # disable_hostport_mapping = false
	I0918 20:35:46.594626   44697 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0918 20:35:46.594636   44697 command_runner.go:130] > #
	I0918 20:35:46.594646   44697 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0918 20:35:46.594656   44697 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0918 20:35:46.594672   44697 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0918 20:35:46.594682   44697 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0918 20:35:46.594695   44697 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0918 20:35:46.594699   44697 command_runner.go:130] > [crio.image]
	I0918 20:35:46.594705   44697 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0918 20:35:46.594711   44697 command_runner.go:130] > # default_transport = "docker://"
	I0918 20:35:46.594720   44697 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0918 20:35:46.594730   44697 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0918 20:35:46.594738   44697 command_runner.go:130] > # global_auth_file = ""
	I0918 20:35:46.594745   44697 command_runner.go:130] > # The image used to instantiate infra containers.
	I0918 20:35:46.594754   44697 command_runner.go:130] > # This option supports live configuration reload.
	I0918 20:35:46.594762   44697 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I0918 20:35:46.594773   44697 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0918 20:35:46.594783   44697 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0918 20:35:46.594788   44697 command_runner.go:130] > # This option supports live configuration reload.
	I0918 20:35:46.594792   44697 command_runner.go:130] > # pause_image_auth_file = ""
	I0918 20:35:46.594800   44697 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0918 20:35:46.594809   44697 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0918 20:35:46.594819   44697 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0918 20:35:46.594829   44697 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0918 20:35:46.594836   44697 command_runner.go:130] > # pause_command = "/pause"
	I0918 20:35:46.594845   44697 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0918 20:35:46.594858   44697 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0918 20:35:46.594870   44697 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0918 20:35:46.594877   44697 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0918 20:35:46.594888   44697 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0918 20:35:46.594901   44697 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0918 20:35:46.594908   44697 command_runner.go:130] > # pinned_images = [
	I0918 20:35:46.594923   44697 command_runner.go:130] > # ]
	I0918 20:35:46.594932   44697 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0918 20:35:46.594944   44697 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0918 20:35:46.594957   44697 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0918 20:35:46.594970   44697 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0918 20:35:46.594976   44697 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0918 20:35:46.594985   44697 command_runner.go:130] > # signature_policy = ""
	I0918 20:35:46.594993   44697 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0918 20:35:46.595007   44697 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0918 20:35:46.595018   44697 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0918 20:35:46.595036   44697 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0918 20:35:46.595049   44697 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0918 20:35:46.595059   44697 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0918 20:35:46.595071   44697 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0918 20:35:46.595079   44697 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0918 20:35:46.595084   44697 command_runner.go:130] > # changing them here.
	I0918 20:35:46.595093   44697 command_runner.go:130] > # insecure_registries = [
	I0918 20:35:46.595100   44697 command_runner.go:130] > # ]
	I0918 20:35:46.595113   44697 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0918 20:35:46.595124   44697 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0918 20:35:46.595138   44697 command_runner.go:130] > # image_volumes = "mkdir"
	I0918 20:35:46.595146   44697 command_runner.go:130] > # Temporary directory to use for storing big files
	I0918 20:35:46.595155   44697 command_runner.go:130] > # big_files_temporary_dir = ""
	I0918 20:35:46.595162   44697 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0918 20:35:46.595169   44697 command_runner.go:130] > # CNI plugins.
	I0918 20:35:46.595175   44697 command_runner.go:130] > [crio.network]
	I0918 20:35:46.595187   44697 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0918 20:35:46.595199   44697 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0918 20:35:46.595209   44697 command_runner.go:130] > # cni_default_network = ""
	I0918 20:35:46.595222   44697 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0918 20:35:46.595233   44697 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0918 20:35:46.595244   44697 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0918 20:35:46.595250   44697 command_runner.go:130] > # plugin_dirs = [
	I0918 20:35:46.595254   44697 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0918 20:35:46.595258   44697 command_runner.go:130] > # ]
	I0918 20:35:46.595270   44697 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0918 20:35:46.595280   44697 command_runner.go:130] > [crio.metrics]
	I0918 20:35:46.595288   44697 command_runner.go:130] > # Globally enable or disable metrics support.
	I0918 20:35:46.595298   44697 command_runner.go:130] > enable_metrics = true
	I0918 20:35:46.595306   44697 command_runner.go:130] > # Specify enabled metrics collectors.
	I0918 20:35:46.595316   44697 command_runner.go:130] > # Per default all metrics are enabled.
	I0918 20:35:46.595328   44697 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0918 20:35:46.595341   44697 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0918 20:35:46.595351   44697 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0918 20:35:46.595355   44697 command_runner.go:130] > # metrics_collectors = [
	I0918 20:35:46.595364   44697 command_runner.go:130] > # 	"operations",
	I0918 20:35:46.595372   44697 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0918 20:35:46.595382   44697 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0918 20:35:46.595389   44697 command_runner.go:130] > # 	"operations_errors",
	I0918 20:35:46.595398   44697 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0918 20:35:46.595409   44697 command_runner.go:130] > # 	"image_pulls_by_name",
	I0918 20:35:46.595420   44697 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0918 20:35:46.595434   44697 command_runner.go:130] > # 	"image_pulls_failures",
	I0918 20:35:46.595443   44697 command_runner.go:130] > # 	"image_pulls_successes",
	I0918 20:35:46.595455   44697 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0918 20:35:46.595463   44697 command_runner.go:130] > # 	"image_layer_reuse",
	I0918 20:35:46.595471   44697 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0918 20:35:46.595481   44697 command_runner.go:130] > # 	"containers_oom_total",
	I0918 20:35:46.595489   44697 command_runner.go:130] > # 	"containers_oom",
	I0918 20:35:46.595499   44697 command_runner.go:130] > # 	"processes_defunct",
	I0918 20:35:46.595508   44697 command_runner.go:130] > # 	"operations_total",
	I0918 20:35:46.595518   44697 command_runner.go:130] > # 	"operations_latency_seconds",
	I0918 20:35:46.595528   44697 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0918 20:35:46.595538   44697 command_runner.go:130] > # 	"operations_errors_total",
	I0918 20:35:46.595547   44697 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0918 20:35:46.595555   44697 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0918 20:35:46.595561   44697 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0918 20:35:46.595570   44697 command_runner.go:130] > # 	"image_pulls_success_total",
	I0918 20:35:46.595581   44697 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0918 20:35:46.595588   44697 command_runner.go:130] > # 	"containers_oom_count_total",
	I0918 20:35:46.595599   44697 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0918 20:35:46.595613   44697 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0918 20:35:46.595621   44697 command_runner.go:130] > # ]
	I0918 20:35:46.595632   44697 command_runner.go:130] > # The port on which the metrics server will listen.
	I0918 20:35:46.595639   44697 command_runner.go:130] > # metrics_port = 9090
	I0918 20:35:46.595648   44697 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0918 20:35:46.595654   44697 command_runner.go:130] > # metrics_socket = ""
	I0918 20:35:46.595667   44697 command_runner.go:130] > # The certificate for the secure metrics server.
	I0918 20:35:46.595680   44697 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0918 20:35:46.595693   44697 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0918 20:35:46.595703   44697 command_runner.go:130] > # certificate on any modification event.
	I0918 20:35:46.595716   44697 command_runner.go:130] > # metrics_cert = ""
	I0918 20:35:46.595726   44697 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0918 20:35:46.595738   44697 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0918 20:35:46.595744   44697 command_runner.go:130] > # metrics_key = ""
	I0918 20:35:46.595752   44697 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0918 20:35:46.595760   44697 command_runner.go:130] > [crio.tracing]
	I0918 20:35:46.595773   44697 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0918 20:35:46.595782   44697 command_runner.go:130] > # enable_tracing = false
	I0918 20:35:46.595793   44697 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0918 20:35:46.595803   44697 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0918 20:35:46.595816   44697 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0918 20:35:46.595824   44697 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0918 20:35:46.595830   44697 command_runner.go:130] > # CRI-O NRI configuration.
	I0918 20:35:46.595835   44697 command_runner.go:130] > [crio.nri]
	I0918 20:35:46.595845   44697 command_runner.go:130] > # Globally enable or disable NRI.
	I0918 20:35:46.595851   44697 command_runner.go:130] > # enable_nri = false
	I0918 20:35:46.595866   44697 command_runner.go:130] > # NRI socket to listen on.
	I0918 20:35:46.595876   44697 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0918 20:35:46.595886   44697 command_runner.go:130] > # NRI plugin directory to use.
	I0918 20:35:46.595895   44697 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0918 20:35:46.595906   44697 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0918 20:35:46.595913   44697 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0918 20:35:46.595920   44697 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0918 20:35:46.595931   44697 command_runner.go:130] > # nri_disable_connections = false
	I0918 20:35:46.595941   44697 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0918 20:35:46.595949   44697 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0918 20:35:46.595960   44697 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0918 20:35:46.595970   44697 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0918 20:35:46.595979   44697 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0918 20:35:46.595988   44697 command_runner.go:130] > [crio.stats]
	I0918 20:35:46.596000   44697 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0918 20:35:46.596008   44697 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0918 20:35:46.596027   44697 command_runner.go:130] > # stats_collection_period = 0
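(The block above is CRI-O's commented configuration as dumped during node provisioning. A similar dump can be reproduced from a running node; the command below is illustrative only, assumes the profile name used in this run, and is not necessarily how minikube gathered the text above.)

	# Illustrative: print CRI-O's effective, commented configuration from the node.
	minikube ssh -p multinode-622675 -- sudo crio config | less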
	I0918 20:35:46.596121   44697 cni.go:84] Creating CNI manager for ""
	I0918 20:35:46.596137   44697 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0918 20:35:46.596156   44697 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0918 20:35:46.596189   44697 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.106 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-622675 NodeName:multinode-622675 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.106"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.106 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0918 20:35:46.596360   44697 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.106
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-622675"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.106
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.106"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
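(A few lines below, this rendered kubeadm config is copied to /var/tmp/minikube/kubeadm.yaml.new on the node. To confirm what actually landed there, an illustrative check using this run's profile name is:)

	# Illustrative: show the kubeadm config file that minikube staged on the node.
	minikube ssh -p multinode-622675 -- sudo cat /var/tmp/minikube/kubeadm.yaml.new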
	
	I0918 20:35:46.596438   44697 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0918 20:35:46.607272   44697 command_runner.go:130] > kubeadm
	I0918 20:35:46.607301   44697 command_runner.go:130] > kubectl
	I0918 20:35:46.607308   44697 command_runner.go:130] > kubelet
	I0918 20:35:46.607346   44697 binaries.go:44] Found k8s binaries, skipping transfer
	I0918 20:35:46.607401   44697 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0918 20:35:46.617110   44697 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0918 20:35:46.633840   44697 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0918 20:35:46.650882   44697 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0918 20:35:46.668731   44697 ssh_runner.go:195] Run: grep 192.168.39.106	control-plane.minikube.internal$ /etc/hosts
	I0918 20:35:46.672786   44697 command_runner.go:130] > 192.168.39.106	control-plane.minikube.internal
	I0918 20:35:46.672851   44697 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 20:35:46.811923   44697 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0918 20:35:46.826795   44697 certs.go:68] Setting up /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/multinode-622675 for IP: 192.168.39.106
	I0918 20:35:46.826819   44697 certs.go:194] generating shared ca certs ...
	I0918 20:35:46.826846   44697 certs.go:226] acquiring lock for ca certs: {Name:mkf54e0e71ed884c4372f9d3d4cb308ea4600185 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 20:35:46.827000   44697 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.key
	I0918 20:35:46.827040   44697 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.key
	I0918 20:35:46.827056   44697 certs.go:256] generating profile certs ...
	I0918 20:35:46.827144   44697 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/multinode-622675/client.key
	I0918 20:35:46.827199   44697 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/multinode-622675/apiserver.key.2ea34399
	I0918 20:35:46.827238   44697 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/multinode-622675/proxy-client.key
	I0918 20:35:46.827248   44697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0918 20:35:46.827278   44697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0918 20:35:46.827294   44697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0918 20:35:46.827305   44697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0918 20:35:46.827317   44697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/multinode-622675/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0918 20:35:46.827330   44697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/multinode-622675/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0918 20:35:46.827342   44697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/multinode-622675/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0918 20:35:46.827351   44697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/multinode-622675/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0918 20:35:46.827395   44697 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878.pem (1338 bytes)
	W0918 20:35:46.827425   44697 certs.go:480] ignoring /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878_empty.pem, impossibly tiny 0 bytes
	I0918 20:35:46.827434   44697 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem (1675 bytes)
	I0918 20:35:46.827457   44697 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem (1082 bytes)
	I0918 20:35:46.827480   44697 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem (1123 bytes)
	I0918 20:35:46.827500   44697 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem (1679 bytes)
	I0918 20:35:46.827540   44697 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem (1708 bytes)
	I0918 20:35:46.827567   44697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878.pem -> /usr/share/ca-certificates/14878.pem
	I0918 20:35:46.827580   44697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem -> /usr/share/ca-certificates/148782.pem
	I0918 20:35:46.827592   44697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0918 20:35:46.828140   44697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0918 20:35:46.852089   44697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0918 20:35:46.876549   44697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0918 20:35:46.899917   44697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0918 20:35:46.924274   44697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/multinode-622675/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0918 20:35:46.949084   44697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/multinode-622675/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0918 20:35:46.974418   44697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/multinode-622675/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0918 20:35:46.999216   44697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/multinode-622675/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0918 20:35:47.023307   44697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878.pem --> /usr/share/ca-certificates/14878.pem (1338 bytes)
	I0918 20:35:47.046977   44697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem --> /usr/share/ca-certificates/148782.pem (1708 bytes)
	I0918 20:35:47.071690   44697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0918 20:35:47.095867   44697 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0918 20:35:47.112760   44697 ssh_runner.go:195] Run: openssl version
	I0918 20:35:47.118351   44697 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0918 20:35:47.118633   44697 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0918 20:35:47.129535   44697 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0918 20:35:47.134111   44697 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 18 19:39 /usr/share/ca-certificates/minikubeCA.pem
	I0918 20:35:47.134150   44697 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 18 19:39 /usr/share/ca-certificates/minikubeCA.pem
	I0918 20:35:47.134195   44697 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0918 20:35:47.139660   44697 command_runner.go:130] > b5213941
	I0918 20:35:47.139744   44697 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0918 20:35:47.149149   44697 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14878.pem && ln -fs /usr/share/ca-certificates/14878.pem /etc/ssl/certs/14878.pem"
	I0918 20:35:47.160259   44697 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14878.pem
	I0918 20:35:47.164882   44697 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 18 19:57 /usr/share/ca-certificates/14878.pem
	I0918 20:35:47.164926   44697 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 18 19:57 /usr/share/ca-certificates/14878.pem
	I0918 20:35:47.164989   44697 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14878.pem
	I0918 20:35:47.170696   44697 command_runner.go:130] > 51391683
	I0918 20:35:47.170967   44697 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14878.pem /etc/ssl/certs/51391683.0"
	I0918 20:35:47.180702   44697 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148782.pem && ln -fs /usr/share/ca-certificates/148782.pem /etc/ssl/certs/148782.pem"
	I0918 20:35:47.191928   44697 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148782.pem
	I0918 20:35:47.196531   44697 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 18 19:57 /usr/share/ca-certificates/148782.pem
	I0918 20:35:47.196659   44697 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 18 19:57 /usr/share/ca-certificates/148782.pem
	I0918 20:35:47.196711   44697 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148782.pem
	I0918 20:35:47.202230   44697 command_runner.go:130] > 3ec20f2e
	I0918 20:35:47.202314   44697 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148782.pem /etc/ssl/certs/3ec20f2e.0"
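(The repeated openssl/ln pairs above install each CA certificate under its OpenSSL subject-hash name, so that anything scanning /etc/ssl/certs can resolve it. A minimal standalone sketch of the same pattern, using the minikubeCA file and the hash value b5213941 logged above:)

	# Minimal sketch of the hash-symlink pattern used above (run on the node).
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"   # yields b5213941.0 in this run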
	I0918 20:35:47.212430   44697 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0918 20:35:47.217336   44697 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0918 20:35:47.217368   44697 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0918 20:35:47.217374   44697 command_runner.go:130] > Device: 253,1	Inode: 3148840     Links: 1
	I0918 20:35:47.217381   44697 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0918 20:35:47.217386   44697 command_runner.go:130] > Access: 2024-09-18 20:28:57.047619252 +0000
	I0918 20:35:47.217396   44697 command_runner.go:130] > Modify: 2024-09-18 20:28:57.047619252 +0000
	I0918 20:35:47.217401   44697 command_runner.go:130] > Change: 2024-09-18 20:28:57.047619252 +0000
	I0918 20:35:47.217408   44697 command_runner.go:130] >  Birth: 2024-09-18 20:28:57.047619252 +0000
	I0918 20:35:47.217466   44697 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0918 20:35:47.223477   44697 command_runner.go:130] > Certificate will not expire
	I0918 20:35:47.223623   44697 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0918 20:35:47.229805   44697 command_runner.go:130] > Certificate will not expire
	I0918 20:35:47.229873   44697 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0918 20:35:47.235847   44697 command_runner.go:130] > Certificate will not expire
	I0918 20:35:47.235942   44697 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0918 20:35:47.242904   44697 command_runner.go:130] > Certificate will not expire
	I0918 20:35:47.242977   44697 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0918 20:35:47.248632   44697 command_runner.go:130] > Certificate will not expire
	I0918 20:35:47.248773   44697 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0918 20:35:47.254593   44697 command_runner.go:130] > Certificate will not expire
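(The -checkend 86400 invocations above ask whether each certificate expires within the next 86400 seconds, i.e. 24 hours; "Certificate will not expire" is openssl's own output, and the command exits 0 in that case, which is what these checks rely on. An equivalent one-off check:)

	openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	# prints "Certificate will not expire" and exits 0 while the cert remains valid for at least 24 more hours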
	I0918 20:35:47.254697   44697 kubeadm.go:392] StartCluster: {Name:multinode-622675 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
1 ClusterName:multinode-622675 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.106 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.224 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.216 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disable
Optimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 20:35:47.254808   44697 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0918 20:35:47.254859   44697 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0918 20:35:47.293771   44697 command_runner.go:130] > 43e5c05bff5626d849f1cc1a2457bed371140f56a7d07e75167462bf594fc28f
	I0918 20:35:47.293800   44697 command_runner.go:130] > d944b227553371de1b94120a984c24cb4b177354de38e8d43eb9ed453b0fba4a
	I0918 20:35:47.293807   44697 command_runner.go:130] > 19d5f178e345d175b48c3050d2afb16f5238d648db531cca0bc1482880d1cd61
	I0918 20:35:47.293816   44697 command_runner.go:130] > e499594bd4ca1e0ffebd9b5b5240ecdc7a824b9fbd8d21840a8cb7e865d601b0
	I0918 20:35:47.293824   44697 command_runner.go:130] > 10852f58e0d0a0b7a14d38e4a5a81025f9506296d50000e0f0157576e90b73d5
	I0918 20:35:47.293833   44697 command_runner.go:130] > aabf741e6c21bcdd33794ad272367e8eee404ab32e56df824e7b682d84ec0cb3
	I0918 20:35:47.293842   44697 command_runner.go:130] > fa1411a6edea077a5bf21467bebe993d54a884c4ef814ac722270c58c4e1027c
	I0918 20:35:47.293872   44697 command_runner.go:130] > ca2c1be8e70a4692c9c15e9da716688fe6dcc7d6c82050191b2576cffe47cc24
	I0918 20:35:47.293902   44697 cri.go:89] found id: "43e5c05bff5626d849f1cc1a2457bed371140f56a7d07e75167462bf594fc28f"
	I0918 20:35:47.293910   44697 cri.go:89] found id: "d944b227553371de1b94120a984c24cb4b177354de38e8d43eb9ed453b0fba4a"
	I0918 20:35:47.293914   44697 cri.go:89] found id: "19d5f178e345d175b48c3050d2afb16f5238d648db531cca0bc1482880d1cd61"
	I0918 20:35:47.293919   44697 cri.go:89] found id: "e499594bd4ca1e0ffebd9b5b5240ecdc7a824b9fbd8d21840a8cb7e865d601b0"
	I0918 20:35:47.293923   44697 cri.go:89] found id: "10852f58e0d0a0b7a14d38e4a5a81025f9506296d50000e0f0157576e90b73d5"
	I0918 20:35:47.293928   44697 cri.go:89] found id: "aabf741e6c21bcdd33794ad272367e8eee404ab32e56df824e7b682d84ec0cb3"
	I0918 20:35:47.293933   44697 cri.go:89] found id: "fa1411a6edea077a5bf21467bebe993d54a884c4ef814ac722270c58c4e1027c"
	I0918 20:35:47.293937   44697 cri.go:89] found id: "ca2c1be8e70a4692c9c15e9da716688fe6dcc7d6c82050191b2576cffe47cc24"
	I0918 20:35:47.293941   44697 cri.go:89] found id: ""
	I0918 20:35:47.293991   44697 ssh_runner.go:195] Run: sudo runc list -f json
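The two Run lines above are the commands the minikube process issued over SSH to enumerate kube-system containers before collecting logs. To reproduce the same listing by hand, a minimal sketch, assuming SSH access to the multinode-622675 VM (for example via minikube ssh) and that crictl and runc are on the node's PATH:

	minikube ssh -p multinode-622675
	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	sudo runc list -f json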
	
	
	==> CRI-O <==
	Sep 18 20:37:33 multinode-622675 crio[2695]: time="2024-09-18 20:37:33.238870533Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726691853238847141,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b9bdb2c5-12a8-4538-9c83-5ca6f264a81e name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 20:37:33 multinode-622675 crio[2695]: time="2024-09-18 20:37:33.239677652Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=07827455-e4e0-4bad-93b2-da7b9333f9a5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 20:37:33 multinode-622675 crio[2695]: time="2024-09-18 20:37:33.239753443Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=07827455-e4e0-4bad-93b2-da7b9333f9a5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 20:37:33 multinode-622675 crio[2695]: time="2024-09-18 20:37:33.240205145Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0aa3be4f6770078682857062b75364a07889ed89b6d7725f4d222a925f54d7e1,PodSandboxId:37fadf186b79639715989c0c251bbd9cb53ed867f20833b59bf2f24d7dc2db19,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726691787265875647,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-sxchh,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 056a9bd0-0d18-47f1-be48-7f0a773979db,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d800c9f5cd07512852d48c4b63d9ac715b3ab0d8aa9f971659a4d1e098182684,PodSandboxId:7661f91bded7e04ad84cbcb869216e52fb39e725a20511807f190a14f71a138f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726691753718362716,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5mfhg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5dc4d401-73fd-4b13-ac85-f2cb65b4b0f4,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f42cc3b22183e2667e54bc3e4f3a8694e5a03687258e51ea727f2791e8f2bd2,PodSandboxId:fb8186cb25bedcb5ead72ab2bee49e30b85f3fe3d93d6fbbe8cc27a71a8bcee2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726691753719419837,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qhw9j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80568ba7-ff98-4d33-a142-e6042afceb54,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9fa3ac1afc1ca59a2af96e3a25d529be9aefc33f90fed347b098d22cd4f87c6,PodSandboxId:5c2fdf9555c75568a328bcb1dd39c97cc1b6aa551c52d71444d561d1b6b63100,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726691753577636161,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8bns5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d082c6f-bda6-4f6e-906f-f54ee78ab21d,},Annotations:map[string]
string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:791bd9a5018aeedf9e06af458153d286ee9adf9d3f305eac37abdf6c15be05e2,PodSandboxId:aa8631f45a260656250e46f0fbd1073c87adef174b5c5aed0eaceda68bcecaca,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726691753569564470,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66cb614f-b7a9-4d3e-a045-2fdde5e368cf,},Annotations:map[string]string{io.ku
bernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fd060cb3319be9965fa313cbfa09acc398464e7391a2344409f7e842e63a92d,PodSandboxId:bf9087acb004157f001ddd0a52d8fd362f2da1444d4e2d26954241dd565a64a0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726691749702651647,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-622675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fca4c2ba470b4eae5638e77bd36da67c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff8c8376b280ff49f0e834d1561ecf31b332e70e3c1cd4785429577b30a31a74,PodSandboxId:2b6c1eadfce59acb58e26cc0d2d145c4e5744ffa181d3925f1a7e9d80659642f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726691749672403621,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-622675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 149a9ea18cda8f18fe46b42f5fdcfc42,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8195546af7c97dbc4c7e931098abda3db7972fe66027da6eed0e4d0cbbabb435,PodSandboxId:c327ac057bb5e9f01ef80b9b90ecf353c03c86ac611274795c72fe883ee6e2f7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726691749606943506,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-622675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd68ba840847ee4c3afc8ea4a806da80,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb9942cd5a355661495e4b729ac316c9b20f71dc4d56160866541cd1d1be6eb2,PodSandboxId:a7dd9337ebbc27427f6247c2366d2906b6f71f19a387a9abe9be193963de8b83,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726691749588676988,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-622675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3dd56b6172e959eabe3391d13c3a2a11,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14daa73e4b6447058fbb9acad7addcf2492e09082bdb8c59f4566c4c3c4ecbb5,PodSandboxId:c8caa799524794493dbfe9bac34f0447360906ceb1aec934ae128638272c56fc,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726691418331135241,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-sxchh,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 056a9bd0-0d18-47f1-be48-7f0a773979db,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43e5c05bff5626d849f1cc1a2457bed371140f56a7d07e75167462bf594fc28f,PodSandboxId:28439dd9ecc619703fb66cdece03c6a2eff4a229483ca954401bdc5f938d4efc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726691364054297176,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66cb614f-b7a9-4d3e-a045-2fdde5e368cf,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d944b227553371de1b94120a984c24cb4b177354de38e8d43eb9ed453b0fba4a,PodSandboxId:7d66f12289576432c00d2bf0919fc70e50b47d67ef3abc205d546052d26f59fe,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726691363752948720,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qhw9j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80568ba7-ff98-4d33-a142-e6042afceb54,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\
"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19d5f178e345d175b48c3050d2afb16f5238d648db531cca0bc1482880d1cd61,PodSandboxId:6d42deb88737975208f43de569db1326a6e9c10c6cf7828616b31042f861b4c0,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726691352087479272,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5mfhg,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 5dc4d401-73fd-4b13-ac85-f2cb65b4b0f4,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e499594bd4ca1e0ffebd9b5b5240ecdc7a824b9fbd8d21840a8cb7e865d601b0,PodSandboxId:c7c998fccea60bd4755ac2216b7817bc62c33b15e3b2c850f94c3137c789c6f2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726691351886016603,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8bns5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d082c6f-bda6-4f6e-906f
-f54ee78ab21d,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10852f58e0d0a0b7a14d38e4a5a81025f9506296d50000e0f0157576e90b73d5,PodSandboxId:000a3cbbd10b3f4051d7b8e30d56b0c4964499d27480805a600e9151703aa33e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726691340776059921,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-622675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fca4c2ba470b4eae5638e77bd36da67c,},Annotations:map[string]string
{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aabf741e6c21bcdd33794ad272367e8eee404ab32e56df824e7b682d84ec0cb3,PodSandboxId:8a3c914b697a74102c43ed3545d35dc856b63d70e9f098351b7bc71b478fea76,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726691340765600818,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-622675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 149a9ea18cda8f18fe46b42f5fdcfc42,},Annotations:map[string]string{io.kubernetes.
container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa1411a6edea077a5bf21467bebe993d54a884c4ef814ac722270c58c4e1027c,PodSandboxId:836940a817d400129afc8f5461aeaf3cd20616d272b8db529bd21a1e30f18af3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726691340707884443,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-622675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3dd56b6172e959eabe3391d13c3a2a11,},Annotations:map[string]string{io
.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca2c1be8e70a4692c9c15e9da716688fe6dcc7d6c82050191b2576cffe47cc24,PodSandboxId:6bfb0c877ef32204a57b0733140b081e7f546b82b86349c5f66fc15a32410c54,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726691340699526426,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-622675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd68ba840847ee4c3afc8ea4a806da80,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=07827455-e4e0-4bad-93b2-da7b9333f9a5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 20:37:33 multinode-622675 crio[2695]: time="2024-09-18 20:37:33.282303481Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=11bfab55-d855-4e70-9711-739675d0341a name=/runtime.v1.RuntimeService/Version
	Sep 18 20:37:33 multinode-622675 crio[2695]: time="2024-09-18 20:37:33.282396142Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=11bfab55-d855-4e70-9711-739675d0341a name=/runtime.v1.RuntimeService/Version
	Sep 18 20:37:33 multinode-622675 crio[2695]: time="2024-09-18 20:37:33.283817276Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=26c09a12-00c8-4ae1-a116-57ae897e2b4c name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 20:37:33 multinode-622675 crio[2695]: time="2024-09-18 20:37:33.284382201Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726691853284358016,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=26c09a12-00c8-4ae1-a116-57ae897e2b4c name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 20:37:33 multinode-622675 crio[2695]: time="2024-09-18 20:37:33.285069278Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5c00a135-fc1c-4232-a76d-33f1fe30a13a name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 20:37:33 multinode-622675 crio[2695]: time="2024-09-18 20:37:33.285127065Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5c00a135-fc1c-4232-a76d-33f1fe30a13a name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 20:37:33 multinode-622675 crio[2695]: time="2024-09-18 20:37:33.285495865Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0aa3be4f6770078682857062b75364a07889ed89b6d7725f4d222a925f54d7e1,PodSandboxId:37fadf186b79639715989c0c251bbd9cb53ed867f20833b59bf2f24d7dc2db19,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726691787265875647,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-sxchh,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 056a9bd0-0d18-47f1-be48-7f0a773979db,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d800c9f5cd07512852d48c4b63d9ac715b3ab0d8aa9f971659a4d1e098182684,PodSandboxId:7661f91bded7e04ad84cbcb869216e52fb39e725a20511807f190a14f71a138f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726691753718362716,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5mfhg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5dc4d401-73fd-4b13-ac85-f2cb65b4b0f4,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f42cc3b22183e2667e54bc3e4f3a8694e5a03687258e51ea727f2791e8f2bd2,PodSandboxId:fb8186cb25bedcb5ead72ab2bee49e30b85f3fe3d93d6fbbe8cc27a71a8bcee2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726691753719419837,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qhw9j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80568ba7-ff98-4d33-a142-e6042afceb54,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9fa3ac1afc1ca59a2af96e3a25d529be9aefc33f90fed347b098d22cd4f87c6,PodSandboxId:5c2fdf9555c75568a328bcb1dd39c97cc1b6aa551c52d71444d561d1b6b63100,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726691753577636161,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8bns5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d082c6f-bda6-4f6e-906f-f54ee78ab21d,},Annotations:map[string]
string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:791bd9a5018aeedf9e06af458153d286ee9adf9d3f305eac37abdf6c15be05e2,PodSandboxId:aa8631f45a260656250e46f0fbd1073c87adef174b5c5aed0eaceda68bcecaca,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726691753569564470,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66cb614f-b7a9-4d3e-a045-2fdde5e368cf,},Annotations:map[string]string{io.ku
bernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fd060cb3319be9965fa313cbfa09acc398464e7391a2344409f7e842e63a92d,PodSandboxId:bf9087acb004157f001ddd0a52d8fd362f2da1444d4e2d26954241dd565a64a0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726691749702651647,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-622675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fca4c2ba470b4eae5638e77bd36da67c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff8c8376b280ff49f0e834d1561ecf31b332e70e3c1cd4785429577b30a31a74,PodSandboxId:2b6c1eadfce59acb58e26cc0d2d145c4e5744ffa181d3925f1a7e9d80659642f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726691749672403621,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-622675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 149a9ea18cda8f18fe46b42f5fdcfc42,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8195546af7c97dbc4c7e931098abda3db7972fe66027da6eed0e4d0cbbabb435,PodSandboxId:c327ac057bb5e9f01ef80b9b90ecf353c03c86ac611274795c72fe883ee6e2f7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726691749606943506,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-622675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd68ba840847ee4c3afc8ea4a806da80,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb9942cd5a355661495e4b729ac316c9b20f71dc4d56160866541cd1d1be6eb2,PodSandboxId:a7dd9337ebbc27427f6247c2366d2906b6f71f19a387a9abe9be193963de8b83,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726691749588676988,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-622675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3dd56b6172e959eabe3391d13c3a2a11,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14daa73e4b6447058fbb9acad7addcf2492e09082bdb8c59f4566c4c3c4ecbb5,PodSandboxId:c8caa799524794493dbfe9bac34f0447360906ceb1aec934ae128638272c56fc,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726691418331135241,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-sxchh,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 056a9bd0-0d18-47f1-be48-7f0a773979db,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43e5c05bff5626d849f1cc1a2457bed371140f56a7d07e75167462bf594fc28f,PodSandboxId:28439dd9ecc619703fb66cdece03c6a2eff4a229483ca954401bdc5f938d4efc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726691364054297176,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66cb614f-b7a9-4d3e-a045-2fdde5e368cf,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d944b227553371de1b94120a984c24cb4b177354de38e8d43eb9ed453b0fba4a,PodSandboxId:7d66f12289576432c00d2bf0919fc70e50b47d67ef3abc205d546052d26f59fe,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726691363752948720,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qhw9j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80568ba7-ff98-4d33-a142-e6042afceb54,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\
"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19d5f178e345d175b48c3050d2afb16f5238d648db531cca0bc1482880d1cd61,PodSandboxId:6d42deb88737975208f43de569db1326a6e9c10c6cf7828616b31042f861b4c0,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726691352087479272,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5mfhg,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 5dc4d401-73fd-4b13-ac85-f2cb65b4b0f4,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e499594bd4ca1e0ffebd9b5b5240ecdc7a824b9fbd8d21840a8cb7e865d601b0,PodSandboxId:c7c998fccea60bd4755ac2216b7817bc62c33b15e3b2c850f94c3137c789c6f2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726691351886016603,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8bns5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d082c6f-bda6-4f6e-906f
-f54ee78ab21d,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10852f58e0d0a0b7a14d38e4a5a81025f9506296d50000e0f0157576e90b73d5,PodSandboxId:000a3cbbd10b3f4051d7b8e30d56b0c4964499d27480805a600e9151703aa33e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726691340776059921,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-622675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fca4c2ba470b4eae5638e77bd36da67c,},Annotations:map[string]string
{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aabf741e6c21bcdd33794ad272367e8eee404ab32e56df824e7b682d84ec0cb3,PodSandboxId:8a3c914b697a74102c43ed3545d35dc856b63d70e9f098351b7bc71b478fea76,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726691340765600818,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-622675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 149a9ea18cda8f18fe46b42f5fdcfc42,},Annotations:map[string]string{io.kubernetes.
container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa1411a6edea077a5bf21467bebe993d54a884c4ef814ac722270c58c4e1027c,PodSandboxId:836940a817d400129afc8f5461aeaf3cd20616d272b8db529bd21a1e30f18af3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726691340707884443,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-622675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3dd56b6172e959eabe3391d13c3a2a11,},Annotations:map[string]string{io
.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca2c1be8e70a4692c9c15e9da716688fe6dcc7d6c82050191b2576cffe47cc24,PodSandboxId:6bfb0c877ef32204a57b0733140b081e7f546b82b86349c5f66fc15a32410c54,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726691340699526426,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-622675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd68ba840847ee4c3afc8ea4a806da80,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5c00a135-fc1c-4232-a76d-33f1fe30a13a name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 20:37:33 multinode-622675 crio[2695]: time="2024-09-18 20:37:33.326107551Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1130a6c0-a7f5-4243-a24e-bbf188f6a7d7 name=/runtime.v1.RuntimeService/Version
	Sep 18 20:37:33 multinode-622675 crio[2695]: time="2024-09-18 20:37:33.326188457Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1130a6c0-a7f5-4243-a24e-bbf188f6a7d7 name=/runtime.v1.RuntimeService/Version
	Sep 18 20:37:33 multinode-622675 crio[2695]: time="2024-09-18 20:37:33.327128711Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ce94b5b1-50b0-45ea-b3f1-e1561955b878 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 20:37:33 multinode-622675 crio[2695]: time="2024-09-18 20:37:33.327521760Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726691853327501845,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ce94b5b1-50b0-45ea-b3f1-e1561955b878 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 20:37:33 multinode-622675 crio[2695]: time="2024-09-18 20:37:33.328044111Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=da0c8b93-aac4-43dd-8cf1-bc5a454048fc name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 20:37:33 multinode-622675 crio[2695]: time="2024-09-18 20:37:33.328097675Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=da0c8b93-aac4-43dd-8cf1-bc5a454048fc name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 20:37:33 multinode-622675 crio[2695]: time="2024-09-18 20:37:33.328462809Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0aa3be4f6770078682857062b75364a07889ed89b6d7725f4d222a925f54d7e1,PodSandboxId:37fadf186b79639715989c0c251bbd9cb53ed867f20833b59bf2f24d7dc2db19,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726691787265875647,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-sxchh,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 056a9bd0-0d18-47f1-be48-7f0a773979db,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d800c9f5cd07512852d48c4b63d9ac715b3ab0d8aa9f971659a4d1e098182684,PodSandboxId:7661f91bded7e04ad84cbcb869216e52fb39e725a20511807f190a14f71a138f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726691753718362716,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5mfhg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5dc4d401-73fd-4b13-ac85-f2cb65b4b0f4,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f42cc3b22183e2667e54bc3e4f3a8694e5a03687258e51ea727f2791e8f2bd2,PodSandboxId:fb8186cb25bedcb5ead72ab2bee49e30b85f3fe3d93d6fbbe8cc27a71a8bcee2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726691753719419837,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qhw9j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80568ba7-ff98-4d33-a142-e6042afceb54,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9fa3ac1afc1ca59a2af96e3a25d529be9aefc33f90fed347b098d22cd4f87c6,PodSandboxId:5c2fdf9555c75568a328bcb1dd39c97cc1b6aa551c52d71444d561d1b6b63100,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726691753577636161,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8bns5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d082c6f-bda6-4f6e-906f-f54ee78ab21d,},Annotations:map[string]
string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:791bd9a5018aeedf9e06af458153d286ee9adf9d3f305eac37abdf6c15be05e2,PodSandboxId:aa8631f45a260656250e46f0fbd1073c87adef174b5c5aed0eaceda68bcecaca,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726691753569564470,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66cb614f-b7a9-4d3e-a045-2fdde5e368cf,},Annotations:map[string]string{io.ku
bernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fd060cb3319be9965fa313cbfa09acc398464e7391a2344409f7e842e63a92d,PodSandboxId:bf9087acb004157f001ddd0a52d8fd362f2da1444d4e2d26954241dd565a64a0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726691749702651647,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-622675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fca4c2ba470b4eae5638e77bd36da67c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff8c8376b280ff49f0e834d1561ecf31b332e70e3c1cd4785429577b30a31a74,PodSandboxId:2b6c1eadfce59acb58e26cc0d2d145c4e5744ffa181d3925f1a7e9d80659642f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726691749672403621,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-622675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 149a9ea18cda8f18fe46b42f5fdcfc42,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8195546af7c97dbc4c7e931098abda3db7972fe66027da6eed0e4d0cbbabb435,PodSandboxId:c327ac057bb5e9f01ef80b9b90ecf353c03c86ac611274795c72fe883ee6e2f7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726691749606943506,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-622675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd68ba840847ee4c3afc8ea4a806da80,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb9942cd5a355661495e4b729ac316c9b20f71dc4d56160866541cd1d1be6eb2,PodSandboxId:a7dd9337ebbc27427f6247c2366d2906b6f71f19a387a9abe9be193963de8b83,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726691749588676988,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-622675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3dd56b6172e959eabe3391d13c3a2a11,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14daa73e4b6447058fbb9acad7addcf2492e09082bdb8c59f4566c4c3c4ecbb5,PodSandboxId:c8caa799524794493dbfe9bac34f0447360906ceb1aec934ae128638272c56fc,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726691418331135241,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-sxchh,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 056a9bd0-0d18-47f1-be48-7f0a773979db,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43e5c05bff5626d849f1cc1a2457bed371140f56a7d07e75167462bf594fc28f,PodSandboxId:28439dd9ecc619703fb66cdece03c6a2eff4a229483ca954401bdc5f938d4efc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726691364054297176,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66cb614f-b7a9-4d3e-a045-2fdde5e368cf,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d944b227553371de1b94120a984c24cb4b177354de38e8d43eb9ed453b0fba4a,PodSandboxId:7d66f12289576432c00d2bf0919fc70e50b47d67ef3abc205d546052d26f59fe,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726691363752948720,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qhw9j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80568ba7-ff98-4d33-a142-e6042afceb54,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\
"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19d5f178e345d175b48c3050d2afb16f5238d648db531cca0bc1482880d1cd61,PodSandboxId:6d42deb88737975208f43de569db1326a6e9c10c6cf7828616b31042f861b4c0,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726691352087479272,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5mfhg,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 5dc4d401-73fd-4b13-ac85-f2cb65b4b0f4,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e499594bd4ca1e0ffebd9b5b5240ecdc7a824b9fbd8d21840a8cb7e865d601b0,PodSandboxId:c7c998fccea60bd4755ac2216b7817bc62c33b15e3b2c850f94c3137c789c6f2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726691351886016603,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8bns5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d082c6f-bda6-4f6e-906f
-f54ee78ab21d,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10852f58e0d0a0b7a14d38e4a5a81025f9506296d50000e0f0157576e90b73d5,PodSandboxId:000a3cbbd10b3f4051d7b8e30d56b0c4964499d27480805a600e9151703aa33e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726691340776059921,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-622675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fca4c2ba470b4eae5638e77bd36da67c,},Annotations:map[string]string
{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aabf741e6c21bcdd33794ad272367e8eee404ab32e56df824e7b682d84ec0cb3,PodSandboxId:8a3c914b697a74102c43ed3545d35dc856b63d70e9f098351b7bc71b478fea76,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726691340765600818,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-622675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 149a9ea18cda8f18fe46b42f5fdcfc42,},Annotations:map[string]string{io.kubernetes.
container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa1411a6edea077a5bf21467bebe993d54a884c4ef814ac722270c58c4e1027c,PodSandboxId:836940a817d400129afc8f5461aeaf3cd20616d272b8db529bd21a1e30f18af3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726691340707884443,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-622675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3dd56b6172e959eabe3391d13c3a2a11,},Annotations:map[string]string{io
.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca2c1be8e70a4692c9c15e9da716688fe6dcc7d6c82050191b2576cffe47cc24,PodSandboxId:6bfb0c877ef32204a57b0733140b081e7f546b82b86349c5f66fc15a32410c54,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726691340699526426,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-622675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd68ba840847ee4c3afc8ea4a806da80,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=da0c8b93-aac4-43dd-8cf1-bc5a454048fc name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 20:37:33 multinode-622675 crio[2695]: time="2024-09-18 20:37:33.371875692Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5fe86bb0-45f0-4e77-80c1-f4571013c441 name=/runtime.v1.RuntimeService/Version
	Sep 18 20:37:33 multinode-622675 crio[2695]: time="2024-09-18 20:37:33.372003032Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5fe86bb0-45f0-4e77-80c1-f4571013c441 name=/runtime.v1.RuntimeService/Version
	Sep 18 20:37:33 multinode-622675 crio[2695]: time="2024-09-18 20:37:33.373179328Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c81da3ae-6c08-4610-8465-61efe4509aea name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 20:37:33 multinode-622675 crio[2695]: time="2024-09-18 20:37:33.373566962Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726691853373546675,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c81da3ae-6c08-4610-8465-61efe4509aea name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 20:37:33 multinode-622675 crio[2695]: time="2024-09-18 20:37:33.374152651Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c5c2906f-05b0-42bc-ad04-c317770d88f9 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 20:37:33 multinode-622675 crio[2695]: time="2024-09-18 20:37:33.374291666Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c5c2906f-05b0-42bc-ad04-c317770d88f9 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 20:37:33 multinode-622675 crio[2695]: time="2024-09-18 20:37:33.375589562Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0aa3be4f6770078682857062b75364a07889ed89b6d7725f4d222a925f54d7e1,PodSandboxId:37fadf186b79639715989c0c251bbd9cb53ed867f20833b59bf2f24d7dc2db19,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726691787265875647,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-sxchh,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 056a9bd0-0d18-47f1-be48-7f0a773979db,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d800c9f5cd07512852d48c4b63d9ac715b3ab0d8aa9f971659a4d1e098182684,PodSandboxId:7661f91bded7e04ad84cbcb869216e52fb39e725a20511807f190a14f71a138f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726691753718362716,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5mfhg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5dc4d401-73fd-4b13-ac85-f2cb65b4b0f4,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f42cc3b22183e2667e54bc3e4f3a8694e5a03687258e51ea727f2791e8f2bd2,PodSandboxId:fb8186cb25bedcb5ead72ab2bee49e30b85f3fe3d93d6fbbe8cc27a71a8bcee2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726691753719419837,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qhw9j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80568ba7-ff98-4d33-a142-e6042afceb54,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9fa3ac1afc1ca59a2af96e3a25d529be9aefc33f90fed347b098d22cd4f87c6,PodSandboxId:5c2fdf9555c75568a328bcb1dd39c97cc1b6aa551c52d71444d561d1b6b63100,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726691753577636161,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8bns5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d082c6f-bda6-4f6e-906f-f54ee78ab21d,},Annotations:map[string]
string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:791bd9a5018aeedf9e06af458153d286ee9adf9d3f305eac37abdf6c15be05e2,PodSandboxId:aa8631f45a260656250e46f0fbd1073c87adef174b5c5aed0eaceda68bcecaca,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726691753569564470,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66cb614f-b7a9-4d3e-a045-2fdde5e368cf,},Annotations:map[string]string{io.ku
bernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fd060cb3319be9965fa313cbfa09acc398464e7391a2344409f7e842e63a92d,PodSandboxId:bf9087acb004157f001ddd0a52d8fd362f2da1444d4e2d26954241dd565a64a0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726691749702651647,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-622675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fca4c2ba470b4eae5638e77bd36da67c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff8c8376b280ff49f0e834d1561ecf31b332e70e3c1cd4785429577b30a31a74,PodSandboxId:2b6c1eadfce59acb58e26cc0d2d145c4e5744ffa181d3925f1a7e9d80659642f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726691749672403621,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-622675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 149a9ea18cda8f18fe46b42f5fdcfc42,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8195546af7c97dbc4c7e931098abda3db7972fe66027da6eed0e4d0cbbabb435,PodSandboxId:c327ac057bb5e9f01ef80b9b90ecf353c03c86ac611274795c72fe883ee6e2f7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726691749606943506,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-622675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd68ba840847ee4c3afc8ea4a806da80,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb9942cd5a355661495e4b729ac316c9b20f71dc4d56160866541cd1d1be6eb2,PodSandboxId:a7dd9337ebbc27427f6247c2366d2906b6f71f19a387a9abe9be193963de8b83,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726691749588676988,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-622675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3dd56b6172e959eabe3391d13c3a2a11,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14daa73e4b6447058fbb9acad7addcf2492e09082bdb8c59f4566c4c3c4ecbb5,PodSandboxId:c8caa799524794493dbfe9bac34f0447360906ceb1aec934ae128638272c56fc,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726691418331135241,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-sxchh,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 056a9bd0-0d18-47f1-be48-7f0a773979db,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43e5c05bff5626d849f1cc1a2457bed371140f56a7d07e75167462bf594fc28f,PodSandboxId:28439dd9ecc619703fb66cdece03c6a2eff4a229483ca954401bdc5f938d4efc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726691364054297176,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66cb614f-b7a9-4d3e-a045-2fdde5e368cf,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d944b227553371de1b94120a984c24cb4b177354de38e8d43eb9ed453b0fba4a,PodSandboxId:7d66f12289576432c00d2bf0919fc70e50b47d67ef3abc205d546052d26f59fe,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726691363752948720,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qhw9j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80568ba7-ff98-4d33-a142-e6042afceb54,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\
"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19d5f178e345d175b48c3050d2afb16f5238d648db531cca0bc1482880d1cd61,PodSandboxId:6d42deb88737975208f43de569db1326a6e9c10c6cf7828616b31042f861b4c0,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726691352087479272,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5mfhg,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 5dc4d401-73fd-4b13-ac85-f2cb65b4b0f4,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e499594bd4ca1e0ffebd9b5b5240ecdc7a824b9fbd8d21840a8cb7e865d601b0,PodSandboxId:c7c998fccea60bd4755ac2216b7817bc62c33b15e3b2c850f94c3137c789c6f2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726691351886016603,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8bns5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d082c6f-bda6-4f6e-906f
-f54ee78ab21d,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10852f58e0d0a0b7a14d38e4a5a81025f9506296d50000e0f0157576e90b73d5,PodSandboxId:000a3cbbd10b3f4051d7b8e30d56b0c4964499d27480805a600e9151703aa33e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726691340776059921,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-622675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fca4c2ba470b4eae5638e77bd36da67c,},Annotations:map[string]string
{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aabf741e6c21bcdd33794ad272367e8eee404ab32e56df824e7b682d84ec0cb3,PodSandboxId:8a3c914b697a74102c43ed3545d35dc856b63d70e9f098351b7bc71b478fea76,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726691340765600818,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-622675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 149a9ea18cda8f18fe46b42f5fdcfc42,},Annotations:map[string]string{io.kubernetes.
container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa1411a6edea077a5bf21467bebe993d54a884c4ef814ac722270c58c4e1027c,PodSandboxId:836940a817d400129afc8f5461aeaf3cd20616d272b8db529bd21a1e30f18af3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726691340707884443,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-622675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3dd56b6172e959eabe3391d13c3a2a11,},Annotations:map[string]string{io
.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca2c1be8e70a4692c9c15e9da716688fe6dcc7d6c82050191b2576cffe47cc24,PodSandboxId:6bfb0c877ef32204a57b0733140b081e7f546b82b86349c5f66fc15a32410c54,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726691340699526426,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-622675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd68ba840847ee4c3afc8ea4a806da80,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c5c2906f-05b0-42bc-ad04-c317770d88f9 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	0aa3be4f67700       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   37fadf186b796       busybox-7dff88458-sxchh
	5f42cc3b22183       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      About a minute ago   Running             coredns                   1                   fb8186cb25bed       coredns-7c65d6cfc9-qhw9j
	d800c9f5cd075       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      About a minute ago   Running             kindnet-cni               1                   7661f91bded7e       kindnet-5mfhg
	d9fa3ac1afc1c       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      About a minute ago   Running             kube-proxy                1                   5c2fdf9555c75       kube-proxy-8bns5
	791bd9a5018ae       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       1                   aa8631f45a260       storage-provisioner
	8fd060cb3319b       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      About a minute ago   Running             etcd                      1                   bf9087acb0041       etcd-multinode-622675
	ff8c8376b280f       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      About a minute ago   Running             kube-scheduler            1                   2b6c1eadfce59       kube-scheduler-multinode-622675
	8195546af7c97       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      About a minute ago   Running             kube-apiserver            1                   c327ac057bb5e       kube-apiserver-multinode-622675
	bb9942cd5a355       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      About a minute ago   Running             kube-controller-manager   1                   a7dd9337ebbc2       kube-controller-manager-multinode-622675
	14daa73e4b644       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   7 minutes ago        Exited              busybox                   0                   c8caa79952479       busybox-7dff88458-sxchh
	43e5c05bff562       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      8 minutes ago        Exited              storage-provisioner       0                   28439dd9ecc61       storage-provisioner
	d944b22755337       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      8 minutes ago        Exited              coredns                   0                   7d66f12289576       coredns-7c65d6cfc9-qhw9j
	19d5f178e345d       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      8 minutes ago        Exited              kindnet-cni               0                   6d42deb887379       kindnet-5mfhg
	e499594bd4ca1       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      8 minutes ago        Exited              kube-proxy                0                   c7c998fccea60       kube-proxy-8bns5
	10852f58e0d0a       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      8 minutes ago        Exited              etcd                      0                   000a3cbbd10b3       etcd-multinode-622675
	aabf741e6c21b       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      8 minutes ago        Exited              kube-scheduler            0                   8a3c914b697a7       kube-scheduler-multinode-622675
	fa1411a6edea0       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      8 minutes ago        Exited              kube-controller-manager   0                   836940a817d40       kube-controller-manager-multinode-622675
	ca2c1be8e70a4       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      8 minutes ago        Exited              kube-apiserver            0                   6bfb0c877ef32       kube-apiserver-multinode-622675
	
	
	==> coredns [5f42cc3b22183e2667e54bc3e4f3a8694e5a03687258e51ea727f2791e8f2bd2] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:60438 - 13597 "HINFO IN 937730212470279970.3145319502201119370. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.01050279s
	
	
	==> coredns [d944b227553371de1b94120a984c24cb4b177354de38e8d43eb9ed453b0fba4a] <==
	[INFO] 10.244.1.2:60475 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002113425s
	[INFO] 10.244.1.2:57308 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000100558s
	[INFO] 10.244.1.2:40216 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000069207s
	[INFO] 10.244.1.2:59449 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001495016s
	[INFO] 10.244.1.2:47081 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00020545s
	[INFO] 10.244.1.2:44235 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000070751s
	[INFO] 10.244.1.2:52562 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00018663s
	[INFO] 10.244.0.3:53698 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000110057s
	[INFO] 10.244.0.3:54614 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000178911s
	[INFO] 10.244.0.3:60235 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000108951s
	[INFO] 10.244.0.3:38275 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000195679s
	[INFO] 10.244.1.2:50638 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000164811s
	[INFO] 10.244.1.2:36862 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00027575s
	[INFO] 10.244.1.2:34719 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000086256s
	[INFO] 10.244.1.2:48586 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00018652s
	[INFO] 10.244.0.3:36271 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000156116s
	[INFO] 10.244.0.3:55158 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000232974s
	[INFO] 10.244.0.3:55686 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000109487s
	[INFO] 10.244.0.3:43642 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000220267s
	[INFO] 10.244.1.2:33096 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000186841s
	[INFO] 10.244.1.2:51389 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000196611s
	[INFO] 10.244.1.2:49102 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000108803s
	[INFO] 10.244.1.2:57092 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000135542s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               multinode-622675
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-622675
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=85073601a832bd4bbda5d11fa91feafff6ec6b91
	                    minikube.k8s.io/name=multinode-622675
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_18T20_29_07_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 18 Sep 2024 20:29:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-622675
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 18 Sep 2024 20:37:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 18 Sep 2024 20:35:53 +0000   Wed, 18 Sep 2024 20:29:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 18 Sep 2024 20:35:53 +0000   Wed, 18 Sep 2024 20:29:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 18 Sep 2024 20:35:53 +0000   Wed, 18 Sep 2024 20:29:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 18 Sep 2024 20:35:53 +0000   Wed, 18 Sep 2024 20:29:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.106
	  Hostname:    multinode-622675
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 01fae44ecdba45e88651a7b4ea518137
	  System UUID:                01fae44e-cdba-45e8-8651-a7b4ea518137
	  Boot ID:                    59e3dfe6-8cab-4620-ae90-07cbeed2e1b4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-sxchh                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m19s
	  kube-system                 coredns-7c65d6cfc9-qhw9j                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m23s
	  kube-system                 etcd-multinode-622675                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m27s
	  kube-system                 kindnet-5mfhg                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m23s
	  kube-system                 kube-apiserver-multinode-622675             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m27s
	  kube-system                 kube-controller-manager-multinode-622675    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m27s
	  kube-system                 kube-proxy-8bns5                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m23s
	  kube-system                 kube-scheduler-multinode-622675             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m29s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m22s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 8m21s                  kube-proxy       
	  Normal  Starting                 99s                    kube-proxy       
	  Normal  NodeHasSufficientMemory  8m33s (x8 over 8m33s)  kubelet          Node multinode-622675 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m33s (x8 over 8m33s)  kubelet          Node multinode-622675 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m33s (x7 over 8m33s)  kubelet          Node multinode-622675 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m33s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 8m28s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m27s                  kubelet          Node multinode-622675 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m27s                  kubelet          Node multinode-622675 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m27s                  kubelet          Node multinode-622675 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m27s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           8m24s                  node-controller  Node multinode-622675 event: Registered Node multinode-622675 in Controller
	  Normal  NodeReady                8m10s                  kubelet          Node multinode-622675 status is now: NodeReady
	  Normal  Starting                 105s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  104s (x8 over 104s)    kubelet          Node multinode-622675 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    104s (x8 over 104s)    kubelet          Node multinode-622675 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     104s (x7 over 104s)    kubelet          Node multinode-622675 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  104s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           97s                    node-controller  Node multinode-622675 event: Registered Node multinode-622675 in Controller
	
	
	Name:               multinode-622675-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-622675-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=85073601a832bd4bbda5d11fa91feafff6ec6b91
	                    minikube.k8s.io/name=multinode-622675
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_18T20_36_33_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 18 Sep 2024 20:36:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-622675-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 18 Sep 2024 20:37:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 18 Sep 2024 20:37:03 +0000   Wed, 18 Sep 2024 20:36:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 18 Sep 2024 20:37:03 +0000   Wed, 18 Sep 2024 20:36:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 18 Sep 2024 20:37:03 +0000   Wed, 18 Sep 2024 20:36:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 18 Sep 2024 20:37:03 +0000   Wed, 18 Sep 2024 20:36:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.224
	  Hostname:    multinode-622675-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 49c42c0661b747518cdb352e5e19d75f
	  System UUID:                49c42c06-61b7-4751-8cdb-352e5e19d75f
	  Boot ID:                    26be908f-499e-43ba-a32b-4fd322c74b55
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-dcmpq    0 (0%)        0 (0%)      0 (0%)           0 (0%)         65s
	  kube-system                 kindnet-wgcjk              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m42s
	  kube-system                 kube-proxy-msqjg           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m42s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 7m36s                  kube-proxy  
	  Normal  Starting                 56s                    kube-proxy  
	  Normal  NodeHasSufficientMemory  7m42s (x2 over 7m42s)  kubelet     Node multinode-622675-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m42s (x2 over 7m42s)  kubelet     Node multinode-622675-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m42s (x2 over 7m42s)  kubelet     Node multinode-622675-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m42s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                7m21s                  kubelet     Node multinode-622675-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  61s (x2 over 61s)      kubelet     Node multinode-622675-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    61s (x2 over 61s)      kubelet     Node multinode-622675-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     61s (x2 over 61s)      kubelet     Node multinode-622675-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  61s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                41s                    kubelet     Node multinode-622675-m02 status is now: NodeReady
	
	
	Name:               multinode-622675-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-622675-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=85073601a832bd4bbda5d11fa91feafff6ec6b91
	                    minikube.k8s.io/name=multinode-622675
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_18T20_37_11_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 18 Sep 2024 20:37:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-622675-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 18 Sep 2024 20:37:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 18 Sep 2024 20:37:30 +0000   Wed, 18 Sep 2024 20:37:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 18 Sep 2024 20:37:30 +0000   Wed, 18 Sep 2024 20:37:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 18 Sep 2024 20:37:30 +0000   Wed, 18 Sep 2024 20:37:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 18 Sep 2024 20:37:30 +0000   Wed, 18 Sep 2024 20:37:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.216
	  Hostname:    multinode-622675-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 afffb3a8e52f46c99bd588c3fee952da
	  System UUID:                afffb3a8-e52f-46c9-9bd5-88c3fee952da
	  Boot ID:                    1dcde26a-9cc4-4bae-8aa4-507e1a0f036e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-zn545       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m43s
	  kube-system                 kube-proxy-scpz2    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m43s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 6m38s                  kube-proxy  
	  Normal  Starting                 17s                    kube-proxy  
	  Normal  Starting                 5m48s                  kube-proxy  
	  Normal  NodeHasSufficientMemory  6m43s (x2 over 6m43s)  kubelet     Node multinode-622675-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m43s (x2 over 6m43s)  kubelet     Node multinode-622675-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m43s (x2 over 6m43s)  kubelet     Node multinode-622675-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m43s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                6m23s                  kubelet     Node multinode-622675-m03 status is now: NodeReady
	  Normal  NodeHasSufficientPID     5m53s (x2 over 5m53s)  kubelet     Node multinode-622675-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m53s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    5m53s (x2 over 5m53s)  kubelet     Node multinode-622675-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  5m53s (x2 over 5m53s)  kubelet     Node multinode-622675-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                5m34s                  kubelet     Node multinode-622675-m03 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  23s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  22s (x2 over 23s)      kubelet     Node multinode-622675-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22s (x2 over 23s)      kubelet     Node multinode-622675-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22s (x2 over 23s)      kubelet     Node multinode-622675-m03 status is now: NodeHasSufficientPID
	  Normal  NodeReady                3s                     kubelet     Node multinode-622675-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.062253] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.169595] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.145253] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.266606] systemd-fstab-generator[654]: Ignoring "noauto" option for root device
	[  +3.812109] systemd-fstab-generator[744]: Ignoring "noauto" option for root device
	[  +3.759104] systemd-fstab-generator[875]: Ignoring "noauto" option for root device
	[  +0.066909] kauditd_printk_skb: 158 callbacks suppressed
	[Sep18 20:29] systemd-fstab-generator[1211]: Ignoring "noauto" option for root device
	[  +0.076886] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.180679] systemd-fstab-generator[1313]: Ignoring "noauto" option for root device
	[  +0.121853] kauditd_printk_skb: 18 callbacks suppressed
	[ +12.582048] kauditd_printk_skb: 69 callbacks suppressed
	[Sep18 20:30] kauditd_printk_skb: 14 callbacks suppressed
	[Sep18 20:35] systemd-fstab-generator[2620]: Ignoring "noauto" option for root device
	[  +0.144443] systemd-fstab-generator[2632]: Ignoring "noauto" option for root device
	[  +0.168575] systemd-fstab-generator[2646]: Ignoring "noauto" option for root device
	[  +0.167066] systemd-fstab-generator[2658]: Ignoring "noauto" option for root device
	[  +0.289362] systemd-fstab-generator[2687]: Ignoring "noauto" option for root device
	[  +9.090928] systemd-fstab-generator[2777]: Ignoring "noauto" option for root device
	[  +0.085234] kauditd_printk_skb: 100 callbacks suppressed
	[  +1.951763] systemd-fstab-generator[2899]: Ignoring "noauto" option for root device
	[  +4.692670] kauditd_printk_skb: 74 callbacks suppressed
	[  +6.878512] kauditd_printk_skb: 34 callbacks suppressed
	[Sep18 20:36] systemd-fstab-generator[3742]: Ignoring "noauto" option for root device
	[ +18.659386] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> etcd [10852f58e0d0a0b7a14d38e4a5a81025f9506296d50000e0f0157576e90b73d5] <==
	{"level":"info","ts":"2024-09-18T20:29:01.458218Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-18T20:29:01.460032Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"db63b0e3647a827","local-member-id":"133f99d1dc1797cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-18T20:29:01.460161Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-18T20:29:01.460218Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-18T20:29:01.463271Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-18T20:29:01.469055Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.106:2379"}
	{"level":"warn","ts":"2024-09-18T20:29:51.490882Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"148.850253ms","expected-duration":"100ms","prefix":"","request":"header:<ID:10938278152982715042 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-622675-m02.17f670ac035452cc\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-622675-m02.17f670ac035452cc\" value_size:646 lease:1714906116127938256 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-09-18T20:29:51.491241Z","caller":"traceutil/trace.go:171","msg":"trace[1497791882] transaction","detail":"{read_only:false; response_revision:469; number_of_response:1; }","duration":"234.761042ms","start":"2024-09-18T20:29:51.256465Z","end":"2024-09-18T20:29:51.491226Z","steps":["trace[1497791882] 'process raft request'  (duration: 85.256272ms)","trace[1497791882] 'compare'  (duration: 148.749608ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-18T20:30:50.317337Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"147.311213ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-18T20:30:50.317538Z","caller":"traceutil/trace.go:171","msg":"trace[1157474967] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:606; }","duration":"147.601668ms","start":"2024-09-18T20:30:50.169900Z","end":"2024-09-18T20:30:50.317501Z","steps":["trace[1157474967] 'range keys from in-memory index tree'  (duration: 147.285028ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-18T20:30:50.318130Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"109.619201ms","expected-duration":"100ms","prefix":"","request":"header:<ID:10938278152982715568 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-622675-m03.17f670b9b5a7cfb7\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-622675-m03.17f670b9b5a7cfb7\" value_size:646 lease:1714906116127939447 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-09-18T20:30:50.318276Z","caller":"traceutil/trace.go:171","msg":"trace[1136392025] linearizableReadLoop","detail":"{readStateIndex:637; appliedIndex:636; }","duration":"159.084562ms","start":"2024-09-18T20:30:50.159171Z","end":"2024-09-18T20:30:50.318256Z","steps":["trace[1136392025] 'read index received'  (duration: 48.925195ms)","trace[1136392025] 'applied index is now lower than readState.Index'  (duration: 110.158575ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-18T20:30:50.318357Z","caller":"traceutil/trace.go:171","msg":"trace[1344067763] transaction","detail":"{read_only:false; response_revision:607; number_of_response:1; }","duration":"236.903834ms","start":"2024-09-18T20:30:50.081421Z","end":"2024-09-18T20:30:50.318325Z","steps":["trace[1344067763] 'process raft request'  (duration: 126.7126ms)","trace[1344067763] 'compare'  (duration: 109.182792ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-18T20:30:50.318467Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"159.30067ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-622675-m03\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-18T20:30:50.318512Z","caller":"traceutil/trace.go:171","msg":"trace[1512247111] range","detail":"{range_begin:/registry/minions/multinode-622675-m03; range_end:; response_count:0; response_revision:607; }","duration":"159.345035ms","start":"2024-09-18T20:30:50.159155Z","end":"2024-09-18T20:30:50.318500Z","steps":["trace[1512247111] 'agreement among raft nodes before linearized reading'  (duration: 159.200625ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-18T20:34:05.507688Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-18T20:34:05.507880Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"multinode-622675","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.106:2380"],"advertise-client-urls":["https://192.168.39.106:2379"]}
	{"level":"warn","ts":"2024-09-18T20:34:05.516007Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-18T20:34:05.516843Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-18T20:34:05.573339Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.106:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-18T20:34:05.573392Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.106:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-18T20:34:05.573476Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"133f99d1dc1797cc","current-leader-member-id":"133f99d1dc1797cc"}
	{"level":"info","ts":"2024-09-18T20:34:05.576997Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.106:2380"}
	{"level":"info","ts":"2024-09-18T20:34:05.577244Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.106:2380"}
	{"level":"info","ts":"2024-09-18T20:34:05.577297Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"multinode-622675","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.106:2380"],"advertise-client-urls":["https://192.168.39.106:2379"]}
	
	
	==> etcd [8fd060cb3319be9965fa313cbfa09acc398464e7391a2344409f7e842e63a92d] <==
	{"level":"info","ts":"2024-09-18T20:35:50.041042Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"db63b0e3647a827","local-member-id":"133f99d1dc1797cc","added-peer-id":"133f99d1dc1797cc","added-peer-peer-urls":["https://192.168.39.106:2380"]}
	{"level":"info","ts":"2024-09-18T20:35:50.051276Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"db63b0e3647a827","local-member-id":"133f99d1dc1797cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-18T20:35:50.051345Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-18T20:35:50.040475Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-18T20:35:50.062129Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-18T20:35:50.098482Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"133f99d1dc1797cc","initial-advertise-peer-urls":["https://192.168.39.106:2380"],"listen-peer-urls":["https://192.168.39.106:2380"],"advertise-client-urls":["https://192.168.39.106:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.106:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-18T20:35:50.098527Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-18T20:35:50.098065Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.106:2380"}
	{"level":"info","ts":"2024-09-18T20:35:50.098570Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.106:2380"}
	{"level":"info","ts":"2024-09-18T20:35:51.594940Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"133f99d1dc1797cc is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-18T20:35:51.595034Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"133f99d1dc1797cc became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-18T20:35:51.595084Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"133f99d1dc1797cc received MsgPreVoteResp from 133f99d1dc1797cc at term 2"}
	{"level":"info","ts":"2024-09-18T20:35:51.595111Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"133f99d1dc1797cc became candidate at term 3"}
	{"level":"info","ts":"2024-09-18T20:35:51.595119Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"133f99d1dc1797cc received MsgVoteResp from 133f99d1dc1797cc at term 3"}
	{"level":"info","ts":"2024-09-18T20:35:51.595132Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"133f99d1dc1797cc became leader at term 3"}
	{"level":"info","ts":"2024-09-18T20:35:51.595141Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 133f99d1dc1797cc elected leader 133f99d1dc1797cc at term 3"}
	{"level":"info","ts":"2024-09-18T20:35:51.600756Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"133f99d1dc1797cc","local-member-attributes":"{Name:multinode-622675 ClientURLs:[https://192.168.39.106:2379]}","request-path":"/0/members/133f99d1dc1797cc/attributes","cluster-id":"db63b0e3647a827","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-18T20:35:51.600767Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-18T20:35:51.600795Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-18T20:35:51.601587Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-18T20:35:51.601628Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-18T20:35:51.603060Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-18T20:35:51.603072Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-18T20:35:51.604104Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.106:2379"}
	{"level":"info","ts":"2024-09-18T20:35:51.604788Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 20:37:33 up 9 min,  0 users,  load average: 0.06, 0.17, 0.10
	Linux multinode-622675 5.10.207 #1 SMP Mon Sep 16 15:00:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [19d5f178e345d175b48c3050d2afb16f5238d648db531cca0bc1482880d1cd61] <==
	I0918 20:33:23.144374       1 main.go:322] Node multinode-622675-m03 has CIDR [10.244.4.0/24] 
	I0918 20:33:33.143686       1 main.go:295] Handling node with IPs: map[192.168.39.216:{}]
	I0918 20:33:33.143829       1 main.go:322] Node multinode-622675-m03 has CIDR [10.244.4.0/24] 
	I0918 20:33:33.144069       1 main.go:295] Handling node with IPs: map[192.168.39.106:{}]
	I0918 20:33:33.144122       1 main.go:299] handling current node
	I0918 20:33:33.144149       1 main.go:295] Handling node with IPs: map[192.168.39.224:{}]
	I0918 20:33:33.144167       1 main.go:322] Node multinode-622675-m02 has CIDR [10.244.1.0/24] 
	I0918 20:33:43.144060       1 main.go:295] Handling node with IPs: map[192.168.39.224:{}]
	I0918 20:33:43.144105       1 main.go:322] Node multinode-622675-m02 has CIDR [10.244.1.0/24] 
	I0918 20:33:43.144238       1 main.go:295] Handling node with IPs: map[192.168.39.216:{}]
	I0918 20:33:43.144258       1 main.go:322] Node multinode-622675-m03 has CIDR [10.244.4.0/24] 
	I0918 20:33:43.144320       1 main.go:295] Handling node with IPs: map[192.168.39.106:{}]
	I0918 20:33:43.144337       1 main.go:299] handling current node
	I0918 20:33:53.143699       1 main.go:295] Handling node with IPs: map[192.168.39.106:{}]
	I0918 20:33:53.143742       1 main.go:299] handling current node
	I0918 20:33:53.143762       1 main.go:295] Handling node with IPs: map[192.168.39.224:{}]
	I0918 20:33:53.143768       1 main.go:322] Node multinode-622675-m02 has CIDR [10.244.1.0/24] 
	I0918 20:33:53.143932       1 main.go:295] Handling node with IPs: map[192.168.39.216:{}]
	I0918 20:33:53.144001       1 main.go:322] Node multinode-622675-m03 has CIDR [10.244.4.0/24] 
	I0918 20:34:03.143676       1 main.go:295] Handling node with IPs: map[192.168.39.106:{}]
	I0918 20:34:03.143820       1 main.go:299] handling current node
	I0918 20:34:03.143855       1 main.go:295] Handling node with IPs: map[192.168.39.224:{}]
	I0918 20:34:03.143875       1 main.go:322] Node multinode-622675-m02 has CIDR [10.244.1.0/24] 
	I0918 20:34:03.144053       1 main.go:295] Handling node with IPs: map[192.168.39.216:{}]
	I0918 20:34:03.144080       1 main.go:322] Node multinode-622675-m03 has CIDR [10.244.4.0/24] 
	
	
	==> kindnet [d800c9f5cd07512852d48c4b63d9ac715b3ab0d8aa9f971659a4d1e098182684] <==
	I0918 20:36:44.642201       1 main.go:299] handling current node
	I0918 20:36:54.642560       1 main.go:295] Handling node with IPs: map[192.168.39.106:{}]
	I0918 20:36:54.642685       1 main.go:299] handling current node
	I0918 20:36:54.642719       1 main.go:295] Handling node with IPs: map[192.168.39.224:{}]
	I0918 20:36:54.642739       1 main.go:322] Node multinode-622675-m02 has CIDR [10.244.1.0/24] 
	I0918 20:36:54.642896       1 main.go:295] Handling node with IPs: map[192.168.39.216:{}]
	I0918 20:36:54.643025       1 main.go:322] Node multinode-622675-m03 has CIDR [10.244.4.0/24] 
	I0918 20:37:04.642817       1 main.go:295] Handling node with IPs: map[192.168.39.106:{}]
	I0918 20:37:04.642949       1 main.go:299] handling current node
	I0918 20:37:04.643041       1 main.go:295] Handling node with IPs: map[192.168.39.224:{}]
	I0918 20:37:04.643063       1 main.go:322] Node multinode-622675-m02 has CIDR [10.244.1.0/24] 
	I0918 20:37:04.643264       1 main.go:295] Handling node with IPs: map[192.168.39.216:{}]
	I0918 20:37:04.643299       1 main.go:322] Node multinode-622675-m03 has CIDR [10.244.4.0/24] 
	I0918 20:37:14.642362       1 main.go:295] Handling node with IPs: map[192.168.39.106:{}]
	I0918 20:37:14.642495       1 main.go:299] handling current node
	I0918 20:37:14.642572       1 main.go:295] Handling node with IPs: map[192.168.39.224:{}]
	I0918 20:37:14.642612       1 main.go:322] Node multinode-622675-m02 has CIDR [10.244.1.0/24] 
	I0918 20:37:14.642804       1 main.go:295] Handling node with IPs: map[192.168.39.216:{}]
	I0918 20:37:14.642862       1 main.go:322] Node multinode-622675-m03 has CIDR [10.244.2.0/24] 
	I0918 20:37:24.644105       1 main.go:295] Handling node with IPs: map[192.168.39.106:{}]
	I0918 20:37:24.644214       1 main.go:299] handling current node
	I0918 20:37:24.644243       1 main.go:295] Handling node with IPs: map[192.168.39.224:{}]
	I0918 20:37:24.644261       1 main.go:322] Node multinode-622675-m02 has CIDR [10.244.1.0/24] 
	I0918 20:37:24.644436       1 main.go:295] Handling node with IPs: map[192.168.39.216:{}]
	I0918 20:37:24.644488       1 main.go:322] Node multinode-622675-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [8195546af7c97dbc4c7e931098abda3db7972fe66027da6eed0e4d0cbbabb435] <==
	I0918 20:35:52.962214       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0918 20:35:52.985582       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0918 20:35:52.985681       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0918 20:35:52.993089       1 aggregator.go:171] initial CRD sync complete...
	I0918 20:35:52.993121       1 autoregister_controller.go:144] Starting autoregister controller
	I0918 20:35:52.993128       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0918 20:35:52.993133       1 cache.go:39] Caches are synced for autoregister controller
	I0918 20:35:52.993849       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0918 20:35:53.007410       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0918 20:35:53.008539       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0918 20:35:53.008685       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0918 20:35:53.008728       1 cache.go:39] Caches are synced for LocalAvailability controller
	E0918 20:35:53.023167       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0918 20:35:53.030658       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0918 20:35:53.034501       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0918 20:35:53.034535       1 policy_source.go:224] refreshing policies
	I0918 20:35:53.078305       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0918 20:35:53.901338       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0918 20:35:55.005258       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0918 20:35:55.172568       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0918 20:35:55.187780       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0918 20:35:55.281908       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0918 20:35:55.291294       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0918 20:35:56.728822       1 controller.go:615] quota admission added evaluator for: endpoints
	I0918 20:35:56.778225       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [ca2c1be8e70a4692c9c15e9da716688fe6dcc7d6c82050191b2576cffe47cc24] <==
	I0918 20:34:05.526201       1 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0918 20:34:05.525475       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0918 20:34:05.525838       1 dynamic_serving_content.go:149] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0918 20:34:05.520049       1 naming_controller.go:305] Shutting down NamingConditionController
	I0918 20:34:05.520059       1 controller.go:120] Shutting down OpenAPI V3 controller
	I0918 20:34:05.520072       1 controller.go:170] Shutting down OpenAPI controller
	I0918 20:34:05.520078       1 autoregister_controller.go:168] Shutting down autoregister controller
	I0918 20:34:05.520094       1 crdregistration_controller.go:145] Shutting down crd-autoregister controller
	I0918 20:34:05.520104       1 local_available_controller.go:172] Shutting down LocalAvailability controller
	I0918 20:34:05.520108       1 remote_available_controller.go:427] Shutting down RemoteAvailability controller
	I0918 20:34:05.520121       1 gc_controller.go:91] Shutting down apiserver lease garbage collector
	I0918 20:34:05.520126       1 apiservice_controller.go:134] Shutting down APIServiceRegistrationController
	I0918 20:34:05.520132       1 apf_controller.go:389] Shutting down API Priority and Fairness config worker
	I0918 20:34:05.520145       1 customresource_discovery_controller.go:328] Shutting down DiscoveryController
	I0918 20:34:05.520453       1 controller.go:84] Shutting down OpenAPI AggregationController
	I0918 20:34:05.520562       1 controller.go:86] Shutting down OpenAPI V3 AggregationController
	I0918 20:34:05.524779       1 controller.go:157] Shutting down quota evaluator
	I0918 20:34:05.531516       1 controller.go:176] quota evaluator worker shutdown
	I0918 20:34:05.525538       1 secure_serving.go:258] Stopped listening on [::]:8443
	I0918 20:34:05.531536       1 controller.go:176] quota evaluator worker shutdown
	I0918 20:34:05.531541       1 controller.go:176] quota evaluator worker shutdown
	I0918 20:34:05.531546       1 controller.go:176] quota evaluator worker shutdown
	I0918 20:34:05.531550       1 controller.go:176] quota evaluator worker shutdown
	W0918 20:34:05.532436       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0918 20:34:05.535177       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [bb9942cd5a355661495e4b729ac316c9b20f71dc4d56160866541cd1d1be6eb2] <==
	I0918 20:36:52.171364       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-622675-m02"
	I0918 20:36:52.200695       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-622675-m02"
	I0918 20:36:52.218762       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="68.658µs"
	I0918 20:36:52.247092       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="45.232µs"
	I0918 20:36:56.020054       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="7.383911ms"
	I0918 20:36:56.020118       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="33.105µs"
	I0918 20:36:56.388759       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-622675-m02"
	I0918 20:37:03.437583       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-622675-m02"
	I0918 20:37:09.917064       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-622675-m03"
	I0918 20:37:09.934106       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-622675-m03"
	I0918 20:37:10.169788       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-622675-m03"
	I0918 20:37:10.169923       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-622675-m02"
	I0918 20:37:11.101278       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-622675-m03\" does not exist"
	I0918 20:37:11.101716       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-622675-m02"
	I0918 20:37:11.125223       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-622675-m03" podCIDRs=["10.244.2.0/24"]
	I0918 20:37:11.125588       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-622675-m03"
	I0918 20:37:11.125718       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-622675-m03"
	I0918 20:37:11.192694       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-622675-m03"
	I0918 20:37:11.472880       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-622675-m03"
	I0918 20:37:11.550300       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-622675-m03"
	I0918 20:37:21.237457       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-622675-m03"
	I0918 20:37:30.484698       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-622675-m03"
	I0918 20:37:30.484877       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-622675-m02"
	I0918 20:37:30.493785       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-622675-m03"
	I0918 20:37:31.405567       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-622675-m03"
	
	
	==> kube-controller-manager [fa1411a6edea077a5bf21467bebe993d54a884c4ef814ac722270c58c4e1027c] <==
	I0918 20:31:38.904755       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-622675-m03"
	I0918 20:31:39.133987       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-622675-m03"
	I0918 20:31:39.134567       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-622675-m02"
	I0918 20:31:40.347987       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-622675-m02"
	I0918 20:31:40.351409       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-622675-m03\" does not exist"
	I0918 20:31:40.373556       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-622675-m03" podCIDRs=["10.244.4.0/24"]
	I0918 20:31:40.373600       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-622675-m03"
	I0918 20:31:40.373626       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-622675-m03"
	I0918 20:31:40.647454       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-622675-m03"
	I0918 20:31:40.973536       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-622675-m03"
	I0918 20:31:44.886297       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-622675-m03"
	I0918 20:31:50.475772       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-622675-m03"
	I0918 20:31:59.973890       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-622675-m02"
	I0918 20:31:59.974648       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-622675-m03"
	I0918 20:31:59.986820       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-622675-m03"
	I0918 20:32:04.875502       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-622675-m03"
	I0918 20:32:44.897211       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-622675-m02"
	I0918 20:32:44.897852       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-622675-m03"
	I0918 20:32:44.906236       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-622675-m02"
	I0918 20:32:44.927905       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-622675-m03"
	I0918 20:32:44.933517       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-622675-m02"
	I0918 20:32:44.974122       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="16.974541ms"
	I0918 20:32:44.975119       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="97.3µs"
	I0918 20:32:50.015536       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-622675-m02"
	I0918 20:33:00.104532       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-622675-m03"
	
	
	==> kube-proxy [d9fa3ac1afc1ca59a2af96e3a25d529be9aefc33f90fed347b098d22cd4f87c6] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0918 20:35:54.059170       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0918 20:35:54.073148       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.106"]
	E0918 20:35:54.073249       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0918 20:35:54.164147       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0918 20:35:54.164245       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0918 20:35:54.164288       1 server_linux.go:169] "Using iptables Proxier"
	I0918 20:35:54.167913       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0918 20:35:54.168379       1 server.go:483] "Version info" version="v1.31.1"
	I0918 20:35:54.168447       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0918 20:35:54.171422       1 config.go:328] "Starting node config controller"
	I0918 20:35:54.172025       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0918 20:35:54.171791       1 config.go:199] "Starting service config controller"
	I0918 20:35:54.173641       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0918 20:35:54.171818       1 config.go:105] "Starting endpoint slice config controller"
	I0918 20:35:54.174169       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0918 20:35:54.273861       1 shared_informer.go:320] Caches are synced for node config
	I0918 20:35:54.274462       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0918 20:35:54.274830       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-proxy [e499594bd4ca1e0ffebd9b5b5240ecdc7a824b9fbd8d21840a8cb7e865d601b0] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0918 20:29:12.063603       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0918 20:29:12.104517       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.106"]
	E0918 20:29:12.111148       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0918 20:29:12.150529       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0918 20:29:12.150649       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0918 20:29:12.150690       1 server_linux.go:169] "Using iptables Proxier"
	I0918 20:29:12.153113       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0918 20:29:12.153467       1 server.go:483] "Version info" version="v1.31.1"
	I0918 20:29:12.153621       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0918 20:29:12.155452       1 config.go:199] "Starting service config controller"
	I0918 20:29:12.155517       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0918 20:29:12.155569       1 config.go:105] "Starting endpoint slice config controller"
	I0918 20:29:12.155586       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0918 20:29:12.156348       1 config.go:328] "Starting node config controller"
	I0918 20:29:12.156389       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0918 20:29:12.255649       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0918 20:29:12.255700       1 shared_informer.go:320] Caches are synced for service config
	I0918 20:29:12.257005       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [aabf741e6c21bcdd33794ad272367e8eee404ab32e56df824e7b682d84ec0cb3] <==
	E0918 20:29:04.032294       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0918 20:29:04.102044       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0918 20:29:04.102168       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0918 20:29:04.128226       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0918 20:29:04.128289       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0918 20:29:04.170599       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0918 20:29:04.170650       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0918 20:29:04.213928       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0918 20:29:04.214108       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0918 20:29:04.261425       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0918 20:29:04.261888       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0918 20:29:04.265441       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0918 20:29:04.265532       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0918 20:29:04.303644       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0918 20:29:04.303742       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0918 20:29:04.303829       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0918 20:29:04.303859       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0918 20:29:04.339813       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0918 20:29:04.339915       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0918 20:29:04.341002       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0918 20:29:04.341071       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0918 20:29:04.504933       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0918 20:29:04.505016       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0918 20:29:07.239317       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0918 20:34:05.512441       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [ff8c8376b280ff49f0e834d1561ecf31b332e70e3c1cd4785429577b30a31a74] <==
	I0918 20:35:50.529046       1 serving.go:386] Generated self-signed cert in-memory
	W0918 20:35:52.910132       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0918 20:35:52.910294       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0918 20:35:52.910326       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0918 20:35:52.910559       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0918 20:35:52.985336       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0918 20:35:52.988024       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0918 20:35:52.992295       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0918 20:35:52.992437       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0918 20:35:52.992483       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0918 20:35:52.992509       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0918 20:35:53.092626       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 18 20:35:59 multinode-622675 kubelet[2906]: E0918 20:35:59.054477    2906 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726691759052539024,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 20:36:00 multinode-622675 kubelet[2906]: I0918 20:36:00.262179    2906 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Sep 18 20:36:09 multinode-622675 kubelet[2906]: E0918 20:36:09.056896    2906 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726691769055723542,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 20:36:09 multinode-622675 kubelet[2906]: E0918 20:36:09.057383    2906 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726691769055723542,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 20:36:19 multinode-622675 kubelet[2906]: E0918 20:36:19.060177    2906 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726691779059369348,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 20:36:19 multinode-622675 kubelet[2906]: E0918 20:36:19.060232    2906 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726691779059369348,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 20:36:29 multinode-622675 kubelet[2906]: E0918 20:36:29.063379    2906 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726691789062830790,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 20:36:29 multinode-622675 kubelet[2906]: E0918 20:36:29.063443    2906 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726691789062830790,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 20:36:39 multinode-622675 kubelet[2906]: E0918 20:36:39.066636    2906 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726691799065854562,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 20:36:39 multinode-622675 kubelet[2906]: E0918 20:36:39.066881    2906 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726691799065854562,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 20:36:48 multinode-622675 kubelet[2906]: E0918 20:36:48.977196    2906 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 18 20:36:48 multinode-622675 kubelet[2906]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 18 20:36:48 multinode-622675 kubelet[2906]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 18 20:36:48 multinode-622675 kubelet[2906]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 18 20:36:48 multinode-622675 kubelet[2906]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 18 20:36:49 multinode-622675 kubelet[2906]: E0918 20:36:49.069206    2906 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726691809067939563,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 20:36:49 multinode-622675 kubelet[2906]: E0918 20:36:49.069239    2906 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726691809067939563,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 20:36:59 multinode-622675 kubelet[2906]: E0918 20:36:59.074028    2906 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726691819072815691,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 20:36:59 multinode-622675 kubelet[2906]: E0918 20:36:59.074367    2906 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726691819072815691,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 20:37:09 multinode-622675 kubelet[2906]: E0918 20:37:09.080895    2906 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726691829080568369,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 20:37:09 multinode-622675 kubelet[2906]: E0918 20:37:09.080929    2906 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726691829080568369,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 20:37:19 multinode-622675 kubelet[2906]: E0918 20:37:19.084004    2906 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726691839082866712,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 20:37:19 multinode-622675 kubelet[2906]: E0918 20:37:19.084417    2906 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726691839082866712,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 20:37:29 multinode-622675 kubelet[2906]: E0918 20:37:29.085682    2906 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726691849085345833,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 20:37:29 multinode-622675 kubelet[2906]: E0918 20:37:29.085714    2906 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726691849085345833,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

-- /stdout --
** stderr ** 
	E0918 20:37:32.968789   45817 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19667-7671/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-622675 -n multinode-622675
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-622675 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (332.17s)

x
+
TestMultiNode/serial/StopMultiNode (144.64s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-622675 stop
E0918 20:38:04.356107   14878 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/functional-790989/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-622675 stop: exit status 82 (2m0.47543557s)

-- stdout --
	* Stopping node "multinode-622675-m02"  ...
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-622675 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-622675 status
multinode_test.go:351: (dbg) Done: out/minikube-linux-amd64 -p multinode-622675 status: (18.771790306s)
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-622675 status --alsologtostderr
multinode_test.go:358: (dbg) Done: out/minikube-linux-amd64 -p multinode-622675 status --alsologtostderr: (3.359662371s)
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-linux-amd64 -p multinode-622675 status --alsologtostderr": 
multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-linux-amd64 -p multinode-622675 status --alsologtostderr": 
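
Context for the two failed assertions above: "minikube stop" timed out with GUEST_STOP_TIMEOUT (exit status 82), so the follow-up status calls still reported running hosts and kubelets. As a rough manual cross-check only, not the actual check in multinode_test.go, the stopped counts can be tallied from plain status output; the profile name is taken from this run, and the parsing of "host:" / "kubelet:" lines is an assumption about the default per-node status format.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// "minikube status" exits non-zero when any node is stopped, so the error is
	// deliberately ignored and only the printed per-node blocks are inspected.
	out, _ := exec.Command("out/minikube-linux-amd64", "-p", "multinode-622675", "status").CombinedOutput()
	stoppedHosts := strings.Count(string(out), "host: Stopped")
	stoppedKubelets := strings.Count(string(out), "kubelet: Stopped")
	fmt.Printf("stopped hosts: %d, stopped kubelets: %d\n", stoppedHosts, stoppedKubelets)
}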
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-622675 -n multinode-622675
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-622675 logs -n 25
E0918 20:40:01.286060   14878 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/functional-790989/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-622675 logs -n 25: (1.399774819s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-622675 ssh -n                                                                 | multinode-622675 | jenkins | v1.34.0 | 18 Sep 24 20:31 UTC | 18 Sep 24 20:31 UTC |
	|         | multinode-622675-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-622675 cp multinode-622675-m02:/home/docker/cp-test.txt                       | multinode-622675 | jenkins | v1.34.0 | 18 Sep 24 20:31 UTC | 18 Sep 24 20:31 UTC |
	|         | multinode-622675:/home/docker/cp-test_multinode-622675-m02_multinode-622675.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-622675 ssh -n                                                                 | multinode-622675 | jenkins | v1.34.0 | 18 Sep 24 20:31 UTC | 18 Sep 24 20:31 UTC |
	|         | multinode-622675-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-622675 ssh -n multinode-622675 sudo cat                                       | multinode-622675 | jenkins | v1.34.0 | 18 Sep 24 20:31 UTC | 18 Sep 24 20:31 UTC |
	|         | /home/docker/cp-test_multinode-622675-m02_multinode-622675.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-622675 cp multinode-622675-m02:/home/docker/cp-test.txt                       | multinode-622675 | jenkins | v1.34.0 | 18 Sep 24 20:31 UTC | 18 Sep 24 20:31 UTC |
	|         | multinode-622675-m03:/home/docker/cp-test_multinode-622675-m02_multinode-622675-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-622675 ssh -n                                                                 | multinode-622675 | jenkins | v1.34.0 | 18 Sep 24 20:31 UTC | 18 Sep 24 20:31 UTC |
	|         | multinode-622675-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-622675 ssh -n multinode-622675-m03 sudo cat                                   | multinode-622675 | jenkins | v1.34.0 | 18 Sep 24 20:31 UTC | 18 Sep 24 20:31 UTC |
	|         | /home/docker/cp-test_multinode-622675-m02_multinode-622675-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-622675 cp testdata/cp-test.txt                                                | multinode-622675 | jenkins | v1.34.0 | 18 Sep 24 20:31 UTC | 18 Sep 24 20:31 UTC |
	|         | multinode-622675-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-622675 ssh -n                                                                 | multinode-622675 | jenkins | v1.34.0 | 18 Sep 24 20:31 UTC | 18 Sep 24 20:31 UTC |
	|         | multinode-622675-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-622675 cp multinode-622675-m03:/home/docker/cp-test.txt                       | multinode-622675 | jenkins | v1.34.0 | 18 Sep 24 20:31 UTC | 18 Sep 24 20:31 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile2019276691/001/cp-test_multinode-622675-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-622675 ssh -n                                                                 | multinode-622675 | jenkins | v1.34.0 | 18 Sep 24 20:31 UTC | 18 Sep 24 20:31 UTC |
	|         | multinode-622675-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-622675 cp multinode-622675-m03:/home/docker/cp-test.txt                       | multinode-622675 | jenkins | v1.34.0 | 18 Sep 24 20:31 UTC | 18 Sep 24 20:31 UTC |
	|         | multinode-622675:/home/docker/cp-test_multinode-622675-m03_multinode-622675.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-622675 ssh -n                                                                 | multinode-622675 | jenkins | v1.34.0 | 18 Sep 24 20:31 UTC | 18 Sep 24 20:31 UTC |
	|         | multinode-622675-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-622675 ssh -n multinode-622675 sudo cat                                       | multinode-622675 | jenkins | v1.34.0 | 18 Sep 24 20:31 UTC | 18 Sep 24 20:31 UTC |
	|         | /home/docker/cp-test_multinode-622675-m03_multinode-622675.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-622675 cp multinode-622675-m03:/home/docker/cp-test.txt                       | multinode-622675 | jenkins | v1.34.0 | 18 Sep 24 20:31 UTC | 18 Sep 24 20:31 UTC |
	|         | multinode-622675-m02:/home/docker/cp-test_multinode-622675-m03_multinode-622675-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-622675 ssh -n                                                                 | multinode-622675 | jenkins | v1.34.0 | 18 Sep 24 20:31 UTC | 18 Sep 24 20:31 UTC |
	|         | multinode-622675-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-622675 ssh -n multinode-622675-m02 sudo cat                                   | multinode-622675 | jenkins | v1.34.0 | 18 Sep 24 20:31 UTC | 18 Sep 24 20:31 UTC |
	|         | /home/docker/cp-test_multinode-622675-m03_multinode-622675-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-622675 node stop m03                                                          | multinode-622675 | jenkins | v1.34.0 | 18 Sep 24 20:31 UTC | 18 Sep 24 20:31 UTC |
	| node    | multinode-622675 node start                                                             | multinode-622675 | jenkins | v1.34.0 | 18 Sep 24 20:31 UTC | 18 Sep 24 20:32 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-622675                                                                | multinode-622675 | jenkins | v1.34.0 | 18 Sep 24 20:32 UTC |                     |
	| stop    | -p multinode-622675                                                                     | multinode-622675 | jenkins | v1.34.0 | 18 Sep 24 20:32 UTC |                     |
	| start   | -p multinode-622675                                                                     | multinode-622675 | jenkins | v1.34.0 | 18 Sep 24 20:34 UTC | 18 Sep 24 20:37 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-622675                                                                | multinode-622675 | jenkins | v1.34.0 | 18 Sep 24 20:37 UTC |                     |
	| node    | multinode-622675 node delete                                                            | multinode-622675 | jenkins | v1.34.0 | 18 Sep 24 20:37 UTC | 18 Sep 24 20:37 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-622675 stop                                                                   | multinode-622675 | jenkins | v1.34.0 | 18 Sep 24 20:37 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/18 20:34:04
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0918 20:34:04.479779   44697 out.go:345] Setting OutFile to fd 1 ...
	I0918 20:34:04.479912   44697 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 20:34:04.479918   44697 out.go:358] Setting ErrFile to fd 2...
	I0918 20:34:04.479922   44697 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 20:34:04.480129   44697 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19667-7671/.minikube/bin
	I0918 20:34:04.480736   44697 out.go:352] Setting JSON to false
	I0918 20:34:04.481651   44697 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4588,"bootTime":1726687056,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0918 20:34:04.481746   44697 start.go:139] virtualization: kvm guest
	I0918 20:34:04.484109   44697 out.go:177] * [multinode-622675] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0918 20:34:04.485685   44697 notify.go:220] Checking for updates...
	I0918 20:34:04.485732   44697 out.go:177]   - MINIKUBE_LOCATION=19667
	I0918 20:34:04.487384   44697 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 20:34:04.488980   44697 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19667-7671/kubeconfig
	I0918 20:34:04.490676   44697 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19667-7671/.minikube
	I0918 20:34:04.492253   44697 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0918 20:34:04.493779   44697 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0918 20:34:04.495586   44697 config.go:182] Loaded profile config "multinode-622675": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0918 20:34:04.495688   44697 driver.go:394] Setting default libvirt URI to qemu:///system
	I0918 20:34:04.496171   44697 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 20:34:04.496208   44697 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 20:34:04.512042   44697 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39609
	I0918 20:34:04.512552   44697 main.go:141] libmachine: () Calling .GetVersion
	I0918 20:34:04.513148   44697 main.go:141] libmachine: Using API Version  1
	I0918 20:34:04.513173   44697 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 20:34:04.513551   44697 main.go:141] libmachine: () Calling .GetMachineName
	I0918 20:34:04.513728   44697 main.go:141] libmachine: (multinode-622675) Calling .DriverName
	I0918 20:34:04.550936   44697 out.go:177] * Using the kvm2 driver based on existing profile
	I0918 20:34:04.552612   44697 start.go:297] selected driver: kvm2
	I0918 20:34:04.552633   44697 start.go:901] validating driver "kvm2" against &{Name:multinode-622675 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-622675 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.106 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.224 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.216 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 20:34:04.552768   44697 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 20:34:04.553080   44697 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 20:34:04.553163   44697 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19667-7671/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0918 20:34:04.569212   44697 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0918 20:34:04.570061   44697 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0918 20:34:04.570103   44697 cni.go:84] Creating CNI manager for ""
	I0918 20:34:04.570156   44697 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0918 20:34:04.570213   44697 start.go:340] cluster config:
	{Name:multinode-622675 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-622675 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.106 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.224 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.216 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 20:34:04.570341   44697 iso.go:125] acquiring lock: {Name:mkbcf4d84f3091567a39802cd5e773cb10b8c545 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 20:34:04.572467   44697 out.go:177] * Starting "multinode-622675" primary control-plane node in "multinode-622675" cluster
	I0918 20:34:04.573864   44697 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0918 20:34:04.573931   44697 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0918 20:34:04.573946   44697 cache.go:56] Caching tarball of preloaded images
	I0918 20:34:04.574056   44697 preload.go:172] Found /home/jenkins/minikube-integration/19667-7671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0918 20:34:04.574067   44697 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0918 20:34:04.574191   44697 profile.go:143] Saving config to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/multinode-622675/config.json ...
	I0918 20:34:04.574406   44697 start.go:360] acquireMachinesLock for multinode-622675: {Name:mk1b0d68a0b7d9d4bc8204d5d81cb9eb77222526 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 20:34:04.574450   44697 start.go:364] duration metric: took 25.038µs to acquireMachinesLock for "multinode-622675"
	I0918 20:34:04.574464   44697 start.go:96] Skipping create...Using existing machine configuration
	I0918 20:34:04.574469   44697 fix.go:54] fixHost starting: 
	I0918 20:34:04.574720   44697 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 20:34:04.574756   44697 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 20:34:04.590907   44697 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36875
	I0918 20:34:04.591348   44697 main.go:141] libmachine: () Calling .GetVersion
	I0918 20:34:04.591821   44697 main.go:141] libmachine: Using API Version  1
	I0918 20:34:04.591837   44697 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 20:34:04.592204   44697 main.go:141] libmachine: () Calling .GetMachineName
	I0918 20:34:04.592427   44697 main.go:141] libmachine: (multinode-622675) Calling .DriverName
	I0918 20:34:04.592586   44697 main.go:141] libmachine: (multinode-622675) Calling .GetState
	I0918 20:34:04.594108   44697 fix.go:112] recreateIfNeeded on multinode-622675: state=Running err=<nil>
	W0918 20:34:04.594130   44697 fix.go:138] unexpected machine state, will restart: <nil>
	I0918 20:34:04.597784   44697 out.go:177] * Updating the running kvm2 "multinode-622675" VM ...
	I0918 20:34:04.599106   44697 machine.go:93] provisionDockerMachine start ...
	I0918 20:34:04.599131   44697 main.go:141] libmachine: (multinode-622675) Calling .DriverName
	I0918 20:34:04.599389   44697 main.go:141] libmachine: (multinode-622675) Calling .GetSSHHostname
	I0918 20:34:04.602041   44697 main.go:141] libmachine: (multinode-622675) DBG | domain multinode-622675 has defined MAC address 52:54:00:b9:25:86 in network mk-multinode-622675
	I0918 20:34:04.602482   44697 main.go:141] libmachine: (multinode-622675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:25:86", ip: ""} in network mk-multinode-622675: {Iface:virbr1 ExpiryTime:2024-09-18 21:28:42 +0000 UTC Type:0 Mac:52:54:00:b9:25:86 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:multinode-622675 Clientid:01:52:54:00:b9:25:86}
	I0918 20:34:04.602506   44697 main.go:141] libmachine: (multinode-622675) DBG | domain multinode-622675 has defined IP address 192.168.39.106 and MAC address 52:54:00:b9:25:86 in network mk-multinode-622675
	I0918 20:34:04.602674   44697 main.go:141] libmachine: (multinode-622675) Calling .GetSSHPort
	I0918 20:34:04.602822   44697 main.go:141] libmachine: (multinode-622675) Calling .GetSSHKeyPath
	I0918 20:34:04.602970   44697 main.go:141] libmachine: (multinode-622675) Calling .GetSSHKeyPath
	I0918 20:34:04.603137   44697 main.go:141] libmachine: (multinode-622675) Calling .GetSSHUsername
	I0918 20:34:04.603313   44697 main.go:141] libmachine: Using SSH client type: native
	I0918 20:34:04.603510   44697 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.106 22 <nil> <nil>}
	I0918 20:34:04.603521   44697 main.go:141] libmachine: About to run SSH command:
	hostname
	I0918 20:34:04.718430   44697 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-622675
	
	I0918 20:34:04.718464   44697 main.go:141] libmachine: (multinode-622675) Calling .GetMachineName
	I0918 20:34:04.718766   44697 buildroot.go:166] provisioning hostname "multinode-622675"
	I0918 20:34:04.718796   44697 main.go:141] libmachine: (multinode-622675) Calling .GetMachineName
	I0918 20:34:04.718958   44697 main.go:141] libmachine: (multinode-622675) Calling .GetSSHHostname
	I0918 20:34:04.722121   44697 main.go:141] libmachine: (multinode-622675) DBG | domain multinode-622675 has defined MAC address 52:54:00:b9:25:86 in network mk-multinode-622675
	I0918 20:34:04.722521   44697 main.go:141] libmachine: (multinode-622675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:25:86", ip: ""} in network mk-multinode-622675: {Iface:virbr1 ExpiryTime:2024-09-18 21:28:42 +0000 UTC Type:0 Mac:52:54:00:b9:25:86 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:multinode-622675 Clientid:01:52:54:00:b9:25:86}
	I0918 20:34:04.722542   44697 main.go:141] libmachine: (multinode-622675) DBG | domain multinode-622675 has defined IP address 192.168.39.106 and MAC address 52:54:00:b9:25:86 in network mk-multinode-622675
	I0918 20:34:04.722735   44697 main.go:141] libmachine: (multinode-622675) Calling .GetSSHPort
	I0918 20:34:04.722926   44697 main.go:141] libmachine: (multinode-622675) Calling .GetSSHKeyPath
	I0918 20:34:04.723060   44697 main.go:141] libmachine: (multinode-622675) Calling .GetSSHKeyPath
	I0918 20:34:04.723197   44697 main.go:141] libmachine: (multinode-622675) Calling .GetSSHUsername
	I0918 20:34:04.723428   44697 main.go:141] libmachine: Using SSH client type: native
	I0918 20:34:04.723601   44697 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.106 22 <nil> <nil>}
	I0918 20:34:04.723612   44697 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-622675 && echo "multinode-622675" | sudo tee /etc/hostname
	I0918 20:34:04.844693   44697 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-622675
	
	I0918 20:34:04.844736   44697 main.go:141] libmachine: (multinode-622675) Calling .GetSSHHostname
	I0918 20:34:04.847872   44697 main.go:141] libmachine: (multinode-622675) DBG | domain multinode-622675 has defined MAC address 52:54:00:b9:25:86 in network mk-multinode-622675
	I0918 20:34:04.848323   44697 main.go:141] libmachine: (multinode-622675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:25:86", ip: ""} in network mk-multinode-622675: {Iface:virbr1 ExpiryTime:2024-09-18 21:28:42 +0000 UTC Type:0 Mac:52:54:00:b9:25:86 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:multinode-622675 Clientid:01:52:54:00:b9:25:86}
	I0918 20:34:04.848367   44697 main.go:141] libmachine: (multinode-622675) DBG | domain multinode-622675 has defined IP address 192.168.39.106 and MAC address 52:54:00:b9:25:86 in network mk-multinode-622675
	I0918 20:34:04.848661   44697 main.go:141] libmachine: (multinode-622675) Calling .GetSSHPort
	I0918 20:34:04.848921   44697 main.go:141] libmachine: (multinode-622675) Calling .GetSSHKeyPath
	I0918 20:34:04.849262   44697 main.go:141] libmachine: (multinode-622675) Calling .GetSSHKeyPath
	I0918 20:34:04.849470   44697 main.go:141] libmachine: (multinode-622675) Calling .GetSSHUsername
	I0918 20:34:04.849764   44697 main.go:141] libmachine: Using SSH client type: native
	I0918 20:34:04.849944   44697 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.106 22 <nil> <nil>}
	I0918 20:34:04.849961   44697 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-622675' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-622675/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-622675' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0918 20:34:04.957378   44697 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0918 20:34:04.957413   44697 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19667-7671/.minikube CaCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19667-7671/.minikube}
	I0918 20:34:04.957439   44697 buildroot.go:174] setting up certificates
	I0918 20:34:04.957452   44697 provision.go:84] configureAuth start
	I0918 20:34:04.957474   44697 main.go:141] libmachine: (multinode-622675) Calling .GetMachineName
	I0918 20:34:04.957738   44697 main.go:141] libmachine: (multinode-622675) Calling .GetIP
	I0918 20:34:04.960575   44697 main.go:141] libmachine: (multinode-622675) DBG | domain multinode-622675 has defined MAC address 52:54:00:b9:25:86 in network mk-multinode-622675
	I0918 20:34:04.960905   44697 main.go:141] libmachine: (multinode-622675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:25:86", ip: ""} in network mk-multinode-622675: {Iface:virbr1 ExpiryTime:2024-09-18 21:28:42 +0000 UTC Type:0 Mac:52:54:00:b9:25:86 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:multinode-622675 Clientid:01:52:54:00:b9:25:86}
	I0918 20:34:04.960936   44697 main.go:141] libmachine: (multinode-622675) DBG | domain multinode-622675 has defined IP address 192.168.39.106 and MAC address 52:54:00:b9:25:86 in network mk-multinode-622675
	I0918 20:34:04.961187   44697 main.go:141] libmachine: (multinode-622675) Calling .GetSSHHostname
	I0918 20:34:04.963675   44697 main.go:141] libmachine: (multinode-622675) DBG | domain multinode-622675 has defined MAC address 52:54:00:b9:25:86 in network mk-multinode-622675
	I0918 20:34:04.964168   44697 main.go:141] libmachine: (multinode-622675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:25:86", ip: ""} in network mk-multinode-622675: {Iface:virbr1 ExpiryTime:2024-09-18 21:28:42 +0000 UTC Type:0 Mac:52:54:00:b9:25:86 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:multinode-622675 Clientid:01:52:54:00:b9:25:86}
	I0918 20:34:04.964207   44697 main.go:141] libmachine: (multinode-622675) DBG | domain multinode-622675 has defined IP address 192.168.39.106 and MAC address 52:54:00:b9:25:86 in network mk-multinode-622675
	I0918 20:34:04.964384   44697 provision.go:143] copyHostCerts
	I0918 20:34:04.964420   44697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem
	I0918 20:34:04.964473   44697 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem, removing ...
	I0918 20:34:04.964491   44697 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem
	I0918 20:34:04.964569   44697 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem (1679 bytes)
	I0918 20:34:04.964685   44697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem
	I0918 20:34:04.964711   44697 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem, removing ...
	I0918 20:34:04.964718   44697 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem
	I0918 20:34:04.964766   44697 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem (1082 bytes)
	I0918 20:34:04.964850   44697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem
	I0918 20:34:04.964874   44697 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem, removing ...
	I0918 20:34:04.964889   44697 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem
	I0918 20:34:04.964929   44697 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem (1123 bytes)
	I0918 20:34:04.965012   44697 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem org=jenkins.multinode-622675 san=[127.0.0.1 192.168.39.106 localhost minikube multinode-622675]
	I0918 20:34:05.219307   44697 provision.go:177] copyRemoteCerts
	I0918 20:34:05.219380   44697 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0918 20:34:05.219403   44697 main.go:141] libmachine: (multinode-622675) Calling .GetSSHHostname
	I0918 20:34:05.222023   44697 main.go:141] libmachine: (multinode-622675) DBG | domain multinode-622675 has defined MAC address 52:54:00:b9:25:86 in network mk-multinode-622675
	I0918 20:34:05.222311   44697 main.go:141] libmachine: (multinode-622675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:25:86", ip: ""} in network mk-multinode-622675: {Iface:virbr1 ExpiryTime:2024-09-18 21:28:42 +0000 UTC Type:0 Mac:52:54:00:b9:25:86 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:multinode-622675 Clientid:01:52:54:00:b9:25:86}
	I0918 20:34:05.222337   44697 main.go:141] libmachine: (multinode-622675) DBG | domain multinode-622675 has defined IP address 192.168.39.106 and MAC address 52:54:00:b9:25:86 in network mk-multinode-622675
	I0918 20:34:05.222559   44697 main.go:141] libmachine: (multinode-622675) Calling .GetSSHPort
	I0918 20:34:05.222756   44697 main.go:141] libmachine: (multinode-622675) Calling .GetSSHKeyPath
	I0918 20:34:05.222916   44697 main.go:141] libmachine: (multinode-622675) Calling .GetSSHUsername
	I0918 20:34:05.223018   44697 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/multinode-622675/id_rsa Username:docker}
	I0918 20:34:05.306981   44697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0918 20:34:05.307056   44697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0918 20:34:05.334778   44697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0918 20:34:05.334854   44697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0918 20:34:05.359332   44697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0918 20:34:05.359431   44697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0918 20:34:05.385864   44697 provision.go:87] duration metric: took 428.39632ms to configureAuth
	I0918 20:34:05.385894   44697 buildroot.go:189] setting minikube options for container-runtime
	I0918 20:34:05.386134   44697 config.go:182] Loaded profile config "multinode-622675": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0918 20:34:05.386235   44697 main.go:141] libmachine: (multinode-622675) Calling .GetSSHHostname
	I0918 20:34:05.388708   44697 main.go:141] libmachine: (multinode-622675) DBG | domain multinode-622675 has defined MAC address 52:54:00:b9:25:86 in network mk-multinode-622675
	I0918 20:34:05.389058   44697 main.go:141] libmachine: (multinode-622675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:25:86", ip: ""} in network mk-multinode-622675: {Iface:virbr1 ExpiryTime:2024-09-18 21:28:42 +0000 UTC Type:0 Mac:52:54:00:b9:25:86 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:multinode-622675 Clientid:01:52:54:00:b9:25:86}
	I0918 20:34:05.389092   44697 main.go:141] libmachine: (multinode-622675) DBG | domain multinode-622675 has defined IP address 192.168.39.106 and MAC address 52:54:00:b9:25:86 in network mk-multinode-622675
	I0918 20:34:05.389211   44697 main.go:141] libmachine: (multinode-622675) Calling .GetSSHPort
	I0918 20:34:05.389433   44697 main.go:141] libmachine: (multinode-622675) Calling .GetSSHKeyPath
	I0918 20:34:05.389571   44697 main.go:141] libmachine: (multinode-622675) Calling .GetSSHKeyPath
	I0918 20:34:05.389687   44697 main.go:141] libmachine: (multinode-622675) Calling .GetSSHUsername
	I0918 20:34:05.389810   44697 main.go:141] libmachine: Using SSH client type: native
	I0918 20:34:05.389970   44697 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.106 22 <nil> <nil>}
	I0918 20:34:05.389984   44697 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0918 20:35:36.211756   44697 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0918 20:35:36.211788   44697 machine.go:96] duration metric: took 1m31.612665437s to provisionDockerMachine
	I0918 20:35:36.211802   44697 start.go:293] postStartSetup for "multinode-622675" (driver="kvm2")
	I0918 20:35:36.211817   44697 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0918 20:35:36.211837   44697 main.go:141] libmachine: (multinode-622675) Calling .DriverName
	I0918 20:35:36.212131   44697 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0918 20:35:36.212158   44697 main.go:141] libmachine: (multinode-622675) Calling .GetSSHHostname
	I0918 20:35:36.215231   44697 main.go:141] libmachine: (multinode-622675) DBG | domain multinode-622675 has defined MAC address 52:54:00:b9:25:86 in network mk-multinode-622675
	I0918 20:35:36.215608   44697 main.go:141] libmachine: (multinode-622675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:25:86", ip: ""} in network mk-multinode-622675: {Iface:virbr1 ExpiryTime:2024-09-18 21:28:42 +0000 UTC Type:0 Mac:52:54:00:b9:25:86 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:multinode-622675 Clientid:01:52:54:00:b9:25:86}
	I0918 20:35:36.215631   44697 main.go:141] libmachine: (multinode-622675) DBG | domain multinode-622675 has defined IP address 192.168.39.106 and MAC address 52:54:00:b9:25:86 in network mk-multinode-622675
	I0918 20:35:36.215744   44697 main.go:141] libmachine: (multinode-622675) Calling .GetSSHPort
	I0918 20:35:36.215973   44697 main.go:141] libmachine: (multinode-622675) Calling .GetSSHKeyPath
	I0918 20:35:36.216143   44697 main.go:141] libmachine: (multinode-622675) Calling .GetSSHUsername
	I0918 20:35:36.216289   44697 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/multinode-622675/id_rsa Username:docker}
	I0918 20:35:36.299474   44697 ssh_runner.go:195] Run: cat /etc/os-release
	I0918 20:35:36.303628   44697 command_runner.go:130] > NAME=Buildroot
	I0918 20:35:36.303653   44697 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0918 20:35:36.303660   44697 command_runner.go:130] > ID=buildroot
	I0918 20:35:36.303668   44697 command_runner.go:130] > VERSION_ID=2023.02.9
	I0918 20:35:36.303676   44697 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0918 20:35:36.303729   44697 info.go:137] Remote host: Buildroot 2023.02.9
	I0918 20:35:36.303764   44697 filesync.go:126] Scanning /home/jenkins/minikube-integration/19667-7671/.minikube/addons for local assets ...
	I0918 20:35:36.303864   44697 filesync.go:126] Scanning /home/jenkins/minikube-integration/19667-7671/.minikube/files for local assets ...
	I0918 20:35:36.303978   44697 filesync.go:149] local asset: /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem -> 148782.pem in /etc/ssl/certs
	I0918 20:35:36.303991   44697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem -> /etc/ssl/certs/148782.pem
	I0918 20:35:36.304151   44697 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0918 20:35:36.313290   44697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem --> /etc/ssl/certs/148782.pem (1708 bytes)
	I0918 20:35:36.335638   44697 start.go:296] duration metric: took 123.820558ms for postStartSetup
	I0918 20:35:36.335680   44697 fix.go:56] duration metric: took 1m31.761210518s for fixHost
	I0918 20:35:36.335705   44697 main.go:141] libmachine: (multinode-622675) Calling .GetSSHHostname
	I0918 20:35:36.338542   44697 main.go:141] libmachine: (multinode-622675) DBG | domain multinode-622675 has defined MAC address 52:54:00:b9:25:86 in network mk-multinode-622675
	I0918 20:35:36.338980   44697 main.go:141] libmachine: (multinode-622675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:25:86", ip: ""} in network mk-multinode-622675: {Iface:virbr1 ExpiryTime:2024-09-18 21:28:42 +0000 UTC Type:0 Mac:52:54:00:b9:25:86 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:multinode-622675 Clientid:01:52:54:00:b9:25:86}
	I0918 20:35:36.339011   44697 main.go:141] libmachine: (multinode-622675) DBG | domain multinode-622675 has defined IP address 192.168.39.106 and MAC address 52:54:00:b9:25:86 in network mk-multinode-622675
	I0918 20:35:36.339182   44697 main.go:141] libmachine: (multinode-622675) Calling .GetSSHPort
	I0918 20:35:36.339381   44697 main.go:141] libmachine: (multinode-622675) Calling .GetSSHKeyPath
	I0918 20:35:36.339550   44697 main.go:141] libmachine: (multinode-622675) Calling .GetSSHKeyPath
	I0918 20:35:36.339704   44697 main.go:141] libmachine: (multinode-622675) Calling .GetSSHUsername
	I0918 20:35:36.339873   44697 main.go:141] libmachine: Using SSH client type: native
	I0918 20:35:36.340093   44697 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.106 22 <nil> <nil>}
	I0918 20:35:36.340107   44697 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0918 20:35:36.440496   44697 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726691736.415186423
	
	I0918 20:35:36.440527   44697 fix.go:216] guest clock: 1726691736.415186423
	I0918 20:35:36.440539   44697 fix.go:229] Guest: 2024-09-18 20:35:36.415186423 +0000 UTC Remote: 2024-09-18 20:35:36.335685926 +0000 UTC m=+91.892811149 (delta=79.500497ms)
	I0918 20:35:36.440615   44697 fix.go:200] guest clock delta is within tolerance: 79.500497ms
	I0918 20:35:36.440622   44697 start.go:83] releasing machines lock for "multinode-622675", held for 1m31.866163179s
	I0918 20:35:36.440647   44697 main.go:141] libmachine: (multinode-622675) Calling .DriverName
	I0918 20:35:36.440889   44697 main.go:141] libmachine: (multinode-622675) Calling .GetIP
	I0918 20:35:36.443691   44697 main.go:141] libmachine: (multinode-622675) DBG | domain multinode-622675 has defined MAC address 52:54:00:b9:25:86 in network mk-multinode-622675
	I0918 20:35:36.444123   44697 main.go:141] libmachine: (multinode-622675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:25:86", ip: ""} in network mk-multinode-622675: {Iface:virbr1 ExpiryTime:2024-09-18 21:28:42 +0000 UTC Type:0 Mac:52:54:00:b9:25:86 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:multinode-622675 Clientid:01:52:54:00:b9:25:86}
	I0918 20:35:36.444155   44697 main.go:141] libmachine: (multinode-622675) DBG | domain multinode-622675 has defined IP address 192.168.39.106 and MAC address 52:54:00:b9:25:86 in network mk-multinode-622675
	I0918 20:35:36.444325   44697 main.go:141] libmachine: (multinode-622675) Calling .DriverName
	I0918 20:35:36.444841   44697 main.go:141] libmachine: (multinode-622675) Calling .DriverName
	I0918 20:35:36.445014   44697 main.go:141] libmachine: (multinode-622675) Calling .DriverName
	I0918 20:35:36.445121   44697 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0918 20:35:36.445176   44697 main.go:141] libmachine: (multinode-622675) Calling .GetSSHHostname
	I0918 20:35:36.445222   44697 ssh_runner.go:195] Run: cat /version.json
	I0918 20:35:36.445246   44697 main.go:141] libmachine: (multinode-622675) Calling .GetSSHHostname
	I0918 20:35:36.447594   44697 main.go:141] libmachine: (multinode-622675) DBG | domain multinode-622675 has defined MAC address 52:54:00:b9:25:86 in network mk-multinode-622675
	I0918 20:35:36.447888   44697 main.go:141] libmachine: (multinode-622675) DBG | domain multinode-622675 has defined MAC address 52:54:00:b9:25:86 in network mk-multinode-622675
	I0918 20:35:36.447922   44697 main.go:141] libmachine: (multinode-622675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:25:86", ip: ""} in network mk-multinode-622675: {Iface:virbr1 ExpiryTime:2024-09-18 21:28:42 +0000 UTC Type:0 Mac:52:54:00:b9:25:86 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:multinode-622675 Clientid:01:52:54:00:b9:25:86}
	I0918 20:35:36.447943   44697 main.go:141] libmachine: (multinode-622675) DBG | domain multinode-622675 has defined IP address 192.168.39.106 and MAC address 52:54:00:b9:25:86 in network mk-multinode-622675
	I0918 20:35:36.448081   44697 main.go:141] libmachine: (multinode-622675) Calling .GetSSHPort
	I0918 20:35:36.448232   44697 main.go:141] libmachine: (multinode-622675) Calling .GetSSHKeyPath
	I0918 20:35:36.448373   44697 main.go:141] libmachine: (multinode-622675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:25:86", ip: ""} in network mk-multinode-622675: {Iface:virbr1 ExpiryTime:2024-09-18 21:28:42 +0000 UTC Type:0 Mac:52:54:00:b9:25:86 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:multinode-622675 Clientid:01:52:54:00:b9:25:86}
	I0918 20:35:36.448395   44697 main.go:141] libmachine: (multinode-622675) Calling .GetSSHUsername
	I0918 20:35:36.448397   44697 main.go:141] libmachine: (multinode-622675) DBG | domain multinode-622675 has defined IP address 192.168.39.106 and MAC address 52:54:00:b9:25:86 in network mk-multinode-622675
	I0918 20:35:36.448547   44697 main.go:141] libmachine: (multinode-622675) Calling .GetSSHPort
	I0918 20:35:36.448559   44697 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/multinode-622675/id_rsa Username:docker}
	I0918 20:35:36.448673   44697 main.go:141] libmachine: (multinode-622675) Calling .GetSSHKeyPath
	I0918 20:35:36.448802   44697 main.go:141] libmachine: (multinode-622675) Calling .GetSSHUsername
	I0918 20:35:36.448952   44697 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/multinode-622675/id_rsa Username:docker}
	I0918 20:35:36.561519   44697 command_runner.go:130] > {"iso_version": "v1.34.0-1726481713-19649", "kicbase_version": "v0.0.45-1726358845-19644", "minikube_version": "v1.34.0", "commit": "fcd4ba3dbb1ef408e3a4b79c864df2496ddd3848"}
	I0918 20:35:36.561616   44697 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0918 20:35:36.561699   44697 ssh_runner.go:195] Run: systemctl --version
	I0918 20:35:36.567794   44697 command_runner.go:130] > systemd 252 (252)
	I0918 20:35:36.567843   44697 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0918 20:35:36.567919   44697 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0918 20:35:36.726027   44697 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0918 20:35:36.733249   44697 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0918 20:35:36.733629   44697 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0918 20:35:36.733715   44697 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0918 20:35:36.743239   44697 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0918 20:35:36.743274   44697 start.go:495] detecting cgroup driver to use...
	I0918 20:35:36.743334   44697 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0918 20:35:36.760715   44697 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0918 20:35:36.774927   44697 docker.go:217] disabling cri-docker service (if available) ...
	I0918 20:35:36.775003   44697 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0918 20:35:36.789537   44697 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0918 20:35:36.803881   44697 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0918 20:35:36.948953   44697 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0918 20:35:37.092978   44697 docker.go:233] disabling docker service ...
	I0918 20:35:37.093047   44697 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0918 20:35:37.109546   44697 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0918 20:35:37.122495   44697 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0918 20:35:37.272657   44697 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0918 20:35:37.438662   44697 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0918 20:35:37.453461   44697 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0918 20:35:37.473257   44697 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0918 20:35:37.473303   44697 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0918 20:35:37.473355   44697 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 20:35:37.484093   44697 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0918 20:35:37.484163   44697 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 20:35:37.494635   44697 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 20:35:37.504677   44697 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 20:35:37.514562   44697 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0918 20:35:37.524980   44697 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 20:35:37.535106   44697 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 20:35:37.545930   44697 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 20:35:37.556040   44697 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0918 20:35:37.564982   44697 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0918 20:35:37.565052   44697 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0918 20:35:37.574443   44697 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 20:35:37.719364   44697 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0918 20:35:46.336004   44697 ssh_runner.go:235] Completed: sudo systemctl restart crio: (8.616602624s)
	I0918 20:35:46.336059   44697 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0918 20:35:46.336108   44697 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0918 20:35:46.340890   44697 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0918 20:35:46.340928   44697 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0918 20:35:46.340939   44697 command_runner.go:130] > Device: 0,22	Inode: 1297        Links: 1
	I0918 20:35:46.340951   44697 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0918 20:35:46.340960   44697 command_runner.go:130] > Access: 2024-09-18 20:35:46.201222832 +0000
	I0918 20:35:46.340970   44697 command_runner.go:130] > Modify: 2024-09-18 20:35:46.201222832 +0000
	I0918 20:35:46.340980   44697 command_runner.go:130] > Change: 2024-09-18 20:35:46.201222832 +0000
	I0918 20:35:46.340991   44697 command_runner.go:130] >  Birth: -
	I0918 20:35:46.341024   44697 start.go:563] Will wait 60s for crictl version
	I0918 20:35:46.341085   44697 ssh_runner.go:195] Run: which crictl
	I0918 20:35:46.344791   44697 command_runner.go:130] > /usr/bin/crictl
	I0918 20:35:46.344862   44697 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0918 20:35:46.379796   44697 command_runner.go:130] > Version:  0.1.0
	I0918 20:35:46.379819   44697 command_runner.go:130] > RuntimeName:  cri-o
	I0918 20:35:46.379823   44697 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0918 20:35:46.379829   44697 command_runner.go:130] > RuntimeApiVersion:  v1
	I0918 20:35:46.381215   44697 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
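	The version probe above goes through /etc/crictl.yaml, which the earlier tee populated with the CRI-O socket as runtime-endpoint. The same check can be made explicit by passing the endpoint on the command line (illustrative only; the socket path is the one from the log):
	  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version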
	I0918 20:35:46.381319   44697 ssh_runner.go:195] Run: crio --version
	I0918 20:35:46.410014   44697 command_runner.go:130] > crio version 1.29.1
	I0918 20:35:46.410041   44697 command_runner.go:130] > Version:        1.29.1
	I0918 20:35:46.410047   44697 command_runner.go:130] > GitCommit:      unknown
	I0918 20:35:46.410052   44697 command_runner.go:130] > GitCommitDate:  unknown
	I0918 20:35:46.410056   44697 command_runner.go:130] > GitTreeState:   clean
	I0918 20:35:46.410062   44697 command_runner.go:130] > BuildDate:      2024-09-16T15:42:14Z
	I0918 20:35:46.410066   44697 command_runner.go:130] > GoVersion:      go1.21.6
	I0918 20:35:46.410070   44697 command_runner.go:130] > Compiler:       gc
	I0918 20:35:46.410074   44697 command_runner.go:130] > Platform:       linux/amd64
	I0918 20:35:46.410078   44697 command_runner.go:130] > Linkmode:       dynamic
	I0918 20:35:46.410082   44697 command_runner.go:130] > BuildTags:      
	I0918 20:35:46.410086   44697 command_runner.go:130] >   containers_image_ostree_stub
	I0918 20:35:46.410090   44697 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0918 20:35:46.410100   44697 command_runner.go:130] >   btrfs_noversion
	I0918 20:35:46.410105   44697 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0918 20:35:46.410109   44697 command_runner.go:130] >   libdm_no_deferred_remove
	I0918 20:35:46.410112   44697 command_runner.go:130] >   seccomp
	I0918 20:35:46.410117   44697 command_runner.go:130] > LDFlags:          unknown
	I0918 20:35:46.410121   44697 command_runner.go:130] > SeccompEnabled:   true
	I0918 20:35:46.410126   44697 command_runner.go:130] > AppArmorEnabled:  false
	I0918 20:35:46.411269   44697 ssh_runner.go:195] Run: crio --version
	I0918 20:35:46.439806   44697 command_runner.go:130] > crio version 1.29.1
	I0918 20:35:46.439830   44697 command_runner.go:130] > Version:        1.29.1
	I0918 20:35:46.439837   44697 command_runner.go:130] > GitCommit:      unknown
	I0918 20:35:46.439844   44697 command_runner.go:130] > GitCommitDate:  unknown
	I0918 20:35:46.439849   44697 command_runner.go:130] > GitTreeState:   clean
	I0918 20:35:46.439856   44697 command_runner.go:130] > BuildDate:      2024-09-16T15:42:14Z
	I0918 20:35:46.439861   44697 command_runner.go:130] > GoVersion:      go1.21.6
	I0918 20:35:46.439867   44697 command_runner.go:130] > Compiler:       gc
	I0918 20:35:46.439873   44697 command_runner.go:130] > Platform:       linux/amd64
	I0918 20:35:46.439880   44697 command_runner.go:130] > Linkmode:       dynamic
	I0918 20:35:46.439888   44697 command_runner.go:130] > BuildTags:      
	I0918 20:35:46.439895   44697 command_runner.go:130] >   containers_image_ostree_stub
	I0918 20:35:46.439905   44697 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0918 20:35:46.439912   44697 command_runner.go:130] >   btrfs_noversion
	I0918 20:35:46.439923   44697 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0918 20:35:46.439930   44697 command_runner.go:130] >   libdm_no_deferred_remove
	I0918 20:35:46.439940   44697 command_runner.go:130] >   seccomp
	I0918 20:35:46.439947   44697 command_runner.go:130] > LDFlags:          unknown
	I0918 20:35:46.439957   44697 command_runner.go:130] > SeccompEnabled:   true
	I0918 20:35:46.439964   44697 command_runner.go:130] > AppArmorEnabled:  false
	I0918 20:35:46.442034   44697 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0918 20:35:46.443490   44697 main.go:141] libmachine: (multinode-622675) Calling .GetIP
	I0918 20:35:46.446480   44697 main.go:141] libmachine: (multinode-622675) DBG | domain multinode-622675 has defined MAC address 52:54:00:b9:25:86 in network mk-multinode-622675
	I0918 20:35:46.446818   44697 main.go:141] libmachine: (multinode-622675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:25:86", ip: ""} in network mk-multinode-622675: {Iface:virbr1 ExpiryTime:2024-09-18 21:28:42 +0000 UTC Type:0 Mac:52:54:00:b9:25:86 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:multinode-622675 Clientid:01:52:54:00:b9:25:86}
	I0918 20:35:46.446846   44697 main.go:141] libmachine: (multinode-622675) DBG | domain multinode-622675 has defined IP address 192.168.39.106 and MAC address 52:54:00:b9:25:86 in network mk-multinode-622675
	I0918 20:35:46.447028   44697 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0918 20:35:46.451277   44697 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0918 20:35:46.451398   44697 kubeadm.go:883] updating cluster {Name:multinode-622675 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-622675 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.106 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.224 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.216 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0918 20:35:46.451532   44697 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0918 20:35:46.451573   44697 ssh_runner.go:195] Run: sudo crictl images --output json
	I0918 20:35:46.495802   44697 command_runner.go:130] > {
	I0918 20:35:46.495834   44697 command_runner.go:130] >   "images": [
	I0918 20:35:46.495841   44697 command_runner.go:130] >     {
	I0918 20:35:46.495852   44697 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0918 20:35:46.495859   44697 command_runner.go:130] >       "repoTags": [
	I0918 20:35:46.495869   44697 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0918 20:35:46.495875   44697 command_runner.go:130] >       ],
	I0918 20:35:46.495882   44697 command_runner.go:130] >       "repoDigests": [
	I0918 20:35:46.495895   44697 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0918 20:35:46.495910   44697 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0918 20:35:46.495919   44697 command_runner.go:130] >       ],
	I0918 20:35:46.495926   44697 command_runner.go:130] >       "size": "87190579",
	I0918 20:35:46.495934   44697 command_runner.go:130] >       "uid": null,
	I0918 20:35:46.495941   44697 command_runner.go:130] >       "username": "",
	I0918 20:35:46.495958   44697 command_runner.go:130] >       "spec": null,
	I0918 20:35:46.495964   44697 command_runner.go:130] >       "pinned": false
	I0918 20:35:46.495967   44697 command_runner.go:130] >     },
	I0918 20:35:46.495972   44697 command_runner.go:130] >     {
	I0918 20:35:46.495978   44697 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0918 20:35:46.495984   44697 command_runner.go:130] >       "repoTags": [
	I0918 20:35:46.495990   44697 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0918 20:35:46.495995   44697 command_runner.go:130] >       ],
	I0918 20:35:46.496000   44697 command_runner.go:130] >       "repoDigests": [
	I0918 20:35:46.496006   44697 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0918 20:35:46.496030   44697 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0918 20:35:46.496038   44697 command_runner.go:130] >       ],
	I0918 20:35:46.496044   44697 command_runner.go:130] >       "size": "1363676",
	I0918 20:35:46.496054   44697 command_runner.go:130] >       "uid": null,
	I0918 20:35:46.496067   44697 command_runner.go:130] >       "username": "",
	I0918 20:35:46.496076   44697 command_runner.go:130] >       "spec": null,
	I0918 20:35:46.496086   44697 command_runner.go:130] >       "pinned": false
	I0918 20:35:46.496096   44697 command_runner.go:130] >     },
	I0918 20:35:46.496100   44697 command_runner.go:130] >     {
	I0918 20:35:46.496106   44697 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0918 20:35:46.496111   44697 command_runner.go:130] >       "repoTags": [
	I0918 20:35:46.496116   44697 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0918 20:35:46.496122   44697 command_runner.go:130] >       ],
	I0918 20:35:46.496126   44697 command_runner.go:130] >       "repoDigests": [
	I0918 20:35:46.496133   44697 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0918 20:35:46.496142   44697 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0918 20:35:46.496150   44697 command_runner.go:130] >       ],
	I0918 20:35:46.496155   44697 command_runner.go:130] >       "size": "31470524",
	I0918 20:35:46.496178   44697 command_runner.go:130] >       "uid": null,
	I0918 20:35:46.496188   44697 command_runner.go:130] >       "username": "",
	I0918 20:35:46.496192   44697 command_runner.go:130] >       "spec": null,
	I0918 20:35:46.496195   44697 command_runner.go:130] >       "pinned": false
	I0918 20:35:46.496199   44697 command_runner.go:130] >     },
	I0918 20:35:46.496202   44697 command_runner.go:130] >     {
	I0918 20:35:46.496209   44697 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0918 20:35:46.496215   44697 command_runner.go:130] >       "repoTags": [
	I0918 20:35:46.496220   44697 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0918 20:35:46.496225   44697 command_runner.go:130] >       ],
	I0918 20:35:46.496229   44697 command_runner.go:130] >       "repoDigests": [
	I0918 20:35:46.496235   44697 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I0918 20:35:46.496247   44697 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I0918 20:35:46.496253   44697 command_runner.go:130] >       ],
	I0918 20:35:46.496257   44697 command_runner.go:130] >       "size": "63273227",
	I0918 20:35:46.496263   44697 command_runner.go:130] >       "uid": null,
	I0918 20:35:46.496273   44697 command_runner.go:130] >       "username": "nonroot",
	I0918 20:35:46.496279   44697 command_runner.go:130] >       "spec": null,
	I0918 20:35:46.496283   44697 command_runner.go:130] >       "pinned": false
	I0918 20:35:46.496288   44697 command_runner.go:130] >     },
	I0918 20:35:46.496292   44697 command_runner.go:130] >     {
	I0918 20:35:46.496299   44697 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0918 20:35:46.496305   44697 command_runner.go:130] >       "repoTags": [
	I0918 20:35:46.496310   44697 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0918 20:35:46.496313   44697 command_runner.go:130] >       ],
	I0918 20:35:46.496318   44697 command_runner.go:130] >       "repoDigests": [
	I0918 20:35:46.496326   44697 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0918 20:35:46.496335   44697 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0918 20:35:46.496339   44697 command_runner.go:130] >       ],
	I0918 20:35:46.496343   44697 command_runner.go:130] >       "size": "149009664",
	I0918 20:35:46.496348   44697 command_runner.go:130] >       "uid": {
	I0918 20:35:46.496351   44697 command_runner.go:130] >         "value": "0"
	I0918 20:35:46.496355   44697 command_runner.go:130] >       },
	I0918 20:35:46.496361   44697 command_runner.go:130] >       "username": "",
	I0918 20:35:46.496364   44697 command_runner.go:130] >       "spec": null,
	I0918 20:35:46.496370   44697 command_runner.go:130] >       "pinned": false
	I0918 20:35:46.496376   44697 command_runner.go:130] >     },
	I0918 20:35:46.496379   44697 command_runner.go:130] >     {
	I0918 20:35:46.496385   44697 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I0918 20:35:46.496392   44697 command_runner.go:130] >       "repoTags": [
	I0918 20:35:46.496397   44697 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I0918 20:35:46.496400   44697 command_runner.go:130] >       ],
	I0918 20:35:46.496406   44697 command_runner.go:130] >       "repoDigests": [
	I0918 20:35:46.496413   44697 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I0918 20:35:46.496427   44697 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I0918 20:35:46.496435   44697 command_runner.go:130] >       ],
	I0918 20:35:46.496441   44697 command_runner.go:130] >       "size": "95237600",
	I0918 20:35:46.496449   44697 command_runner.go:130] >       "uid": {
	I0918 20:35:46.496455   44697 command_runner.go:130] >         "value": "0"
	I0918 20:35:46.496464   44697 command_runner.go:130] >       },
	I0918 20:35:46.496470   44697 command_runner.go:130] >       "username": "",
	I0918 20:35:46.496478   44697 command_runner.go:130] >       "spec": null,
	I0918 20:35:46.496484   44697 command_runner.go:130] >       "pinned": false
	I0918 20:35:46.496493   44697 command_runner.go:130] >     },
	I0918 20:35:46.496499   44697 command_runner.go:130] >     {
	I0918 20:35:46.496509   44697 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I0918 20:35:46.496513   44697 command_runner.go:130] >       "repoTags": [
	I0918 20:35:46.496522   44697 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I0918 20:35:46.496526   44697 command_runner.go:130] >       ],
	I0918 20:35:46.496530   44697 command_runner.go:130] >       "repoDigests": [
	I0918 20:35:46.496538   44697 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I0918 20:35:46.496550   44697 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I0918 20:35:46.496555   44697 command_runner.go:130] >       ],
	I0918 20:35:46.496560   44697 command_runner.go:130] >       "size": "89437508",
	I0918 20:35:46.496564   44697 command_runner.go:130] >       "uid": {
	I0918 20:35:46.496568   44697 command_runner.go:130] >         "value": "0"
	I0918 20:35:46.496574   44697 command_runner.go:130] >       },
	I0918 20:35:46.496578   44697 command_runner.go:130] >       "username": "",
	I0918 20:35:46.496581   44697 command_runner.go:130] >       "spec": null,
	I0918 20:35:46.496585   44697 command_runner.go:130] >       "pinned": false
	I0918 20:35:46.496595   44697 command_runner.go:130] >     },
	I0918 20:35:46.496598   44697 command_runner.go:130] >     {
	I0918 20:35:46.496604   44697 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I0918 20:35:46.496608   44697 command_runner.go:130] >       "repoTags": [
	I0918 20:35:46.496613   44697 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I0918 20:35:46.496616   44697 command_runner.go:130] >       ],
	I0918 20:35:46.496620   44697 command_runner.go:130] >       "repoDigests": [
	I0918 20:35:46.496633   44697 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I0918 20:35:46.496642   44697 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I0918 20:35:46.496645   44697 command_runner.go:130] >       ],
	I0918 20:35:46.496650   44697 command_runner.go:130] >       "size": "92733849",
	I0918 20:35:46.496654   44697 command_runner.go:130] >       "uid": null,
	I0918 20:35:46.496658   44697 command_runner.go:130] >       "username": "",
	I0918 20:35:46.496662   44697 command_runner.go:130] >       "spec": null,
	I0918 20:35:46.496665   44697 command_runner.go:130] >       "pinned": false
	I0918 20:35:46.496669   44697 command_runner.go:130] >     },
	I0918 20:35:46.496673   44697 command_runner.go:130] >     {
	I0918 20:35:46.496679   44697 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I0918 20:35:46.496683   44697 command_runner.go:130] >       "repoTags": [
	I0918 20:35:46.496687   44697 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I0918 20:35:46.496690   44697 command_runner.go:130] >       ],
	I0918 20:35:46.496695   44697 command_runner.go:130] >       "repoDigests": [
	I0918 20:35:46.496707   44697 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I0918 20:35:46.496717   44697 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I0918 20:35:46.496720   44697 command_runner.go:130] >       ],
	I0918 20:35:46.496724   44697 command_runner.go:130] >       "size": "68420934",
	I0918 20:35:46.496728   44697 command_runner.go:130] >       "uid": {
	I0918 20:35:46.496731   44697 command_runner.go:130] >         "value": "0"
	I0918 20:35:46.496735   44697 command_runner.go:130] >       },
	I0918 20:35:46.496739   44697 command_runner.go:130] >       "username": "",
	I0918 20:35:46.496743   44697 command_runner.go:130] >       "spec": null,
	I0918 20:35:46.496746   44697 command_runner.go:130] >       "pinned": false
	I0918 20:35:46.496750   44697 command_runner.go:130] >     },
	I0918 20:35:46.496754   44697 command_runner.go:130] >     {
	I0918 20:35:46.496760   44697 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0918 20:35:46.496766   44697 command_runner.go:130] >       "repoTags": [
	I0918 20:35:46.496770   44697 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0918 20:35:46.496773   44697 command_runner.go:130] >       ],
	I0918 20:35:46.496779   44697 command_runner.go:130] >       "repoDigests": [
	I0918 20:35:46.496791   44697 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0918 20:35:46.496805   44697 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0918 20:35:46.496813   44697 command_runner.go:130] >       ],
	I0918 20:35:46.496819   44697 command_runner.go:130] >       "size": "742080",
	I0918 20:35:46.496826   44697 command_runner.go:130] >       "uid": {
	I0918 20:35:46.496832   44697 command_runner.go:130] >         "value": "65535"
	I0918 20:35:46.496841   44697 command_runner.go:130] >       },
	I0918 20:35:46.496847   44697 command_runner.go:130] >       "username": "",
	I0918 20:35:46.496854   44697 command_runner.go:130] >       "spec": null,
	I0918 20:35:46.496858   44697 command_runner.go:130] >       "pinned": true
	I0918 20:35:46.496864   44697 command_runner.go:130] >     }
	I0918 20:35:46.496868   44697 command_runner.go:130] >   ]
	I0918 20:35:46.496871   44697 command_runner.go:130] > }
	I0918 20:35:46.497027   44697 crio.go:514] all images are preloaded for cri-o runtime.
	I0918 20:35:46.497038   44697 crio.go:433] Images already preloaded, skipping extraction
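	The preload check works by parsing the JSON emitted by sudo crictl images --output json and matching the repoTags against the image list expected for Kubernetes v1.31.1 on crio. A rough shell equivalent of that comparison, assuming jq is installed on the node (it is not used by the test itself):
	  sudo crictl images --output json | jq -r '.images[].repoTags[]' | sort
	  # should list registry.k8s.io/kube-apiserver:v1.31.1, registry.k8s.io/etcd:3.5.15-0,
	  # registry.k8s.io/coredns/coredns:v1.11.3, registry.k8s.io/pause:3.10, ...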
	I0918 20:35:46.497090   44697 ssh_runner.go:195] Run: sudo crictl images --output json
	I0918 20:35:46.531830   44697 command_runner.go:130] > {
	I0918 20:35:46.531856   44697 command_runner.go:130] >   "images": [
	I0918 20:35:46.531861   44697 command_runner.go:130] >     {
	I0918 20:35:46.531868   44697 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0918 20:35:46.531872   44697 command_runner.go:130] >       "repoTags": [
	I0918 20:35:46.531879   44697 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0918 20:35:46.531883   44697 command_runner.go:130] >       ],
	I0918 20:35:46.531886   44697 command_runner.go:130] >       "repoDigests": [
	I0918 20:35:46.531894   44697 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0918 20:35:46.531901   44697 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0918 20:35:46.531905   44697 command_runner.go:130] >       ],
	I0918 20:35:46.531913   44697 command_runner.go:130] >       "size": "87190579",
	I0918 20:35:46.531920   44697 command_runner.go:130] >       "uid": null,
	I0918 20:35:46.531925   44697 command_runner.go:130] >       "username": "",
	I0918 20:35:46.531956   44697 command_runner.go:130] >       "spec": null,
	I0918 20:35:46.531963   44697 command_runner.go:130] >       "pinned": false
	I0918 20:35:46.531969   44697 command_runner.go:130] >     },
	I0918 20:35:46.531976   44697 command_runner.go:130] >     {
	I0918 20:35:46.531985   44697 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0918 20:35:46.531993   44697 command_runner.go:130] >       "repoTags": [
	I0918 20:35:46.532001   44697 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0918 20:35:46.532010   44697 command_runner.go:130] >       ],
	I0918 20:35:46.532037   44697 command_runner.go:130] >       "repoDigests": [
	I0918 20:35:46.532045   44697 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0918 20:35:46.532055   44697 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0918 20:35:46.532058   44697 command_runner.go:130] >       ],
	I0918 20:35:46.532063   44697 command_runner.go:130] >       "size": "1363676",
	I0918 20:35:46.532066   44697 command_runner.go:130] >       "uid": null,
	I0918 20:35:46.532074   44697 command_runner.go:130] >       "username": "",
	I0918 20:35:46.532079   44697 command_runner.go:130] >       "spec": null,
	I0918 20:35:46.532083   44697 command_runner.go:130] >       "pinned": false
	I0918 20:35:46.532089   44697 command_runner.go:130] >     },
	I0918 20:35:46.532095   44697 command_runner.go:130] >     {
	I0918 20:35:46.532103   44697 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0918 20:35:46.532109   44697 command_runner.go:130] >       "repoTags": [
	I0918 20:35:46.532114   44697 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0918 20:35:46.532120   44697 command_runner.go:130] >       ],
	I0918 20:35:46.532124   44697 command_runner.go:130] >       "repoDigests": [
	I0918 20:35:46.532132   44697 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0918 20:35:46.532141   44697 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0918 20:35:46.532145   44697 command_runner.go:130] >       ],
	I0918 20:35:46.532149   44697 command_runner.go:130] >       "size": "31470524",
	I0918 20:35:46.532154   44697 command_runner.go:130] >       "uid": null,
	I0918 20:35:46.532158   44697 command_runner.go:130] >       "username": "",
	I0918 20:35:46.532163   44697 command_runner.go:130] >       "spec": null,
	I0918 20:35:46.532169   44697 command_runner.go:130] >       "pinned": false
	I0918 20:35:46.532172   44697 command_runner.go:130] >     },
	I0918 20:35:46.532175   44697 command_runner.go:130] >     {
	I0918 20:35:46.532183   44697 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0918 20:35:46.532187   44697 command_runner.go:130] >       "repoTags": [
	I0918 20:35:46.532194   44697 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0918 20:35:46.532198   44697 command_runner.go:130] >       ],
	I0918 20:35:46.532204   44697 command_runner.go:130] >       "repoDigests": [
	I0918 20:35:46.532212   44697 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I0918 20:35:46.532224   44697 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I0918 20:35:46.532229   44697 command_runner.go:130] >       ],
	I0918 20:35:46.532233   44697 command_runner.go:130] >       "size": "63273227",
	I0918 20:35:46.532237   44697 command_runner.go:130] >       "uid": null,
	I0918 20:35:46.532245   44697 command_runner.go:130] >       "username": "nonroot",
	I0918 20:35:46.532251   44697 command_runner.go:130] >       "spec": null,
	I0918 20:35:46.532256   44697 command_runner.go:130] >       "pinned": false
	I0918 20:35:46.532260   44697 command_runner.go:130] >     },
	I0918 20:35:46.532263   44697 command_runner.go:130] >     {
	I0918 20:35:46.532270   44697 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0918 20:35:46.532274   44697 command_runner.go:130] >       "repoTags": [
	I0918 20:35:46.532282   44697 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0918 20:35:46.532290   44697 command_runner.go:130] >       ],
	I0918 20:35:46.532294   44697 command_runner.go:130] >       "repoDigests": [
	I0918 20:35:46.532303   44697 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0918 20:35:46.532312   44697 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0918 20:35:46.532322   44697 command_runner.go:130] >       ],
	I0918 20:35:46.532327   44697 command_runner.go:130] >       "size": "149009664",
	I0918 20:35:46.532330   44697 command_runner.go:130] >       "uid": {
	I0918 20:35:46.532334   44697 command_runner.go:130] >         "value": "0"
	I0918 20:35:46.532338   44697 command_runner.go:130] >       },
	I0918 20:35:46.532342   44697 command_runner.go:130] >       "username": "",
	I0918 20:35:46.532346   44697 command_runner.go:130] >       "spec": null,
	I0918 20:35:46.532351   44697 command_runner.go:130] >       "pinned": false
	I0918 20:35:46.532354   44697 command_runner.go:130] >     },
	I0918 20:35:46.532357   44697 command_runner.go:130] >     {
	I0918 20:35:46.532363   44697 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I0918 20:35:46.532370   44697 command_runner.go:130] >       "repoTags": [
	I0918 20:35:46.532376   44697 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I0918 20:35:46.532380   44697 command_runner.go:130] >       ],
	I0918 20:35:46.532384   44697 command_runner.go:130] >       "repoDigests": [
	I0918 20:35:46.532393   44697 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I0918 20:35:46.532400   44697 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I0918 20:35:46.532406   44697 command_runner.go:130] >       ],
	I0918 20:35:46.532410   44697 command_runner.go:130] >       "size": "95237600",
	I0918 20:35:46.532417   44697 command_runner.go:130] >       "uid": {
	I0918 20:35:46.532423   44697 command_runner.go:130] >         "value": "0"
	I0918 20:35:46.532429   44697 command_runner.go:130] >       },
	I0918 20:35:46.532435   44697 command_runner.go:130] >       "username": "",
	I0918 20:35:46.532444   44697 command_runner.go:130] >       "spec": null,
	I0918 20:35:46.532451   44697 command_runner.go:130] >       "pinned": false
	I0918 20:35:46.532458   44697 command_runner.go:130] >     },
	I0918 20:35:46.532463   44697 command_runner.go:130] >     {
	I0918 20:35:46.532475   44697 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I0918 20:35:46.532484   44697 command_runner.go:130] >       "repoTags": [
	I0918 20:35:46.532495   44697 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I0918 20:35:46.532504   44697 command_runner.go:130] >       ],
	I0918 20:35:46.532515   44697 command_runner.go:130] >       "repoDigests": [
	I0918 20:35:46.532524   44697 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I0918 20:35:46.532532   44697 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I0918 20:35:46.532541   44697 command_runner.go:130] >       ],
	I0918 20:35:46.532545   44697 command_runner.go:130] >       "size": "89437508",
	I0918 20:35:46.532548   44697 command_runner.go:130] >       "uid": {
	I0918 20:35:46.532552   44697 command_runner.go:130] >         "value": "0"
	I0918 20:35:46.532558   44697 command_runner.go:130] >       },
	I0918 20:35:46.532562   44697 command_runner.go:130] >       "username": "",
	I0918 20:35:46.532569   44697 command_runner.go:130] >       "spec": null,
	I0918 20:35:46.532576   44697 command_runner.go:130] >       "pinned": false
	I0918 20:35:46.532580   44697 command_runner.go:130] >     },
	I0918 20:35:46.532583   44697 command_runner.go:130] >     {
	I0918 20:35:46.532589   44697 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I0918 20:35:46.532596   44697 command_runner.go:130] >       "repoTags": [
	I0918 20:35:46.532601   44697 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I0918 20:35:46.532604   44697 command_runner.go:130] >       ],
	I0918 20:35:46.532609   44697 command_runner.go:130] >       "repoDigests": [
	I0918 20:35:46.532624   44697 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I0918 20:35:46.532634   44697 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I0918 20:35:46.532637   44697 command_runner.go:130] >       ],
	I0918 20:35:46.532641   44697 command_runner.go:130] >       "size": "92733849",
	I0918 20:35:46.532647   44697 command_runner.go:130] >       "uid": null,
	I0918 20:35:46.532651   44697 command_runner.go:130] >       "username": "",
	I0918 20:35:46.532658   44697 command_runner.go:130] >       "spec": null,
	I0918 20:35:46.532662   44697 command_runner.go:130] >       "pinned": false
	I0918 20:35:46.532665   44697 command_runner.go:130] >     },
	I0918 20:35:46.532669   44697 command_runner.go:130] >     {
	I0918 20:35:46.532676   44697 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I0918 20:35:46.532682   44697 command_runner.go:130] >       "repoTags": [
	I0918 20:35:46.532686   44697 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I0918 20:35:46.532690   44697 command_runner.go:130] >       ],
	I0918 20:35:46.532694   44697 command_runner.go:130] >       "repoDigests": [
	I0918 20:35:46.532702   44697 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I0918 20:35:46.532711   44697 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I0918 20:35:46.532714   44697 command_runner.go:130] >       ],
	I0918 20:35:46.532718   44697 command_runner.go:130] >       "size": "68420934",
	I0918 20:35:46.532724   44697 command_runner.go:130] >       "uid": {
	I0918 20:35:46.532727   44697 command_runner.go:130] >         "value": "0"
	I0918 20:35:46.532731   44697 command_runner.go:130] >       },
	I0918 20:35:46.532735   44697 command_runner.go:130] >       "username": "",
	I0918 20:35:46.532739   44697 command_runner.go:130] >       "spec": null,
	I0918 20:35:46.532744   44697 command_runner.go:130] >       "pinned": false
	I0918 20:35:46.532747   44697 command_runner.go:130] >     },
	I0918 20:35:46.532751   44697 command_runner.go:130] >     {
	I0918 20:35:46.532757   44697 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0918 20:35:46.532763   44697 command_runner.go:130] >       "repoTags": [
	I0918 20:35:46.532767   44697 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0918 20:35:46.532770   44697 command_runner.go:130] >       ],
	I0918 20:35:46.532775   44697 command_runner.go:130] >       "repoDigests": [
	I0918 20:35:46.532781   44697 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0918 20:35:46.532793   44697 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0918 20:35:46.532796   44697 command_runner.go:130] >       ],
	I0918 20:35:46.532800   44697 command_runner.go:130] >       "size": "742080",
	I0918 20:35:46.532804   44697 command_runner.go:130] >       "uid": {
	I0918 20:35:46.532808   44697 command_runner.go:130] >         "value": "65535"
	I0918 20:35:46.532811   44697 command_runner.go:130] >       },
	I0918 20:35:46.532815   44697 command_runner.go:130] >       "username": "",
	I0918 20:35:46.532821   44697 command_runner.go:130] >       "spec": null,
	I0918 20:35:46.532825   44697 command_runner.go:130] >       "pinned": true
	I0918 20:35:46.532828   44697 command_runner.go:130] >     }
	I0918 20:35:46.532834   44697 command_runner.go:130] >   ]
	I0918 20:35:46.532837   44697 command_runner.go:130] > }
	I0918 20:35:46.532948   44697 crio.go:514] all images are preloaded for cri-o runtime.
	I0918 20:35:46.532960   44697 cache_images.go:84] Images are preloaded, skipping loading
	I0918 20:35:46.532967   44697 kubeadm.go:934] updating node { 192.168.39.106 8443 v1.31.1 crio true true} ...
	I0918 20:35:46.533060   44697 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-622675 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.106
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:multinode-622675 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
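	The [Unit]/[Service] fragment above is the kubelet systemd drop-in minikube renders for this node; the overridden ExecStart points the kubelet at the minikube-managed binary, kubeconfig and node IP. A sketch of how such a drop-in is typically installed and activated (the path is an assumption based on the conventional kubeadm drop-in location; the unit content is taken verbatim from the log):
	  sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf <<'EOF'
	  [Unit]
	  Wants=crio.service

	  [Service]
	  ExecStart=
	  ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-622675 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.106

	  [Install]
	  EOF
	  sudo systemctl daemon-reload && sudo systemctl restart kubelet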
	I0918 20:35:46.533120   44697 ssh_runner.go:195] Run: crio config
	I0918 20:35:46.570896   44697 command_runner.go:130] ! time="2024-09-18 20:35:46.544989453Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0918 20:35:46.576645   44697 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0918 20:35:46.590735   44697 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0918 20:35:46.590758   44697 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0918 20:35:46.590767   44697 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0918 20:35:46.590771   44697 command_runner.go:130] > #
	I0918 20:35:46.590778   44697 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0918 20:35:46.590783   44697 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0918 20:35:46.590790   44697 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0918 20:35:46.590820   44697 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0918 20:35:46.590830   44697 command_runner.go:130] > # reload'.
	I0918 20:35:46.590838   44697 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0918 20:35:46.590847   44697 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0918 20:35:46.590858   44697 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0918 20:35:46.590865   44697 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0918 20:35:46.590873   44697 command_runner.go:130] > [crio]
	I0918 20:35:46.590879   44697 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0918 20:35:46.590886   44697 command_runner.go:130] > # containers images, in this directory.
	I0918 20:35:46.590891   44697 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0918 20:35:46.590900   44697 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0918 20:35:46.590906   44697 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0918 20:35:46.590914   44697 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0918 20:35:46.590923   44697 command_runner.go:130] > # imagestore = ""
	I0918 20:35:46.590935   44697 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0918 20:35:46.590947   44697 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0918 20:35:46.590956   44697 command_runner.go:130] > storage_driver = "overlay"
	I0918 20:35:46.590964   44697 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0918 20:35:46.590972   44697 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0918 20:35:46.590976   44697 command_runner.go:130] > storage_option = [
	I0918 20:35:46.590980   44697 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0918 20:35:46.590983   44697 command_runner.go:130] > ]
	I0918 20:35:46.590990   44697 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0918 20:35:46.590997   44697 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0918 20:35:46.591001   44697 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0918 20:35:46.591009   44697 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0918 20:35:46.591015   44697 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0918 20:35:46.591021   44697 command_runner.go:130] > # always happen on a node reboot
	I0918 20:35:46.591026   44697 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0918 20:35:46.591037   44697 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0918 20:35:46.591043   44697 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0918 20:35:46.591048   44697 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0918 20:35:46.591053   44697 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0918 20:35:46.591063   44697 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0918 20:35:46.591070   44697 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0918 20:35:46.591076   44697 command_runner.go:130] > # internal_wipe = true
	I0918 20:35:46.591084   44697 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0918 20:35:46.591091   44697 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0918 20:35:46.591095   44697 command_runner.go:130] > # internal_repair = false
	I0918 20:35:46.591102   44697 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0918 20:35:46.591108   44697 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0918 20:35:46.591116   44697 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0918 20:35:46.591121   44697 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0918 20:35:46.591132   44697 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0918 20:35:46.591135   44697 command_runner.go:130] > [crio.api]
	I0918 20:35:46.591140   44697 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0918 20:35:46.591144   44697 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0918 20:35:46.591151   44697 command_runner.go:130] > # IP address on which the stream server will listen.
	I0918 20:35:46.591155   44697 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0918 20:35:46.591161   44697 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0918 20:35:46.591167   44697 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0918 20:35:46.591171   44697 command_runner.go:130] > # stream_port = "0"
	I0918 20:35:46.591177   44697 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0918 20:35:46.591181   44697 command_runner.go:130] > # stream_enable_tls = false
	I0918 20:35:46.591187   44697 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0918 20:35:46.591191   44697 command_runner.go:130] > # stream_idle_timeout = ""
	I0918 20:35:46.591198   44697 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0918 20:35:46.591206   44697 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0918 20:35:46.591210   44697 command_runner.go:130] > # minutes.
	I0918 20:35:46.591215   44697 command_runner.go:130] > # stream_tls_cert = ""
	I0918 20:35:46.591221   44697 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0918 20:35:46.591229   44697 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0918 20:35:46.591233   44697 command_runner.go:130] > # stream_tls_key = ""
	I0918 20:35:46.591241   44697 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0918 20:35:46.591247   44697 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0918 20:35:46.591264   44697 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0918 20:35:46.591282   44697 command_runner.go:130] > # stream_tls_ca = ""
	I0918 20:35:46.591291   44697 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0918 20:35:46.591297   44697 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0918 20:35:46.591307   44697 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0918 20:35:46.591313   44697 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0918 20:35:46.591319   44697 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0918 20:35:46.591326   44697 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0918 20:35:46.591330   44697 command_runner.go:130] > [crio.runtime]
	I0918 20:35:46.591336   44697 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0918 20:35:46.591343   44697 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0918 20:35:46.591347   44697 command_runner.go:130] > # "nofile=1024:2048"
	I0918 20:35:46.591353   44697 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0918 20:35:46.591359   44697 command_runner.go:130] > # default_ulimits = [
	I0918 20:35:46.591362   44697 command_runner.go:130] > # ]
	I0918 20:35:46.591368   44697 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0918 20:35:46.591373   44697 command_runner.go:130] > # no_pivot = false
	I0918 20:35:46.591381   44697 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0918 20:35:46.591389   44697 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0918 20:35:46.591394   44697 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0918 20:35:46.591402   44697 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0918 20:35:46.591407   44697 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0918 20:35:46.591415   44697 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0918 20:35:46.591419   44697 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0918 20:35:46.591425   44697 command_runner.go:130] > # Cgroup setting for conmon
	I0918 20:35:46.591432   44697 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0918 20:35:46.591436   44697 command_runner.go:130] > conmon_cgroup = "pod"
	I0918 20:35:46.591441   44697 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0918 20:35:46.591449   44697 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0918 20:35:46.591454   44697 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0918 20:35:46.591459   44697 command_runner.go:130] > conmon_env = [
	I0918 20:35:46.591464   44697 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0918 20:35:46.591468   44697 command_runner.go:130] > ]
	I0918 20:35:46.591473   44697 command_runner.go:130] > # Additional environment variables to set for all the
	I0918 20:35:46.591480   44697 command_runner.go:130] > # containers. These are overridden if set in the
	I0918 20:35:46.591486   44697 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0918 20:35:46.591492   44697 command_runner.go:130] > # default_env = [
	I0918 20:35:46.591495   44697 command_runner.go:130] > # ]
	I0918 20:35:46.591500   44697 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0918 20:35:46.591509   44697 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0918 20:35:46.591513   44697 command_runner.go:130] > # selinux = false
	I0918 20:35:46.591530   44697 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0918 20:35:46.591542   44697 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0918 20:35:46.591551   44697 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0918 20:35:46.591558   44697 command_runner.go:130] > # seccomp_profile = ""
	I0918 20:35:46.591564   44697 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0918 20:35:46.591572   44697 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0918 20:35:46.591578   44697 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0918 20:35:46.591585   44697 command_runner.go:130] > # which might increase security.
	I0918 20:35:46.591589   44697 command_runner.go:130] > # This option is currently deprecated,
	I0918 20:35:46.591599   44697 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0918 20:35:46.591603   44697 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0918 20:35:46.591609   44697 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0918 20:35:46.591618   44697 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0918 20:35:46.591634   44697 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0918 20:35:46.591648   44697 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0918 20:35:46.591656   44697 command_runner.go:130] > # This option supports live configuration reload.
	I0918 20:35:46.591660   44697 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0918 20:35:46.591667   44697 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0918 20:35:46.591671   44697 command_runner.go:130] > # the cgroup blockio controller.
	I0918 20:35:46.591677   44697 command_runner.go:130] > # blockio_config_file = ""
	I0918 20:35:46.591684   44697 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0918 20:35:46.591690   44697 command_runner.go:130] > # blockio parameters.
	I0918 20:35:46.591697   44697 command_runner.go:130] > # blockio_reload = false
	I0918 20:35:46.591709   44697 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0918 20:35:46.591719   44697 command_runner.go:130] > # irqbalance daemon.
	I0918 20:35:46.591729   44697 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0918 20:35:46.591742   44697 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0918 20:35:46.591755   44697 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0918 20:35:46.591764   44697 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0918 20:35:46.591772   44697 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0918 20:35:46.591779   44697 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0918 20:35:46.591787   44697 command_runner.go:130] > # This option supports live configuration reload.
	I0918 20:35:46.591790   44697 command_runner.go:130] > # rdt_config_file = ""
	I0918 20:35:46.591795   44697 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0918 20:35:46.591807   44697 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0918 20:35:46.591827   44697 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0918 20:35:46.591837   44697 command_runner.go:130] > # separate_pull_cgroup = ""
	I0918 20:35:46.591847   44697 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0918 20:35:46.591860   44697 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0918 20:35:46.591867   44697 command_runner.go:130] > # will be added.
	I0918 20:35:46.591876   44697 command_runner.go:130] > # default_capabilities = [
	I0918 20:35:46.591882   44697 command_runner.go:130] > # 	"CHOWN",
	I0918 20:35:46.591891   44697 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0918 20:35:46.591897   44697 command_runner.go:130] > # 	"FSETID",
	I0918 20:35:46.591901   44697 command_runner.go:130] > # 	"FOWNER",
	I0918 20:35:46.591905   44697 command_runner.go:130] > # 	"SETGID",
	I0918 20:35:46.591910   44697 command_runner.go:130] > # 	"SETUID",
	I0918 20:35:46.591914   44697 command_runner.go:130] > # 	"SETPCAP",
	I0918 20:35:46.591921   44697 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0918 20:35:46.591928   44697 command_runner.go:130] > # 	"KILL",
	I0918 20:35:46.591937   44697 command_runner.go:130] > # ]
	I0918 20:35:46.591949   44697 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0918 20:35:46.591962   44697 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0918 20:35:46.591973   44697 command_runner.go:130] > # add_inheritable_capabilities = false
	I0918 20:35:46.591986   44697 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0918 20:35:46.591998   44697 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0918 20:35:46.592005   44697 command_runner.go:130] > default_sysctls = [
	I0918 20:35:46.592011   44697 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0918 20:35:46.592032   44697 command_runner.go:130] > ]
	I0918 20:35:46.592042   44697 command_runner.go:130] > # List of devices on the host that a
	I0918 20:35:46.592055   44697 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0918 20:35:46.592064   44697 command_runner.go:130] > # allowed_devices = [
	I0918 20:35:46.592071   44697 command_runner.go:130] > # 	"/dev/fuse",
	I0918 20:35:46.592084   44697 command_runner.go:130] > # ]
	I0918 20:35:46.592094   44697 command_runner.go:130] > # List of additional devices, specified as
	I0918 20:35:46.592109   44697 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0918 20:35:46.592118   44697 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0918 20:35:46.592127   44697 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0918 20:35:46.592137   44697 command_runner.go:130] > # additional_devices = [
	I0918 20:35:46.592145   44697 command_runner.go:130] > # ]
	I0918 20:35:46.592157   44697 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0918 20:35:46.592164   44697 command_runner.go:130] > # cdi_spec_dirs = [
	I0918 20:35:46.592173   44697 command_runner.go:130] > # 	"/etc/cdi",
	I0918 20:35:46.592179   44697 command_runner.go:130] > # 	"/var/run/cdi",
	I0918 20:35:46.592187   44697 command_runner.go:130] > # ]
	I0918 20:35:46.592198   44697 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0918 20:35:46.592208   44697 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0918 20:35:46.592212   44697 command_runner.go:130] > # Defaults to false.
	I0918 20:35:46.592217   44697 command_runner.go:130] > # device_ownership_from_security_context = false
	I0918 20:35:46.592231   44697 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0918 20:35:46.592245   44697 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0918 20:35:46.592250   44697 command_runner.go:130] > # hooks_dir = [
	I0918 20:35:46.592258   44697 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0918 20:35:46.592270   44697 command_runner.go:130] > # ]
	I0918 20:35:46.592282   44697 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0918 20:35:46.592295   44697 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0918 20:35:46.592307   44697 command_runner.go:130] > # its default mounts from the following two files:
	I0918 20:35:46.592314   44697 command_runner.go:130] > #
	I0918 20:35:46.592320   44697 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0918 20:35:46.592333   44697 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0918 20:35:46.592345   44697 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0918 20:35:46.592351   44697 command_runner.go:130] > #
	I0918 20:35:46.592365   44697 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0918 20:35:46.592378   44697 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0918 20:35:46.592394   44697 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0918 20:35:46.592405   44697 command_runner.go:130] > #      only add mounts it finds in this file.
	I0918 20:35:46.592411   44697 command_runner.go:130] > #
	I0918 20:35:46.592416   44697 command_runner.go:130] > # default_mounts_file = ""
	I0918 20:35:46.592424   44697 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0918 20:35:46.592433   44697 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0918 20:35:46.592443   44697 command_runner.go:130] > pids_limit = 1024
	I0918 20:35:46.592454   44697 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0918 20:35:46.592466   44697 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0918 20:35:46.592477   44697 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0918 20:35:46.592492   44697 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0918 20:35:46.592503   44697 command_runner.go:130] > # log_size_max = -1
	I0918 20:35:46.592513   44697 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0918 20:35:46.592519   44697 command_runner.go:130] > # log_to_journald = false
	I0918 20:35:46.592531   44697 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0918 20:35:46.592543   44697 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0918 20:35:46.592551   44697 command_runner.go:130] > # Path to directory for container attach sockets.
	I0918 20:35:46.592563   44697 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0918 20:35:46.592574   44697 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0918 20:35:46.592583   44697 command_runner.go:130] > # bind_mount_prefix = ""
	I0918 20:35:46.592591   44697 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0918 20:35:46.592600   44697 command_runner.go:130] > # read_only = false
	I0918 20:35:46.592609   44697 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0918 20:35:46.592618   44697 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0918 20:35:46.592624   44697 command_runner.go:130] > # live configuration reload.
	I0918 20:35:46.592633   44697 command_runner.go:130] > # log_level = "info"
	I0918 20:35:46.592642   44697 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0918 20:35:46.592654   44697 command_runner.go:130] > # This option supports live configuration reload.
	I0918 20:35:46.592661   44697 command_runner.go:130] > # log_filter = ""
	I0918 20:35:46.592674   44697 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0918 20:35:46.592687   44697 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0918 20:35:46.592697   44697 command_runner.go:130] > # separated by comma.
	I0918 20:35:46.592708   44697 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0918 20:35:46.592716   44697 command_runner.go:130] > # uid_mappings = ""
	I0918 20:35:46.592726   44697 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0918 20:35:46.592739   44697 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0918 20:35:46.592745   44697 command_runner.go:130] > # separated by comma.
	I0918 20:35:46.592760   44697 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0918 20:35:46.592770   44697 command_runner.go:130] > # gid_mappings = ""
	I0918 20:35:46.592783   44697 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0918 20:35:46.592798   44697 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0918 20:35:46.592808   44697 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0918 20:35:46.592817   44697 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0918 20:35:46.592827   44697 command_runner.go:130] > # minimum_mappable_uid = -1
	I0918 20:35:46.592837   44697 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0918 20:35:46.592851   44697 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0918 20:35:46.592862   44697 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0918 20:35:46.592879   44697 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0918 20:35:46.592889   44697 command_runner.go:130] > # minimum_mappable_gid = -1
	I0918 20:35:46.592899   44697 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0918 20:35:46.592907   44697 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0918 20:35:46.592915   44697 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0918 20:35:46.592926   44697 command_runner.go:130] > # ctr_stop_timeout = 30
	I0918 20:35:46.592936   44697 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0918 20:35:46.592948   44697 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0918 20:35:46.592955   44697 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0918 20:35:46.592972   44697 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0918 20:35:46.592982   44697 command_runner.go:130] > drop_infra_ctr = false
	I0918 20:35:46.592991   44697 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0918 20:35:46.593000   44697 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0918 20:35:46.593016   44697 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0918 20:35:46.593026   44697 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0918 20:35:46.593037   44697 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0918 20:35:46.593049   44697 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0918 20:35:46.593062   44697 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0918 20:35:46.593074   44697 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0918 20:35:46.593083   44697 command_runner.go:130] > # shared_cpuset = ""
	I0918 20:35:46.593092   44697 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0918 20:35:46.593100   44697 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0918 20:35:46.593106   44697 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0918 20:35:46.593126   44697 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0918 20:35:46.593134   44697 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0918 20:35:46.593143   44697 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0918 20:35:46.593156   44697 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0918 20:35:46.593166   44697 command_runner.go:130] > # enable_criu_support = false
	I0918 20:35:46.593174   44697 command_runner.go:130] > # Enable/disable the generation of the container,
	I0918 20:35:46.593187   44697 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0918 20:35:46.593194   44697 command_runner.go:130] > # enable_pod_events = false
	I0918 20:35:46.593207   44697 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0918 20:35:46.593226   44697 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0918 20:35:46.593234   44697 command_runner.go:130] > # default_runtime = "runc"
	I0918 20:35:46.593246   44697 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0918 20:35:46.593258   44697 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of being created as a directory).
	I0918 20:35:46.593278   44697 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0918 20:35:46.593289   44697 command_runner.go:130] > # creation as a file is not desired either.
	I0918 20:35:46.593305   44697 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0918 20:35:46.593314   44697 command_runner.go:130] > # the hostname is being managed dynamically.
	I0918 20:35:46.593318   44697 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0918 20:35:46.593322   44697 command_runner.go:130] > # ]
	I0918 20:35:46.593331   44697 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0918 20:35:46.593344   44697 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0918 20:35:46.593354   44697 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0918 20:35:46.593366   44697 command_runner.go:130] > # Each entry in the table should follow the format:
	I0918 20:35:46.593374   44697 command_runner.go:130] > #
	I0918 20:35:46.593382   44697 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0918 20:35:46.593392   44697 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0918 20:35:46.593418   44697 command_runner.go:130] > # runtime_type = "oci"
	I0918 20:35:46.593428   44697 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0918 20:35:46.593436   44697 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0918 20:35:46.593446   44697 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0918 20:35:46.593454   44697 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0918 20:35:46.593462   44697 command_runner.go:130] > # monitor_env = []
	I0918 20:35:46.593472   44697 command_runner.go:130] > # privileged_without_host_devices = false
	I0918 20:35:46.593482   44697 command_runner.go:130] > # allowed_annotations = []
	I0918 20:35:46.593493   44697 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0918 20:35:46.593499   44697 command_runner.go:130] > # Where:
	I0918 20:35:46.593505   44697 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0918 20:35:46.593517   44697 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0918 20:35:46.593530   44697 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0918 20:35:46.593544   44697 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0918 20:35:46.593553   44697 command_runner.go:130] > #   in $PATH.
	I0918 20:35:46.593563   44697 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0918 20:35:46.593574   44697 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0918 20:35:46.593586   44697 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0918 20:35:46.593594   44697 command_runner.go:130] > #   state.
	I0918 20:35:46.593604   44697 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0918 20:35:46.593614   44697 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0918 20:35:46.593624   44697 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0918 20:35:46.593637   44697 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0918 20:35:46.593647   44697 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0918 20:35:46.593661   44697 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0918 20:35:46.593671   44697 command_runner.go:130] > #   The currently recognized values are:
	I0918 20:35:46.593692   44697 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0918 20:35:46.593705   44697 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0918 20:35:46.593713   44697 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0918 20:35:46.593722   44697 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0918 20:35:46.593738   44697 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0918 20:35:46.593748   44697 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0918 20:35:46.593764   44697 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0918 20:35:46.593779   44697 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0918 20:35:46.593789   44697 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0918 20:35:46.593801   44697 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0918 20:35:46.593806   44697 command_runner.go:130] > #   deprecated option "conmon".
	I0918 20:35:46.593814   44697 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0918 20:35:46.593824   44697 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0918 20:35:46.593836   44697 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0918 20:35:46.593847   44697 command_runner.go:130] > #   should be moved to the container's cgroup
	I0918 20:35:46.593857   44697 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0918 20:35:46.593868   44697 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0918 20:35:46.593880   44697 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0918 20:35:46.593889   44697 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0918 20:35:46.593892   44697 command_runner.go:130] > #
	I0918 20:35:46.593898   44697 command_runner.go:130] > # Using the seccomp notifier feature:
	I0918 20:35:46.593908   44697 command_runner.go:130] > #
	I0918 20:35:46.593922   44697 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0918 20:35:46.593935   44697 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0918 20:35:46.593943   44697 command_runner.go:130] > #
	I0918 20:35:46.593954   44697 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0918 20:35:46.593966   44697 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0918 20:35:46.593974   44697 command_runner.go:130] > #
	I0918 20:35:46.593984   44697 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0918 20:35:46.593990   44697 command_runner.go:130] > # feature.
	I0918 20:35:46.593993   44697 command_runner.go:130] > #
	I0918 20:35:46.594002   44697 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0918 20:35:46.594015   44697 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0918 20:35:46.594030   44697 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0918 20:35:46.594043   44697 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0918 20:35:46.594054   44697 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0918 20:35:46.594063   44697 command_runner.go:130] > #
	I0918 20:35:46.594073   44697 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0918 20:35:46.594082   44697 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0918 20:35:46.594085   44697 command_runner.go:130] > #
	I0918 20:35:46.594098   44697 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0918 20:35:46.594111   44697 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0918 20:35:46.594117   44697 command_runner.go:130] > #
	I0918 20:35:46.594127   44697 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0918 20:35:46.594139   44697 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0918 20:35:46.594147   44697 command_runner.go:130] > # limitation.
	I0918 20:35:46.594156   44697 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0918 20:35:46.594165   44697 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0918 20:35:46.594173   44697 command_runner.go:130] > runtime_type = "oci"
	I0918 20:35:46.594182   44697 command_runner.go:130] > runtime_root = "/run/runc"
	I0918 20:35:46.594189   44697 command_runner.go:130] > runtime_config_path = ""
	I0918 20:35:46.594195   44697 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0918 20:35:46.594204   44697 command_runner.go:130] > monitor_cgroup = "pod"
	I0918 20:35:46.594211   44697 command_runner.go:130] > monitor_exec_cgroup = ""
	I0918 20:35:46.594221   44697 command_runner.go:130] > monitor_env = [
	I0918 20:35:46.594230   44697 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0918 20:35:46.594238   44697 command_runner.go:130] > ]
	I0918 20:35:46.594246   44697 command_runner.go:130] > privileged_without_host_devices = false
	I0918 20:35:46.594258   44697 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0918 20:35:46.594271   44697 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0918 20:35:46.594281   44697 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0918 20:35:46.594293   44697 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0918 20:35:46.594311   44697 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0918 20:35:46.594323   44697 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0918 20:35:46.594340   44697 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0918 20:35:46.594354   44697 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0918 20:35:46.594366   44697 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0918 20:35:46.594376   44697 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0918 20:35:46.594381   44697 command_runner.go:130] > # Example:
	I0918 20:35:46.594390   44697 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0918 20:35:46.594402   44697 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0918 20:35:46.594410   44697 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0918 20:35:46.594422   44697 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0918 20:35:46.594432   44697 command_runner.go:130] > # cpuset = 0
	I0918 20:35:46.594442   44697 command_runner.go:130] > # cpushares = "0-1"
	I0918 20:35:46.594450   44697 command_runner.go:130] > # Where:
	I0918 20:35:46.594458   44697 command_runner.go:130] > # The workload name is workload-type.
	I0918 20:35:46.594470   44697 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0918 20:35:46.594478   44697 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0918 20:35:46.594486   44697 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0918 20:35:46.594500   44697 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0918 20:35:46.594515   44697 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0918 20:35:46.594526   44697 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0918 20:35:46.594540   44697 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0918 20:35:46.594549   44697 command_runner.go:130] > # Default value is set to true
	I0918 20:35:46.594557   44697 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0918 20:35:46.594567   44697 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0918 20:35:46.594574   44697 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0918 20:35:46.594598   44697 command_runner.go:130] > # Default value is set to 'false'
	I0918 20:35:46.594616   44697 command_runner.go:130] > # disable_hostport_mapping = false
	I0918 20:35:46.594626   44697 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0918 20:35:46.594636   44697 command_runner.go:130] > #
	I0918 20:35:46.594646   44697 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0918 20:35:46.594656   44697 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0918 20:35:46.594672   44697 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0918 20:35:46.594682   44697 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0918 20:35:46.594695   44697 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0918 20:35:46.594699   44697 command_runner.go:130] > [crio.image]
	I0918 20:35:46.594705   44697 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0918 20:35:46.594711   44697 command_runner.go:130] > # default_transport = "docker://"
	I0918 20:35:46.594720   44697 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0918 20:35:46.594730   44697 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0918 20:35:46.594738   44697 command_runner.go:130] > # global_auth_file = ""
	I0918 20:35:46.594745   44697 command_runner.go:130] > # The image used to instantiate infra containers.
	I0918 20:35:46.594754   44697 command_runner.go:130] > # This option supports live configuration reload.
	I0918 20:35:46.594762   44697 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I0918 20:35:46.594773   44697 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0918 20:35:46.594783   44697 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0918 20:35:46.594788   44697 command_runner.go:130] > # This option supports live configuration reload.
	I0918 20:35:46.594792   44697 command_runner.go:130] > # pause_image_auth_file = ""
	I0918 20:35:46.594800   44697 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0918 20:35:46.594809   44697 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0918 20:35:46.594819   44697 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0918 20:35:46.594829   44697 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0918 20:35:46.594836   44697 command_runner.go:130] > # pause_command = "/pause"
	I0918 20:35:46.594845   44697 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0918 20:35:46.594858   44697 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0918 20:35:46.594870   44697 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0918 20:35:46.594877   44697 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0918 20:35:46.594888   44697 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0918 20:35:46.594901   44697 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0918 20:35:46.594908   44697 command_runner.go:130] > # pinned_images = [
	I0918 20:35:46.594923   44697 command_runner.go:130] > # ]
	I0918 20:35:46.594932   44697 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0918 20:35:46.594944   44697 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0918 20:35:46.594957   44697 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0918 20:35:46.594970   44697 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0918 20:35:46.594976   44697 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0918 20:35:46.594985   44697 command_runner.go:130] > # signature_policy = ""
	I0918 20:35:46.594993   44697 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0918 20:35:46.595007   44697 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0918 20:35:46.595018   44697 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0918 20:35:46.595036   44697 command_runner.go:130] > # or the concatenated path is non-existent, then the signature_policy or system
	I0918 20:35:46.595049   44697 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0918 20:35:46.595059   44697 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0918 20:35:46.595071   44697 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0918 20:35:46.595079   44697 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0918 20:35:46.595084   44697 command_runner.go:130] > # changing them here.
	I0918 20:35:46.595093   44697 command_runner.go:130] > # insecure_registries = [
	I0918 20:35:46.595100   44697 command_runner.go:130] > # ]
	I0918 20:35:46.595113   44697 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0918 20:35:46.595124   44697 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0918 20:35:46.595138   44697 command_runner.go:130] > # image_volumes = "mkdir"
	I0918 20:35:46.595146   44697 command_runner.go:130] > # Temporary directory to use for storing big files
	I0918 20:35:46.595155   44697 command_runner.go:130] > # big_files_temporary_dir = ""
	I0918 20:35:46.595162   44697 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0918 20:35:46.595169   44697 command_runner.go:130] > # CNI plugins.
	I0918 20:35:46.595175   44697 command_runner.go:130] > [crio.network]
	I0918 20:35:46.595187   44697 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0918 20:35:46.595199   44697 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0918 20:35:46.595209   44697 command_runner.go:130] > # cni_default_network = ""
	I0918 20:35:46.595222   44697 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0918 20:35:46.595233   44697 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0918 20:35:46.595244   44697 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0918 20:35:46.595250   44697 command_runner.go:130] > # plugin_dirs = [
	I0918 20:35:46.595254   44697 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0918 20:35:46.595258   44697 command_runner.go:130] > # ]
	I0918 20:35:46.595270   44697 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0918 20:35:46.595280   44697 command_runner.go:130] > [crio.metrics]
	I0918 20:35:46.595288   44697 command_runner.go:130] > # Globally enable or disable metrics support.
	I0918 20:35:46.595298   44697 command_runner.go:130] > enable_metrics = true
	I0918 20:35:46.595306   44697 command_runner.go:130] > # Specify enabled metrics collectors.
	I0918 20:35:46.595316   44697 command_runner.go:130] > # Per default all metrics are enabled.
	I0918 20:35:46.595328   44697 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0918 20:35:46.595341   44697 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0918 20:35:46.595351   44697 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0918 20:35:46.595355   44697 command_runner.go:130] > # metrics_collectors = [
	I0918 20:35:46.595364   44697 command_runner.go:130] > # 	"operations",
	I0918 20:35:46.595372   44697 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0918 20:35:46.595382   44697 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0918 20:35:46.595389   44697 command_runner.go:130] > # 	"operations_errors",
	I0918 20:35:46.595398   44697 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0918 20:35:46.595409   44697 command_runner.go:130] > # 	"image_pulls_by_name",
	I0918 20:35:46.595420   44697 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0918 20:35:46.595434   44697 command_runner.go:130] > # 	"image_pulls_failures",
	I0918 20:35:46.595443   44697 command_runner.go:130] > # 	"image_pulls_successes",
	I0918 20:35:46.595455   44697 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0918 20:35:46.595463   44697 command_runner.go:130] > # 	"image_layer_reuse",
	I0918 20:35:46.595471   44697 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0918 20:35:46.595481   44697 command_runner.go:130] > # 	"containers_oom_total",
	I0918 20:35:46.595489   44697 command_runner.go:130] > # 	"containers_oom",
	I0918 20:35:46.595499   44697 command_runner.go:130] > # 	"processes_defunct",
	I0918 20:35:46.595508   44697 command_runner.go:130] > # 	"operations_total",
	I0918 20:35:46.595518   44697 command_runner.go:130] > # 	"operations_latency_seconds",
	I0918 20:35:46.595528   44697 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0918 20:35:46.595538   44697 command_runner.go:130] > # 	"operations_errors_total",
	I0918 20:35:46.595547   44697 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0918 20:35:46.595555   44697 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0918 20:35:46.595561   44697 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0918 20:35:46.595570   44697 command_runner.go:130] > # 	"image_pulls_success_total",
	I0918 20:35:46.595581   44697 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0918 20:35:46.595588   44697 command_runner.go:130] > # 	"containers_oom_count_total",
	I0918 20:35:46.595599   44697 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0918 20:35:46.595613   44697 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0918 20:35:46.595621   44697 command_runner.go:130] > # ]
	I0918 20:35:46.595632   44697 command_runner.go:130] > # The port on which the metrics server will listen.
	I0918 20:35:46.595639   44697 command_runner.go:130] > # metrics_port = 9090
	I0918 20:35:46.595648   44697 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0918 20:35:46.595654   44697 command_runner.go:130] > # metrics_socket = ""
	I0918 20:35:46.595667   44697 command_runner.go:130] > # The certificate for the secure metrics server.
	I0918 20:35:46.595680   44697 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0918 20:35:46.595693   44697 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0918 20:35:46.595703   44697 command_runner.go:130] > # certificate on any modification event.
	I0918 20:35:46.595716   44697 command_runner.go:130] > # metrics_cert = ""
	I0918 20:35:46.595726   44697 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0918 20:35:46.595738   44697 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0918 20:35:46.595744   44697 command_runner.go:130] > # metrics_key = ""
	I0918 20:35:46.595752   44697 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0918 20:35:46.595760   44697 command_runner.go:130] > [crio.tracing]
	I0918 20:35:46.595773   44697 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0918 20:35:46.595782   44697 command_runner.go:130] > # enable_tracing = false
	I0918 20:35:46.595793   44697 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0918 20:35:46.595803   44697 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0918 20:35:46.595816   44697 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0918 20:35:46.595824   44697 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0918 20:35:46.595830   44697 command_runner.go:130] > # CRI-O NRI configuration.
	I0918 20:35:46.595835   44697 command_runner.go:130] > [crio.nri]
	I0918 20:35:46.595845   44697 command_runner.go:130] > # Globally enable or disable NRI.
	I0918 20:35:46.595851   44697 command_runner.go:130] > # enable_nri = false
	I0918 20:35:46.595866   44697 command_runner.go:130] > # NRI socket to listen on.
	I0918 20:35:46.595876   44697 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0918 20:35:46.595886   44697 command_runner.go:130] > # NRI plugin directory to use.
	I0918 20:35:46.595895   44697 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0918 20:35:46.595906   44697 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0918 20:35:46.595913   44697 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0918 20:35:46.595920   44697 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0918 20:35:46.595931   44697 command_runner.go:130] > # nri_disable_connections = false
	I0918 20:35:46.595941   44697 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0918 20:35:46.595949   44697 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0918 20:35:46.595960   44697 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0918 20:35:46.595970   44697 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0918 20:35:46.595979   44697 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0918 20:35:46.595988   44697 command_runner.go:130] > [crio.stats]
	I0918 20:35:46.596000   44697 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0918 20:35:46.596008   44697 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0918 20:35:46.596027   44697 command_runner.go:130] > # stats_collection_period = 0
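The dump above is the effective CRI-O configuration minikube writes to the node; the handful of uncommented keys (cgroup_manager, pids_limit, default_sysctls, pinns_path, the [crio.runtime.runtimes.runc] stanza, pause_image and enable_metrics) are the values minikube overrides from the CRI-O defaults. As a rough illustration only, a sketch along the following lines could read a saved copy of that file and print the overridden values; it assumes a local file named crio.conf and the github.com/BurntSushi/toml package, neither of which is part of this test run.

// checkcrio.go - illustrative sketch, not part of minikube or this test run.
// Reads a local copy of the CRI-O config dumped above and prints the fields
// minikube overrides. Assumes github.com/BurntSushi/toml is available.
package main

import (
	"fmt"
	"log"

	"github.com/BurntSushi/toml"
)

type crioConf struct {
	Crio struct {
		Runtime struct {
			CgroupManager string `toml:"cgroup_manager"`
			PidsLimit     int64  `toml:"pids_limit"`
		} `toml:"runtime"`
		Image struct {
			PauseImage string `toml:"pause_image"`
		} `toml:"image"`
		Metrics struct {
			EnableMetrics bool `toml:"enable_metrics"`
		} `toml:"metrics"`
	} `toml:"crio"`
}

func main() {
	var cfg crioConf
	if _, err := toml.DecodeFile("crio.conf", &cfg); err != nil {
		log.Fatal(err)
	}
	// With the configuration shown in the log this prints:
	// cgroupfs 1024 registry.k8s.io/pause:3.10 true
	fmt.Println(cfg.Crio.Runtime.CgroupManager,
		cfg.Crio.Runtime.PidsLimit,
		cfg.Crio.Image.PauseImage,
		cfg.Crio.Metrics.EnableMetrics)
}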
	I0918 20:35:46.596121   44697 cni.go:84] Creating CNI manager for ""
	I0918 20:35:46.596137   44697 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0918 20:35:46.596156   44697 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0918 20:35:46.596189   44697 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.106 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-622675 NodeName:multinode-622675 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.106"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.106 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0918 20:35:46.596360   44697 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.106
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-622675"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.106
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.106"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0918 20:35:46.596438   44697 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0918 20:35:46.607272   44697 command_runner.go:130] > kubeadm
	I0918 20:35:46.607301   44697 command_runner.go:130] > kubectl
	I0918 20:35:46.607308   44697 command_runner.go:130] > kubelet
	I0918 20:35:46.607346   44697 binaries.go:44] Found k8s binaries, skipping transfer
	I0918 20:35:46.607401   44697 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0918 20:35:46.617110   44697 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0918 20:35:46.633840   44697 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0918 20:35:46.650882   44697 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
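The multi-document kubeadm config rendered at 20:35:46.596360 above is what lands in /var/tmp/minikube/kubeadm.yaml.new here (2160 bytes). Purely as an illustration, a sketch like the one below could decode such a file and list the document kinds it contains; it assumes a local copy named kubeadm.yaml and gopkg.in/yaml.v3, neither of which is part of the test harness.

// checkkubeadm.go - illustrative sketch, not part of minikube or this test run.
// Decodes a multi-document kubeadm config like the one shown above and prints
// the "kind" of each document (InitConfiguration, ClusterConfiguration,
// KubeletConfiguration, KubeProxyConfiguration).
package main

import (
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("kubeadm.yaml") // hypothetical local copy
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			log.Fatal(err)
		}
		fmt.Println("kind:", doc["kind"])
	}
}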
	I0918 20:35:46.668731   44697 ssh_runner.go:195] Run: grep 192.168.39.106	control-plane.minikube.internal$ /etc/hosts
	I0918 20:35:46.672786   44697 command_runner.go:130] > 192.168.39.106	control-plane.minikube.internal
	I0918 20:35:46.672851   44697 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 20:35:46.811923   44697 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0918 20:35:46.826795   44697 certs.go:68] Setting up /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/multinode-622675 for IP: 192.168.39.106
	I0918 20:35:46.826819   44697 certs.go:194] generating shared ca certs ...
	I0918 20:35:46.826846   44697 certs.go:226] acquiring lock for ca certs: {Name:mkf54e0e71ed884c4372f9d3d4cb308ea4600185 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 20:35:46.827000   44697 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.key
	I0918 20:35:46.827040   44697 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.key
	I0918 20:35:46.827056   44697 certs.go:256] generating profile certs ...
	I0918 20:35:46.827144   44697 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/multinode-622675/client.key
	I0918 20:35:46.827199   44697 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/multinode-622675/apiserver.key.2ea34399
	I0918 20:35:46.827238   44697 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/multinode-622675/proxy-client.key
	I0918 20:35:46.827248   44697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0918 20:35:46.827278   44697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0918 20:35:46.827294   44697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0918 20:35:46.827305   44697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0918 20:35:46.827317   44697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/multinode-622675/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0918 20:35:46.827330   44697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/multinode-622675/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0918 20:35:46.827342   44697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/multinode-622675/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0918 20:35:46.827351   44697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/multinode-622675/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0918 20:35:46.827395   44697 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878.pem (1338 bytes)
	W0918 20:35:46.827425   44697 certs.go:480] ignoring /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878_empty.pem, impossibly tiny 0 bytes
	I0918 20:35:46.827434   44697 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem (1675 bytes)
	I0918 20:35:46.827457   44697 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem (1082 bytes)
	I0918 20:35:46.827480   44697 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem (1123 bytes)
	I0918 20:35:46.827500   44697 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem (1679 bytes)
	I0918 20:35:46.827540   44697 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem (1708 bytes)
	I0918 20:35:46.827567   44697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878.pem -> /usr/share/ca-certificates/14878.pem
	I0918 20:35:46.827580   44697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem -> /usr/share/ca-certificates/148782.pem
	I0918 20:35:46.827592   44697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0918 20:35:46.828140   44697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0918 20:35:46.852089   44697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0918 20:35:46.876549   44697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0918 20:35:46.899917   44697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0918 20:35:46.924274   44697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/multinode-622675/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0918 20:35:46.949084   44697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/multinode-622675/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0918 20:35:46.974418   44697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/multinode-622675/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0918 20:35:46.999216   44697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/multinode-622675/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0918 20:35:47.023307   44697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878.pem --> /usr/share/ca-certificates/14878.pem (1338 bytes)
	I0918 20:35:47.046977   44697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem --> /usr/share/ca-certificates/148782.pem (1708 bytes)
	I0918 20:35:47.071690   44697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0918 20:35:47.095867   44697 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0918 20:35:47.112760   44697 ssh_runner.go:195] Run: openssl version
	I0918 20:35:47.118351   44697 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0918 20:35:47.118633   44697 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0918 20:35:47.129535   44697 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0918 20:35:47.134111   44697 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 18 19:39 /usr/share/ca-certificates/minikubeCA.pem
	I0918 20:35:47.134150   44697 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 18 19:39 /usr/share/ca-certificates/minikubeCA.pem
	I0918 20:35:47.134195   44697 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0918 20:35:47.139660   44697 command_runner.go:130] > b5213941
	I0918 20:35:47.139744   44697 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0918 20:35:47.149149   44697 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14878.pem && ln -fs /usr/share/ca-certificates/14878.pem /etc/ssl/certs/14878.pem"
	I0918 20:35:47.160259   44697 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14878.pem
	I0918 20:35:47.164882   44697 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 18 19:57 /usr/share/ca-certificates/14878.pem
	I0918 20:35:47.164926   44697 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 18 19:57 /usr/share/ca-certificates/14878.pem
	I0918 20:35:47.164989   44697 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14878.pem
	I0918 20:35:47.170696   44697 command_runner.go:130] > 51391683
	I0918 20:35:47.170967   44697 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14878.pem /etc/ssl/certs/51391683.0"
	I0918 20:35:47.180702   44697 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148782.pem && ln -fs /usr/share/ca-certificates/148782.pem /etc/ssl/certs/148782.pem"
	I0918 20:35:47.191928   44697 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148782.pem
	I0918 20:35:47.196531   44697 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 18 19:57 /usr/share/ca-certificates/148782.pem
	I0918 20:35:47.196659   44697 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 18 19:57 /usr/share/ca-certificates/148782.pem
	I0918 20:35:47.196711   44697 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148782.pem
	I0918 20:35:47.202230   44697 command_runner.go:130] > 3ec20f2e
	I0918 20:35:47.202314   44697 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148782.pem /etc/ssl/certs/3ec20f2e.0"
	I0918 20:35:47.212430   44697 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0918 20:35:47.217336   44697 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0918 20:35:47.217368   44697 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0918 20:35:47.217374   44697 command_runner.go:130] > Device: 253,1	Inode: 3148840     Links: 1
	I0918 20:35:47.217381   44697 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0918 20:35:47.217386   44697 command_runner.go:130] > Access: 2024-09-18 20:28:57.047619252 +0000
	I0918 20:35:47.217396   44697 command_runner.go:130] > Modify: 2024-09-18 20:28:57.047619252 +0000
	I0918 20:35:47.217401   44697 command_runner.go:130] > Change: 2024-09-18 20:28:57.047619252 +0000
	I0918 20:35:47.217408   44697 command_runner.go:130] >  Birth: 2024-09-18 20:28:57.047619252 +0000
	I0918 20:35:47.217466   44697 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0918 20:35:47.223477   44697 command_runner.go:130] > Certificate will not expire
	I0918 20:35:47.223623   44697 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0918 20:35:47.229805   44697 command_runner.go:130] > Certificate will not expire
	I0918 20:35:47.229873   44697 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0918 20:35:47.235847   44697 command_runner.go:130] > Certificate will not expire
	I0918 20:35:47.235942   44697 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0918 20:35:47.242904   44697 command_runner.go:130] > Certificate will not expire
	I0918 20:35:47.242977   44697 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0918 20:35:47.248632   44697 command_runner.go:130] > Certificate will not expire
	I0918 20:35:47.248773   44697 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0918 20:35:47.254593   44697 command_runner.go:130] > Certificate will not expire
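Each `-checkend 86400` run above asks OpenSSL whether the certificate will still be valid 86400 seconds (24 hours) from now; the command prints "Certificate will not expire" and exits 0 when it will, and exits non-zero otherwise, which is what lets the caller decide whether control-plane certificates need regenerating before the cluster is started. A small sketch of that check, shown against one of the files tested in the log:

    # Exit status 0 means the cert is good for at least another 24h (86400 s)
    if openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
        echo "cert valid for the next 24h"
    else
        echo "cert expires within 24h; regeneration needed"
    fi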
	I0918 20:35:47.254697   44697 kubeadm.go:392] StartCluster: {Name:multinode-622675 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
1 ClusterName:multinode-622675 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.106 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.224 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.216 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disable
Optimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 20:35:47.254808   44697 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0918 20:35:47.254859   44697 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0918 20:35:47.293771   44697 command_runner.go:130] > 43e5c05bff5626d849f1cc1a2457bed371140f56a7d07e75167462bf594fc28f
	I0918 20:35:47.293800   44697 command_runner.go:130] > d944b227553371de1b94120a984c24cb4b177354de38e8d43eb9ed453b0fba4a
	I0918 20:35:47.293807   44697 command_runner.go:130] > 19d5f178e345d175b48c3050d2afb16f5238d648db531cca0bc1482880d1cd61
	I0918 20:35:47.293816   44697 command_runner.go:130] > e499594bd4ca1e0ffebd9b5b5240ecdc7a824b9fbd8d21840a8cb7e865d601b0
	I0918 20:35:47.293824   44697 command_runner.go:130] > 10852f58e0d0a0b7a14d38e4a5a81025f9506296d50000e0f0157576e90b73d5
	I0918 20:35:47.293833   44697 command_runner.go:130] > aabf741e6c21bcdd33794ad272367e8eee404ab32e56df824e7b682d84ec0cb3
	I0918 20:35:47.293842   44697 command_runner.go:130] > fa1411a6edea077a5bf21467bebe993d54a884c4ef814ac722270c58c4e1027c
	I0918 20:35:47.293872   44697 command_runner.go:130] > ca2c1be8e70a4692c9c15e9da716688fe6dcc7d6c82050191b2576cffe47cc24
	I0918 20:35:47.293902   44697 cri.go:89] found id: "43e5c05bff5626d849f1cc1a2457bed371140f56a7d07e75167462bf594fc28f"
	I0918 20:35:47.293910   44697 cri.go:89] found id: "d944b227553371de1b94120a984c24cb4b177354de38e8d43eb9ed453b0fba4a"
	I0918 20:35:47.293914   44697 cri.go:89] found id: "19d5f178e345d175b48c3050d2afb16f5238d648db531cca0bc1482880d1cd61"
	I0918 20:35:47.293919   44697 cri.go:89] found id: "e499594bd4ca1e0ffebd9b5b5240ecdc7a824b9fbd8d21840a8cb7e865d601b0"
	I0918 20:35:47.293923   44697 cri.go:89] found id: "10852f58e0d0a0b7a14d38e4a5a81025f9506296d50000e0f0157576e90b73d5"
	I0918 20:35:47.293928   44697 cri.go:89] found id: "aabf741e6c21bcdd33794ad272367e8eee404ab32e56df824e7b682d84ec0cb3"
	I0918 20:35:47.293933   44697 cri.go:89] found id: "fa1411a6edea077a5bf21467bebe993d54a884c4ef814ac722270c58c4e1027c"
	I0918 20:35:47.293937   44697 cri.go:89] found id: "ca2c1be8e70a4692c9c15e9da716688fe6dcc7d6c82050191b2576cffe47cc24"
	I0918 20:35:47.293941   44697 cri.go:89] found id: ""
	I0918 20:35:47.293991   44697 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Sep 18 20:40:00 multinode-622675 crio[2695]: time="2024-09-18 20:40:00.276346803Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726692000276315836,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=079894ca-cb5d-4a32-bcf7-8fe1f51b87c1 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 20:40:00 multinode-622675 crio[2695]: time="2024-09-18 20:40:00.276757944Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3f6615b4-1d3d-4a64-a243-3b4780d56014 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 20:40:00 multinode-622675 crio[2695]: time="2024-09-18 20:40:00.276827208Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3f6615b4-1d3d-4a64-a243-3b4780d56014 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 20:40:00 multinode-622675 crio[2695]: time="2024-09-18 20:40:00.277225514Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0aa3be4f6770078682857062b75364a07889ed89b6d7725f4d222a925f54d7e1,PodSandboxId:37fadf186b79639715989c0c251bbd9cb53ed867f20833b59bf2f24d7dc2db19,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726691787265875647,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-sxchh,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 056a9bd0-0d18-47f1-be48-7f0a773979db,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d800c9f5cd07512852d48c4b63d9ac715b3ab0d8aa9f971659a4d1e098182684,PodSandboxId:7661f91bded7e04ad84cbcb869216e52fb39e725a20511807f190a14f71a138f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726691753718362716,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5mfhg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5dc4d401-73fd-4b13-ac85-f2cb65b4b0f4,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f42cc3b22183e2667e54bc3e4f3a8694e5a03687258e51ea727f2791e8f2bd2,PodSandboxId:fb8186cb25bedcb5ead72ab2bee49e30b85f3fe3d93d6fbbe8cc27a71a8bcee2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726691753719419837,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qhw9j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80568ba7-ff98-4d33-a142-e6042afceb54,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9fa3ac1afc1ca59a2af96e3a25d529be9aefc33f90fed347b098d22cd4f87c6,PodSandboxId:5c2fdf9555c75568a328bcb1dd39c97cc1b6aa551c52d71444d561d1b6b63100,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726691753577636161,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8bns5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d082c6f-bda6-4f6e-906f-f54ee78ab21d,},Annotations:map[string]
string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:791bd9a5018aeedf9e06af458153d286ee9adf9d3f305eac37abdf6c15be05e2,PodSandboxId:aa8631f45a260656250e46f0fbd1073c87adef174b5c5aed0eaceda68bcecaca,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726691753569564470,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66cb614f-b7a9-4d3e-a045-2fdde5e368cf,},Annotations:map[string]string{io.ku
bernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fd060cb3319be9965fa313cbfa09acc398464e7391a2344409f7e842e63a92d,PodSandboxId:bf9087acb004157f001ddd0a52d8fd362f2da1444d4e2d26954241dd565a64a0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726691749702651647,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-622675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fca4c2ba470b4eae5638e77bd36da67c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff8c8376b280ff49f0e834d1561ecf31b332e70e3c1cd4785429577b30a31a74,PodSandboxId:2b6c1eadfce59acb58e26cc0d2d145c4e5744ffa181d3925f1a7e9d80659642f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726691749672403621,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-622675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 149a9ea18cda8f18fe46b42f5fdcfc42,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8195546af7c97dbc4c7e931098abda3db7972fe66027da6eed0e4d0cbbabb435,PodSandboxId:c327ac057bb5e9f01ef80b9b90ecf353c03c86ac611274795c72fe883ee6e2f7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726691749606943506,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-622675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd68ba840847ee4c3afc8ea4a806da80,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb9942cd5a355661495e4b729ac316c9b20f71dc4d56160866541cd1d1be6eb2,PodSandboxId:a7dd9337ebbc27427f6247c2366d2906b6f71f19a387a9abe9be193963de8b83,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726691749588676988,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-622675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3dd56b6172e959eabe3391d13c3a2a11,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14daa73e4b6447058fbb9acad7addcf2492e09082bdb8c59f4566c4c3c4ecbb5,PodSandboxId:c8caa799524794493dbfe9bac34f0447360906ceb1aec934ae128638272c56fc,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726691418331135241,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-sxchh,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 056a9bd0-0d18-47f1-be48-7f0a773979db,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43e5c05bff5626d849f1cc1a2457bed371140f56a7d07e75167462bf594fc28f,PodSandboxId:28439dd9ecc619703fb66cdece03c6a2eff4a229483ca954401bdc5f938d4efc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726691364054297176,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66cb614f-b7a9-4d3e-a045-2fdde5e368cf,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d944b227553371de1b94120a984c24cb4b177354de38e8d43eb9ed453b0fba4a,PodSandboxId:7d66f12289576432c00d2bf0919fc70e50b47d67ef3abc205d546052d26f59fe,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726691363752948720,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qhw9j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80568ba7-ff98-4d33-a142-e6042afceb54,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\
"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19d5f178e345d175b48c3050d2afb16f5238d648db531cca0bc1482880d1cd61,PodSandboxId:6d42deb88737975208f43de569db1326a6e9c10c6cf7828616b31042f861b4c0,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726691352087479272,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5mfhg,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 5dc4d401-73fd-4b13-ac85-f2cb65b4b0f4,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e499594bd4ca1e0ffebd9b5b5240ecdc7a824b9fbd8d21840a8cb7e865d601b0,PodSandboxId:c7c998fccea60bd4755ac2216b7817bc62c33b15e3b2c850f94c3137c789c6f2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726691351886016603,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8bns5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d082c6f-bda6-4f6e-906f
-f54ee78ab21d,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10852f58e0d0a0b7a14d38e4a5a81025f9506296d50000e0f0157576e90b73d5,PodSandboxId:000a3cbbd10b3f4051d7b8e30d56b0c4964499d27480805a600e9151703aa33e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726691340776059921,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-622675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fca4c2ba470b4eae5638e77bd36da67c,},Annotations:map[string]string
{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aabf741e6c21bcdd33794ad272367e8eee404ab32e56df824e7b682d84ec0cb3,PodSandboxId:8a3c914b697a74102c43ed3545d35dc856b63d70e9f098351b7bc71b478fea76,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726691340765600818,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-622675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 149a9ea18cda8f18fe46b42f5fdcfc42,},Annotations:map[string]string{io.kubernetes.
container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa1411a6edea077a5bf21467bebe993d54a884c4ef814ac722270c58c4e1027c,PodSandboxId:836940a817d400129afc8f5461aeaf3cd20616d272b8db529bd21a1e30f18af3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726691340707884443,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-622675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3dd56b6172e959eabe3391d13c3a2a11,},Annotations:map[string]string{io
.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca2c1be8e70a4692c9c15e9da716688fe6dcc7d6c82050191b2576cffe47cc24,PodSandboxId:6bfb0c877ef32204a57b0733140b081e7f546b82b86349c5f66fc15a32410c54,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726691340699526426,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-622675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd68ba840847ee4c3afc8ea4a806da80,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3f6615b4-1d3d-4a64-a243-3b4780d56014 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 20:40:00 multinode-622675 crio[2695]: time="2024-09-18 20:40:00.317253281Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5ffabbe5-13bb-4492-b368-0f68d4d52257 name=/runtime.v1.RuntimeService/Version
	Sep 18 20:40:00 multinode-622675 crio[2695]: time="2024-09-18 20:40:00.317331673Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5ffabbe5-13bb-4492-b368-0f68d4d52257 name=/runtime.v1.RuntimeService/Version
	Sep 18 20:40:00 multinode-622675 crio[2695]: time="2024-09-18 20:40:00.318295695Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b5b60802-9402-4731-b810-6ebaa835f05c name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 20:40:00 multinode-622675 crio[2695]: time="2024-09-18 20:40:00.318676356Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726692000318653802,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b5b60802-9402-4731-b810-6ebaa835f05c name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 20:40:00 multinode-622675 crio[2695]: time="2024-09-18 20:40:00.319139468Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=422e4391-5e85-4e59-9eab-b52d985d9549 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 20:40:00 multinode-622675 crio[2695]: time="2024-09-18 20:40:00.319191454Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=422e4391-5e85-4e59-9eab-b52d985d9549 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 20:40:00 multinode-622675 crio[2695]: time="2024-09-18 20:40:00.319541382Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0aa3be4f6770078682857062b75364a07889ed89b6d7725f4d222a925f54d7e1,PodSandboxId:37fadf186b79639715989c0c251bbd9cb53ed867f20833b59bf2f24d7dc2db19,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726691787265875647,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-sxchh,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 056a9bd0-0d18-47f1-be48-7f0a773979db,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d800c9f5cd07512852d48c4b63d9ac715b3ab0d8aa9f971659a4d1e098182684,PodSandboxId:7661f91bded7e04ad84cbcb869216e52fb39e725a20511807f190a14f71a138f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726691753718362716,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5mfhg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5dc4d401-73fd-4b13-ac85-f2cb65b4b0f4,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f42cc3b22183e2667e54bc3e4f3a8694e5a03687258e51ea727f2791e8f2bd2,PodSandboxId:fb8186cb25bedcb5ead72ab2bee49e30b85f3fe3d93d6fbbe8cc27a71a8bcee2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726691753719419837,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qhw9j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80568ba7-ff98-4d33-a142-e6042afceb54,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9fa3ac1afc1ca59a2af96e3a25d529be9aefc33f90fed347b098d22cd4f87c6,PodSandboxId:5c2fdf9555c75568a328bcb1dd39c97cc1b6aa551c52d71444d561d1b6b63100,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726691753577636161,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8bns5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d082c6f-bda6-4f6e-906f-f54ee78ab21d,},Annotations:map[string]
string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:791bd9a5018aeedf9e06af458153d286ee9adf9d3f305eac37abdf6c15be05e2,PodSandboxId:aa8631f45a260656250e46f0fbd1073c87adef174b5c5aed0eaceda68bcecaca,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726691753569564470,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66cb614f-b7a9-4d3e-a045-2fdde5e368cf,},Annotations:map[string]string{io.ku
bernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fd060cb3319be9965fa313cbfa09acc398464e7391a2344409f7e842e63a92d,PodSandboxId:bf9087acb004157f001ddd0a52d8fd362f2da1444d4e2d26954241dd565a64a0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726691749702651647,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-622675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fca4c2ba470b4eae5638e77bd36da67c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff8c8376b280ff49f0e834d1561ecf31b332e70e3c1cd4785429577b30a31a74,PodSandboxId:2b6c1eadfce59acb58e26cc0d2d145c4e5744ffa181d3925f1a7e9d80659642f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726691749672403621,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-622675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 149a9ea18cda8f18fe46b42f5fdcfc42,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8195546af7c97dbc4c7e931098abda3db7972fe66027da6eed0e4d0cbbabb435,PodSandboxId:c327ac057bb5e9f01ef80b9b90ecf353c03c86ac611274795c72fe883ee6e2f7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726691749606943506,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-622675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd68ba840847ee4c3afc8ea4a806da80,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb9942cd5a355661495e4b729ac316c9b20f71dc4d56160866541cd1d1be6eb2,PodSandboxId:a7dd9337ebbc27427f6247c2366d2906b6f71f19a387a9abe9be193963de8b83,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726691749588676988,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-622675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3dd56b6172e959eabe3391d13c3a2a11,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14daa73e4b6447058fbb9acad7addcf2492e09082bdb8c59f4566c4c3c4ecbb5,PodSandboxId:c8caa799524794493dbfe9bac34f0447360906ceb1aec934ae128638272c56fc,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726691418331135241,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-sxchh,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 056a9bd0-0d18-47f1-be48-7f0a773979db,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43e5c05bff5626d849f1cc1a2457bed371140f56a7d07e75167462bf594fc28f,PodSandboxId:28439dd9ecc619703fb66cdece03c6a2eff4a229483ca954401bdc5f938d4efc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726691364054297176,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66cb614f-b7a9-4d3e-a045-2fdde5e368cf,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d944b227553371de1b94120a984c24cb4b177354de38e8d43eb9ed453b0fba4a,PodSandboxId:7d66f12289576432c00d2bf0919fc70e50b47d67ef3abc205d546052d26f59fe,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726691363752948720,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qhw9j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80568ba7-ff98-4d33-a142-e6042afceb54,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\
"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19d5f178e345d175b48c3050d2afb16f5238d648db531cca0bc1482880d1cd61,PodSandboxId:6d42deb88737975208f43de569db1326a6e9c10c6cf7828616b31042f861b4c0,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726691352087479272,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5mfhg,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 5dc4d401-73fd-4b13-ac85-f2cb65b4b0f4,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e499594bd4ca1e0ffebd9b5b5240ecdc7a824b9fbd8d21840a8cb7e865d601b0,PodSandboxId:c7c998fccea60bd4755ac2216b7817bc62c33b15e3b2c850f94c3137c789c6f2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726691351886016603,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8bns5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d082c6f-bda6-4f6e-906f
-f54ee78ab21d,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10852f58e0d0a0b7a14d38e4a5a81025f9506296d50000e0f0157576e90b73d5,PodSandboxId:000a3cbbd10b3f4051d7b8e30d56b0c4964499d27480805a600e9151703aa33e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726691340776059921,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-622675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fca4c2ba470b4eae5638e77bd36da67c,},Annotations:map[string]string
{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aabf741e6c21bcdd33794ad272367e8eee404ab32e56df824e7b682d84ec0cb3,PodSandboxId:8a3c914b697a74102c43ed3545d35dc856b63d70e9f098351b7bc71b478fea76,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726691340765600818,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-622675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 149a9ea18cda8f18fe46b42f5fdcfc42,},Annotations:map[string]string{io.kubernetes.
container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa1411a6edea077a5bf21467bebe993d54a884c4ef814ac722270c58c4e1027c,PodSandboxId:836940a817d400129afc8f5461aeaf3cd20616d272b8db529bd21a1e30f18af3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726691340707884443,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-622675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3dd56b6172e959eabe3391d13c3a2a11,},Annotations:map[string]string{io
.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca2c1be8e70a4692c9c15e9da716688fe6dcc7d6c82050191b2576cffe47cc24,PodSandboxId:6bfb0c877ef32204a57b0733140b081e7f546b82b86349c5f66fc15a32410c54,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726691340699526426,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-622675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd68ba840847ee4c3afc8ea4a806da80,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=422e4391-5e85-4e59-9eab-b52d985d9549 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 20:40:00 multinode-622675 crio[2695]: time="2024-09-18 20:40:00.357532972Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ed82e369-5046-4bfa-9c40-5a8d6fe04d68 name=/runtime.v1.RuntimeService/Version
	Sep 18 20:40:00 multinode-622675 crio[2695]: time="2024-09-18 20:40:00.357606539Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ed82e369-5046-4bfa-9c40-5a8d6fe04d68 name=/runtime.v1.RuntimeService/Version
	Sep 18 20:40:00 multinode-622675 crio[2695]: time="2024-09-18 20:40:00.358712632Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8e0a0f1d-6c00-4132-8fb9-bed15710e4a6 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 20:40:00 multinode-622675 crio[2695]: time="2024-09-18 20:40:00.359161846Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726692000359138472,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8e0a0f1d-6c00-4132-8fb9-bed15710e4a6 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 20:40:00 multinode-622675 crio[2695]: time="2024-09-18 20:40:00.359891336Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cfc9af34-5a42-4f1a-b561-d90734b67b36 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 20:40:00 multinode-622675 crio[2695]: time="2024-09-18 20:40:00.360058198Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cfc9af34-5a42-4f1a-b561-d90734b67b36 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 20:40:00 multinode-622675 crio[2695]: time="2024-09-18 20:40:00.360489570Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0aa3be4f6770078682857062b75364a07889ed89b6d7725f4d222a925f54d7e1,PodSandboxId:37fadf186b79639715989c0c251bbd9cb53ed867f20833b59bf2f24d7dc2db19,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726691787265875647,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-sxchh,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 056a9bd0-0d18-47f1-be48-7f0a773979db,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d800c9f5cd07512852d48c4b63d9ac715b3ab0d8aa9f971659a4d1e098182684,PodSandboxId:7661f91bded7e04ad84cbcb869216e52fb39e725a20511807f190a14f71a138f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726691753718362716,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5mfhg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5dc4d401-73fd-4b13-ac85-f2cb65b4b0f4,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f42cc3b22183e2667e54bc3e4f3a8694e5a03687258e51ea727f2791e8f2bd2,PodSandboxId:fb8186cb25bedcb5ead72ab2bee49e30b85f3fe3d93d6fbbe8cc27a71a8bcee2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726691753719419837,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qhw9j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80568ba7-ff98-4d33-a142-e6042afceb54,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9fa3ac1afc1ca59a2af96e3a25d529be9aefc33f90fed347b098d22cd4f87c6,PodSandboxId:5c2fdf9555c75568a328bcb1dd39c97cc1b6aa551c52d71444d561d1b6b63100,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726691753577636161,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8bns5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d082c6f-bda6-4f6e-906f-f54ee78ab21d,},Annotations:map[string]
string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:791bd9a5018aeedf9e06af458153d286ee9adf9d3f305eac37abdf6c15be05e2,PodSandboxId:aa8631f45a260656250e46f0fbd1073c87adef174b5c5aed0eaceda68bcecaca,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726691753569564470,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66cb614f-b7a9-4d3e-a045-2fdde5e368cf,},Annotations:map[string]string{io.ku
bernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fd060cb3319be9965fa313cbfa09acc398464e7391a2344409f7e842e63a92d,PodSandboxId:bf9087acb004157f001ddd0a52d8fd362f2da1444d4e2d26954241dd565a64a0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726691749702651647,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-622675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fca4c2ba470b4eae5638e77bd36da67c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff8c8376b280ff49f0e834d1561ecf31b332e70e3c1cd4785429577b30a31a74,PodSandboxId:2b6c1eadfce59acb58e26cc0d2d145c4e5744ffa181d3925f1a7e9d80659642f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726691749672403621,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-622675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 149a9ea18cda8f18fe46b42f5fdcfc42,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8195546af7c97dbc4c7e931098abda3db7972fe66027da6eed0e4d0cbbabb435,PodSandboxId:c327ac057bb5e9f01ef80b9b90ecf353c03c86ac611274795c72fe883ee6e2f7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726691749606943506,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-622675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd68ba840847ee4c3afc8ea4a806da80,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb9942cd5a355661495e4b729ac316c9b20f71dc4d56160866541cd1d1be6eb2,PodSandboxId:a7dd9337ebbc27427f6247c2366d2906b6f71f19a387a9abe9be193963de8b83,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726691749588676988,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-622675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3dd56b6172e959eabe3391d13c3a2a11,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14daa73e4b6447058fbb9acad7addcf2492e09082bdb8c59f4566c4c3c4ecbb5,PodSandboxId:c8caa799524794493dbfe9bac34f0447360906ceb1aec934ae128638272c56fc,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726691418331135241,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-sxchh,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 056a9bd0-0d18-47f1-be48-7f0a773979db,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43e5c05bff5626d849f1cc1a2457bed371140f56a7d07e75167462bf594fc28f,PodSandboxId:28439dd9ecc619703fb66cdece03c6a2eff4a229483ca954401bdc5f938d4efc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726691364054297176,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66cb614f-b7a9-4d3e-a045-2fdde5e368cf,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d944b227553371de1b94120a984c24cb4b177354de38e8d43eb9ed453b0fba4a,PodSandboxId:7d66f12289576432c00d2bf0919fc70e50b47d67ef3abc205d546052d26f59fe,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726691363752948720,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qhw9j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80568ba7-ff98-4d33-a142-e6042afceb54,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\
"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19d5f178e345d175b48c3050d2afb16f5238d648db531cca0bc1482880d1cd61,PodSandboxId:6d42deb88737975208f43de569db1326a6e9c10c6cf7828616b31042f861b4c0,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726691352087479272,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5mfhg,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 5dc4d401-73fd-4b13-ac85-f2cb65b4b0f4,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e499594bd4ca1e0ffebd9b5b5240ecdc7a824b9fbd8d21840a8cb7e865d601b0,PodSandboxId:c7c998fccea60bd4755ac2216b7817bc62c33b15e3b2c850f94c3137c789c6f2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726691351886016603,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8bns5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d082c6f-bda6-4f6e-906f
-f54ee78ab21d,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10852f58e0d0a0b7a14d38e4a5a81025f9506296d50000e0f0157576e90b73d5,PodSandboxId:000a3cbbd10b3f4051d7b8e30d56b0c4964499d27480805a600e9151703aa33e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726691340776059921,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-622675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fca4c2ba470b4eae5638e77bd36da67c,},Annotations:map[string]string
{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aabf741e6c21bcdd33794ad272367e8eee404ab32e56df824e7b682d84ec0cb3,PodSandboxId:8a3c914b697a74102c43ed3545d35dc856b63d70e9f098351b7bc71b478fea76,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726691340765600818,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-622675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 149a9ea18cda8f18fe46b42f5fdcfc42,},Annotations:map[string]string{io.kubernetes.
container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa1411a6edea077a5bf21467bebe993d54a884c4ef814ac722270c58c4e1027c,PodSandboxId:836940a817d400129afc8f5461aeaf3cd20616d272b8db529bd21a1e30f18af3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726691340707884443,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-622675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3dd56b6172e959eabe3391d13c3a2a11,},Annotations:map[string]string{io
.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca2c1be8e70a4692c9c15e9da716688fe6dcc7d6c82050191b2576cffe47cc24,PodSandboxId:6bfb0c877ef32204a57b0733140b081e7f546b82b86349c5f66fc15a32410c54,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726691340699526426,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-622675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd68ba840847ee4c3afc8ea4a806da80,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cfc9af34-5a42-4f1a-b561-d90734b67b36 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 20:40:00 multinode-622675 crio[2695]: time="2024-09-18 20:40:00.400218148Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bcd6494a-498b-4519-a51f-ca3214630a2b name=/runtime.v1.RuntimeService/Version
	Sep 18 20:40:00 multinode-622675 crio[2695]: time="2024-09-18 20:40:00.400309886Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bcd6494a-498b-4519-a51f-ca3214630a2b name=/runtime.v1.RuntimeService/Version
	Sep 18 20:40:00 multinode-622675 crio[2695]: time="2024-09-18 20:40:00.402129925Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cdba366f-1de9-4407-9623-0e4b1e265d9a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 20:40:00 multinode-622675 crio[2695]: time="2024-09-18 20:40:00.402534392Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726692000402510863,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cdba366f-1de9-4407-9623-0e4b1e265d9a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 20:40:00 multinode-622675 crio[2695]: time="2024-09-18 20:40:00.403066692Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=10ee9420-2f84-40e0-b323-8e2ee7509458 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 20:40:00 multinode-622675 crio[2695]: time="2024-09-18 20:40:00.403138572Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=10ee9420-2f84-40e0-b323-8e2ee7509458 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 20:40:00 multinode-622675 crio[2695]: time="2024-09-18 20:40:00.403474136Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0aa3be4f6770078682857062b75364a07889ed89b6d7725f4d222a925f54d7e1,PodSandboxId:37fadf186b79639715989c0c251bbd9cb53ed867f20833b59bf2f24d7dc2db19,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726691787265875647,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-sxchh,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 056a9bd0-0d18-47f1-be48-7f0a773979db,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d800c9f5cd07512852d48c4b63d9ac715b3ab0d8aa9f971659a4d1e098182684,PodSandboxId:7661f91bded7e04ad84cbcb869216e52fb39e725a20511807f190a14f71a138f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726691753718362716,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5mfhg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5dc4d401-73fd-4b13-ac85-f2cb65b4b0f4,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f42cc3b22183e2667e54bc3e4f3a8694e5a03687258e51ea727f2791e8f2bd2,PodSandboxId:fb8186cb25bedcb5ead72ab2bee49e30b85f3fe3d93d6fbbe8cc27a71a8bcee2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726691753719419837,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qhw9j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80568ba7-ff98-4d33-a142-e6042afceb54,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9fa3ac1afc1ca59a2af96e3a25d529be9aefc33f90fed347b098d22cd4f87c6,PodSandboxId:5c2fdf9555c75568a328bcb1dd39c97cc1b6aa551c52d71444d561d1b6b63100,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726691753577636161,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8bns5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d082c6f-bda6-4f6e-906f-f54ee78ab21d,},Annotations:map[string]
string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:791bd9a5018aeedf9e06af458153d286ee9adf9d3f305eac37abdf6c15be05e2,PodSandboxId:aa8631f45a260656250e46f0fbd1073c87adef174b5c5aed0eaceda68bcecaca,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726691753569564470,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66cb614f-b7a9-4d3e-a045-2fdde5e368cf,},Annotations:map[string]string{io.ku
bernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fd060cb3319be9965fa313cbfa09acc398464e7391a2344409f7e842e63a92d,PodSandboxId:bf9087acb004157f001ddd0a52d8fd362f2da1444d4e2d26954241dd565a64a0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726691749702651647,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-622675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fca4c2ba470b4eae5638e77bd36da67c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff8c8376b280ff49f0e834d1561ecf31b332e70e3c1cd4785429577b30a31a74,PodSandboxId:2b6c1eadfce59acb58e26cc0d2d145c4e5744ffa181d3925f1a7e9d80659642f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726691749672403621,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-622675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 149a9ea18cda8f18fe46b42f5fdcfc42,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8195546af7c97dbc4c7e931098abda3db7972fe66027da6eed0e4d0cbbabb435,PodSandboxId:c327ac057bb5e9f01ef80b9b90ecf353c03c86ac611274795c72fe883ee6e2f7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726691749606943506,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-622675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd68ba840847ee4c3afc8ea4a806da80,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb9942cd5a355661495e4b729ac316c9b20f71dc4d56160866541cd1d1be6eb2,PodSandboxId:a7dd9337ebbc27427f6247c2366d2906b6f71f19a387a9abe9be193963de8b83,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726691749588676988,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-622675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3dd56b6172e959eabe3391d13c3a2a11,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14daa73e4b6447058fbb9acad7addcf2492e09082bdb8c59f4566c4c3c4ecbb5,PodSandboxId:c8caa799524794493dbfe9bac34f0447360906ceb1aec934ae128638272c56fc,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726691418331135241,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-sxchh,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 056a9bd0-0d18-47f1-be48-7f0a773979db,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43e5c05bff5626d849f1cc1a2457bed371140f56a7d07e75167462bf594fc28f,PodSandboxId:28439dd9ecc619703fb66cdece03c6a2eff4a229483ca954401bdc5f938d4efc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726691364054297176,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66cb614f-b7a9-4d3e-a045-2fdde5e368cf,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d944b227553371de1b94120a984c24cb4b177354de38e8d43eb9ed453b0fba4a,PodSandboxId:7d66f12289576432c00d2bf0919fc70e50b47d67ef3abc205d546052d26f59fe,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726691363752948720,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qhw9j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80568ba7-ff98-4d33-a142-e6042afceb54,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\
"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19d5f178e345d175b48c3050d2afb16f5238d648db531cca0bc1482880d1cd61,PodSandboxId:6d42deb88737975208f43de569db1326a6e9c10c6cf7828616b31042f861b4c0,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726691352087479272,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5mfhg,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 5dc4d401-73fd-4b13-ac85-f2cb65b4b0f4,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e499594bd4ca1e0ffebd9b5b5240ecdc7a824b9fbd8d21840a8cb7e865d601b0,PodSandboxId:c7c998fccea60bd4755ac2216b7817bc62c33b15e3b2c850f94c3137c789c6f2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726691351886016603,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8bns5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d082c6f-bda6-4f6e-906f
-f54ee78ab21d,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10852f58e0d0a0b7a14d38e4a5a81025f9506296d50000e0f0157576e90b73d5,PodSandboxId:000a3cbbd10b3f4051d7b8e30d56b0c4964499d27480805a600e9151703aa33e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726691340776059921,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-622675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fca4c2ba470b4eae5638e77bd36da67c,},Annotations:map[string]string
{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aabf741e6c21bcdd33794ad272367e8eee404ab32e56df824e7b682d84ec0cb3,PodSandboxId:8a3c914b697a74102c43ed3545d35dc856b63d70e9f098351b7bc71b478fea76,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726691340765600818,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-622675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 149a9ea18cda8f18fe46b42f5fdcfc42,},Annotations:map[string]string{io.kubernetes.
container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa1411a6edea077a5bf21467bebe993d54a884c4ef814ac722270c58c4e1027c,PodSandboxId:836940a817d400129afc8f5461aeaf3cd20616d272b8db529bd21a1e30f18af3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726691340707884443,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-622675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3dd56b6172e959eabe3391d13c3a2a11,},Annotations:map[string]string{io
.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca2c1be8e70a4692c9c15e9da716688fe6dcc7d6c82050191b2576cffe47cc24,PodSandboxId:6bfb0c877ef32204a57b0733140b081e7f546b82b86349c5f66fc15a32410c54,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726691340699526426,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-622675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd68ba840847ee4c3afc8ea4a806da80,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=10ee9420-2f84-40e0-b323-8e2ee7509458 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	0aa3be4f67700       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      3 minutes ago       Running             busybox                   1                   37fadf186b796       busybox-7dff88458-sxchh
	5f42cc3b22183       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      4 minutes ago       Running             coredns                   1                   fb8186cb25bed       coredns-7c65d6cfc9-qhw9j
	d800c9f5cd075       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      4 minutes ago       Running             kindnet-cni               1                   7661f91bded7e       kindnet-5mfhg
	d9fa3ac1afc1c       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      4 minutes ago       Running             kube-proxy                1                   5c2fdf9555c75       kube-proxy-8bns5
	791bd9a5018ae       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Running             storage-provisioner       1                   aa8631f45a260       storage-provisioner
	8fd060cb3319b       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      4 minutes ago       Running             etcd                      1                   bf9087acb0041       etcd-multinode-622675
	ff8c8376b280f       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      4 minutes ago       Running             kube-scheduler            1                   2b6c1eadfce59       kube-scheduler-multinode-622675
	8195546af7c97       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      4 minutes ago       Running             kube-apiserver            1                   c327ac057bb5e       kube-apiserver-multinode-622675
	bb9942cd5a355       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      4 minutes ago       Running             kube-controller-manager   1                   a7dd9337ebbc2       kube-controller-manager-multinode-622675
	14daa73e4b644       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   9 minutes ago       Exited              busybox                   0                   c8caa79952479       busybox-7dff88458-sxchh
	43e5c05bff562       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      10 minutes ago      Exited              storage-provisioner       0                   28439dd9ecc61       storage-provisioner
	d944b22755337       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      10 minutes ago      Exited              coredns                   0                   7d66f12289576       coredns-7c65d6cfc9-qhw9j
	19d5f178e345d       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      10 minutes ago      Exited              kindnet-cni               0                   6d42deb887379       kindnet-5mfhg
	e499594bd4ca1       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      10 minutes ago      Exited              kube-proxy                0                   c7c998fccea60       kube-proxy-8bns5
	10852f58e0d0a       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      10 minutes ago      Exited              etcd                      0                   000a3cbbd10b3       etcd-multinode-622675
	aabf741e6c21b       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      10 minutes ago      Exited              kube-scheduler            0                   8a3c914b697a7       kube-scheduler-multinode-622675
	fa1411a6edea0       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      10 minutes ago      Exited              kube-controller-manager   0                   836940a817d40       kube-controller-manager-multinode-622675
	ca2c1be8e70a4       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      10 minutes ago      Exited              kube-apiserver            0                   6bfb0c877ef32       kube-apiserver-multinode-622675
	
	
	==> coredns [5f42cc3b22183e2667e54bc3e4f3a8694e5a03687258e51ea727f2791e8f2bd2] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:60438 - 13597 "HINFO IN 937730212470279970.3145319502201119370. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.01050279s
	
	
	==> coredns [d944b227553371de1b94120a984c24cb4b177354de38e8d43eb9ed453b0fba4a] <==
	[INFO] 10.244.1.2:60475 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002113425s
	[INFO] 10.244.1.2:57308 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000100558s
	[INFO] 10.244.1.2:40216 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000069207s
	[INFO] 10.244.1.2:59449 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001495016s
	[INFO] 10.244.1.2:47081 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00020545s
	[INFO] 10.244.1.2:44235 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000070751s
	[INFO] 10.244.1.2:52562 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00018663s
	[INFO] 10.244.0.3:53698 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000110057s
	[INFO] 10.244.0.3:54614 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000178911s
	[INFO] 10.244.0.3:60235 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000108951s
	[INFO] 10.244.0.3:38275 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000195679s
	[INFO] 10.244.1.2:50638 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000164811s
	[INFO] 10.244.1.2:36862 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00027575s
	[INFO] 10.244.1.2:34719 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000086256s
	[INFO] 10.244.1.2:48586 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00018652s
	[INFO] 10.244.0.3:36271 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000156116s
	[INFO] 10.244.0.3:55158 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000232974s
	[INFO] 10.244.0.3:55686 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000109487s
	[INFO] 10.244.0.3:43642 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000220267s
	[INFO] 10.244.1.2:33096 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000186841s
	[INFO] 10.244.1.2:51389 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000196611s
	[INFO] 10.244.1.2:49102 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000108803s
	[INFO] 10.244.1.2:57092 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000135542s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               multinode-622675
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-622675
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=85073601a832bd4bbda5d11fa91feafff6ec6b91
	                    minikube.k8s.io/name=multinode-622675
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_18T20_29_07_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 18 Sep 2024 20:29:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-622675
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 18 Sep 2024 20:39:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 18 Sep 2024 20:35:53 +0000   Wed, 18 Sep 2024 20:29:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 18 Sep 2024 20:35:53 +0000   Wed, 18 Sep 2024 20:29:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 18 Sep 2024 20:35:53 +0000   Wed, 18 Sep 2024 20:29:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 18 Sep 2024 20:35:53 +0000   Wed, 18 Sep 2024 20:29:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.106
	  Hostname:    multinode-622675
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 01fae44ecdba45e88651a7b4ea518137
	  System UUID:                01fae44e-cdba-45e8-8651-a7b4ea518137
	  Boot ID:                    59e3dfe6-8cab-4620-ae90-07cbeed2e1b4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-sxchh                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m46s
	  kube-system                 coredns-7c65d6cfc9-qhw9j                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10m
	  kube-system                 etcd-multinode-622675                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kindnet-5mfhg                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-apiserver-multinode-622675             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-multinode-622675    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-8bns5                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-multinode-622675             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 10m                    kube-proxy       
	  Normal  Starting                 4m6s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  11m (x8 over 11m)      kubelet          Node multinode-622675 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m (x8 over 11m)      kubelet          Node multinode-622675 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m (x7 over 11m)      kubelet          Node multinode-622675 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  11m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 10m                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m                    kubelet          Node multinode-622675 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m                    kubelet          Node multinode-622675 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m                    kubelet          Node multinode-622675 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           10m                    node-controller  Node multinode-622675 event: Registered Node multinode-622675 in Controller
	  Normal  NodeReady                10m                    kubelet          Node multinode-622675 status is now: NodeReady
	  Normal  Starting                 4m12s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m11s (x8 over 4m11s)  kubelet          Node multinode-622675 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m11s (x8 over 4m11s)  kubelet          Node multinode-622675 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m11s (x7 over 4m11s)  kubelet          Node multinode-622675 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m11s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m4s                   node-controller  Node multinode-622675 event: Registered Node multinode-622675 in Controller
	
	
	Name:               multinode-622675-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-622675-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=85073601a832bd4bbda5d11fa91feafff6ec6b91
	                    minikube.k8s.io/name=multinode-622675
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_18T20_36_33_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 18 Sep 2024 20:36:32 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-622675-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 18 Sep 2024 20:37:33 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 18 Sep 2024 20:37:03 +0000   Wed, 18 Sep 2024 20:38:16 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 18 Sep 2024 20:37:03 +0000   Wed, 18 Sep 2024 20:38:16 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 18 Sep 2024 20:37:03 +0000   Wed, 18 Sep 2024 20:38:16 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 18 Sep 2024 20:37:03 +0000   Wed, 18 Sep 2024 20:38:16 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.224
	  Hostname:    multinode-622675-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 49c42c0661b747518cdb352e5e19d75f
	  System UUID:                49c42c06-61b7-4751-8cdb-352e5e19d75f
	  Boot ID:                    26be908f-499e-43ba-a32b-4fd322c74b55
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-dcmpq    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m32s
	  kube-system                 kindnet-wgcjk              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-proxy-msqjg           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m23s                  kube-proxy       
	  Normal  Starting                 10m                    kube-proxy       
	  Normal  NodeHasSufficientMemory  10m (x2 over 10m)      kubelet          Node multinode-622675-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x2 over 10m)      kubelet          Node multinode-622675-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x2 over 10m)      kubelet          Node multinode-622675-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                9m48s                  kubelet          Node multinode-622675-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  3m28s (x2 over 3m28s)  kubelet          Node multinode-622675-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m28s (x2 over 3m28s)  kubelet          Node multinode-622675-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m28s (x2 over 3m28s)  kubelet          Node multinode-622675-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m28s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m8s                   kubelet          Node multinode-622675-m02 status is now: NodeReady
	  Normal  NodeNotReady             104s                   node-controller  Node multinode-622675-m02 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.062253] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.169595] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.145253] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.266606] systemd-fstab-generator[654]: Ignoring "noauto" option for root device
	[  +3.812109] systemd-fstab-generator[744]: Ignoring "noauto" option for root device
	[  +3.759104] systemd-fstab-generator[875]: Ignoring "noauto" option for root device
	[  +0.066909] kauditd_printk_skb: 158 callbacks suppressed
	[Sep18 20:29] systemd-fstab-generator[1211]: Ignoring "noauto" option for root device
	[  +0.076886] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.180679] systemd-fstab-generator[1313]: Ignoring "noauto" option for root device
	[  +0.121853] kauditd_printk_skb: 18 callbacks suppressed
	[ +12.582048] kauditd_printk_skb: 69 callbacks suppressed
	[Sep18 20:30] kauditd_printk_skb: 14 callbacks suppressed
	[Sep18 20:35] systemd-fstab-generator[2620]: Ignoring "noauto" option for root device
	[  +0.144443] systemd-fstab-generator[2632]: Ignoring "noauto" option for root device
	[  +0.168575] systemd-fstab-generator[2646]: Ignoring "noauto" option for root device
	[  +0.167066] systemd-fstab-generator[2658]: Ignoring "noauto" option for root device
	[  +0.289362] systemd-fstab-generator[2687]: Ignoring "noauto" option for root device
	[  +9.090928] systemd-fstab-generator[2777]: Ignoring "noauto" option for root device
	[  +0.085234] kauditd_printk_skb: 100 callbacks suppressed
	[  +1.951763] systemd-fstab-generator[2899]: Ignoring "noauto" option for root device
	[  +4.692670] kauditd_printk_skb: 74 callbacks suppressed
	[  +6.878512] kauditd_printk_skb: 34 callbacks suppressed
	[Sep18 20:36] systemd-fstab-generator[3742]: Ignoring "noauto" option for root device
	[ +18.659386] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> etcd [10852f58e0d0a0b7a14d38e4a5a81025f9506296d50000e0f0157576e90b73d5] <==
	{"level":"info","ts":"2024-09-18T20:29:01.458218Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-18T20:29:01.460032Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"db63b0e3647a827","local-member-id":"133f99d1dc1797cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-18T20:29:01.460161Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-18T20:29:01.460218Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-18T20:29:01.463271Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-18T20:29:01.469055Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.106:2379"}
	{"level":"warn","ts":"2024-09-18T20:29:51.490882Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"148.850253ms","expected-duration":"100ms","prefix":"","request":"header:<ID:10938278152982715042 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-622675-m02.17f670ac035452cc\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-622675-m02.17f670ac035452cc\" value_size:646 lease:1714906116127938256 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-09-18T20:29:51.491241Z","caller":"traceutil/trace.go:171","msg":"trace[1497791882] transaction","detail":"{read_only:false; response_revision:469; number_of_response:1; }","duration":"234.761042ms","start":"2024-09-18T20:29:51.256465Z","end":"2024-09-18T20:29:51.491226Z","steps":["trace[1497791882] 'process raft request'  (duration: 85.256272ms)","trace[1497791882] 'compare'  (duration: 148.749608ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-18T20:30:50.317337Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"147.311213ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-18T20:30:50.317538Z","caller":"traceutil/trace.go:171","msg":"trace[1157474967] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:606; }","duration":"147.601668ms","start":"2024-09-18T20:30:50.169900Z","end":"2024-09-18T20:30:50.317501Z","steps":["trace[1157474967] 'range keys from in-memory index tree'  (duration: 147.285028ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-18T20:30:50.318130Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"109.619201ms","expected-duration":"100ms","prefix":"","request":"header:<ID:10938278152982715568 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-622675-m03.17f670b9b5a7cfb7\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-622675-m03.17f670b9b5a7cfb7\" value_size:646 lease:1714906116127939447 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-09-18T20:30:50.318276Z","caller":"traceutil/trace.go:171","msg":"trace[1136392025] linearizableReadLoop","detail":"{readStateIndex:637; appliedIndex:636; }","duration":"159.084562ms","start":"2024-09-18T20:30:50.159171Z","end":"2024-09-18T20:30:50.318256Z","steps":["trace[1136392025] 'read index received'  (duration: 48.925195ms)","trace[1136392025] 'applied index is now lower than readState.Index'  (duration: 110.158575ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-18T20:30:50.318357Z","caller":"traceutil/trace.go:171","msg":"trace[1344067763] transaction","detail":"{read_only:false; response_revision:607; number_of_response:1; }","duration":"236.903834ms","start":"2024-09-18T20:30:50.081421Z","end":"2024-09-18T20:30:50.318325Z","steps":["trace[1344067763] 'process raft request'  (duration: 126.7126ms)","trace[1344067763] 'compare'  (duration: 109.182792ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-18T20:30:50.318467Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"159.30067ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-622675-m03\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-18T20:30:50.318512Z","caller":"traceutil/trace.go:171","msg":"trace[1512247111] range","detail":"{range_begin:/registry/minions/multinode-622675-m03; range_end:; response_count:0; response_revision:607; }","duration":"159.345035ms","start":"2024-09-18T20:30:50.159155Z","end":"2024-09-18T20:30:50.318500Z","steps":["trace[1512247111] 'agreement among raft nodes before linearized reading'  (duration: 159.200625ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-18T20:34:05.507688Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-18T20:34:05.507880Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"multinode-622675","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.106:2380"],"advertise-client-urls":["https://192.168.39.106:2379"]}
	{"level":"warn","ts":"2024-09-18T20:34:05.516007Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-18T20:34:05.516843Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-18T20:34:05.573339Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.106:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-18T20:34:05.573392Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.106:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-18T20:34:05.573476Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"133f99d1dc1797cc","current-leader-member-id":"133f99d1dc1797cc"}
	{"level":"info","ts":"2024-09-18T20:34:05.576997Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.106:2380"}
	{"level":"info","ts":"2024-09-18T20:34:05.577244Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.106:2380"}
	{"level":"info","ts":"2024-09-18T20:34:05.577297Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"multinode-622675","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.106:2380"],"advertise-client-urls":["https://192.168.39.106:2379"]}
	
	
	==> etcd [8fd060cb3319be9965fa313cbfa09acc398464e7391a2344409f7e842e63a92d] <==
	{"level":"info","ts":"2024-09-18T20:35:50.041042Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"db63b0e3647a827","local-member-id":"133f99d1dc1797cc","added-peer-id":"133f99d1dc1797cc","added-peer-peer-urls":["https://192.168.39.106:2380"]}
	{"level":"info","ts":"2024-09-18T20:35:50.051276Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"db63b0e3647a827","local-member-id":"133f99d1dc1797cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-18T20:35:50.051345Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-18T20:35:50.040475Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-18T20:35:50.062129Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-18T20:35:50.098482Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"133f99d1dc1797cc","initial-advertise-peer-urls":["https://192.168.39.106:2380"],"listen-peer-urls":["https://192.168.39.106:2380"],"advertise-client-urls":["https://192.168.39.106:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.106:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-18T20:35:50.098527Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-18T20:35:50.098065Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.106:2380"}
	{"level":"info","ts":"2024-09-18T20:35:50.098570Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.106:2380"}
	{"level":"info","ts":"2024-09-18T20:35:51.594940Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"133f99d1dc1797cc is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-18T20:35:51.595034Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"133f99d1dc1797cc became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-18T20:35:51.595084Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"133f99d1dc1797cc received MsgPreVoteResp from 133f99d1dc1797cc at term 2"}
	{"level":"info","ts":"2024-09-18T20:35:51.595111Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"133f99d1dc1797cc became candidate at term 3"}
	{"level":"info","ts":"2024-09-18T20:35:51.595119Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"133f99d1dc1797cc received MsgVoteResp from 133f99d1dc1797cc at term 3"}
	{"level":"info","ts":"2024-09-18T20:35:51.595132Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"133f99d1dc1797cc became leader at term 3"}
	{"level":"info","ts":"2024-09-18T20:35:51.595141Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 133f99d1dc1797cc elected leader 133f99d1dc1797cc at term 3"}
	{"level":"info","ts":"2024-09-18T20:35:51.600756Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"133f99d1dc1797cc","local-member-attributes":"{Name:multinode-622675 ClientURLs:[https://192.168.39.106:2379]}","request-path":"/0/members/133f99d1dc1797cc/attributes","cluster-id":"db63b0e3647a827","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-18T20:35:51.600767Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-18T20:35:51.600795Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-18T20:35:51.601587Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-18T20:35:51.601628Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-18T20:35:51.603060Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-18T20:35:51.603072Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-18T20:35:51.604104Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.106:2379"}
	{"level":"info","ts":"2024-09-18T20:35:51.604788Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 20:40:00 up 11 min,  0 users,  load average: 0.19, 0.24, 0.14
	Linux multinode-622675 5.10.207 #1 SMP Mon Sep 16 15:00:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [19d5f178e345d175b48c3050d2afb16f5238d648db531cca0bc1482880d1cd61] <==
	I0918 20:33:23.144374       1 main.go:322] Node multinode-622675-m03 has CIDR [10.244.4.0/24] 
	I0918 20:33:33.143686       1 main.go:295] Handling node with IPs: map[192.168.39.216:{}]
	I0918 20:33:33.143829       1 main.go:322] Node multinode-622675-m03 has CIDR [10.244.4.0/24] 
	I0918 20:33:33.144069       1 main.go:295] Handling node with IPs: map[192.168.39.106:{}]
	I0918 20:33:33.144122       1 main.go:299] handling current node
	I0918 20:33:33.144149       1 main.go:295] Handling node with IPs: map[192.168.39.224:{}]
	I0918 20:33:33.144167       1 main.go:322] Node multinode-622675-m02 has CIDR [10.244.1.0/24] 
	I0918 20:33:43.144060       1 main.go:295] Handling node with IPs: map[192.168.39.224:{}]
	I0918 20:33:43.144105       1 main.go:322] Node multinode-622675-m02 has CIDR [10.244.1.0/24] 
	I0918 20:33:43.144238       1 main.go:295] Handling node with IPs: map[192.168.39.216:{}]
	I0918 20:33:43.144258       1 main.go:322] Node multinode-622675-m03 has CIDR [10.244.4.0/24] 
	I0918 20:33:43.144320       1 main.go:295] Handling node with IPs: map[192.168.39.106:{}]
	I0918 20:33:43.144337       1 main.go:299] handling current node
	I0918 20:33:53.143699       1 main.go:295] Handling node with IPs: map[192.168.39.106:{}]
	I0918 20:33:53.143742       1 main.go:299] handling current node
	I0918 20:33:53.143762       1 main.go:295] Handling node with IPs: map[192.168.39.224:{}]
	I0918 20:33:53.143768       1 main.go:322] Node multinode-622675-m02 has CIDR [10.244.1.0/24] 
	I0918 20:33:53.143932       1 main.go:295] Handling node with IPs: map[192.168.39.216:{}]
	I0918 20:33:53.144001       1 main.go:322] Node multinode-622675-m03 has CIDR [10.244.4.0/24] 
	I0918 20:34:03.143676       1 main.go:295] Handling node with IPs: map[192.168.39.106:{}]
	I0918 20:34:03.143820       1 main.go:299] handling current node
	I0918 20:34:03.143855       1 main.go:295] Handling node with IPs: map[192.168.39.224:{}]
	I0918 20:34:03.143875       1 main.go:322] Node multinode-622675-m02 has CIDR [10.244.1.0/24] 
	I0918 20:34:03.144053       1 main.go:295] Handling node with IPs: map[192.168.39.216:{}]
	I0918 20:34:03.144080       1 main.go:322] Node multinode-622675-m03 has CIDR [10.244.4.0/24] 
	
	
	==> kindnet [d800c9f5cd07512852d48c4b63d9ac715b3ab0d8aa9f971659a4d1e098182684] <==
	I0918 20:38:54.642103       1 main.go:322] Node multinode-622675-m02 has CIDR [10.244.1.0/24] 
	I0918 20:39:04.647579       1 main.go:295] Handling node with IPs: map[192.168.39.106:{}]
	I0918 20:39:04.647693       1 main.go:299] handling current node
	I0918 20:39:04.647726       1 main.go:295] Handling node with IPs: map[192.168.39.224:{}]
	I0918 20:39:04.647744       1 main.go:322] Node multinode-622675-m02 has CIDR [10.244.1.0/24] 
	I0918 20:39:14.650830       1 main.go:295] Handling node with IPs: map[192.168.39.106:{}]
	I0918 20:39:14.651015       1 main.go:299] handling current node
	I0918 20:39:14.651085       1 main.go:295] Handling node with IPs: map[192.168.39.224:{}]
	I0918 20:39:14.651110       1 main.go:322] Node multinode-622675-m02 has CIDR [10.244.1.0/24] 
	I0918 20:39:24.641893       1 main.go:295] Handling node with IPs: map[192.168.39.106:{}]
	I0918 20:39:24.642068       1 main.go:299] handling current node
	I0918 20:39:24.642101       1 main.go:295] Handling node with IPs: map[192.168.39.224:{}]
	I0918 20:39:24.642121       1 main.go:322] Node multinode-622675-m02 has CIDR [10.244.1.0/24] 
	I0918 20:39:34.645122       1 main.go:295] Handling node with IPs: map[192.168.39.106:{}]
	I0918 20:39:34.645286       1 main.go:299] handling current node
	I0918 20:39:34.645319       1 main.go:295] Handling node with IPs: map[192.168.39.224:{}]
	I0918 20:39:34.645339       1 main.go:322] Node multinode-622675-m02 has CIDR [10.244.1.0/24] 
	I0918 20:39:44.647676       1 main.go:295] Handling node with IPs: map[192.168.39.106:{}]
	I0918 20:39:44.647786       1 main.go:299] handling current node
	I0918 20:39:44.647815       1 main.go:295] Handling node with IPs: map[192.168.39.224:{}]
	I0918 20:39:44.647840       1 main.go:322] Node multinode-622675-m02 has CIDR [10.244.1.0/24] 
	I0918 20:39:54.642802       1 main.go:295] Handling node with IPs: map[192.168.39.106:{}]
	I0918 20:39:54.643018       1 main.go:299] handling current node
	I0918 20:39:54.643074       1 main.go:295] Handling node with IPs: map[192.168.39.224:{}]
	I0918 20:39:54.643098       1 main.go:322] Node multinode-622675-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [8195546af7c97dbc4c7e931098abda3db7972fe66027da6eed0e4d0cbbabb435] <==
	I0918 20:35:52.962214       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0918 20:35:52.985582       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0918 20:35:52.985681       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0918 20:35:52.993089       1 aggregator.go:171] initial CRD sync complete...
	I0918 20:35:52.993121       1 autoregister_controller.go:144] Starting autoregister controller
	I0918 20:35:52.993128       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0918 20:35:52.993133       1 cache.go:39] Caches are synced for autoregister controller
	I0918 20:35:52.993849       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0918 20:35:53.007410       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0918 20:35:53.008539       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0918 20:35:53.008685       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0918 20:35:53.008728       1 cache.go:39] Caches are synced for LocalAvailability controller
	E0918 20:35:53.023167       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0918 20:35:53.030658       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0918 20:35:53.034501       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0918 20:35:53.034535       1 policy_source.go:224] refreshing policies
	I0918 20:35:53.078305       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0918 20:35:53.901338       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0918 20:35:55.005258       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0918 20:35:55.172568       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0918 20:35:55.187780       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0918 20:35:55.281908       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0918 20:35:55.291294       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0918 20:35:56.728822       1 controller.go:615] quota admission added evaluator for: endpoints
	I0918 20:35:56.778225       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [ca2c1be8e70a4692c9c15e9da716688fe6dcc7d6c82050191b2576cffe47cc24] <==
	I0918 20:34:05.526201       1 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0918 20:34:05.525475       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0918 20:34:05.525838       1 dynamic_serving_content.go:149] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0918 20:34:05.520049       1 naming_controller.go:305] Shutting down NamingConditionController
	I0918 20:34:05.520059       1 controller.go:120] Shutting down OpenAPI V3 controller
	I0918 20:34:05.520072       1 controller.go:170] Shutting down OpenAPI controller
	I0918 20:34:05.520078       1 autoregister_controller.go:168] Shutting down autoregister controller
	I0918 20:34:05.520094       1 crdregistration_controller.go:145] Shutting down crd-autoregister controller
	I0918 20:34:05.520104       1 local_available_controller.go:172] Shutting down LocalAvailability controller
	I0918 20:34:05.520108       1 remote_available_controller.go:427] Shutting down RemoteAvailability controller
	I0918 20:34:05.520121       1 gc_controller.go:91] Shutting down apiserver lease garbage collector
	I0918 20:34:05.520126       1 apiservice_controller.go:134] Shutting down APIServiceRegistrationController
	I0918 20:34:05.520132       1 apf_controller.go:389] Shutting down API Priority and Fairness config worker
	I0918 20:34:05.520145       1 customresource_discovery_controller.go:328] Shutting down DiscoveryController
	I0918 20:34:05.520453       1 controller.go:84] Shutting down OpenAPI AggregationController
	I0918 20:34:05.520562       1 controller.go:86] Shutting down OpenAPI V3 AggregationController
	I0918 20:34:05.524779       1 controller.go:157] Shutting down quota evaluator
	I0918 20:34:05.531516       1 controller.go:176] quota evaluator worker shutdown
	I0918 20:34:05.525538       1 secure_serving.go:258] Stopped listening on [::]:8443
	I0918 20:34:05.531536       1 controller.go:176] quota evaluator worker shutdown
	I0918 20:34:05.531541       1 controller.go:176] quota evaluator worker shutdown
	I0918 20:34:05.531546       1 controller.go:176] quota evaluator worker shutdown
	I0918 20:34:05.531550       1 controller.go:176] quota evaluator worker shutdown
	W0918 20:34:05.532436       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0918 20:34:05.535177       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [bb9942cd5a355661495e4b729ac316c9b20f71dc4d56160866541cd1d1be6eb2] <==
	I0918 20:37:11.101716       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-622675-m02"
	I0918 20:37:11.125223       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-622675-m03" podCIDRs=["10.244.2.0/24"]
	I0918 20:37:11.125588       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-622675-m03"
	I0918 20:37:11.125718       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-622675-m03"
	I0918 20:37:11.192694       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-622675-m03"
	I0918 20:37:11.472880       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-622675-m03"
	I0918 20:37:11.550300       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-622675-m03"
	I0918 20:37:21.237457       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-622675-m03"
	I0918 20:37:30.484698       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-622675-m03"
	I0918 20:37:30.484877       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-622675-m02"
	I0918 20:37:30.493785       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-622675-m03"
	I0918 20:37:31.405567       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-622675-m03"
	I0918 20:37:35.180459       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-622675-m03"
	I0918 20:37:35.193778       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-622675-m03"
	I0918 20:37:35.759755       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-622675-m03"
	I0918 20:37:35.759812       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-622675-m02"
	I0918 20:38:16.338567       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-scpz2"
	I0918 20:38:16.370749       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-scpz2"
	I0918 20:38:16.370918       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-zn545"
	I0918 20:38:16.413028       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-zn545"
	I0918 20:38:16.428149       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-622675-m02"
	I0918 20:38:16.455116       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-622675-m02"
	I0918 20:38:16.488281       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="13.27785ms"
	I0918 20:38:16.488438       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="49.02µs"
	I0918 20:38:21.527268       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-622675-m02"
	
	
	==> kube-controller-manager [fa1411a6edea077a5bf21467bebe993d54a884c4ef814ac722270c58c4e1027c] <==
	I0918 20:31:38.904755       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-622675-m03"
	I0918 20:31:39.133987       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-622675-m03"
	I0918 20:31:39.134567       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-622675-m02"
	I0918 20:31:40.347987       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-622675-m02"
	I0918 20:31:40.351409       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-622675-m03\" does not exist"
	I0918 20:31:40.373556       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-622675-m03" podCIDRs=["10.244.4.0/24"]
	I0918 20:31:40.373600       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-622675-m03"
	I0918 20:31:40.373626       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-622675-m03"
	I0918 20:31:40.647454       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-622675-m03"
	I0918 20:31:40.973536       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-622675-m03"
	I0918 20:31:44.886297       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-622675-m03"
	I0918 20:31:50.475772       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-622675-m03"
	I0918 20:31:59.973890       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-622675-m02"
	I0918 20:31:59.974648       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-622675-m03"
	I0918 20:31:59.986820       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-622675-m03"
	I0918 20:32:04.875502       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-622675-m03"
	I0918 20:32:44.897211       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-622675-m02"
	I0918 20:32:44.897852       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-622675-m03"
	I0918 20:32:44.906236       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-622675-m02"
	I0918 20:32:44.927905       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-622675-m03"
	I0918 20:32:44.933517       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-622675-m02"
	I0918 20:32:44.974122       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="16.974541ms"
	I0918 20:32:44.975119       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="97.3µs"
	I0918 20:32:50.015536       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-622675-m02"
	I0918 20:33:00.104532       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-622675-m03"
	
	
	==> kube-proxy [d9fa3ac1afc1ca59a2af96e3a25d529be9aefc33f90fed347b098d22cd4f87c6] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0918 20:35:54.059170       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0918 20:35:54.073148       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.106"]
	E0918 20:35:54.073249       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0918 20:35:54.164147       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0918 20:35:54.164245       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0918 20:35:54.164288       1 server_linux.go:169] "Using iptables Proxier"
	I0918 20:35:54.167913       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0918 20:35:54.168379       1 server.go:483] "Version info" version="v1.31.1"
	I0918 20:35:54.168447       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0918 20:35:54.171422       1 config.go:328] "Starting node config controller"
	I0918 20:35:54.172025       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0918 20:35:54.171791       1 config.go:199] "Starting service config controller"
	I0918 20:35:54.173641       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0918 20:35:54.171818       1 config.go:105] "Starting endpoint slice config controller"
	I0918 20:35:54.174169       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0918 20:35:54.273861       1 shared_informer.go:320] Caches are synced for node config
	I0918 20:35:54.274462       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0918 20:35:54.274830       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-proxy [e499594bd4ca1e0ffebd9b5b5240ecdc7a824b9fbd8d21840a8cb7e865d601b0] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0918 20:29:12.063603       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0918 20:29:12.104517       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.106"]
	E0918 20:29:12.111148       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0918 20:29:12.150529       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0918 20:29:12.150649       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0918 20:29:12.150690       1 server_linux.go:169] "Using iptables Proxier"
	I0918 20:29:12.153113       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0918 20:29:12.153467       1 server.go:483] "Version info" version="v1.31.1"
	I0918 20:29:12.153621       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0918 20:29:12.155452       1 config.go:199] "Starting service config controller"
	I0918 20:29:12.155517       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0918 20:29:12.155569       1 config.go:105] "Starting endpoint slice config controller"
	I0918 20:29:12.155586       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0918 20:29:12.156348       1 config.go:328] "Starting node config controller"
	I0918 20:29:12.156389       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0918 20:29:12.255649       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0918 20:29:12.255700       1 shared_informer.go:320] Caches are synced for service config
	I0918 20:29:12.257005       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [aabf741e6c21bcdd33794ad272367e8eee404ab32e56df824e7b682d84ec0cb3] <==
	E0918 20:29:04.032294       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0918 20:29:04.102044       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0918 20:29:04.102168       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0918 20:29:04.128226       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0918 20:29:04.128289       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0918 20:29:04.170599       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0918 20:29:04.170650       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0918 20:29:04.213928       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0918 20:29:04.214108       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0918 20:29:04.261425       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0918 20:29:04.261888       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0918 20:29:04.265441       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0918 20:29:04.265532       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0918 20:29:04.303644       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0918 20:29:04.303742       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0918 20:29:04.303829       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0918 20:29:04.303859       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0918 20:29:04.339813       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0918 20:29:04.339915       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0918 20:29:04.341002       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0918 20:29:04.341071       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0918 20:29:04.504933       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0918 20:29:04.505016       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0918 20:29:07.239317       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0918 20:34:05.512441       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [ff8c8376b280ff49f0e834d1561ecf31b332e70e3c1cd4785429577b30a31a74] <==
	I0918 20:35:50.529046       1 serving.go:386] Generated self-signed cert in-memory
	W0918 20:35:52.910132       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0918 20:35:52.910294       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0918 20:35:52.910326       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0918 20:35:52.910559       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0918 20:35:52.985336       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0918 20:35:52.988024       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0918 20:35:52.992295       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0918 20:35:52.992437       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0918 20:35:52.992483       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0918 20:35:52.992509       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0918 20:35:53.092626       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 18 20:38:48 multinode-622675 kubelet[2906]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 18 20:38:48 multinode-622675 kubelet[2906]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 18 20:38:48 multinode-622675 kubelet[2906]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 18 20:38:48 multinode-622675 kubelet[2906]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 18 20:38:49 multinode-622675 kubelet[2906]: E0918 20:38:49.106589    2906 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726691929105928967,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 20:38:49 multinode-622675 kubelet[2906]: E0918 20:38:49.106621    2906 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726691929105928967,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 20:38:59 multinode-622675 kubelet[2906]: E0918 20:38:59.112942    2906 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726691939109231887,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 20:38:59 multinode-622675 kubelet[2906]: E0918 20:38:59.113015    2906 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726691939109231887,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 20:39:09 multinode-622675 kubelet[2906]: E0918 20:39:09.114549    2906 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726691949114118512,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 20:39:09 multinode-622675 kubelet[2906]: E0918 20:39:09.114579    2906 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726691949114118512,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 20:39:19 multinode-622675 kubelet[2906]: E0918 20:39:19.117414    2906 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726691959116764551,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 20:39:19 multinode-622675 kubelet[2906]: E0918 20:39:19.117446    2906 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726691959116764551,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 20:39:29 multinode-622675 kubelet[2906]: E0918 20:39:29.119729    2906 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726691969118925249,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 20:39:29 multinode-622675 kubelet[2906]: E0918 20:39:29.120412    2906 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726691969118925249,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 20:39:39 multinode-622675 kubelet[2906]: E0918 20:39:39.125530    2906 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726691979124934993,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 20:39:39 multinode-622675 kubelet[2906]: E0918 20:39:39.125569    2906 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726691979124934993,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 20:39:48 multinode-622675 kubelet[2906]: E0918 20:39:48.978311    2906 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 18 20:39:48 multinode-622675 kubelet[2906]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 18 20:39:48 multinode-622675 kubelet[2906]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 18 20:39:48 multinode-622675 kubelet[2906]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 18 20:39:48 multinode-622675 kubelet[2906]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 18 20:39:49 multinode-622675 kubelet[2906]: E0918 20:39:49.127047    2906 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726691989126468527,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 20:39:49 multinode-622675 kubelet[2906]: E0918 20:39:49.127074    2906 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726691989126468527,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 20:39:59 multinode-622675 kubelet[2906]: E0918 20:39:59.129541    2906 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726691999129207833,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 20:39:59 multinode-622675 kubelet[2906]: E0918 20:39:59.129579    2906 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726691999129207833,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0918 20:40:00.009625   46694 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19667-7671/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-622675 -n multinode-622675
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-622675 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (144.64s)

x
+
TestPreload (170.73s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-684593 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E0918 20:45:01.288869   14878 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/functional-790989/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-684593 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (1m29.29790385s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-684593 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-684593 image pull gcr.io/k8s-minikube/busybox: (3.407526561s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-684593
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-684593: (7.289070099s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-684593 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
E0918 20:46:12.175804   14878 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/addons-815929/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-684593 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (1m7.827492635s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-684593 image list
preload_test.go:76: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.7
	registry.k8s.io/kube-scheduler:v1.24.4
	registry.k8s.io/kube-proxy:v1.24.4
	registry.k8s.io/kube-controller-manager:v1.24.4
	registry.k8s.io/kube-apiserver:v1.24.4
	registry.k8s.io/etcd:3.5.3-0
	registry.k8s.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/kube-scheduler:v1.24.4
	k8s.gcr.io/kube-proxy:v1.24.4
	k8s.gcr.io/kube-controller-manager:v1.24.4
	k8s.gcr.io/kube-apiserver:v1.24.4
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20220726-ed811e41

-- /stdout --
panic.go:629: *** TestPreload FAILED at 2024-09-18 20:46:39.921277805 +0000 UTC m=+4122.691619038
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-684593 -n test-preload-684593
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-684593 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p test-preload-684593 logs -n 25: (1.093035028s)
helpers_test.go:252: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-622675 ssh -n                                                                 | multinode-622675     | jenkins | v1.34.0 | 18 Sep 24 20:31 UTC | 18 Sep 24 20:31 UTC |
	|         | multinode-622675-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-622675 ssh -n multinode-622675 sudo cat                                       | multinode-622675     | jenkins | v1.34.0 | 18 Sep 24 20:31 UTC | 18 Sep 24 20:31 UTC |
	|         | /home/docker/cp-test_multinode-622675-m03_multinode-622675.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-622675 cp multinode-622675-m03:/home/docker/cp-test.txt                       | multinode-622675     | jenkins | v1.34.0 | 18 Sep 24 20:31 UTC | 18 Sep 24 20:31 UTC |
	|         | multinode-622675-m02:/home/docker/cp-test_multinode-622675-m03_multinode-622675-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-622675 ssh -n                                                                 | multinode-622675     | jenkins | v1.34.0 | 18 Sep 24 20:31 UTC | 18 Sep 24 20:31 UTC |
	|         | multinode-622675-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-622675 ssh -n multinode-622675-m02 sudo cat                                   | multinode-622675     | jenkins | v1.34.0 | 18 Sep 24 20:31 UTC | 18 Sep 24 20:31 UTC |
	|         | /home/docker/cp-test_multinode-622675-m03_multinode-622675-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-622675 node stop m03                                                          | multinode-622675     | jenkins | v1.34.0 | 18 Sep 24 20:31 UTC | 18 Sep 24 20:31 UTC |
	| node    | multinode-622675 node start                                                             | multinode-622675     | jenkins | v1.34.0 | 18 Sep 24 20:31 UTC | 18 Sep 24 20:32 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                      |         |         |                     |                     |
	| node    | list -p multinode-622675                                                                | multinode-622675     | jenkins | v1.34.0 | 18 Sep 24 20:32 UTC |                     |
	| stop    | -p multinode-622675                                                                     | multinode-622675     | jenkins | v1.34.0 | 18 Sep 24 20:32 UTC |                     |
	| start   | -p multinode-622675                                                                     | multinode-622675     | jenkins | v1.34.0 | 18 Sep 24 20:34 UTC | 18 Sep 24 20:37 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-622675                                                                | multinode-622675     | jenkins | v1.34.0 | 18 Sep 24 20:37 UTC |                     |
	| node    | multinode-622675 node delete                                                            | multinode-622675     | jenkins | v1.34.0 | 18 Sep 24 20:37 UTC | 18 Sep 24 20:37 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-622675 stop                                                                   | multinode-622675     | jenkins | v1.34.0 | 18 Sep 24 20:37 UTC |                     |
	| start   | -p multinode-622675                                                                     | multinode-622675     | jenkins | v1.34.0 | 18 Sep 24 20:40 UTC | 18 Sep 24 20:43 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | list -p multinode-622675                                                                | multinode-622675     | jenkins | v1.34.0 | 18 Sep 24 20:43 UTC |                     |
	| start   | -p multinode-622675-m02                                                                 | multinode-622675-m02 | jenkins | v1.34.0 | 18 Sep 24 20:43 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| start   | -p multinode-622675-m03                                                                 | multinode-622675-m03 | jenkins | v1.34.0 | 18 Sep 24 20:43 UTC | 18 Sep 24 20:43 UTC |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | add -p multinode-622675                                                                 | multinode-622675     | jenkins | v1.34.0 | 18 Sep 24 20:43 UTC |                     |
	| delete  | -p multinode-622675-m03                                                                 | multinode-622675-m03 | jenkins | v1.34.0 | 18 Sep 24 20:43 UTC | 18 Sep 24 20:43 UTC |
	| delete  | -p multinode-622675                                                                     | multinode-622675     | jenkins | v1.34.0 | 18 Sep 24 20:43 UTC | 18 Sep 24 20:43 UTC |
	| start   | -p test-preload-684593                                                                  | test-preload-684593  | jenkins | v1.34.0 | 18 Sep 24 20:43 UTC | 18 Sep 24 20:45 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                               |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| image   | test-preload-684593 image pull                                                          | test-preload-684593  | jenkins | v1.34.0 | 18 Sep 24 20:45 UTC | 18 Sep 24 20:45 UTC |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| stop    | -p test-preload-684593                                                                  | test-preload-684593  | jenkins | v1.34.0 | 18 Sep 24 20:45 UTC | 18 Sep 24 20:45 UTC |
	| start   | -p test-preload-684593                                                                  | test-preload-684593  | jenkins | v1.34.0 | 18 Sep 24 20:45 UTC | 18 Sep 24 20:46 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| image   | test-preload-684593 image list                                                          | test-preload-684593  | jenkins | v1.34.0 | 18 Sep 24 20:46 UTC | 18 Sep 24 20:46 UTC |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/18 20:45:31
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0918 20:45:31.912523   48996 out.go:345] Setting OutFile to fd 1 ...
	I0918 20:45:31.912760   48996 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 20:45:31.912768   48996 out.go:358] Setting ErrFile to fd 2...
	I0918 20:45:31.912773   48996 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 20:45:31.912969   48996 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19667-7671/.minikube/bin
	I0918 20:45:31.913490   48996 out.go:352] Setting JSON to false
	I0918 20:45:31.914380   48996 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5276,"bootTime":1726687056,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0918 20:45:31.914478   48996 start.go:139] virtualization: kvm guest
	I0918 20:45:31.916600   48996 out.go:177] * [test-preload-684593] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0918 20:45:31.918017   48996 out.go:177]   - MINIKUBE_LOCATION=19667
	I0918 20:45:31.918018   48996 notify.go:220] Checking for updates...
	I0918 20:45:31.920643   48996 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 20:45:31.921997   48996 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19667-7671/kubeconfig
	I0918 20:45:31.923458   48996 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19667-7671/.minikube
	I0918 20:45:31.924777   48996 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0918 20:45:31.926060   48996 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0918 20:45:31.927797   48996 config.go:182] Loaded profile config "test-preload-684593": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0918 20:45:31.928254   48996 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 20:45:31.928315   48996 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 20:45:31.943372   48996 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45063
	I0918 20:45:31.943887   48996 main.go:141] libmachine: () Calling .GetVersion
	I0918 20:45:31.944477   48996 main.go:141] libmachine: Using API Version  1
	I0918 20:45:31.944500   48996 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 20:45:31.944835   48996 main.go:141] libmachine: () Calling .GetMachineName
	I0918 20:45:31.945020   48996 main.go:141] libmachine: (test-preload-684593) Calling .DriverName
	I0918 20:45:31.947008   48996 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0918 20:45:31.948434   48996 driver.go:394] Setting default libvirt URI to qemu:///system
	I0918 20:45:31.948853   48996 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 20:45:31.948929   48996 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 20:45:31.964088   48996 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33567
	I0918 20:45:31.964638   48996 main.go:141] libmachine: () Calling .GetVersion
	I0918 20:45:31.965289   48996 main.go:141] libmachine: Using API Version  1
	I0918 20:45:31.965339   48996 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 20:45:31.965744   48996 main.go:141] libmachine: () Calling .GetMachineName
	I0918 20:45:31.965939   48996 main.go:141] libmachine: (test-preload-684593) Calling .DriverName
	I0918 20:45:32.002469   48996 out.go:177] * Using the kvm2 driver based on existing profile
	I0918 20:45:32.003639   48996 start.go:297] selected driver: kvm2
	I0918 20:45:32.003653   48996 start.go:901] validating driver "kvm2" against &{Name:test-preload-684593 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-684593 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.171 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 20:45:32.003760   48996 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 20:45:32.004460   48996 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 20:45:32.004533   48996 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19667-7671/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0918 20:45:32.019908   48996 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0918 20:45:32.020266   48996 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0918 20:45:32.020303   48996 cni.go:84] Creating CNI manager for ""
	I0918 20:45:32.020345   48996 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0918 20:45:32.020407   48996 start.go:340] cluster config:
	{Name:test-preload-684593 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-684593 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.171 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 20:45:32.020512   48996 iso.go:125] acquiring lock: {Name:mkbcf4d84f3091567a39802cd5e773cb10b8c545 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 20:45:32.022540   48996 out.go:177] * Starting "test-preload-684593" primary control-plane node in "test-preload-684593" cluster
	I0918 20:45:32.023721   48996 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0918 20:45:32.118783   48996 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0918 20:45:32.118809   48996 cache.go:56] Caching tarball of preloaded images
	I0918 20:45:32.118951   48996 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0918 20:45:32.120945   48996 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I0918 20:45:32.122370   48996 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0918 20:45:32.229122   48996 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 -> /home/jenkins/minikube-integration/19667-7671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0918 20:45:44.018457   48996 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0918 20:45:44.018552   48996 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19667-7671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0918 20:45:44.860582   48996 cache.go:59] Finished verifying existence of preloaded tar for v1.24.4 on crio
	I0918 20:45:44.860707   48996 profile.go:143] Saving config to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/test-preload-684593/config.json ...
	I0918 20:45:44.860940   48996 start.go:360] acquireMachinesLock for test-preload-684593: {Name:mk1b0d68a0b7d9d4bc8204d5d81cb9eb77222526 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 20:45:44.860998   48996 start.go:364] duration metric: took 37.378µs to acquireMachinesLock for "test-preload-684593"
	I0918 20:45:44.861013   48996 start.go:96] Skipping create...Using existing machine configuration
	I0918 20:45:44.861018   48996 fix.go:54] fixHost starting: 
	I0918 20:45:44.861297   48996 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 20:45:44.861333   48996 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 20:45:44.876520   48996 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39395
	I0918 20:45:44.877021   48996 main.go:141] libmachine: () Calling .GetVersion
	I0918 20:45:44.877510   48996 main.go:141] libmachine: Using API Version  1
	I0918 20:45:44.877530   48996 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 20:45:44.877819   48996 main.go:141] libmachine: () Calling .GetMachineName
	I0918 20:45:44.878012   48996 main.go:141] libmachine: (test-preload-684593) Calling .DriverName
	I0918 20:45:44.878195   48996 main.go:141] libmachine: (test-preload-684593) Calling .GetState
	I0918 20:45:44.879975   48996 fix.go:112] recreateIfNeeded on test-preload-684593: state=Stopped err=<nil>
	I0918 20:45:44.880009   48996 main.go:141] libmachine: (test-preload-684593) Calling .DriverName
	W0918 20:45:44.880184   48996 fix.go:138] unexpected machine state, will restart: <nil>
	I0918 20:45:44.882106   48996 out.go:177] * Restarting existing kvm2 VM for "test-preload-684593" ...
	I0918 20:45:44.883403   48996 main.go:141] libmachine: (test-preload-684593) Calling .Start
	I0918 20:45:44.883594   48996 main.go:141] libmachine: (test-preload-684593) Ensuring networks are active...
	I0918 20:45:44.884427   48996 main.go:141] libmachine: (test-preload-684593) Ensuring network default is active
	I0918 20:45:44.884784   48996 main.go:141] libmachine: (test-preload-684593) Ensuring network mk-test-preload-684593 is active
	I0918 20:45:44.885093   48996 main.go:141] libmachine: (test-preload-684593) Getting domain xml...
	I0918 20:45:44.885802   48996 main.go:141] libmachine: (test-preload-684593) Creating domain...
	I0918 20:45:46.127757   48996 main.go:141] libmachine: (test-preload-684593) Waiting to get IP...
	I0918 20:45:46.128740   48996 main.go:141] libmachine: (test-preload-684593) DBG | domain test-preload-684593 has defined MAC address 52:54:00:97:40:d0 in network mk-test-preload-684593
	I0918 20:45:46.129206   48996 main.go:141] libmachine: (test-preload-684593) DBG | unable to find current IP address of domain test-preload-684593 in network mk-test-preload-684593
	I0918 20:45:46.129271   48996 main.go:141] libmachine: (test-preload-684593) DBG | I0918 20:45:46.129187   49066 retry.go:31] will retry after 285.618164ms: waiting for machine to come up
	I0918 20:45:46.416921   48996 main.go:141] libmachine: (test-preload-684593) DBG | domain test-preload-684593 has defined MAC address 52:54:00:97:40:d0 in network mk-test-preload-684593
	I0918 20:45:46.417310   48996 main.go:141] libmachine: (test-preload-684593) DBG | unable to find current IP address of domain test-preload-684593 in network mk-test-preload-684593
	I0918 20:45:46.417339   48996 main.go:141] libmachine: (test-preload-684593) DBG | I0918 20:45:46.417260   49066 retry.go:31] will retry after 379.969675ms: waiting for machine to come up
	I0918 20:45:46.798946   48996 main.go:141] libmachine: (test-preload-684593) DBG | domain test-preload-684593 has defined MAC address 52:54:00:97:40:d0 in network mk-test-preload-684593
	I0918 20:45:46.799363   48996 main.go:141] libmachine: (test-preload-684593) DBG | unable to find current IP address of domain test-preload-684593 in network mk-test-preload-684593
	I0918 20:45:46.799395   48996 main.go:141] libmachine: (test-preload-684593) DBG | I0918 20:45:46.799305   49066 retry.go:31] will retry after 418.207077ms: waiting for machine to come up
	I0918 20:45:47.218794   48996 main.go:141] libmachine: (test-preload-684593) DBG | domain test-preload-684593 has defined MAC address 52:54:00:97:40:d0 in network mk-test-preload-684593
	I0918 20:45:47.219173   48996 main.go:141] libmachine: (test-preload-684593) DBG | unable to find current IP address of domain test-preload-684593 in network mk-test-preload-684593
	I0918 20:45:47.219199   48996 main.go:141] libmachine: (test-preload-684593) DBG | I0918 20:45:47.219133   49066 retry.go:31] will retry after 593.401711ms: waiting for machine to come up
	I0918 20:45:47.813937   48996 main.go:141] libmachine: (test-preload-684593) DBG | domain test-preload-684593 has defined MAC address 52:54:00:97:40:d0 in network mk-test-preload-684593
	I0918 20:45:47.814341   48996 main.go:141] libmachine: (test-preload-684593) DBG | unable to find current IP address of domain test-preload-684593 in network mk-test-preload-684593
	I0918 20:45:47.814363   48996 main.go:141] libmachine: (test-preload-684593) DBG | I0918 20:45:47.814315   49066 retry.go:31] will retry after 652.456966ms: waiting for machine to come up
	I0918 20:45:48.468147   48996 main.go:141] libmachine: (test-preload-684593) DBG | domain test-preload-684593 has defined MAC address 52:54:00:97:40:d0 in network mk-test-preload-684593
	I0918 20:45:48.468539   48996 main.go:141] libmachine: (test-preload-684593) DBG | unable to find current IP address of domain test-preload-684593 in network mk-test-preload-684593
	I0918 20:45:48.468566   48996 main.go:141] libmachine: (test-preload-684593) DBG | I0918 20:45:48.468507   49066 retry.go:31] will retry after 723.86455ms: waiting for machine to come up
	I0918 20:45:49.194446   48996 main.go:141] libmachine: (test-preload-684593) DBG | domain test-preload-684593 has defined MAC address 52:54:00:97:40:d0 in network mk-test-preload-684593
	I0918 20:45:49.194846   48996 main.go:141] libmachine: (test-preload-684593) DBG | unable to find current IP address of domain test-preload-684593 in network mk-test-preload-684593
	I0918 20:45:49.194877   48996 main.go:141] libmachine: (test-preload-684593) DBG | I0918 20:45:49.194790   49066 retry.go:31] will retry after 1.008696926s: waiting for machine to come up
	I0918 20:45:50.205321   48996 main.go:141] libmachine: (test-preload-684593) DBG | domain test-preload-684593 has defined MAC address 52:54:00:97:40:d0 in network mk-test-preload-684593
	I0918 20:45:50.205666   48996 main.go:141] libmachine: (test-preload-684593) DBG | unable to find current IP address of domain test-preload-684593 in network mk-test-preload-684593
	I0918 20:45:50.205720   48996 main.go:141] libmachine: (test-preload-684593) DBG | I0918 20:45:50.205609   49066 retry.go:31] will retry after 1.121963273s: waiting for machine to come up
	I0918 20:45:51.329288   48996 main.go:141] libmachine: (test-preload-684593) DBG | domain test-preload-684593 has defined MAC address 52:54:00:97:40:d0 in network mk-test-preload-684593
	I0918 20:45:51.329624   48996 main.go:141] libmachine: (test-preload-684593) DBG | unable to find current IP address of domain test-preload-684593 in network mk-test-preload-684593
	I0918 20:45:51.329650   48996 main.go:141] libmachine: (test-preload-684593) DBG | I0918 20:45:51.329591   49066 retry.go:31] will retry after 1.268408162s: waiting for machine to come up
	I0918 20:45:52.599261   48996 main.go:141] libmachine: (test-preload-684593) DBG | domain test-preload-684593 has defined MAC address 52:54:00:97:40:d0 in network mk-test-preload-684593
	I0918 20:45:52.599679   48996 main.go:141] libmachine: (test-preload-684593) DBG | unable to find current IP address of domain test-preload-684593 in network mk-test-preload-684593
	I0918 20:45:52.599707   48996 main.go:141] libmachine: (test-preload-684593) DBG | I0918 20:45:52.599624   49066 retry.go:31] will retry after 2.130416166s: waiting for machine to come up
	I0918 20:45:54.733230   48996 main.go:141] libmachine: (test-preload-684593) DBG | domain test-preload-684593 has defined MAC address 52:54:00:97:40:d0 in network mk-test-preload-684593
	I0918 20:45:54.733636   48996 main.go:141] libmachine: (test-preload-684593) DBG | unable to find current IP address of domain test-preload-684593 in network mk-test-preload-684593
	I0918 20:45:54.733657   48996 main.go:141] libmachine: (test-preload-684593) DBG | I0918 20:45:54.733587   49066 retry.go:31] will retry after 2.881257797s: waiting for machine to come up
	I0918 20:45:57.618138   48996 main.go:141] libmachine: (test-preload-684593) DBG | domain test-preload-684593 has defined MAC address 52:54:00:97:40:d0 in network mk-test-preload-684593
	I0918 20:45:57.618481   48996 main.go:141] libmachine: (test-preload-684593) DBG | unable to find current IP address of domain test-preload-684593 in network mk-test-preload-684593
	I0918 20:45:57.618508   48996 main.go:141] libmachine: (test-preload-684593) DBG | I0918 20:45:57.618407   49066 retry.go:31] will retry after 2.29257761s: waiting for machine to come up
	I0918 20:45:59.912122   48996 main.go:141] libmachine: (test-preload-684593) DBG | domain test-preload-684593 has defined MAC address 52:54:00:97:40:d0 in network mk-test-preload-684593
	I0918 20:45:59.912507   48996 main.go:141] libmachine: (test-preload-684593) DBG | unable to find current IP address of domain test-preload-684593 in network mk-test-preload-684593
	I0918 20:45:59.912532   48996 main.go:141] libmachine: (test-preload-684593) DBG | I0918 20:45:59.912475   49066 retry.go:31] will retry after 4.382455329s: waiting for machine to come up
	I0918 20:46:04.297058   48996 main.go:141] libmachine: (test-preload-684593) DBG | domain test-preload-684593 has defined MAC address 52:54:00:97:40:d0 in network mk-test-preload-684593
	I0918 20:46:04.297643   48996 main.go:141] libmachine: (test-preload-684593) Found IP for machine: 192.168.39.171
	I0918 20:46:04.297665   48996 main.go:141] libmachine: (test-preload-684593) Reserving static IP address...
	I0918 20:46:04.297677   48996 main.go:141] libmachine: (test-preload-684593) DBG | domain test-preload-684593 has current primary IP address 192.168.39.171 and MAC address 52:54:00:97:40:d0 in network mk-test-preload-684593
	I0918 20:46:04.298184   48996 main.go:141] libmachine: (test-preload-684593) Reserved static IP address: 192.168.39.171
	I0918 20:46:04.298225   48996 main.go:141] libmachine: (test-preload-684593) DBG | found host DHCP lease matching {name: "test-preload-684593", mac: "52:54:00:97:40:d0", ip: "192.168.39.171"} in network mk-test-preload-684593: {Iface:virbr1 ExpiryTime:2024-09-18 21:45:55 +0000 UTC Type:0 Mac:52:54:00:97:40:d0 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:test-preload-684593 Clientid:01:52:54:00:97:40:d0}
	I0918 20:46:04.298236   48996 main.go:141] libmachine: (test-preload-684593) Waiting for SSH to be available...
	I0918 20:46:04.298255   48996 main.go:141] libmachine: (test-preload-684593) DBG | skip adding static IP to network mk-test-preload-684593 - found existing host DHCP lease matching {name: "test-preload-684593", mac: "52:54:00:97:40:d0", ip: "192.168.39.171"}
	I0918 20:46:04.298267   48996 main.go:141] libmachine: (test-preload-684593) DBG | Getting to WaitForSSH function...
	I0918 20:46:04.300301   48996 main.go:141] libmachine: (test-preload-684593) DBG | domain test-preload-684593 has defined MAC address 52:54:00:97:40:d0 in network mk-test-preload-684593
	I0918 20:46:04.300771   48996 main.go:141] libmachine: (test-preload-684593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:40:d0", ip: ""} in network mk-test-preload-684593: {Iface:virbr1 ExpiryTime:2024-09-18 21:45:55 +0000 UTC Type:0 Mac:52:54:00:97:40:d0 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:test-preload-684593 Clientid:01:52:54:00:97:40:d0}
	I0918 20:46:04.300799   48996 main.go:141] libmachine: (test-preload-684593) DBG | domain test-preload-684593 has defined IP address 192.168.39.171 and MAC address 52:54:00:97:40:d0 in network mk-test-preload-684593
	I0918 20:46:04.300932   48996 main.go:141] libmachine: (test-preload-684593) DBG | Using SSH client type: external
	I0918 20:46:04.300960   48996 main.go:141] libmachine: (test-preload-684593) DBG | Using SSH private key: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/test-preload-684593/id_rsa (-rw-------)
	I0918 20:46:04.300991   48996 main.go:141] libmachine: (test-preload-684593) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.171 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19667-7671/.minikube/machines/test-preload-684593/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0918 20:46:04.301006   48996 main.go:141] libmachine: (test-preload-684593) DBG | About to run SSH command:
	I0918 20:46:04.301020   48996 main.go:141] libmachine: (test-preload-684593) DBG | exit 0
	I0918 20:46:04.428158   48996 main.go:141] libmachine: (test-preload-684593) DBG | SSH cmd err, output: <nil>: 
	I0918 20:46:04.428589   48996 main.go:141] libmachine: (test-preload-684593) Calling .GetConfigRaw
	I0918 20:46:04.476374   48996 main.go:141] libmachine: (test-preload-684593) Calling .GetIP
	I0918 20:46:04.479629   48996 main.go:141] libmachine: (test-preload-684593) DBG | domain test-preload-684593 has defined MAC address 52:54:00:97:40:d0 in network mk-test-preload-684593
	I0918 20:46:04.479983   48996 main.go:141] libmachine: (test-preload-684593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:40:d0", ip: ""} in network mk-test-preload-684593: {Iface:virbr1 ExpiryTime:2024-09-18 21:45:55 +0000 UTC Type:0 Mac:52:54:00:97:40:d0 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:test-preload-684593 Clientid:01:52:54:00:97:40:d0}
	I0918 20:46:04.480028   48996 main.go:141] libmachine: (test-preload-684593) DBG | domain test-preload-684593 has defined IP address 192.168.39.171 and MAC address 52:54:00:97:40:d0 in network mk-test-preload-684593
	I0918 20:46:04.480282   48996 profile.go:143] Saving config to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/test-preload-684593/config.json ...
	I0918 20:46:04.539409   48996 machine.go:93] provisionDockerMachine start ...
	I0918 20:46:04.539448   48996 main.go:141] libmachine: (test-preload-684593) Calling .DriverName
	I0918 20:46:04.539798   48996 main.go:141] libmachine: (test-preload-684593) Calling .GetSSHHostname
	I0918 20:46:04.542845   48996 main.go:141] libmachine: (test-preload-684593) DBG | domain test-preload-684593 has defined MAC address 52:54:00:97:40:d0 in network mk-test-preload-684593
	I0918 20:46:04.543255   48996 main.go:141] libmachine: (test-preload-684593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:40:d0", ip: ""} in network mk-test-preload-684593: {Iface:virbr1 ExpiryTime:2024-09-18 21:45:55 +0000 UTC Type:0 Mac:52:54:00:97:40:d0 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:test-preload-684593 Clientid:01:52:54:00:97:40:d0}
	I0918 20:46:04.543285   48996 main.go:141] libmachine: (test-preload-684593) DBG | domain test-preload-684593 has defined IP address 192.168.39.171 and MAC address 52:54:00:97:40:d0 in network mk-test-preload-684593
	I0918 20:46:04.543463   48996 main.go:141] libmachine: (test-preload-684593) Calling .GetSSHPort
	I0918 20:46:04.543626   48996 main.go:141] libmachine: (test-preload-684593) Calling .GetSSHKeyPath
	I0918 20:46:04.543801   48996 main.go:141] libmachine: (test-preload-684593) Calling .GetSSHKeyPath
	I0918 20:46:04.543932   48996 main.go:141] libmachine: (test-preload-684593) Calling .GetSSHUsername
	I0918 20:46:04.544095   48996 main.go:141] libmachine: Using SSH client type: native
	I0918 20:46:04.544300   48996 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.171 22 <nil> <nil>}
	I0918 20:46:04.544313   48996 main.go:141] libmachine: About to run SSH command:
	hostname
	I0918 20:46:04.656463   48996 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0918 20:46:04.656500   48996 main.go:141] libmachine: (test-preload-684593) Calling .GetMachineName
	I0918 20:46:04.656805   48996 buildroot.go:166] provisioning hostname "test-preload-684593"
	I0918 20:46:04.656829   48996 main.go:141] libmachine: (test-preload-684593) Calling .GetMachineName
	I0918 20:46:04.657007   48996 main.go:141] libmachine: (test-preload-684593) Calling .GetSSHHostname
	I0918 20:46:04.659878   48996 main.go:141] libmachine: (test-preload-684593) DBG | domain test-preload-684593 has defined MAC address 52:54:00:97:40:d0 in network mk-test-preload-684593
	I0918 20:46:04.660322   48996 main.go:141] libmachine: (test-preload-684593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:40:d0", ip: ""} in network mk-test-preload-684593: {Iface:virbr1 ExpiryTime:2024-09-18 21:45:55 +0000 UTC Type:0 Mac:52:54:00:97:40:d0 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:test-preload-684593 Clientid:01:52:54:00:97:40:d0}
	I0918 20:46:04.660357   48996 main.go:141] libmachine: (test-preload-684593) DBG | domain test-preload-684593 has defined IP address 192.168.39.171 and MAC address 52:54:00:97:40:d0 in network mk-test-preload-684593
	I0918 20:46:04.660530   48996 main.go:141] libmachine: (test-preload-684593) Calling .GetSSHPort
	I0918 20:46:04.660730   48996 main.go:141] libmachine: (test-preload-684593) Calling .GetSSHKeyPath
	I0918 20:46:04.660883   48996 main.go:141] libmachine: (test-preload-684593) Calling .GetSSHKeyPath
	I0918 20:46:04.661005   48996 main.go:141] libmachine: (test-preload-684593) Calling .GetSSHUsername
	I0918 20:46:04.661171   48996 main.go:141] libmachine: Using SSH client type: native
	I0918 20:46:04.661390   48996 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.171 22 <nil> <nil>}
	I0918 20:46:04.661405   48996 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-684593 && echo "test-preload-684593" | sudo tee /etc/hostname
	I0918 20:46:04.788407   48996 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-684593
	
	I0918 20:46:04.788438   48996 main.go:141] libmachine: (test-preload-684593) Calling .GetSSHHostname
	I0918 20:46:04.791178   48996 main.go:141] libmachine: (test-preload-684593) DBG | domain test-preload-684593 has defined MAC address 52:54:00:97:40:d0 in network mk-test-preload-684593
	I0918 20:46:04.791482   48996 main.go:141] libmachine: (test-preload-684593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:40:d0", ip: ""} in network mk-test-preload-684593: {Iface:virbr1 ExpiryTime:2024-09-18 21:45:55 +0000 UTC Type:0 Mac:52:54:00:97:40:d0 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:test-preload-684593 Clientid:01:52:54:00:97:40:d0}
	I0918 20:46:04.791505   48996 main.go:141] libmachine: (test-preload-684593) DBG | domain test-preload-684593 has defined IP address 192.168.39.171 and MAC address 52:54:00:97:40:d0 in network mk-test-preload-684593
	I0918 20:46:04.791691   48996 main.go:141] libmachine: (test-preload-684593) Calling .GetSSHPort
	I0918 20:46:04.791895   48996 main.go:141] libmachine: (test-preload-684593) Calling .GetSSHKeyPath
	I0918 20:46:04.792070   48996 main.go:141] libmachine: (test-preload-684593) Calling .GetSSHKeyPath
	I0918 20:46:04.792244   48996 main.go:141] libmachine: (test-preload-684593) Calling .GetSSHUsername
	I0918 20:46:04.792425   48996 main.go:141] libmachine: Using SSH client type: native
	I0918 20:46:04.792593   48996 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.171 22 <nil> <nil>}
	I0918 20:46:04.792609   48996 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-684593' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-684593/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-684593' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0918 20:46:04.915428   48996 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0918 20:46:04.915462   48996 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19667-7671/.minikube CaCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19667-7671/.minikube}
	I0918 20:46:04.915502   48996 buildroot.go:174] setting up certificates
	I0918 20:46:04.915521   48996 provision.go:84] configureAuth start
	I0918 20:46:04.915532   48996 main.go:141] libmachine: (test-preload-684593) Calling .GetMachineName
	I0918 20:46:04.915750   48996 main.go:141] libmachine: (test-preload-684593) Calling .GetIP
	I0918 20:46:04.918594   48996 main.go:141] libmachine: (test-preload-684593) DBG | domain test-preload-684593 has defined MAC address 52:54:00:97:40:d0 in network mk-test-preload-684593
	I0918 20:46:04.918980   48996 main.go:141] libmachine: (test-preload-684593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:40:d0", ip: ""} in network mk-test-preload-684593: {Iface:virbr1 ExpiryTime:2024-09-18 21:45:55 +0000 UTC Type:0 Mac:52:54:00:97:40:d0 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:test-preload-684593 Clientid:01:52:54:00:97:40:d0}
	I0918 20:46:04.919005   48996 main.go:141] libmachine: (test-preload-684593) DBG | domain test-preload-684593 has defined IP address 192.168.39.171 and MAC address 52:54:00:97:40:d0 in network mk-test-preload-684593
	I0918 20:46:04.919198   48996 main.go:141] libmachine: (test-preload-684593) Calling .GetSSHHostname
	I0918 20:46:04.921452   48996 main.go:141] libmachine: (test-preload-684593) DBG | domain test-preload-684593 has defined MAC address 52:54:00:97:40:d0 in network mk-test-preload-684593
	I0918 20:46:04.921795   48996 main.go:141] libmachine: (test-preload-684593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:40:d0", ip: ""} in network mk-test-preload-684593: {Iface:virbr1 ExpiryTime:2024-09-18 21:45:55 +0000 UTC Type:0 Mac:52:54:00:97:40:d0 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:test-preload-684593 Clientid:01:52:54:00:97:40:d0}
	I0918 20:46:04.921825   48996 main.go:141] libmachine: (test-preload-684593) DBG | domain test-preload-684593 has defined IP address 192.168.39.171 and MAC address 52:54:00:97:40:d0 in network mk-test-preload-684593
	I0918 20:46:04.921953   48996 provision.go:143] copyHostCerts
	I0918 20:46:04.922018   48996 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem, removing ...
	I0918 20:46:04.922029   48996 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem
	I0918 20:46:04.922110   48996 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem (1123 bytes)
	I0918 20:46:04.922195   48996 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem, removing ...
	I0918 20:46:04.922203   48996 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem
	I0918 20:46:04.922229   48996 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem (1679 bytes)
	I0918 20:46:04.922283   48996 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem, removing ...
	I0918 20:46:04.922291   48996 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem
	I0918 20:46:04.922318   48996 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem (1082 bytes)
	I0918 20:46:04.922369   48996 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem org=jenkins.test-preload-684593 san=[127.0.0.1 192.168.39.171 localhost minikube test-preload-684593]
	I0918 20:46:05.131104   48996 provision.go:177] copyRemoteCerts
	I0918 20:46:05.131198   48996 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0918 20:46:05.131226   48996 main.go:141] libmachine: (test-preload-684593) Calling .GetSSHHostname
	I0918 20:46:05.134050   48996 main.go:141] libmachine: (test-preload-684593) DBG | domain test-preload-684593 has defined MAC address 52:54:00:97:40:d0 in network mk-test-preload-684593
	I0918 20:46:05.134529   48996 main.go:141] libmachine: (test-preload-684593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:40:d0", ip: ""} in network mk-test-preload-684593: {Iface:virbr1 ExpiryTime:2024-09-18 21:45:55 +0000 UTC Type:0 Mac:52:54:00:97:40:d0 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:test-preload-684593 Clientid:01:52:54:00:97:40:d0}
	I0918 20:46:05.134552   48996 main.go:141] libmachine: (test-preload-684593) DBG | domain test-preload-684593 has defined IP address 192.168.39.171 and MAC address 52:54:00:97:40:d0 in network mk-test-preload-684593
	I0918 20:46:05.134741   48996 main.go:141] libmachine: (test-preload-684593) Calling .GetSSHPort
	I0918 20:46:05.134990   48996 main.go:141] libmachine: (test-preload-684593) Calling .GetSSHKeyPath
	I0918 20:46:05.135189   48996 main.go:141] libmachine: (test-preload-684593) Calling .GetSSHUsername
	I0918 20:46:05.135347   48996 sshutil.go:53] new ssh client: &{IP:192.168.39.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/test-preload-684593/id_rsa Username:docker}
	I0918 20:46:05.221906   48996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0918 20:46:05.246350   48996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0918 20:46:05.270393   48996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0918 20:46:05.293518   48996 provision.go:87] duration metric: took 377.985579ms to configureAuth
	I0918 20:46:05.293545   48996 buildroot.go:189] setting minikube options for container-runtime
	I0918 20:46:05.293708   48996 config.go:182] Loaded profile config "test-preload-684593": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0918 20:46:05.293775   48996 main.go:141] libmachine: (test-preload-684593) Calling .GetSSHHostname
	I0918 20:46:05.296241   48996 main.go:141] libmachine: (test-preload-684593) DBG | domain test-preload-684593 has defined MAC address 52:54:00:97:40:d0 in network mk-test-preload-684593
	I0918 20:46:05.296664   48996 main.go:141] libmachine: (test-preload-684593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:40:d0", ip: ""} in network mk-test-preload-684593: {Iface:virbr1 ExpiryTime:2024-09-18 21:45:55 +0000 UTC Type:0 Mac:52:54:00:97:40:d0 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:test-preload-684593 Clientid:01:52:54:00:97:40:d0}
	I0918 20:46:05.296693   48996 main.go:141] libmachine: (test-preload-684593) DBG | domain test-preload-684593 has defined IP address 192.168.39.171 and MAC address 52:54:00:97:40:d0 in network mk-test-preload-684593
	I0918 20:46:05.296916   48996 main.go:141] libmachine: (test-preload-684593) Calling .GetSSHPort
	I0918 20:46:05.297096   48996 main.go:141] libmachine: (test-preload-684593) Calling .GetSSHKeyPath
	I0918 20:46:05.297238   48996 main.go:141] libmachine: (test-preload-684593) Calling .GetSSHKeyPath
	I0918 20:46:05.297388   48996 main.go:141] libmachine: (test-preload-684593) Calling .GetSSHUsername
	I0918 20:46:05.297541   48996 main.go:141] libmachine: Using SSH client type: native
	I0918 20:46:05.297699   48996 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.171 22 <nil> <nil>}
	I0918 20:46:05.297713   48996 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0918 20:46:05.526530   48996 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0918 20:46:05.526561   48996 machine.go:96] duration metric: took 987.125653ms to provisionDockerMachine
	I0918 20:46:05.526584   48996 start.go:293] postStartSetup for "test-preload-684593" (driver="kvm2")
	I0918 20:46:05.526598   48996 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0918 20:46:05.526623   48996 main.go:141] libmachine: (test-preload-684593) Calling .DriverName
	I0918 20:46:05.527031   48996 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0918 20:46:05.527153   48996 main.go:141] libmachine: (test-preload-684593) Calling .GetSSHHostname
	I0918 20:46:05.532305   48996 main.go:141] libmachine: (test-preload-684593) DBG | domain test-preload-684593 has defined MAC address 52:54:00:97:40:d0 in network mk-test-preload-684593
	I0918 20:46:05.532642   48996 main.go:141] libmachine: (test-preload-684593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:40:d0", ip: ""} in network mk-test-preload-684593: {Iface:virbr1 ExpiryTime:2024-09-18 21:45:55 +0000 UTC Type:0 Mac:52:54:00:97:40:d0 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:test-preload-684593 Clientid:01:52:54:00:97:40:d0}
	I0918 20:46:05.532664   48996 main.go:141] libmachine: (test-preload-684593) DBG | domain test-preload-684593 has defined IP address 192.168.39.171 and MAC address 52:54:00:97:40:d0 in network mk-test-preload-684593
	I0918 20:46:05.532833   48996 main.go:141] libmachine: (test-preload-684593) Calling .GetSSHPort
	I0918 20:46:05.533064   48996 main.go:141] libmachine: (test-preload-684593) Calling .GetSSHKeyPath
	I0918 20:46:05.533243   48996 main.go:141] libmachine: (test-preload-684593) Calling .GetSSHUsername
	I0918 20:46:05.533384   48996 sshutil.go:53] new ssh client: &{IP:192.168.39.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/test-preload-684593/id_rsa Username:docker}
	I0918 20:46:05.618890   48996 ssh_runner.go:195] Run: cat /etc/os-release
	I0918 20:46:05.623670   48996 info.go:137] Remote host: Buildroot 2023.02.9
	I0918 20:46:05.623705   48996 filesync.go:126] Scanning /home/jenkins/minikube-integration/19667-7671/.minikube/addons for local assets ...
	I0918 20:46:05.623783   48996 filesync.go:126] Scanning /home/jenkins/minikube-integration/19667-7671/.minikube/files for local assets ...
	I0918 20:46:05.623877   48996 filesync.go:149] local asset: /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem -> 148782.pem in /etc/ssl/certs
	I0918 20:46:05.623977   48996 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0918 20:46:05.633770   48996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem --> /etc/ssl/certs/148782.pem (1708 bytes)
	I0918 20:46:05.658371   48996 start.go:296] duration metric: took 131.773451ms for postStartSetup
	I0918 20:46:05.658411   48996 fix.go:56] duration metric: took 20.797393042s for fixHost
	I0918 20:46:05.658430   48996 main.go:141] libmachine: (test-preload-684593) Calling .GetSSHHostname
	I0918 20:46:05.661261   48996 main.go:141] libmachine: (test-preload-684593) DBG | domain test-preload-684593 has defined MAC address 52:54:00:97:40:d0 in network mk-test-preload-684593
	I0918 20:46:05.661636   48996 main.go:141] libmachine: (test-preload-684593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:40:d0", ip: ""} in network mk-test-preload-684593: {Iface:virbr1 ExpiryTime:2024-09-18 21:45:55 +0000 UTC Type:0 Mac:52:54:00:97:40:d0 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:test-preload-684593 Clientid:01:52:54:00:97:40:d0}
	I0918 20:46:05.661667   48996 main.go:141] libmachine: (test-preload-684593) DBG | domain test-preload-684593 has defined IP address 192.168.39.171 and MAC address 52:54:00:97:40:d0 in network mk-test-preload-684593
	I0918 20:46:05.661895   48996 main.go:141] libmachine: (test-preload-684593) Calling .GetSSHPort
	I0918 20:46:05.662123   48996 main.go:141] libmachine: (test-preload-684593) Calling .GetSSHKeyPath
	I0918 20:46:05.662305   48996 main.go:141] libmachine: (test-preload-684593) Calling .GetSSHKeyPath
	I0918 20:46:05.662445   48996 main.go:141] libmachine: (test-preload-684593) Calling .GetSSHUsername
	I0918 20:46:05.662587   48996 main.go:141] libmachine: Using SSH client type: native
	I0918 20:46:05.662752   48996 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.171 22 <nil> <nil>}
	I0918 20:46:05.662762   48996 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0918 20:46:05.773234   48996 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726692365.751439232
	
	I0918 20:46:05.773257   48996 fix.go:216] guest clock: 1726692365.751439232
	I0918 20:46:05.773264   48996 fix.go:229] Guest: 2024-09-18 20:46:05.751439232 +0000 UTC Remote: 2024-09-18 20:46:05.658414719 +0000 UTC m=+33.782635502 (delta=93.024513ms)
	I0918 20:46:05.773284   48996 fix.go:200] guest clock delta is within tolerance: 93.024513ms
	I0918 20:46:05.773290   48996 start.go:83] releasing machines lock for "test-preload-684593", held for 20.912282321s
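
For reference, the clock check reported by fix.go above compares the guest's date +%s.%N output against the host-side timestamp recorded when the SSH command returned. A minimal sketch of the same comparison, assuming the SSH key path from this log and using bc for the subtraction (illustrative only, not minikube's own code):

    host_ts=$(date +%s.%N)
    guest_ts=$(ssh -i /home/jenkins/minikube-integration/19667-7671/.minikube/machines/test-preload-684593/id_rsa docker@192.168.39.171 'date +%s.%N')
    # the reported delta includes SSH round-trip latency, which is why a ~93ms skew is still "within tolerance"
    echo "clock delta: $(echo "$guest_ts - $host_ts" | bc) seconds"
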
	I0918 20:46:05.773313   48996 main.go:141] libmachine: (test-preload-684593) Calling .DriverName
	I0918 20:46:05.773722   48996 main.go:141] libmachine: (test-preload-684593) Calling .GetIP
	I0918 20:46:05.776511   48996 main.go:141] libmachine: (test-preload-684593) DBG | domain test-preload-684593 has defined MAC address 52:54:00:97:40:d0 in network mk-test-preload-684593
	I0918 20:46:05.776990   48996 main.go:141] libmachine: (test-preload-684593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:40:d0", ip: ""} in network mk-test-preload-684593: {Iface:virbr1 ExpiryTime:2024-09-18 21:45:55 +0000 UTC Type:0 Mac:52:54:00:97:40:d0 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:test-preload-684593 Clientid:01:52:54:00:97:40:d0}
	I0918 20:46:05.777017   48996 main.go:141] libmachine: (test-preload-684593) DBG | domain test-preload-684593 has defined IP address 192.168.39.171 and MAC address 52:54:00:97:40:d0 in network mk-test-preload-684593
	I0918 20:46:05.777194   48996 main.go:141] libmachine: (test-preload-684593) Calling .DriverName
	I0918 20:46:05.777747   48996 main.go:141] libmachine: (test-preload-684593) Calling .DriverName
	I0918 20:46:05.777942   48996 main.go:141] libmachine: (test-preload-684593) Calling .DriverName
	I0918 20:46:05.778016   48996 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0918 20:46:05.778056   48996 main.go:141] libmachine: (test-preload-684593) Calling .GetSSHHostname
	I0918 20:46:05.778162   48996 ssh_runner.go:195] Run: cat /version.json
	I0918 20:46:05.778190   48996 main.go:141] libmachine: (test-preload-684593) Calling .GetSSHHostname
	I0918 20:46:05.781090   48996 main.go:141] libmachine: (test-preload-684593) DBG | domain test-preload-684593 has defined MAC address 52:54:00:97:40:d0 in network mk-test-preload-684593
	I0918 20:46:05.781325   48996 main.go:141] libmachine: (test-preload-684593) DBG | domain test-preload-684593 has defined MAC address 52:54:00:97:40:d0 in network mk-test-preload-684593
	I0918 20:46:05.781486   48996 main.go:141] libmachine: (test-preload-684593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:40:d0", ip: ""} in network mk-test-preload-684593: {Iface:virbr1 ExpiryTime:2024-09-18 21:45:55 +0000 UTC Type:0 Mac:52:54:00:97:40:d0 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:test-preload-684593 Clientid:01:52:54:00:97:40:d0}
	I0918 20:46:05.781523   48996 main.go:141] libmachine: (test-preload-684593) DBG | domain test-preload-684593 has defined IP address 192.168.39.171 and MAC address 52:54:00:97:40:d0 in network mk-test-preload-684593
	I0918 20:46:05.781680   48996 main.go:141] libmachine: (test-preload-684593) Calling .GetSSHPort
	I0918 20:46:05.781830   48996 main.go:141] libmachine: (test-preload-684593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:40:d0", ip: ""} in network mk-test-preload-684593: {Iface:virbr1 ExpiryTime:2024-09-18 21:45:55 +0000 UTC Type:0 Mac:52:54:00:97:40:d0 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:test-preload-684593 Clientid:01:52:54:00:97:40:d0}
	I0918 20:46:05.781861   48996 main.go:141] libmachine: (test-preload-684593) DBG | domain test-preload-684593 has defined IP address 192.168.39.171 and MAC address 52:54:00:97:40:d0 in network mk-test-preload-684593
	I0918 20:46:05.781907   48996 main.go:141] libmachine: (test-preload-684593) Calling .GetSSHKeyPath
	I0918 20:46:05.782015   48996 main.go:141] libmachine: (test-preload-684593) Calling .GetSSHPort
	I0918 20:46:05.782091   48996 main.go:141] libmachine: (test-preload-684593) Calling .GetSSHUsername
	I0918 20:46:05.782163   48996 main.go:141] libmachine: (test-preload-684593) Calling .GetSSHKeyPath
	I0918 20:46:05.782217   48996 sshutil.go:53] new ssh client: &{IP:192.168.39.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/test-preload-684593/id_rsa Username:docker}
	I0918 20:46:05.782270   48996 main.go:141] libmachine: (test-preload-684593) Calling .GetSSHUsername
	I0918 20:46:05.782472   48996 sshutil.go:53] new ssh client: &{IP:192.168.39.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/test-preload-684593/id_rsa Username:docker}
	I0918 20:46:05.904366   48996 ssh_runner.go:195] Run: systemctl --version
	I0918 20:46:05.910340   48996 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0918 20:46:06.060931   48996 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0918 20:46:06.066589   48996 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0918 20:46:06.066666   48996 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0918 20:46:06.083514   48996 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0918 20:46:06.083541   48996 start.go:495] detecting cgroup driver to use...
	I0918 20:46:06.083602   48996 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0918 20:46:06.098931   48996 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0918 20:46:06.112581   48996 docker.go:217] disabling cri-docker service (if available) ...
	I0918 20:46:06.112643   48996 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0918 20:46:06.126051   48996 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0918 20:46:06.139694   48996 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0918 20:46:06.255711   48996 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0918 20:46:06.415456   48996 docker.go:233] disabling docker service ...
	I0918 20:46:06.415540   48996 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0918 20:46:06.432119   48996 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0918 20:46:06.446887   48996 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0918 20:46:06.575247   48996 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0918 20:46:06.705264   48996 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0918 20:46:06.718604   48996 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0918 20:46:06.736810   48996 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I0918 20:46:06.736868   48996 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 20:46:06.746825   48996 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0918 20:46:06.746893   48996 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 20:46:06.758194   48996 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 20:46:06.768851   48996 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 20:46:06.779688   48996 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0918 20:46:06.791122   48996 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 20:46:06.801500   48996 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 20:46:06.818619   48996 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 20:46:06.829036   48996 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0918 20:46:06.838169   48996 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0918 20:46:06.838229   48996 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0918 20:46:06.850236   48996 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0918 20:46:06.860070   48996 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 20:46:06.991594   48996 ssh_runner.go:195] Run: sudo systemctl restart crio
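
The block above rewrites /etc/crio/crio.conf.d/02-crio.conf before restarting CRI-O. Condensed into a single sketch (the same sed edits the log shows, run directly on the guest):

    # point CRI-O at the expected pause image and the cgroupfs cgroup manager
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
    # recreate conmon_cgroup = "pod" immediately after the cgroup_manager line
    sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
    sudo systemctl daemon-reload && sudo systemctl restart crio
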
	I0918 20:46:07.076890   48996 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0918 20:46:07.076952   48996 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0918 20:46:07.081220   48996 start.go:563] Will wait 60s for crictl version
	I0918 20:46:07.081271   48996 ssh_runner.go:195] Run: which crictl
	I0918 20:46:07.084767   48996 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0918 20:46:07.121260   48996 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0918 20:46:07.121334   48996 ssh_runner.go:195] Run: crio --version
	I0918 20:46:07.148990   48996 ssh_runner.go:195] Run: crio --version
	I0918 20:46:07.177960   48996 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.29.1 ...
	I0918 20:46:07.179654   48996 main.go:141] libmachine: (test-preload-684593) Calling .GetIP
	I0918 20:46:07.182211   48996 main.go:141] libmachine: (test-preload-684593) DBG | domain test-preload-684593 has defined MAC address 52:54:00:97:40:d0 in network mk-test-preload-684593
	I0918 20:46:07.182487   48996 main.go:141] libmachine: (test-preload-684593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:40:d0", ip: ""} in network mk-test-preload-684593: {Iface:virbr1 ExpiryTime:2024-09-18 21:45:55 +0000 UTC Type:0 Mac:52:54:00:97:40:d0 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:test-preload-684593 Clientid:01:52:54:00:97:40:d0}
	I0918 20:46:07.182512   48996 main.go:141] libmachine: (test-preload-684593) DBG | domain test-preload-684593 has defined IP address 192.168.39.171 and MAC address 52:54:00:97:40:d0 in network mk-test-preload-684593
	I0918 20:46:07.182701   48996 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0918 20:46:07.186524   48996 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0918 20:46:07.198270   48996 kubeadm.go:883] updating cluster {Name:test-preload-684593 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-684593 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.171 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0918 20:46:07.198390   48996 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0918 20:46:07.198429   48996 ssh_runner.go:195] Run: sudo crictl images --output json
	I0918 20:46:07.232870   48996 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0918 20:46:07.232940   48996 ssh_runner.go:195] Run: which lz4
	I0918 20:46:07.236780   48996 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0918 20:46:07.240980   48996 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0918 20:46:07.241008   48996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (459355427 bytes)
	I0918 20:46:08.639580   48996 crio.go:462] duration metric: took 1.402840154s to copy over tarball
	I0918 20:46:08.639671   48996 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0918 20:46:11.040887   48996 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.401187976s)
	I0918 20:46:11.040919   48996 crio.go:469] duration metric: took 2.401305592s to extract the tarball
	I0918 20:46:11.040926   48996 ssh_runner.go:146] rm: /preloaded.tar.lz4
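
The preload step above boils down to copying the cached lz4 tarball into the VM and unpacking it over /var. Roughly, assuming ~/.minikube stands for the .minikube directory shown in the log, and staging through /tmp instead of the log's / so the docker user can write the file:

    # copy the preloaded image tarball into the guest, then extract it over /var and clean up
    scp -i ~/.minikube/machines/test-preload-684593/id_rsa \
        ~/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 \
        docker@192.168.39.171:/tmp/preloaded.tar.lz4
    ssh -i ~/.minikube/machines/test-preload-684593/id_rsa docker@192.168.39.171 \
        'sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /tmp/preloaded.tar.lz4 && sudo rm /tmp/preloaded.tar.lz4'
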
	I0918 20:46:11.081341   48996 ssh_runner.go:195] Run: sudo crictl images --output json
	I0918 20:46:11.120206   48996 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0918 20:46:11.120236   48996 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0918 20:46:11.120302   48996 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 20:46:11.120312   48996 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0918 20:46:11.120349   48996 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0918 20:46:11.120370   48996 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0918 20:46:11.120396   48996 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0918 20:46:11.120420   48996 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0918 20:46:11.120331   48996 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0918 20:46:11.120457   48996 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0918 20:46:11.121956   48996 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0918 20:46:11.121955   48996 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0918 20:46:11.121959   48996 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0918 20:46:11.121960   48996 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 20:46:11.121960   48996 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0918 20:46:11.121957   48996 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0918 20:46:11.122036   48996 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0918 20:46:11.121955   48996 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0918 20:46:11.441666   48996 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I0918 20:46:11.452170   48996 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0918 20:46:11.452223   48996 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0918 20:46:11.459816   48996 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	I0918 20:46:11.477631   48996 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0918 20:46:11.493281   48996 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I0918 20:46:11.504877   48996 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I0918 20:46:11.504917   48996 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I0918 20:46:11.504971   48996 ssh_runner.go:195] Run: which crictl
	I0918 20:46:11.514251   48996 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I0918 20:46:11.546579   48996 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I0918 20:46:11.546625   48996 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0918 20:46:11.546674   48996 ssh_runner.go:195] Run: which crictl
	I0918 20:46:11.584375   48996 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I0918 20:46:11.584421   48996 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I0918 20:46:11.584460   48996 ssh_runner.go:195] Run: which crictl
	I0918 20:46:11.654664   48996 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I0918 20:46:11.654710   48996 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I0918 20:46:11.654745   48996 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0918 20:46:11.654757   48996 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I0918 20:46:11.654774   48996 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I0918 20:46:11.654715   48996 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0918 20:46:11.654811   48996 ssh_runner.go:195] Run: which crictl
	I0918 20:46:11.654817   48996 ssh_runner.go:195] Run: which crictl
	I0918 20:46:11.654834   48996 ssh_runner.go:195] Run: which crictl
	I0918 20:46:11.654898   48996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0918 20:46:11.654939   48996 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I0918 20:46:11.654963   48996 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I0918 20:46:11.654991   48996 ssh_runner.go:195] Run: which crictl
	I0918 20:46:11.654997   48996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0918 20:46:11.655054   48996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0918 20:46:11.672079   48996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0918 20:46:11.731150   48996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0918 20:46:11.731215   48996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0918 20:46:11.731282   48996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0918 20:46:11.733185   48996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0918 20:46:11.733199   48996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0918 20:46:11.733235   48996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0918 20:46:11.771988   48996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0918 20:46:11.853913   48996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0918 20:46:11.906162   48996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0918 20:46:11.906189   48996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0918 20:46:11.906357   48996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0918 20:46:11.922110   48996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0918 20:46:11.922209   48996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0918 20:46:11.922219   48996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0918 20:46:11.937588   48996 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I0918 20:46:11.937729   48996 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4
	I0918 20:46:12.023510   48996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0918 20:46:12.055182   48996 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I0918 20:46:12.055305   48996 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0918 20:46:12.055306   48996 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I0918 20:46:12.055409   48996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0918 20:46:12.055437   48996 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0918 20:46:12.057699   48996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0918 20:46:12.059711   48996 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I0918 20:46:12.059748   48996 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.24.4 (exists)
	I0918 20:46:12.059765   48996 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I0918 20:46:12.059806   48996 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0918 20:46:12.059814   48996 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I0918 20:46:12.097294   48996 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I0918 20:46:12.097296   48996 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I0918 20:46:12.097337   48996 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.24.4 (exists)
	I0918 20:46:12.097414   48996 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0918 20:46:12.126530   48996 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I0918 20:46:12.126668   48996 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0918 20:46:12.139334   48996 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0918 20:46:12.139360   48996 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I0918 20:46:12.139459   48996 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0918 20:46:12.352485   48996 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 20:46:14.769174   48996 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4: (2.709331516s)
	I0918 20:46:14.769218   48996 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I0918 20:46:14.769242   48996 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.7
	I0918 20:46:14.769240   48996 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: (2.671802s)
	I0918 20:46:14.769270   48996 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I0918 20:46:14.769290   48996 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4: (2.642602973s)
	I0918 20:46:14.769312   48996 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.24.4 (exists)
	I0918 20:46:14.769294   48996 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I0918 20:46:14.769330   48996 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4: (2.629856353s)
	I0918 20:46:14.769362   48996 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.24.4 (exists)
	I0918 20:46:14.769413   48996 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.416899364s)
	I0918 20:46:14.913012   48996 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I0918 20:46:14.913062   48996 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0918 20:46:14.913123   48996 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0918 20:46:15.359950   48996 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I0918 20:46:15.359993   48996 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0918 20:46:15.360081   48996 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I0918 20:46:17.607691   48996 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (2.24758371s)
	I0918 20:46:17.607723   48996 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0918 20:46:17.607749   48996 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0918 20:46:17.607798   48996 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I0918 20:46:17.955190   48996 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0918 20:46:17.955249   48996 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0918 20:46:17.955346   48996 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0918 20:46:18.696478   48996 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I0918 20:46:18.696534   48996 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0918 20:46:18.696585   48996 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0918 20:46:19.443892   48996 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I0918 20:46:19.443938   48996 cache_images.go:123] Successfully loaded all cached images
	I0918 20:46:19.443944   48996 cache_images.go:92] duration metric: took 8.323695463s to LoadCachedImages
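
Because "crictl images" still did not show the expected tags after extraction, each image was transferred from the host cache and loaded individually. The per-image pattern visible above, condensed into a sketch (pause:3.7 used as the example; not minikube source):

    # if the expected image is missing, drop any stale tag and load the cached copy shipped to /var/lib/minikube/images
    if ! sudo podman image inspect --format '{{.Id}}' registry.k8s.io/pause:3.7 >/dev/null 2>&1; then
        sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7 2>/dev/null || true
        sudo podman load -i /var/lib/minikube/images/pause_3.7
    fi
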
	I0918 20:46:19.443955   48996 kubeadm.go:934] updating node { 192.168.39.171 8443 v1.24.4 crio true true} ...
	I0918 20:46:19.444096   48996 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-684593 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.171
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-684593 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
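
The rendered kubelet unit above is later written to the guest as a systemd drop-in (the "scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf" line below) and picked up with a daemon-reload. In outline, assuming the rendered unit has been saved locally as 10-kubeadm.conf:

    # install the drop-in, reload unit files, start kubelet (paths as in the log)
    sudo install -D -m 0644 10-kubeadm.conf /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    sudo systemctl daemon-reload
    sudo systemctl start kubelet
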
	I0918 20:46:19.444185   48996 ssh_runner.go:195] Run: crio config
	I0918 20:46:19.491266   48996 cni.go:84] Creating CNI manager for ""
	I0918 20:46:19.491289   48996 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0918 20:46:19.491299   48996 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0918 20:46:19.491315   48996 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.171 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-684593 NodeName:test-preload-684593 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.171"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.171 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0918 20:46:19.491453   48996 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.171
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-684593"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.171
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.171"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0918 20:46:19.491519   48996 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I0918 20:46:19.501954   48996 binaries.go:44] Found k8s binaries, skipping transfer
	I0918 20:46:19.502046   48996 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0918 20:46:19.511400   48996 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0918 20:46:19.526931   48996 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0918 20:46:19.542616   48996 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2106 bytes)
	I0918 20:46:19.559165   48996 ssh_runner.go:195] Run: grep 192.168.39.171	control-plane.minikube.internal$ /etc/hosts
	I0918 20:46:19.563326   48996 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.171	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0918 20:46:19.575744   48996 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 20:46:19.682538   48996 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0918 20:46:19.699015   48996 certs.go:68] Setting up /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/test-preload-684593 for IP: 192.168.39.171
	I0918 20:46:19.699041   48996 certs.go:194] generating shared ca certs ...
	I0918 20:46:19.699064   48996 certs.go:226] acquiring lock for ca certs: {Name:mkf54e0e71ed884c4372f9d3d4cb308ea4600185 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 20:46:19.699241   48996 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.key
	I0918 20:46:19.699309   48996 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.key
	I0918 20:46:19.699323   48996 certs.go:256] generating profile certs ...
	I0918 20:46:19.699442   48996 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/test-preload-684593/client.key
	I0918 20:46:19.699519   48996 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/test-preload-684593/apiserver.key.ac68cfbb
	I0918 20:46:19.699577   48996 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/test-preload-684593/proxy-client.key
	I0918 20:46:19.699735   48996 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878.pem (1338 bytes)
	W0918 20:46:19.699785   48996 certs.go:480] ignoring /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878_empty.pem, impossibly tiny 0 bytes
	I0918 20:46:19.699811   48996 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem (1675 bytes)
	I0918 20:46:19.699845   48996 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem (1082 bytes)
	I0918 20:46:19.699879   48996 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem (1123 bytes)
	I0918 20:46:19.699919   48996 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem (1679 bytes)
	I0918 20:46:19.699977   48996 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem (1708 bytes)
	I0918 20:46:19.700992   48996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0918 20:46:19.733845   48996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0918 20:46:19.766268   48996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0918 20:46:19.797358   48996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0918 20:46:19.834981   48996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/test-preload-684593/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0918 20:46:19.864984   48996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/test-preload-684593/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0918 20:46:19.896658   48996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/test-preload-684593/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0918 20:46:19.928661   48996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/test-preload-684593/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0918 20:46:19.952860   48996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem --> /usr/share/ca-certificates/148782.pem (1708 bytes)
	I0918 20:46:19.976368   48996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0918 20:46:20.000088   48996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878.pem --> /usr/share/ca-certificates/14878.pem (1338 bytes)
	I0918 20:46:20.023942   48996 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0918 20:46:20.040467   48996 ssh_runner.go:195] Run: openssl version
	I0918 20:46:20.046117   48996 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148782.pem && ln -fs /usr/share/ca-certificates/148782.pem /etc/ssl/certs/148782.pem"
	I0918 20:46:20.057190   48996 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148782.pem
	I0918 20:46:20.061986   48996 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 18 19:57 /usr/share/ca-certificates/148782.pem
	I0918 20:46:20.062041   48996 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148782.pem
	I0918 20:46:20.067710   48996 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148782.pem /etc/ssl/certs/3ec20f2e.0"
	I0918 20:46:20.078411   48996 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0918 20:46:20.089051   48996 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0918 20:46:20.093280   48996 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 18 19:39 /usr/share/ca-certificates/minikubeCA.pem
	I0918 20:46:20.093413   48996 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0918 20:46:20.098750   48996 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0918 20:46:20.109062   48996 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14878.pem && ln -fs /usr/share/ca-certificates/14878.pem /etc/ssl/certs/14878.pem"
	I0918 20:46:20.119503   48996 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14878.pem
	I0918 20:46:20.123591   48996 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 18 19:57 /usr/share/ca-certificates/14878.pem
	I0918 20:46:20.123667   48996 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14878.pem
	I0918 20:46:20.128946   48996 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14878.pem /etc/ssl/certs/51391683.0"
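
The certificate handling above copies each PEM into /usr/share/ca-certificates and then links it into /etc/ssl/certs under its OpenSSL subject-hash name, which is how OpenSSL-based clients locate trusted CAs. A minimal sketch of that convention, using the minikubeCA certificate from this run:

    # link a CA certificate under its subject-hash name so OpenSSL can find it
    cert=/usr/share/ca-certificates/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$cert")   # b5213941 in the log above
    sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"
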
	I0918 20:46:20.139339   48996 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0918 20:46:20.143455   48996 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0918 20:46:20.148870   48996 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0918 20:46:20.154172   48996 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0918 20:46:20.159618   48996 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0918 20:46:20.165069   48996 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0918 20:46:20.170535   48996 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0918 20:46:20.176130   48996 kubeadm.go:392] StartCluster: {Name:test-preload-684593 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-684593 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.171 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 20:46:20.176204   48996 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0918 20:46:20.176262   48996 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0918 20:46:20.211551   48996 cri.go:89] found id: ""
	I0918 20:46:20.211633   48996 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0918 20:46:20.224745   48996 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0918 20:46:20.224768   48996 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0918 20:46:20.224820   48996 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0918 20:46:20.236112   48996 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0918 20:46:20.236540   48996 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-684593" does not appear in /home/jenkins/minikube-integration/19667-7671/kubeconfig
	I0918 20:46:20.236676   48996 kubeconfig.go:62] /home/jenkins/minikube-integration/19667-7671/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-684593" cluster setting kubeconfig missing "test-preload-684593" context setting]
	I0918 20:46:20.237004   48996 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/kubeconfig: {Name:mk3a3eecadb0164dfdd5d3c4392b5b473a2a9bb0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 20:46:20.237622   48996 kapi.go:59] client config for test-preload-684593: &rest.Config{Host:"https://192.168.39.171:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19667-7671/.minikube/profiles/test-preload-684593/client.crt", KeyFile:"/home/jenkins/minikube-integration/19667-7671/.minikube/profiles/test-preload-684593/client.key", CAFile:"/home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0918 20:46:20.238230   48996 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0918 20:46:20.247990   48996 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.171
	I0918 20:46:20.248047   48996 kubeadm.go:1160] stopping kube-system containers ...
	I0918 20:46:20.248061   48996 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0918 20:46:20.248109   48996 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0918 20:46:20.296345   48996 cri.go:89] found id: ""
	I0918 20:46:20.296409   48996 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0918 20:46:20.315499   48996 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0918 20:46:20.326843   48996 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0918 20:46:20.326864   48996 kubeadm.go:157] found existing configuration files:
	
	I0918 20:46:20.326916   48996 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0918 20:46:20.337119   48996 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0918 20:46:20.337173   48996 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0918 20:46:20.347657   48996 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0918 20:46:20.358105   48996 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0918 20:46:20.358160   48996 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0918 20:46:20.369081   48996 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0918 20:46:20.379921   48996 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0918 20:46:20.379986   48996 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0918 20:46:20.391541   48996 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0918 20:46:20.402058   48996 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0918 20:46:20.402113   48996 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0918 20:46:20.412732   48996 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0918 20:46:20.423274   48996 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0918 20:46:20.519012   48996 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0918 20:46:21.282785   48996 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0918 20:46:21.530473   48996 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0918 20:46:21.604508   48996 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0918 20:46:21.725311   48996 api_server.go:52] waiting for apiserver process to appear ...
	I0918 20:46:21.725377   48996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 20:46:22.226493   48996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 20:46:22.725536   48996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 20:46:22.744560   48996 api_server.go:72] duration metric: took 1.019259185s to wait for apiserver process to appear ...
	I0918 20:46:22.744595   48996 api_server.go:88] waiting for apiserver healthz status ...
	I0918 20:46:22.744616   48996 api_server.go:253] Checking apiserver healthz at https://192.168.39.171:8443/healthz ...
	I0918 20:46:22.745135   48996 api_server.go:269] stopped: https://192.168.39.171:8443/healthz: Get "https://192.168.39.171:8443/healthz": dial tcp 192.168.39.171:8443: connect: connection refused
	I0918 20:46:23.244697   48996 api_server.go:253] Checking apiserver healthz at https://192.168.39.171:8443/healthz ...
	I0918 20:46:27.228766   48996 api_server.go:279] https://192.168.39.171:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0918 20:46:27.228798   48996 api_server.go:103] status: https://192.168.39.171:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0918 20:46:27.228814   48996 api_server.go:253] Checking apiserver healthz at https://192.168.39.171:8443/healthz ...
	I0918 20:46:27.350615   48996 api_server.go:279] https://192.168.39.171:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0918 20:46:27.350644   48996 api_server.go:103] status: https://192.168.39.171:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0918 20:46:27.350657   48996 api_server.go:253] Checking apiserver healthz at https://192.168.39.171:8443/healthz ...
	I0918 20:46:27.371550   48996 api_server.go:279] https://192.168.39.171:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0918 20:46:27.371581   48996 api_server.go:103] status: https://192.168.39.171:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0918 20:46:27.745195   48996 api_server.go:253] Checking apiserver healthz at https://192.168.39.171:8443/healthz ...
	I0918 20:46:27.752107   48996 api_server.go:279] https://192.168.39.171:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0918 20:46:27.752135   48996 api_server.go:103] status: https://192.168.39.171:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0918 20:46:28.244695   48996 api_server.go:253] Checking apiserver healthz at https://192.168.39.171:8443/healthz ...
	I0918 20:46:28.251489   48996 api_server.go:279] https://192.168.39.171:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0918 20:46:28.251517   48996 api_server.go:103] status: https://192.168.39.171:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0918 20:46:28.745082   48996 api_server.go:253] Checking apiserver healthz at https://192.168.39.171:8443/healthz ...
	I0918 20:46:28.750507   48996 api_server.go:279] https://192.168.39.171:8443/healthz returned 200:
	ok
	I0918 20:46:28.756993   48996 api_server.go:141] control plane version: v1.24.4
	I0918 20:46:28.757025   48996 api_server.go:131] duration metric: took 6.01242335s to wait for apiserver health ...
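
(For context: the 403 above is the apiserver rejecting the probe as "system:anonymous" right after the restart, and the 500 responses show /healthz failing while the [-] post-start hooks — bootstrap-controller, rbac/bootstrap-roles, and friends — are still completing; once every hook flips to [+], /healthz returns 200. Below is a minimal sketch of this kind of polling loop in Go, assuming the client certificate, key, and CA paths from the kapi.go line earlier and the same https://192.168.39.171:8443/healthz endpoint; it is illustrative only, not minikube's api_server.go implementation.)

    // healthzpoll.go: poll the apiserver /healthz endpoint until it returns 200.
    // Paths and the endpoint are taken from this run's log; everything else is an assumption.
    package main

    import (
        "crypto/tls"
        "crypto/x509"
        "fmt"
        "net/http"
        "os"
        "time"
    )

    func main() {
        // Present the profile's client certificate so the probe is not anonymous.
        cert, err := tls.LoadX509KeyPair(
            "/home/jenkins/minikube-integration/19667-7671/.minikube/profiles/test-preload-684593/client.crt",
            "/home/jenkins/minikube-integration/19667-7671/.minikube/profiles/test-preload-684593/client.key")
        if err != nil {
            panic(err)
        }
        caPEM, err := os.ReadFile("/home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt")
        if err != nil {
            panic(err)
        }
        pool := x509.NewCertPool()
        pool.AppendCertsFromPEM(caPEM)

        client := &http.Client{
            Timeout: 2 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{Certificates: []tls.Certificate{cert}, RootCAs: pool},
            },
        }

        // Keep probing until the apiserver reports healthy (connection refused,
        // 403, and 500 are all treated as "not ready yet").
        for {
            resp, err := client.Get("https://192.168.39.171:8443/healthz")
            if err == nil {
                healthy := resp.StatusCode == http.StatusOK
                resp.Body.Close()
                if healthy {
                    fmt.Println("apiserver healthy")
                    return
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
    }
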
	I0918 20:46:28.757034   48996 cni.go:84] Creating CNI manager for ""
	I0918 20:46:28.757039   48996 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0918 20:46:28.759195   48996 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0918 20:46:28.761053   48996 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0918 20:46:28.772242   48996 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
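
(The 496-byte file copied to /etc/cni/net.d/1-k8s.conflist is the bridge CNI configuration announced by the "Configuring bridge CNI" step. Its exact contents are not shown in the log; the snippet below is an illustrative bridge conflist in the standard CNI plugin-chain format, not necessarily byte-for-byte what minikube writes — the name, subnet, and plugin options are assumptions.)

    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": { "portMappings": true }
        }
      ]
    }
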
	I0918 20:46:28.793484   48996 system_pods.go:43] waiting for kube-system pods to appear ...
	I0918 20:46:28.793575   48996 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0918 20:46:28.793613   48996 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0918 20:46:28.804779   48996 system_pods.go:59] 7 kube-system pods found
	I0918 20:46:28.804822   48996 system_pods.go:61] "coredns-6d4b75cb6d-bw4dh" [8778f526-b86f-4ab4-9366-3590fc08a39f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0918 20:46:28.804828   48996 system_pods.go:61] "etcd-test-preload-684593" [52aa746d-ac4c-4d6f-b7ee-ebfa62e490dc] Running
	I0918 20:46:28.804833   48996 system_pods.go:61] "kube-apiserver-test-preload-684593" [07039c37-256f-45b7-93e6-03d317886783] Running
	I0918 20:46:28.804841   48996 system_pods.go:61] "kube-controller-manager-test-preload-684593" [5f8c2d25-131d-4764-b8b5-beb0fa8ee9bd] Running
	I0918 20:46:28.804846   48996 system_pods.go:61] "kube-proxy-rl2xg" [0a98fabf-112d-4411-ad53-c03e90ac3b08] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0918 20:46:28.804850   48996 system_pods.go:61] "kube-scheduler-test-preload-684593" [96bf65dd-06f5-4533-bc37-9e2e2804dfa0] Running
	I0918 20:46:28.804858   48996 system_pods.go:61] "storage-provisioner" [0b4b3785-a0b2-474f-82b0-f83463878ce5] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0918 20:46:28.804865   48996 system_pods.go:74] duration metric: took 11.34892ms to wait for pod list to return data ...
	I0918 20:46:28.804872   48996 node_conditions.go:102] verifying NodePressure condition ...
	I0918 20:46:28.808009   48996 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0918 20:46:28.808057   48996 node_conditions.go:123] node cpu capacity is 2
	I0918 20:46:28.808066   48996 node_conditions.go:105] duration metric: took 3.18997ms to run NodePressure ...
	I0918 20:46:28.808082   48996 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0918 20:46:29.074550   48996 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0918 20:46:29.087796   48996 retry.go:31] will retry after 244.577402ms: kubelet not initialised
	I0918 20:46:29.343326   48996 retry.go:31] will retry after 369.547797ms: kubelet not initialised
	I0918 20:46:29.718542   48996 retry.go:31] will retry after 387.394996ms: kubelet not initialised
	I0918 20:46:30.111933   48996 retry.go:31] will retry after 1.23932024s: kubelet not initialised
	I0918 20:46:31.357613   48996 retry.go:31] will retry after 637.982081ms: kubelet not initialised
	I0918 20:46:32.003486   48996 kubeadm.go:739] kubelet initialised
	I0918 20:46:32.003510   48996 kubeadm.go:740] duration metric: took 2.928929559s waiting for restarted kubelet to initialise ...
	I0918 20:46:32.003517   48996 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0918 20:46:32.009115   48996 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6d4b75cb6d-bw4dh" in "kube-system" namespace to be "Ready" ...
	I0918 20:46:32.015055   48996 pod_ready.go:98] node "test-preload-684593" hosting pod "coredns-6d4b75cb6d-bw4dh" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-684593" has status "Ready":"False"
	I0918 20:46:32.015077   48996 pod_ready.go:82] duration metric: took 5.926719ms for pod "coredns-6d4b75cb6d-bw4dh" in "kube-system" namespace to be "Ready" ...
	E0918 20:46:32.015086   48996 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-684593" hosting pod "coredns-6d4b75cb6d-bw4dh" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-684593" has status "Ready":"False"
	I0918 20:46:32.015092   48996 pod_ready.go:79] waiting up to 4m0s for pod "etcd-test-preload-684593" in "kube-system" namespace to be "Ready" ...
	I0918 20:46:32.020355   48996 pod_ready.go:98] node "test-preload-684593" hosting pod "etcd-test-preload-684593" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-684593" has status "Ready":"False"
	I0918 20:46:32.020382   48996 pod_ready.go:82] duration metric: took 5.281413ms for pod "etcd-test-preload-684593" in "kube-system" namespace to be "Ready" ...
	E0918 20:46:32.020390   48996 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-684593" hosting pod "etcd-test-preload-684593" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-684593" has status "Ready":"False"
	I0918 20:46:32.020397   48996 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-test-preload-684593" in "kube-system" namespace to be "Ready" ...
	I0918 20:46:32.025982   48996 pod_ready.go:98] node "test-preload-684593" hosting pod "kube-apiserver-test-preload-684593" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-684593" has status "Ready":"False"
	I0918 20:46:32.026015   48996 pod_ready.go:82] duration metric: took 5.609894ms for pod "kube-apiserver-test-preload-684593" in "kube-system" namespace to be "Ready" ...
	E0918 20:46:32.026027   48996 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-684593" hosting pod "kube-apiserver-test-preload-684593" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-684593" has status "Ready":"False"
	I0918 20:46:32.026036   48996 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-test-preload-684593" in "kube-system" namespace to be "Ready" ...
	I0918 20:46:32.030614   48996 pod_ready.go:98] node "test-preload-684593" hosting pod "kube-controller-manager-test-preload-684593" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-684593" has status "Ready":"False"
	I0918 20:46:32.030642   48996 pod_ready.go:82] duration metric: took 4.596581ms for pod "kube-controller-manager-test-preload-684593" in "kube-system" namespace to be "Ready" ...
	E0918 20:46:32.030654   48996 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-684593" hosting pod "kube-controller-manager-test-preload-684593" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-684593" has status "Ready":"False"
	I0918 20:46:32.030668   48996 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-rl2xg" in "kube-system" namespace to be "Ready" ...
	I0918 20:46:32.401054   48996 pod_ready.go:98] node "test-preload-684593" hosting pod "kube-proxy-rl2xg" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-684593" has status "Ready":"False"
	I0918 20:46:32.401078   48996 pod_ready.go:82] duration metric: took 370.395487ms for pod "kube-proxy-rl2xg" in "kube-system" namespace to be "Ready" ...
	E0918 20:46:32.401087   48996 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-684593" hosting pod "kube-proxy-rl2xg" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-684593" has status "Ready":"False"
	I0918 20:46:32.401092   48996 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-test-preload-684593" in "kube-system" namespace to be "Ready" ...
	I0918 20:46:32.800629   48996 pod_ready.go:98] node "test-preload-684593" hosting pod "kube-scheduler-test-preload-684593" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-684593" has status "Ready":"False"
	I0918 20:46:32.800683   48996 pod_ready.go:82] duration metric: took 399.583845ms for pod "kube-scheduler-test-preload-684593" in "kube-system" namespace to be "Ready" ...
	E0918 20:46:32.800697   48996 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-684593" hosting pod "kube-scheduler-test-preload-684593" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-684593" has status "Ready":"False"
	I0918 20:46:32.800705   48996 pod_ready.go:39] duration metric: took 797.177157ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0918 20:46:32.800723   48996 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0918 20:46:32.813520   48996 ops.go:34] apiserver oom_adj: -16
	I0918 20:46:32.813542   48996 kubeadm.go:597] duration metric: took 12.588768747s to restartPrimaryControlPlane
	I0918 20:46:32.813551   48996 kubeadm.go:394] duration metric: took 12.637427965s to StartCluster
	I0918 20:46:32.813569   48996 settings.go:142] acquiring lock: {Name:mk6ae95a69dcbe00c28846409e9f46945a88de2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 20:46:32.813647   48996 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19667-7671/kubeconfig
	I0918 20:46:32.814286   48996 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/kubeconfig: {Name:mk3a3eecadb0164dfdd5d3c4392b5b473a2a9bb0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 20:46:32.814534   48996 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.171 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0918 20:46:32.814608   48996 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0918 20:46:32.814713   48996 addons.go:69] Setting storage-provisioner=true in profile "test-preload-684593"
	I0918 20:46:32.814734   48996 addons.go:234] Setting addon storage-provisioner=true in "test-preload-684593"
	W0918 20:46:32.814744   48996 addons.go:243] addon storage-provisioner should already be in state true
	I0918 20:46:32.814748   48996 addons.go:69] Setting default-storageclass=true in profile "test-preload-684593"
	I0918 20:46:32.814770   48996 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-684593"
	I0918 20:46:32.814791   48996 config.go:182] Loaded profile config "test-preload-684593": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0918 20:46:32.814773   48996 host.go:66] Checking if "test-preload-684593" exists ...
	I0918 20:46:32.815117   48996 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 20:46:32.815126   48996 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 20:46:32.815155   48996 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 20:46:32.815246   48996 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 20:46:32.817073   48996 out.go:177] * Verifying Kubernetes components...
	I0918 20:46:32.818216   48996 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 20:46:32.830440   48996 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37181
	I0918 20:46:32.830962   48996 main.go:141] libmachine: () Calling .GetVersion
	I0918 20:46:32.831526   48996 main.go:141] libmachine: Using API Version  1
	I0918 20:46:32.831560   48996 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 20:46:32.831918   48996 main.go:141] libmachine: () Calling .GetMachineName
	I0918 20:46:32.832420   48996 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 20:46:32.832454   48996 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 20:46:32.835069   48996 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37621
	I0918 20:46:32.835530   48996 main.go:141] libmachine: () Calling .GetVersion
	I0918 20:46:32.835944   48996 main.go:141] libmachine: Using API Version  1
	I0918 20:46:32.835966   48996 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 20:46:32.836324   48996 main.go:141] libmachine: () Calling .GetMachineName
	I0918 20:46:32.836531   48996 main.go:141] libmachine: (test-preload-684593) Calling .GetState
	I0918 20:46:32.838741   48996 kapi.go:59] client config for test-preload-684593: &rest.Config{Host:"https://192.168.39.171:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19667-7671/.minikube/profiles/test-preload-684593/client.crt", KeyFile:"/home/jenkins/minikube-integration/19667-7671/.minikube/profiles/test-preload-684593/client.key", CAFile:"/home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0918 20:46:32.839045   48996 addons.go:234] Setting addon default-storageclass=true in "test-preload-684593"
	W0918 20:46:32.839059   48996 addons.go:243] addon default-storageclass should already be in state true
	I0918 20:46:32.839079   48996 host.go:66] Checking if "test-preload-684593" exists ...
	I0918 20:46:32.839314   48996 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 20:46:32.839353   48996 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 20:46:32.849090   48996 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33079
	I0918 20:46:32.849562   48996 main.go:141] libmachine: () Calling .GetVersion
	I0918 20:46:32.850105   48996 main.go:141] libmachine: Using API Version  1
	I0918 20:46:32.850133   48996 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 20:46:32.850492   48996 main.go:141] libmachine: () Calling .GetMachineName
	I0918 20:46:32.850683   48996 main.go:141] libmachine: (test-preload-684593) Calling .GetState
	I0918 20:46:32.852414   48996 main.go:141] libmachine: (test-preload-684593) Calling .DriverName
	I0918 20:46:32.854551   48996 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 20:46:32.855572   48996 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33813
	I0918 20:46:32.855903   48996 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0918 20:46:32.855930   48996 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0918 20:46:32.855952   48996 main.go:141] libmachine: (test-preload-684593) Calling .GetSSHHostname
	I0918 20:46:32.855991   48996 main.go:141] libmachine: () Calling .GetVersion
	I0918 20:46:32.856505   48996 main.go:141] libmachine: Using API Version  1
	I0918 20:46:32.856528   48996 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 20:46:32.856986   48996 main.go:141] libmachine: () Calling .GetMachineName
	I0918 20:46:32.857487   48996 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 20:46:32.857533   48996 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 20:46:32.858914   48996 main.go:141] libmachine: (test-preload-684593) DBG | domain test-preload-684593 has defined MAC address 52:54:00:97:40:d0 in network mk-test-preload-684593
	I0918 20:46:32.859394   48996 main.go:141] libmachine: (test-preload-684593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:40:d0", ip: ""} in network mk-test-preload-684593: {Iface:virbr1 ExpiryTime:2024-09-18 21:45:55 +0000 UTC Type:0 Mac:52:54:00:97:40:d0 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:test-preload-684593 Clientid:01:52:54:00:97:40:d0}
	I0918 20:46:32.859418   48996 main.go:141] libmachine: (test-preload-684593) DBG | domain test-preload-684593 has defined IP address 192.168.39.171 and MAC address 52:54:00:97:40:d0 in network mk-test-preload-684593
	I0918 20:46:32.859586   48996 main.go:141] libmachine: (test-preload-684593) Calling .GetSSHPort
	I0918 20:46:32.859753   48996 main.go:141] libmachine: (test-preload-684593) Calling .GetSSHKeyPath
	I0918 20:46:32.859867   48996 main.go:141] libmachine: (test-preload-684593) Calling .GetSSHUsername
	I0918 20:46:32.859969   48996 sshutil.go:53] new ssh client: &{IP:192.168.39.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/test-preload-684593/id_rsa Username:docker}
	I0918 20:46:32.902878   48996 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34625
	I0918 20:46:32.903245   48996 main.go:141] libmachine: () Calling .GetVersion
	I0918 20:46:32.903704   48996 main.go:141] libmachine: Using API Version  1
	I0918 20:46:32.903724   48996 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 20:46:32.904116   48996 main.go:141] libmachine: () Calling .GetMachineName
	I0918 20:46:32.904301   48996 main.go:141] libmachine: (test-preload-684593) Calling .GetState
	I0918 20:46:32.906043   48996 main.go:141] libmachine: (test-preload-684593) Calling .DriverName
	I0918 20:46:32.906275   48996 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0918 20:46:32.906308   48996 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0918 20:46:32.906323   48996 main.go:141] libmachine: (test-preload-684593) Calling .GetSSHHostname
	I0918 20:46:32.909003   48996 main.go:141] libmachine: (test-preload-684593) DBG | domain test-preload-684593 has defined MAC address 52:54:00:97:40:d0 in network mk-test-preload-684593
	I0918 20:46:32.909396   48996 main.go:141] libmachine: (test-preload-684593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:40:d0", ip: ""} in network mk-test-preload-684593: {Iface:virbr1 ExpiryTime:2024-09-18 21:45:55 +0000 UTC Type:0 Mac:52:54:00:97:40:d0 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:test-preload-684593 Clientid:01:52:54:00:97:40:d0}
	I0918 20:46:32.909427   48996 main.go:141] libmachine: (test-preload-684593) DBG | domain test-preload-684593 has defined IP address 192.168.39.171 and MAC address 52:54:00:97:40:d0 in network mk-test-preload-684593
	I0918 20:46:32.909614   48996 main.go:141] libmachine: (test-preload-684593) Calling .GetSSHPort
	I0918 20:46:32.909796   48996 main.go:141] libmachine: (test-preload-684593) Calling .GetSSHKeyPath
	I0918 20:46:32.909937   48996 main.go:141] libmachine: (test-preload-684593) Calling .GetSSHUsername
	I0918 20:46:32.910056   48996 sshutil.go:53] new ssh client: &{IP:192.168.39.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/test-preload-684593/id_rsa Username:docker}
	I0918 20:46:32.985657   48996 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0918 20:46:33.014022   48996 node_ready.go:35] waiting up to 6m0s for node "test-preload-684593" to be "Ready" ...
	I0918 20:46:33.073334   48996 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0918 20:46:33.112092   48996 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0918 20:46:34.001193   48996 main.go:141] libmachine: Making call to close driver server
	I0918 20:46:34.001227   48996 main.go:141] libmachine: (test-preload-684593) Calling .Close
	I0918 20:46:34.001488   48996 main.go:141] libmachine: Successfully made call to close driver server
	I0918 20:46:34.001506   48996 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 20:46:34.001515   48996 main.go:141] libmachine: Making call to close driver server
	I0918 20:46:34.001513   48996 main.go:141] libmachine: (test-preload-684593) DBG | Closing plugin on server side
	I0918 20:46:34.001522   48996 main.go:141] libmachine: (test-preload-684593) Calling .Close
	I0918 20:46:34.001762   48996 main.go:141] libmachine: Successfully made call to close driver server
	I0918 20:46:34.001777   48996 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 20:46:34.012355   48996 main.go:141] libmachine: Making call to close driver server
	I0918 20:46:34.012380   48996 main.go:141] libmachine: (test-preload-684593) Calling .Close
	I0918 20:46:34.012685   48996 main.go:141] libmachine: Successfully made call to close driver server
	I0918 20:46:34.012708   48996 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 20:46:34.033237   48996 main.go:141] libmachine: Making call to close driver server
	I0918 20:46:34.033259   48996 main.go:141] libmachine: (test-preload-684593) Calling .Close
	I0918 20:46:34.033514   48996 main.go:141] libmachine: Successfully made call to close driver server
	I0918 20:46:34.033535   48996 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 20:46:34.033545   48996 main.go:141] libmachine: Making call to close driver server
	I0918 20:46:34.033553   48996 main.go:141] libmachine: (test-preload-684593) Calling .Close
	I0918 20:46:34.033792   48996 main.go:141] libmachine: Successfully made call to close driver server
	I0918 20:46:34.033807   48996 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 20:46:34.033835   48996 main.go:141] libmachine: (test-preload-684593) DBG | Closing plugin on server side
	I0918 20:46:34.035679   48996 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0918 20:46:34.037007   48996 addons.go:510] duration metric: took 1.222406555s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0918 20:46:35.018233   48996 node_ready.go:53] node "test-preload-684593" has status "Ready":"False"
	I0918 20:46:37.020288   48996 node_ready.go:53] node "test-preload-684593" has status "Ready":"False"
	I0918 20:46:38.019454   48996 node_ready.go:49] node "test-preload-684593" has status "Ready":"True"
	I0918 20:46:38.019482   48996 node_ready.go:38] duration metric: took 5.005424347s for node "test-preload-684593" to be "Ready" ...
	I0918 20:46:38.019491   48996 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0918 20:46:38.027517   48996 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6d4b75cb6d-bw4dh" in "kube-system" namespace to be "Ready" ...
	I0918 20:46:38.032262   48996 pod_ready.go:93] pod "coredns-6d4b75cb6d-bw4dh" in "kube-system" namespace has status "Ready":"True"
	I0918 20:46:38.032285   48996 pod_ready.go:82] duration metric: took 4.736493ms for pod "coredns-6d4b75cb6d-bw4dh" in "kube-system" namespace to be "Ready" ...
	I0918 20:46:38.032295   48996 pod_ready.go:79] waiting up to 6m0s for pod "etcd-test-preload-684593" in "kube-system" namespace to be "Ready" ...
	I0918 20:46:38.036929   48996 pod_ready.go:93] pod "etcd-test-preload-684593" in "kube-system" namespace has status "Ready":"True"
	I0918 20:46:38.036949   48996 pod_ready.go:82] duration metric: took 4.646119ms for pod "etcd-test-preload-684593" in "kube-system" namespace to be "Ready" ...
	I0918 20:46:38.036957   48996 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-test-preload-684593" in "kube-system" namespace to be "Ready" ...
	I0918 20:46:38.041476   48996 pod_ready.go:93] pod "kube-apiserver-test-preload-684593" in "kube-system" namespace has status "Ready":"True"
	I0918 20:46:38.041494   48996 pod_ready.go:82] duration metric: took 4.531392ms for pod "kube-apiserver-test-preload-684593" in "kube-system" namespace to be "Ready" ...
	I0918 20:46:38.041502   48996 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-test-preload-684593" in "kube-system" namespace to be "Ready" ...
	I0918 20:46:38.045859   48996 pod_ready.go:93] pod "kube-controller-manager-test-preload-684593" in "kube-system" namespace has status "Ready":"True"
	I0918 20:46:38.045878   48996 pod_ready.go:82] duration metric: took 4.370194ms for pod "kube-controller-manager-test-preload-684593" in "kube-system" namespace to be "Ready" ...
	I0918 20:46:38.045889   48996 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-rl2xg" in "kube-system" namespace to be "Ready" ...
	I0918 20:46:38.419106   48996 pod_ready.go:93] pod "kube-proxy-rl2xg" in "kube-system" namespace has status "Ready":"True"
	I0918 20:46:38.419139   48996 pod_ready.go:82] duration metric: took 373.243633ms for pod "kube-proxy-rl2xg" in "kube-system" namespace to be "Ready" ...
	I0918 20:46:38.419153   48996 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-test-preload-684593" in "kube-system" namespace to be "Ready" ...
	I0918 20:46:38.819072   48996 pod_ready.go:93] pod "kube-scheduler-test-preload-684593" in "kube-system" namespace has status "Ready":"True"
	I0918 20:46:38.819097   48996 pod_ready.go:82] duration metric: took 399.936282ms for pod "kube-scheduler-test-preload-684593" in "kube-system" namespace to be "Ready" ...
	I0918 20:46:38.819108   48996 pod_ready.go:39] duration metric: took 799.608438ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
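
(The pod_ready loop above polls each system-critical pod until its Ready condition is True, skipping pods while the node itself still reports "Ready":"False". A rough client-go equivalent is sketched below; the kubeconfig path and pod name come from this run, but the code is illustrative and not minikube's pod_ready.go implementation.)

    // podready.go: wait for a kube-system pod to report the PodReady condition.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19667-7671/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        for {
            // Re-fetch the pod and inspect its conditions on every iteration.
            pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-6d4b75cb6d-bw4dh", metav1.GetOptions{})
            if err == nil {
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                        fmt.Println("pod is Ready")
                        return
                    }
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
    }
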
	I0918 20:46:38.819121   48996 api_server.go:52] waiting for apiserver process to appear ...
	I0918 20:46:38.819172   48996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 20:46:38.835539   48996 api_server.go:72] duration metric: took 6.0209748s to wait for apiserver process to appear ...
	I0918 20:46:38.835566   48996 api_server.go:88] waiting for apiserver healthz status ...
	I0918 20:46:38.835588   48996 api_server.go:253] Checking apiserver healthz at https://192.168.39.171:8443/healthz ...
	I0918 20:46:38.840673   48996 api_server.go:279] https://192.168.39.171:8443/healthz returned 200:
	ok
	I0918 20:46:38.841670   48996 api_server.go:141] control plane version: v1.24.4
	I0918 20:46:38.841695   48996 api_server.go:131] duration metric: took 6.121139ms to wait for apiserver health ...
	I0918 20:46:38.841704   48996 system_pods.go:43] waiting for kube-system pods to appear ...
	I0918 20:46:39.020481   48996 system_pods.go:59] 7 kube-system pods found
	I0918 20:46:39.020509   48996 system_pods.go:61] "coredns-6d4b75cb6d-bw4dh" [8778f526-b86f-4ab4-9366-3590fc08a39f] Running
	I0918 20:46:39.020514   48996 system_pods.go:61] "etcd-test-preload-684593" [52aa746d-ac4c-4d6f-b7ee-ebfa62e490dc] Running
	I0918 20:46:39.020519   48996 system_pods.go:61] "kube-apiserver-test-preload-684593" [07039c37-256f-45b7-93e6-03d317886783] Running
	I0918 20:46:39.020522   48996 system_pods.go:61] "kube-controller-manager-test-preload-684593" [5f8c2d25-131d-4764-b8b5-beb0fa8ee9bd] Running
	I0918 20:46:39.020525   48996 system_pods.go:61] "kube-proxy-rl2xg" [0a98fabf-112d-4411-ad53-c03e90ac3b08] Running
	I0918 20:46:39.020528   48996 system_pods.go:61] "kube-scheduler-test-preload-684593" [96bf65dd-06f5-4533-bc37-9e2e2804dfa0] Running
	I0918 20:46:39.020530   48996 system_pods.go:61] "storage-provisioner" [0b4b3785-a0b2-474f-82b0-f83463878ce5] Running
	I0918 20:46:39.020536   48996 system_pods.go:74] duration metric: took 178.826216ms to wait for pod list to return data ...
	I0918 20:46:39.020543   48996 default_sa.go:34] waiting for default service account to be created ...
	I0918 20:46:39.218200   48996 default_sa.go:45] found service account: "default"
	I0918 20:46:39.218223   48996 default_sa.go:55] duration metric: took 197.673881ms for default service account to be created ...
	I0918 20:46:39.218231   48996 system_pods.go:116] waiting for k8s-apps to be running ...
	I0918 20:46:39.423652   48996 system_pods.go:86] 7 kube-system pods found
	I0918 20:46:39.423687   48996 system_pods.go:89] "coredns-6d4b75cb6d-bw4dh" [8778f526-b86f-4ab4-9366-3590fc08a39f] Running
	I0918 20:46:39.423693   48996 system_pods.go:89] "etcd-test-preload-684593" [52aa746d-ac4c-4d6f-b7ee-ebfa62e490dc] Running
	I0918 20:46:39.423696   48996 system_pods.go:89] "kube-apiserver-test-preload-684593" [07039c37-256f-45b7-93e6-03d317886783] Running
	I0918 20:46:39.423700   48996 system_pods.go:89] "kube-controller-manager-test-preload-684593" [5f8c2d25-131d-4764-b8b5-beb0fa8ee9bd] Running
	I0918 20:46:39.423703   48996 system_pods.go:89] "kube-proxy-rl2xg" [0a98fabf-112d-4411-ad53-c03e90ac3b08] Running
	I0918 20:46:39.423706   48996 system_pods.go:89] "kube-scheduler-test-preload-684593" [96bf65dd-06f5-4533-bc37-9e2e2804dfa0] Running
	I0918 20:46:39.423709   48996 system_pods.go:89] "storage-provisioner" [0b4b3785-a0b2-474f-82b0-f83463878ce5] Running
	I0918 20:46:39.423714   48996 system_pods.go:126] duration metric: took 205.479588ms to wait for k8s-apps to be running ...
	I0918 20:46:39.423720   48996 system_svc.go:44] waiting for kubelet service to be running ....
	I0918 20:46:39.423760   48996 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0918 20:46:39.439446   48996 system_svc.go:56] duration metric: took 15.714801ms WaitForService to wait for kubelet
	I0918 20:46:39.439476   48996 kubeadm.go:582] duration metric: took 6.624917195s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0918 20:46:39.439493   48996 node_conditions.go:102] verifying NodePressure condition ...
	I0918 20:46:39.617988   48996 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0918 20:46:39.618017   48996 node_conditions.go:123] node cpu capacity is 2
	I0918 20:46:39.618027   48996 node_conditions.go:105] duration metric: took 178.529457ms to run NodePressure ...
	I0918 20:46:39.618036   48996 start.go:241] waiting for startup goroutines ...
	I0918 20:46:39.618043   48996 start.go:246] waiting for cluster config update ...
	I0918 20:46:39.618051   48996 start.go:255] writing updated cluster config ...
	I0918 20:46:39.618287   48996 ssh_runner.go:195] Run: rm -f paused
	I0918 20:46:39.665744   48996 start.go:600] kubectl: 1.31.1, cluster: 1.24.4 (minor skew: 7)
	I0918 20:46:39.668117   48996 out.go:201] 
	W0918 20:46:39.669731   48996 out.go:270] ! /usr/local/bin/kubectl is version 1.31.1, which may have incompatibilities with Kubernetes 1.24.4.
	I0918 20:46:39.671312   48996 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I0918 20:46:39.672786   48996 out.go:177] * Done! kubectl is now configured to use "test-preload-684593" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 18 20:46:40 test-preload-684593 crio[692]: time="2024-09-18 20:46:40.606621265Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726692400606598746,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d024af88-fc57-47bf-a091-d0bb88744227 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 20:46:40 test-preload-684593 crio[692]: time="2024-09-18 20:46:40.607145257Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f4f37664-cf51-4fba-a4ba-815064938b7d name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 20:46:40 test-preload-684593 crio[692]: time="2024-09-18 20:46:40.607199585Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f4f37664-cf51-4fba-a4ba-815064938b7d name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 20:46:40 test-preload-684593 crio[692]: time="2024-09-18 20:46:40.607388858Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:16d399d89535cf00fdab890ef76e81fac4692c4c1fa594bf424cb8c0f9c8e7c8,PodSandboxId:2a9e270bbe0a89210a09aa2cde362342e14336a0acff9f44295613d8a1898f3a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1726692396185423465,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-bw4dh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8778f526-b86f-4ab4-9366-3590fc08a39f,},Annotations:map[string]string{io.kubernetes.container.hash: b9f58a24,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5603c6412d59fe6d70f67ab740da776ab49a6a2e707c0ca1f5cb085a422ac08e,PodSandboxId:2bf8ce3b13d29e01c3ae4ac4fb4420a1268b11f059d7ddc0c851253766784bdc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1726692389234490898,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rl2xg,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 0a98fabf-112d-4411-ad53-c03e90ac3b08,},Annotations:map[string]string{io.kubernetes.container.hash: a0b52b72,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fe536b446afe60c37b963cd943f2f59e8c26d8522db7383eb94e92191734034,PodSandboxId:a2c21ef84c4083952c7b508d87fd3b87f912489433de7220566624bf721a5dc3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726692389008193564,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b
4b3785-a0b2-474f-82b0-f83463878ce5,},Annotations:map[string]string{io.kubernetes.container.hash: bea54a56,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:907914339dfc16c8675b5baa89ec3535b2010b41ff6e42d2b0360f84fd7250b8,PodSandboxId:9ab383c9e30680d2c6a88417f1a10ffb3ac5e61f25519a3dbce3bb0e18328f2c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1726692382472744833,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-684593,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 48c624534db278493d25314f60ebdd6d,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:186efc0ce41f48852b2ec6e6f2fcdb970e10561f5315fcd1ede9e625dd428051,PodSandboxId:040dcec6e34d9812d6cc9cfbc266b9f035c477925f1893307ce4fa4beaf35492,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1726692382453287338,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-684593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3a599e757988816580beb1
d6b19cc5f,},Annotations:map[string]string{io.kubernetes.container.hash: d27f5e50,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c75632796d5eb6a2efa815619a605ab85d7b2317bf6241cb1a7251cfd3ad58b7,PodSandboxId:bcab6007c84dde5a094502ae82ad33f65062aebfcc08b65d1f9f0ec760ece344,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1726692382412366974,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-684593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aef3cb0ab92042f1128e12b7a12647ed,}
,Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ea9d7cd41a1c52dbc8968f324ed4ec5a39deec891d20b024dae319a92dab6af,PodSandboxId:60489d43a92c1aee3f19b938d0de6af68335c437381f274df58f090baeb9b10f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1726692382362980998,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-684593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28efde34bb433deddb03eed65e98a468,},Annotation
s:map[string]string{io.kubernetes.container.hash: 2f7a8e23,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f4f37664-cf51-4fba-a4ba-815064938b7d name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 20:46:40 test-preload-684593 crio[692]: time="2024-09-18 20:46:40.644960905Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0b386b5b-fb3a-4385-8d61-23c41be13374 name=/runtime.v1.RuntimeService/Version
	Sep 18 20:46:40 test-preload-684593 crio[692]: time="2024-09-18 20:46:40.645149099Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0b386b5b-fb3a-4385-8d61-23c41be13374 name=/runtime.v1.RuntimeService/Version
	Sep 18 20:46:40 test-preload-684593 crio[692]: time="2024-09-18 20:46:40.649181338Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=425a2bbf-6de2-42ab-9b68-91e8bd1caf4c name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 20:46:40 test-preload-684593 crio[692]: time="2024-09-18 20:46:40.649832281Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726692400649805364,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=425a2bbf-6de2-42ab-9b68-91e8bd1caf4c name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 20:46:40 test-preload-684593 crio[692]: time="2024-09-18 20:46:40.650422860Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a35adcb0-f224-4b62-8c6b-36c3c53fa8e0 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 20:46:40 test-preload-684593 crio[692]: time="2024-09-18 20:46:40.650476703Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a35adcb0-f224-4b62-8c6b-36c3c53fa8e0 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 20:46:40 test-preload-684593 crio[692]: time="2024-09-18 20:46:40.650709813Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:16d399d89535cf00fdab890ef76e81fac4692c4c1fa594bf424cb8c0f9c8e7c8,PodSandboxId:2a9e270bbe0a89210a09aa2cde362342e14336a0acff9f44295613d8a1898f3a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1726692396185423465,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-bw4dh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8778f526-b86f-4ab4-9366-3590fc08a39f,},Annotations:map[string]string{io.kubernetes.container.hash: b9f58a24,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5603c6412d59fe6d70f67ab740da776ab49a6a2e707c0ca1f5cb085a422ac08e,PodSandboxId:2bf8ce3b13d29e01c3ae4ac4fb4420a1268b11f059d7ddc0c851253766784bdc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1726692389234490898,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rl2xg,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 0a98fabf-112d-4411-ad53-c03e90ac3b08,},Annotations:map[string]string{io.kubernetes.container.hash: a0b52b72,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fe536b446afe60c37b963cd943f2f59e8c26d8522db7383eb94e92191734034,PodSandboxId:a2c21ef84c4083952c7b508d87fd3b87f912489433de7220566624bf721a5dc3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726692389008193564,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b
4b3785-a0b2-474f-82b0-f83463878ce5,},Annotations:map[string]string{io.kubernetes.container.hash: bea54a56,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:907914339dfc16c8675b5baa89ec3535b2010b41ff6e42d2b0360f84fd7250b8,PodSandboxId:9ab383c9e30680d2c6a88417f1a10ffb3ac5e61f25519a3dbce3bb0e18328f2c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1726692382472744833,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-684593,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 48c624534db278493d25314f60ebdd6d,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:186efc0ce41f48852b2ec6e6f2fcdb970e10561f5315fcd1ede9e625dd428051,PodSandboxId:040dcec6e34d9812d6cc9cfbc266b9f035c477925f1893307ce4fa4beaf35492,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1726692382453287338,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-684593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3a599e757988816580beb1
d6b19cc5f,},Annotations:map[string]string{io.kubernetes.container.hash: d27f5e50,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c75632796d5eb6a2efa815619a605ab85d7b2317bf6241cb1a7251cfd3ad58b7,PodSandboxId:bcab6007c84dde5a094502ae82ad33f65062aebfcc08b65d1f9f0ec760ece344,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1726692382412366974,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-684593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aef3cb0ab92042f1128e12b7a12647ed,}
,Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ea9d7cd41a1c52dbc8968f324ed4ec5a39deec891d20b024dae319a92dab6af,PodSandboxId:60489d43a92c1aee3f19b938d0de6af68335c437381f274df58f090baeb9b10f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1726692382362980998,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-684593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28efde34bb433deddb03eed65e98a468,},Annotation
s:map[string]string{io.kubernetes.container.hash: 2f7a8e23,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a35adcb0-f224-4b62-8c6b-36c3c53fa8e0 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 20:46:40 test-preload-684593 crio[692]: time="2024-09-18 20:46:40.690949198Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=29f00b5a-7215-4c66-9392-adb61a4bd3fc name=/runtime.v1.RuntimeService/Version
	Sep 18 20:46:40 test-preload-684593 crio[692]: time="2024-09-18 20:46:40.691025591Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=29f00b5a-7215-4c66-9392-adb61a4bd3fc name=/runtime.v1.RuntimeService/Version
	Sep 18 20:46:40 test-preload-684593 crio[692]: time="2024-09-18 20:46:40.692500879Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=70ae8d8e-06e2-42e7-9918-f537c8d29ea1 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 20:46:40 test-preload-684593 crio[692]: time="2024-09-18 20:46:40.693315771Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726692400693291340,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=70ae8d8e-06e2-42e7-9918-f537c8d29ea1 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 20:46:40 test-preload-684593 crio[692]: time="2024-09-18 20:46:40.693871684Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2cdc242a-dcee-4d4c-8409-27445173509d name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 20:46:40 test-preload-684593 crio[692]: time="2024-09-18 20:46:40.693949250Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2cdc242a-dcee-4d4c-8409-27445173509d name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 20:46:40 test-preload-684593 crio[692]: time="2024-09-18 20:46:40.694146188Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:16d399d89535cf00fdab890ef76e81fac4692c4c1fa594bf424cb8c0f9c8e7c8,PodSandboxId:2a9e270bbe0a89210a09aa2cde362342e14336a0acff9f44295613d8a1898f3a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1726692396185423465,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-bw4dh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8778f526-b86f-4ab4-9366-3590fc08a39f,},Annotations:map[string]string{io.kubernetes.container.hash: b9f58a24,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5603c6412d59fe6d70f67ab740da776ab49a6a2e707c0ca1f5cb085a422ac08e,PodSandboxId:2bf8ce3b13d29e01c3ae4ac4fb4420a1268b11f059d7ddc0c851253766784bdc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1726692389234490898,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rl2xg,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 0a98fabf-112d-4411-ad53-c03e90ac3b08,},Annotations:map[string]string{io.kubernetes.container.hash: a0b52b72,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fe536b446afe60c37b963cd943f2f59e8c26d8522db7383eb94e92191734034,PodSandboxId:a2c21ef84c4083952c7b508d87fd3b87f912489433de7220566624bf721a5dc3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726692389008193564,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b
4b3785-a0b2-474f-82b0-f83463878ce5,},Annotations:map[string]string{io.kubernetes.container.hash: bea54a56,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:907914339dfc16c8675b5baa89ec3535b2010b41ff6e42d2b0360f84fd7250b8,PodSandboxId:9ab383c9e30680d2c6a88417f1a10ffb3ac5e61f25519a3dbce3bb0e18328f2c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1726692382472744833,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-684593,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 48c624534db278493d25314f60ebdd6d,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:186efc0ce41f48852b2ec6e6f2fcdb970e10561f5315fcd1ede9e625dd428051,PodSandboxId:040dcec6e34d9812d6cc9cfbc266b9f035c477925f1893307ce4fa4beaf35492,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1726692382453287338,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-684593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3a599e757988816580beb1
d6b19cc5f,},Annotations:map[string]string{io.kubernetes.container.hash: d27f5e50,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c75632796d5eb6a2efa815619a605ab85d7b2317bf6241cb1a7251cfd3ad58b7,PodSandboxId:bcab6007c84dde5a094502ae82ad33f65062aebfcc08b65d1f9f0ec760ece344,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1726692382412366974,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-684593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aef3cb0ab92042f1128e12b7a12647ed,}
,Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ea9d7cd41a1c52dbc8968f324ed4ec5a39deec891d20b024dae319a92dab6af,PodSandboxId:60489d43a92c1aee3f19b938d0de6af68335c437381f274df58f090baeb9b10f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1726692382362980998,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-684593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28efde34bb433deddb03eed65e98a468,},Annotation
s:map[string]string{io.kubernetes.container.hash: 2f7a8e23,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2cdc242a-dcee-4d4c-8409-27445173509d name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 20:46:40 test-preload-684593 crio[692]: time="2024-09-18 20:46:40.725464520Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d43a61bb-0bd0-4026-b1a8-97510c22d39b name=/runtime.v1.RuntimeService/Version
	Sep 18 20:46:40 test-preload-684593 crio[692]: time="2024-09-18 20:46:40.725590262Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d43a61bb-0bd0-4026-b1a8-97510c22d39b name=/runtime.v1.RuntimeService/Version
	Sep 18 20:46:40 test-preload-684593 crio[692]: time="2024-09-18 20:46:40.726761298Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0613034a-91f9-4af7-8b1a-6a31c2c14167 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 20:46:40 test-preload-684593 crio[692]: time="2024-09-18 20:46:40.727292090Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726692400727266951,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0613034a-91f9-4af7-8b1a-6a31c2c14167 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 20:46:40 test-preload-684593 crio[692]: time="2024-09-18 20:46:40.729647619Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b80c9fbc-5a08-486c-b3be-ba7b07610955 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 20:46:40 test-preload-684593 crio[692]: time="2024-09-18 20:46:40.729751203Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b80c9fbc-5a08-486c-b3be-ba7b07610955 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 20:46:40 test-preload-684593 crio[692]: time="2024-09-18 20:46:40.730065714Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:16d399d89535cf00fdab890ef76e81fac4692c4c1fa594bf424cb8c0f9c8e7c8,PodSandboxId:2a9e270bbe0a89210a09aa2cde362342e14336a0acff9f44295613d8a1898f3a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1726692396185423465,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-bw4dh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8778f526-b86f-4ab4-9366-3590fc08a39f,},Annotations:map[string]string{io.kubernetes.container.hash: b9f58a24,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5603c6412d59fe6d70f67ab740da776ab49a6a2e707c0ca1f5cb085a422ac08e,PodSandboxId:2bf8ce3b13d29e01c3ae4ac4fb4420a1268b11f059d7ddc0c851253766784bdc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1726692389234490898,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rl2xg,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 0a98fabf-112d-4411-ad53-c03e90ac3b08,},Annotations:map[string]string{io.kubernetes.container.hash: a0b52b72,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fe536b446afe60c37b963cd943f2f59e8c26d8522db7383eb94e92191734034,PodSandboxId:a2c21ef84c4083952c7b508d87fd3b87f912489433de7220566624bf721a5dc3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726692389008193564,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b
4b3785-a0b2-474f-82b0-f83463878ce5,},Annotations:map[string]string{io.kubernetes.container.hash: bea54a56,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:907914339dfc16c8675b5baa89ec3535b2010b41ff6e42d2b0360f84fd7250b8,PodSandboxId:9ab383c9e30680d2c6a88417f1a10ffb3ac5e61f25519a3dbce3bb0e18328f2c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1726692382472744833,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-684593,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 48c624534db278493d25314f60ebdd6d,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:186efc0ce41f48852b2ec6e6f2fcdb970e10561f5315fcd1ede9e625dd428051,PodSandboxId:040dcec6e34d9812d6cc9cfbc266b9f035c477925f1893307ce4fa4beaf35492,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1726692382453287338,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-684593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3a599e757988816580beb1
d6b19cc5f,},Annotations:map[string]string{io.kubernetes.container.hash: d27f5e50,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c75632796d5eb6a2efa815619a605ab85d7b2317bf6241cb1a7251cfd3ad58b7,PodSandboxId:bcab6007c84dde5a094502ae82ad33f65062aebfcc08b65d1f9f0ec760ece344,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1726692382412366974,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-684593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aef3cb0ab92042f1128e12b7a12647ed,}
,Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ea9d7cd41a1c52dbc8968f324ed4ec5a39deec891d20b024dae319a92dab6af,PodSandboxId:60489d43a92c1aee3f19b938d0de6af68335c437381f274df58f090baeb9b10f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1726692382362980998,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-684593,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28efde34bb433deddb03eed65e98a468,},Annotation
s:map[string]string{io.kubernetes.container.hash: 2f7a8e23,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b80c9fbc-5a08-486c-b3be-ba7b07610955 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	16d399d89535c       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   4 seconds ago       Running             coredns                   1                   2a9e270bbe0a8       coredns-6d4b75cb6d-bw4dh
	5603c6412d59f       7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7   11 seconds ago      Running             kube-proxy                1                   2bf8ce3b13d29       kube-proxy-rl2xg
	5fe536b446afe       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   11 seconds ago      Running             storage-provisioner       1                   a2c21ef84c408       storage-provisioner
	907914339dfc1       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   18 seconds ago      Running             kube-controller-manager   1                   9ab383c9e3068       kube-controller-manager-test-preload-684593
	186efc0ce41f4       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   18 seconds ago      Running             etcd                      1                   040dcec6e34d9       etcd-test-preload-684593
	c75632796d5eb       03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9   18 seconds ago      Running             kube-scheduler            1                   bcab6007c84dd       kube-scheduler-test-preload-684593
	9ea9d7cd41a1c       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   18 seconds ago      Running             kube-apiserver            1                   60489d43a92c1       kube-apiserver-test-preload-684593
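The container-status table above is the report's snapshot of CRI-O state at collection time. As a minimal sketch only, assuming the test-preload-684593 guest is still running and that crictl is available inside it (it normally ships in the minikube ISO), the same listing could be pulled live from the node:

    # list all CRI-O containers, including exited ones
    minikube -p test-preload-684593 ssh "sudo crictl ps -a"

    # inspect one container from the table, e.g. the coredns entry
    minikube -p test-preload-684593 ssh "sudo crictl inspect 16d399d89535c"

crictl accepts the truncated container IDs shown in the table, so the values can be copied from the snapshot as-is.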
	
	
	==> coredns [16d399d89535cf00fdab890ef76e81fac4692c4c1fa594bf424cb8c0f9c8e7c8] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:58223 - 62738 "HINFO IN 3019824985050769720.2669289427736249661. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011611836s
	
	
	==> describe nodes <==
	Name:               test-preload-684593
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-684593
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=85073601a832bd4bbda5d11fa91feafff6ec6b91
	                    minikube.k8s.io/name=test-preload-684593
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_18T20_45_06_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 18 Sep 2024 20:45:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-684593
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 18 Sep 2024 20:46:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 18 Sep 2024 20:46:37 +0000   Wed, 18 Sep 2024 20:45:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 18 Sep 2024 20:46:37 +0000   Wed, 18 Sep 2024 20:45:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 18 Sep 2024 20:46:37 +0000   Wed, 18 Sep 2024 20:45:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 18 Sep 2024 20:46:37 +0000   Wed, 18 Sep 2024 20:46:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.171
	  Hostname:    test-preload-684593
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 1adb5b40203b4c1cb5e7f50241791f94
	  System UUID:                1adb5b40-203b-4c1c-b5e7-f50241791f94
	  Boot ID:                    0827f342-4181-42c9-a9ef-f13bf05f0f03
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-bw4dh                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     81s
	  kube-system                 etcd-test-preload-684593                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         96s
	  kube-system                 kube-apiserver-test-preload-684593             250m (12%)    0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 kube-controller-manager-test-preload-684593    200m (10%)    0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 kube-proxy-rl2xg                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         81s
	  kube-system                 kube-scheduler-test-preload-684593             100m (5%)     0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         80s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 11s                  kube-proxy       
	  Normal  Starting                 80s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  102s (x5 over 102s)  kubelet          Node test-preload-684593 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    102s (x5 over 102s)  kubelet          Node test-preload-684593 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     102s (x5 over 102s)  kubelet          Node test-preload-684593 status is now: NodeHasSufficientPID
	  Normal  Starting                 94s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  94s                  kubelet          Node test-preload-684593 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    94s                  kubelet          Node test-preload-684593 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     94s                  kubelet          Node test-preload-684593 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  94s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                83s                  kubelet          Node test-preload-684593 status is now: NodeReady
	  Normal  RegisteredNode           82s                  node-controller  Node test-preload-684593 event: Registered Node test-preload-684593 in Controller
	  Normal  Starting                 19s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  19s (x8 over 19s)    kubelet          Node test-preload-684593 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19s (x8 over 19s)    kubelet          Node test-preload-684593 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19s (x7 over 19s)    kubelet          Node test-preload-684593 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  19s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           0s                   node-controller  Node test-preload-684593 event: Registered Node test-preload-684593 in Controller
	
	
	==> dmesg <==
	[Sep18 20:45] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051394] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.038905] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.776508] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.898079] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.570845] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Sep18 20:46] systemd-fstab-generator[614]: Ignoring "noauto" option for root device
	[  +0.057906] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.059249] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +0.195801] systemd-fstab-generator[640]: Ignoring "noauto" option for root device
	[  +0.127803] systemd-fstab-generator[653]: Ignoring "noauto" option for root device
	[  +0.289834] systemd-fstab-generator[683]: Ignoring "noauto" option for root device
	[ +12.695694] systemd-fstab-generator[1012]: Ignoring "noauto" option for root device
	[  +0.058648] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.778493] systemd-fstab-generator[1141]: Ignoring "noauto" option for root device
	[  +4.471093] kauditd_printk_skb: 105 callbacks suppressed
	[  +6.957433] systemd-fstab-generator[1760]: Ignoring "noauto" option for root device
	[  +0.089680] kauditd_printk_skb: 31 callbacks suppressed
	[  +7.302534] kauditd_printk_skb: 33 callbacks suppressed
	
	
	==> etcd [186efc0ce41f48852b2ec6e6f2fcdb970e10561f5315fcd1ede9e625dd428051] <==
	{"level":"info","ts":"2024-09-18T20:46:22.915Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"4e6b9cdcc1ed933f","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-09-18T20:46:22.921Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-09-18T20:46:22.921Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4e6b9cdcc1ed933f switched to configuration voters=(5650782629426729791)"}
	{"level":"info","ts":"2024-09-18T20:46:22.921Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"c9ee22fca1de3e71","local-member-id":"4e6b9cdcc1ed933f","added-peer-id":"4e6b9cdcc1ed933f","added-peer-peer-urls":["https://192.168.39.171:2380"]}
	{"level":"info","ts":"2024-09-18T20:46:22.922Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"c9ee22fca1de3e71","local-member-id":"4e6b9cdcc1ed933f","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-18T20:46:22.922Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-18T20:46:22.927Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-18T20:46:22.927Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"4e6b9cdcc1ed933f","initial-advertise-peer-urls":["https://192.168.39.171:2380"],"listen-peer-urls":["https://192.168.39.171:2380"],"advertise-client-urls":["https://192.168.39.171:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.171:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-18T20:46:22.927Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-18T20:46:22.927Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.171:2380"}
	{"level":"info","ts":"2024-09-18T20:46:22.927Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.171:2380"}
	{"level":"info","ts":"2024-09-18T20:46:24.771Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4e6b9cdcc1ed933f is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-18T20:46:24.771Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4e6b9cdcc1ed933f became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-18T20:46:24.771Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4e6b9cdcc1ed933f received MsgPreVoteResp from 4e6b9cdcc1ed933f at term 2"}
	{"level":"info","ts":"2024-09-18T20:46:24.771Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4e6b9cdcc1ed933f became candidate at term 3"}
	{"level":"info","ts":"2024-09-18T20:46:24.771Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4e6b9cdcc1ed933f received MsgVoteResp from 4e6b9cdcc1ed933f at term 3"}
	{"level":"info","ts":"2024-09-18T20:46:24.771Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4e6b9cdcc1ed933f became leader at term 3"}
	{"level":"info","ts":"2024-09-18T20:46:24.771Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 4e6b9cdcc1ed933f elected leader 4e6b9cdcc1ed933f at term 3"}
	{"level":"info","ts":"2024-09-18T20:46:24.772Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"4e6b9cdcc1ed933f","local-member-attributes":"{Name:test-preload-684593 ClientURLs:[https://192.168.39.171:2379]}","request-path":"/0/members/4e6b9cdcc1ed933f/attributes","cluster-id":"c9ee22fca1de3e71","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-18T20:46:24.772Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-18T20:46:24.774Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.171:2379"}
	{"level":"info","ts":"2024-09-18T20:46:24.774Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-18T20:46:24.775Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-18T20:46:24.775Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-18T20:46:24.775Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 20:46:41 up 0 min,  0 users,  load average: 0.84, 0.23, 0.08
	Linux test-preload-684593 5.10.207 #1 SMP Mon Sep 16 15:00:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [9ea9d7cd41a1c52dbc8968f324ed4ec5a39deec891d20b024dae319a92dab6af] <==
	I0918 20:46:27.216422       1 controller.go:85] Starting OpenAPI V3 controller
	I0918 20:46:27.216678       1 naming_controller.go:291] Starting NamingConditionController
	I0918 20:46:27.217135       1 establishing_controller.go:76] Starting EstablishingController
	I0918 20:46:27.217443       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0918 20:46:27.217493       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0918 20:46:27.217595       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0918 20:46:27.281542       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0918 20:46:27.298181       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0918 20:46:27.304479       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0918 20:46:27.317949       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0918 20:46:27.337551       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	E0918 20:46:27.341595       1 controller.go:169] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0918 20:46:27.381948       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0918 20:46:27.386076       1 cache.go:39] Caches are synced for autoregister controller
	I0918 20:46:27.386247       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0918 20:46:27.879109       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0918 20:46:28.185041       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0918 20:46:28.931927       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0918 20:46:28.954681       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0918 20:46:29.012257       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0918 20:46:29.036266       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0918 20:46:29.057934       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0918 20:46:29.504716       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0918 20:46:40.228961       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0918 20:46:40.328145       1 controller.go:611] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [907914339dfc16c8675b5baa89ec3535b2010b41ff6e42d2b0360f84fd7250b8] <==
	I0918 20:46:40.257824       1 shared_informer.go:262] Caches are synced for namespace
	I0918 20:46:40.261256       1 shared_informer.go:262] Caches are synced for service account
	I0918 20:46:40.263611       1 shared_informer.go:262] Caches are synced for bootstrap_signer
	I0918 20:46:40.271314       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-client
	I0918 20:46:40.271480       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0918 20:46:40.273471       1 shared_informer.go:262] Caches are synced for taint
	I0918 20:46:40.273652       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	I0918 20:46:40.273747       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	W0918 20:46:40.273851       1 node_lifecycle_controller.go:1014] Missing timestamp for Node test-preload-684593. Assuming now as a timestamp.
	I0918 20:46:40.273922       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0918 20:46:40.274140       1 event.go:294] "Event occurred" object="test-preload-684593" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node test-preload-684593 event: Registered Node test-preload-684593 in Controller"
	I0918 20:46:40.283581       1 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
	I0918 20:46:40.351398       1 shared_informer.go:262] Caches are synced for resource quota
	I0918 20:46:40.372799       1 shared_informer.go:262] Caches are synced for deployment
	I0918 20:46:40.381065       1 shared_informer.go:262] Caches are synced for PV protection
	I0918 20:46:40.411898       1 shared_informer.go:262] Caches are synced for resource quota
	I0918 20:46:40.422630       1 shared_informer.go:262] Caches are synced for ReplicaSet
	I0918 20:46:40.424947       1 shared_informer.go:262] Caches are synced for persistent volume
	I0918 20:46:40.426608       1 shared_informer.go:262] Caches are synced for attach detach
	I0918 20:46:40.430012       1 shared_informer.go:262] Caches are synced for disruption
	I0918 20:46:40.430577       1 disruption.go:371] Sending events to api server.
	I0918 20:46:40.465958       1 shared_informer.go:262] Caches are synced for expand
	I0918 20:46:40.863699       1 shared_informer.go:262] Caches are synced for garbage collector
	I0918 20:46:40.863729       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0918 20:46:40.893183       1 shared_informer.go:262] Caches are synced for garbage collector
	
	
	==> kube-proxy [5603c6412d59fe6d70f67ab740da776ab49a6a2e707c0ca1f5cb085a422ac08e] <==
	I0918 20:46:29.465405       1 node.go:163] Successfully retrieved node IP: 192.168.39.171
	I0918 20:46:29.465486       1 server_others.go:138] "Detected node IP" address="192.168.39.171"
	I0918 20:46:29.465577       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0918 20:46:29.496581       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0918 20:46:29.496611       1 server_others.go:206] "Using iptables Proxier"
	I0918 20:46:29.497318       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0918 20:46:29.498045       1 server.go:661] "Version info" version="v1.24.4"
	I0918 20:46:29.498058       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0918 20:46:29.499760       1 config.go:317] "Starting service config controller"
	I0918 20:46:29.499960       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0918 20:46:29.500012       1 config.go:226] "Starting endpoint slice config controller"
	I0918 20:46:29.500030       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0918 20:46:29.501229       1 config.go:444] "Starting node config controller"
	I0918 20:46:29.501963       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0918 20:46:29.600580       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0918 20:46:29.600648       1 shared_informer.go:262] Caches are synced for service config
	I0918 20:46:29.602401       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [c75632796d5eb6a2efa815619a605ab85d7b2317bf6241cb1a7251cfd3ad58b7] <==
	I0918 20:46:23.107335       1 serving.go:348] Generated self-signed cert in-memory
	W0918 20:46:27.308284       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0918 20:46:27.310604       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0918 20:46:27.310763       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0918 20:46:27.310791       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0918 20:46:27.368722       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.4"
	I0918 20:46:27.368760       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0918 20:46:27.374557       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0918 20:46:27.374726       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0918 20:46:27.374759       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0918 20:46:27.377929       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0918 20:46:27.475047       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 18 20:46:27 test-preload-684593 kubelet[1148]: I0918 20:46:27.702403    1148 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0a98fabf-112d-4411-ad53-c03e90ac3b08-lib-modules\") pod \"kube-proxy-rl2xg\" (UID: \"0a98fabf-112d-4411-ad53-c03e90ac3b08\") " pod="kube-system/kube-proxy-rl2xg"
	Sep 18 20:46:27 test-preload-684593 kubelet[1148]: I0918 20:46:27.702431    1148 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8778f526-b86f-4ab4-9366-3590fc08a39f-config-volume\") pod \"coredns-6d4b75cb6d-bw4dh\" (UID: \"8778f526-b86f-4ab4-9366-3590fc08a39f\") " pod="kube-system/coredns-6d4b75cb6d-bw4dh"
	Sep 18 20:46:27 test-preload-684593 kubelet[1148]: I0918 20:46:27.702450    1148 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/0b4b3785-a0b2-474f-82b0-f83463878ce5-tmp\") pod \"storage-provisioner\" (UID: \"0b4b3785-a0b2-474f-82b0-f83463878ce5\") " pod="kube-system/storage-provisioner"
	Sep 18 20:46:27 test-preload-684593 kubelet[1148]: I0918 20:46:27.702473    1148 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0a98fabf-112d-4411-ad53-c03e90ac3b08-kube-proxy\") pod \"kube-proxy-rl2xg\" (UID: \"0a98fabf-112d-4411-ad53-c03e90ac3b08\") " pod="kube-system/kube-proxy-rl2xg"
	Sep 18 20:46:27 test-preload-684593 kubelet[1148]: I0918 20:46:27.702487    1148 reconciler.go:159] "Reconciler: start to sync state"
	Sep 18 20:46:27 test-preload-684593 kubelet[1148]: I0918 20:46:27.775010    1148 kubelet_volumes.go:133] "Cleaned up orphaned volume from pod" podUID=3d6ecfce-102b-450b-a46e-94635e80f223 path="/var/lib/kubelet/pods/3d6ecfce-102b-450b-a46e-94635e80f223/volumes/kubernetes.io~projected/kube-api-access-4444n"
	Sep 18 20:46:27 test-preload-684593 kubelet[1148]: E0918 20:46:27.775497    1148 kubelet_volumes.go:245] "There were many similar errors. Turn up verbosity to see them." err="orphaned pod \"3d6ecfce-102b-450b-a46e-94635e80f223\" found, but failed to rmdir() volume at path /var/lib/kubelet/pods/3d6ecfce-102b-450b-a46e-94635e80f223/volumes/kubernetes.io~configmap/config-volume: directory not empty" numErrs=2
	Sep 18 20:46:28 test-preload-684593 kubelet[1148]: I0918 20:46:28.140109    1148 reconciler.go:201] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3d6ecfce-102b-450b-a46e-94635e80f223-config-volume\") pod \"3d6ecfce-102b-450b-a46e-94635e80f223\" (UID: \"3d6ecfce-102b-450b-a46e-94635e80f223\") "
	Sep 18 20:46:28 test-preload-684593 kubelet[1148]: I0918 20:46:28.140172    1148 reconciler.go:201] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4444n\" (UniqueName: \"kubernetes.io/projected/3d6ecfce-102b-450b-a46e-94635e80f223-kube-api-access-4444n\") pod \"3d6ecfce-102b-450b-a46e-94635e80f223\" (UID: \"3d6ecfce-102b-450b-a46e-94635e80f223\") "
	Sep 18 20:46:28 test-preload-684593 kubelet[1148]: E0918 20:46:28.140913    1148 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 18 20:46:28 test-preload-684593 kubelet[1148]: E0918 20:46:28.141020    1148 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/8778f526-b86f-4ab4-9366-3590fc08a39f-config-volume podName:8778f526-b86f-4ab4-9366-3590fc08a39f nodeName:}" failed. No retries permitted until 2024-09-18 20:46:28.640986602 +0000 UTC m=+7.116336785 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/8778f526-b86f-4ab4-9366-3590fc08a39f-config-volume") pod "coredns-6d4b75cb6d-bw4dh" (UID: "8778f526-b86f-4ab4-9366-3590fc08a39f") : object "kube-system"/"coredns" not registered
	Sep 18 20:46:28 test-preload-684593 kubelet[1148]: W0918 20:46:28.141585    1148 empty_dir.go:493] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/3d6ecfce-102b-450b-a46e-94635e80f223/volumes/kubernetes.io~projected/kube-api-access-4444n
	Sep 18 20:46:28 test-preload-684593 kubelet[1148]: I0918 20:46:28.141620    1148 operation_generator.go:863] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d6ecfce-102b-450b-a46e-94635e80f223-kube-api-access-4444n" (OuterVolumeSpecName: "kube-api-access-4444n") pod "3d6ecfce-102b-450b-a46e-94635e80f223" (UID: "3d6ecfce-102b-450b-a46e-94635e80f223"). InnerVolumeSpecName "kube-api-access-4444n". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 18 20:46:28 test-preload-684593 kubelet[1148]: W0918 20:46:28.142156    1148 empty_dir.go:519] Warning: Failed to clear quota on /var/lib/kubelet/pods/3d6ecfce-102b-450b-a46e-94635e80f223/volumes/kubernetes.io~configmap/config-volume: clearQuota called, but quotas disabled
	Sep 18 20:46:28 test-preload-684593 kubelet[1148]: I0918 20:46:28.142911    1148 operation_generator.go:863] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d6ecfce-102b-450b-a46e-94635e80f223-config-volume" (OuterVolumeSpecName: "config-volume") pod "3d6ecfce-102b-450b-a46e-94635e80f223" (UID: "3d6ecfce-102b-450b-a46e-94635e80f223"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Sep 18 20:46:28 test-preload-684593 kubelet[1148]: I0918 20:46:28.241616    1148 reconciler.go:384] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3d6ecfce-102b-450b-a46e-94635e80f223-config-volume\") on node \"test-preload-684593\" DevicePath \"\""
	Sep 18 20:46:28 test-preload-684593 kubelet[1148]: I0918 20:46:28.241651    1148 reconciler.go:384] "Volume detached for volume \"kube-api-access-4444n\" (UniqueName: \"kubernetes.io/projected/3d6ecfce-102b-450b-a46e-94635e80f223-kube-api-access-4444n\") on node \"test-preload-684593\" DevicePath \"\""
	Sep 18 20:46:28 test-preload-684593 kubelet[1148]: E0918 20:46:28.643585    1148 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 18 20:46:28 test-preload-684593 kubelet[1148]: E0918 20:46:28.643660    1148 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/8778f526-b86f-4ab4-9366-3590fc08a39f-config-volume podName:8778f526-b86f-4ab4-9366-3590fc08a39f nodeName:}" failed. No retries permitted until 2024-09-18 20:46:29.643644823 +0000 UTC m=+8.118995008 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/8778f526-b86f-4ab4-9366-3590fc08a39f-config-volume") pod "coredns-6d4b75cb6d-bw4dh" (UID: "8778f526-b86f-4ab4-9366-3590fc08a39f") : object "kube-system"/"coredns" not registered
	Sep 18 20:46:29 test-preload-684593 kubelet[1148]: E0918 20:46:29.651858    1148 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 18 20:46:29 test-preload-684593 kubelet[1148]: E0918 20:46:29.651925    1148 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/8778f526-b86f-4ab4-9366-3590fc08a39f-config-volume podName:8778f526-b86f-4ab4-9366-3590fc08a39f nodeName:}" failed. No retries permitted until 2024-09-18 20:46:31.651911993 +0000 UTC m=+10.127262176 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/8778f526-b86f-4ab4-9366-3590fc08a39f-config-volume") pod "coredns-6d4b75cb6d-bw4dh" (UID: "8778f526-b86f-4ab4-9366-3590fc08a39f") : object "kube-system"/"coredns" not registered
	Sep 18 20:46:29 test-preload-684593 kubelet[1148]: E0918 20:46:29.768842    1148 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-bw4dh" podUID=8778f526-b86f-4ab4-9366-3590fc08a39f
	Sep 18 20:46:29 test-preload-684593 kubelet[1148]: I0918 20:46:29.774891    1148 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=3d6ecfce-102b-450b-a46e-94635e80f223 path="/var/lib/kubelet/pods/3d6ecfce-102b-450b-a46e-94635e80f223/volumes"
	Sep 18 20:46:31 test-preload-684593 kubelet[1148]: E0918 20:46:31.668309    1148 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 18 20:46:31 test-preload-684593 kubelet[1148]: E0918 20:46:31.668425    1148 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/8778f526-b86f-4ab4-9366-3590fc08a39f-config-volume podName:8778f526-b86f-4ab4-9366-3590fc08a39f nodeName:}" failed. No retries permitted until 2024-09-18 20:46:35.66840087 +0000 UTC m=+14.143751055 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/8778f526-b86f-4ab4-9366-3590fc08a39f-config-volume") pod "coredns-6d4b75cb6d-bw4dh" (UID: "8778f526-b86f-4ab4-9366-3590fc08a39f") : object "kube-system"/"coredns" not registered
	
	
	==> storage-provisioner [5fe536b446afe60c37b963cd943f2f59e8c26d8522db7383eb94e92191734034] <==
	I0918 20:46:29.130144       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
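The scheduler and kubelet messages in the dump above describe their own remediation: the forbidden configmap lookup is the RBAC gap the warning suggests closing with a rolebinding, and the coredns mount retries trace back to the missing CNI config in /etc/cni/net.d/. Purely as a hedged sketch against this profile (the rolebinding name is a made-up placeholder; neither command is run by the test), the suggested binding and a quick CNI check would look like:

	kubectl --context test-preload-684593 -n kube-system create rolebinding extension-apiserver-authentication-reader-binding --role=extension-apiserver-authentication-reader --user=system:kube-scheduler
	out/minikube-linux-amd64 ssh -p test-preload-684593 -- ls /etc/cni/net.d/

The --user value mirrors the identity in the forbidden error (system:kube-scheduler); both conditions read as startup-time transients here rather than the cause of the failure.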
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-684593 -n test-preload-684593
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-684593 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-684593" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-684593
--- FAIL: TestPreload (170.73s)

                                                
                                    
x
+
TestKubernetesUpgrade (408.39s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-878094 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0918 20:50:01.286087   14878 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/functional-790989/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-878094 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (4m59.733562175s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-878094] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19667
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19667-7671/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19667-7671/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-878094" primary control-plane node in "kubernetes-upgrade-878094" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0918 20:49:41.022842   51397 out.go:345] Setting OutFile to fd 1 ...
	I0918 20:49:41.023137   51397 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 20:49:41.023148   51397 out.go:358] Setting ErrFile to fd 2...
	I0918 20:49:41.023152   51397 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 20:49:41.023398   51397 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19667-7671/.minikube/bin
	I0918 20:49:41.024076   51397 out.go:352] Setting JSON to false
	I0918 20:49:41.025020   51397 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5525,"bootTime":1726687056,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0918 20:49:41.025117   51397 start.go:139] virtualization: kvm guest
	I0918 20:49:41.027514   51397 out.go:177] * [kubernetes-upgrade-878094] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0918 20:49:41.028919   51397 out.go:177]   - MINIKUBE_LOCATION=19667
	I0918 20:49:41.028972   51397 notify.go:220] Checking for updates...
	I0918 20:49:41.031468   51397 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 20:49:41.032730   51397 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19667-7671/kubeconfig
	I0918 20:49:41.034030   51397 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19667-7671/.minikube
	I0918 20:49:41.035165   51397 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0918 20:49:41.036777   51397 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0918 20:49:41.038508   51397 config.go:182] Loaded profile config "NoKubernetes-341744": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0918 20:49:41.038600   51397 config.go:182] Loaded profile config "cert-expiration-456762": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0918 20:49:41.038728   51397 config.go:182] Loaded profile config "offline-crio-339909": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0918 20:49:41.038855   51397 driver.go:394] Setting default libvirt URI to qemu:///system
	I0918 20:49:41.078090   51397 out.go:177] * Using the kvm2 driver based on user configuration
	I0918 20:49:41.079253   51397 start.go:297] selected driver: kvm2
	I0918 20:49:41.079265   51397 start.go:901] validating driver "kvm2" against <nil>
	I0918 20:49:41.079280   51397 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 20:49:41.079959   51397 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 20:49:41.080082   51397 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19667-7671/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0918 20:49:41.095466   51397 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0918 20:49:41.095517   51397 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0918 20:49:41.095766   51397 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0918 20:49:41.095795   51397 cni.go:84] Creating CNI manager for ""
	I0918 20:49:41.095838   51397 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0918 20:49:41.095846   51397 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0918 20:49:41.095900   51397 start.go:340] cluster config:
	{Name:kubernetes-upgrade-878094 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-878094 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 20:49:41.095991   51397 iso.go:125] acquiring lock: {Name:mkbcf4d84f3091567a39802cd5e773cb10b8c545 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 20:49:41.097758   51397 out.go:177] * Starting "kubernetes-upgrade-878094" primary control-plane node in "kubernetes-upgrade-878094" cluster
	I0918 20:49:41.099006   51397 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0918 20:49:41.099076   51397 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0918 20:49:41.099094   51397 cache.go:56] Caching tarball of preloaded images
	I0918 20:49:41.099197   51397 preload.go:172] Found /home/jenkins/minikube-integration/19667-7671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0918 20:49:41.099208   51397 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0918 20:49:41.099338   51397 profile.go:143] Saving config to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/kubernetes-upgrade-878094/config.json ...
	I0918 20:49:41.099377   51397 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/kubernetes-upgrade-878094/config.json: {Name:mkf9a8195ad34cbf8b78bca71ef9435e244054c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 20:49:41.099547   51397 start.go:360] acquireMachinesLock for kubernetes-upgrade-878094: {Name:mk1b0d68a0b7d9d4bc8204d5d81cb9eb77222526 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 20:50:10.457133   51397 start.go:364] duration metric: took 29.357556839s to acquireMachinesLock for "kubernetes-upgrade-878094"
	I0918 20:50:10.457218   51397 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-878094 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-878094 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0918 20:50:10.457350   51397 start.go:125] createHost starting for "" (driver="kvm2")
	I0918 20:50:10.459390   51397 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0918 20:50:10.459600   51397 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 20:50:10.459645   51397 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 20:50:10.476590   51397 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44457
	I0918 20:50:10.477128   51397 main.go:141] libmachine: () Calling .GetVersion
	I0918 20:50:10.477782   51397 main.go:141] libmachine: Using API Version  1
	I0918 20:50:10.477814   51397 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 20:50:10.478187   51397 main.go:141] libmachine: () Calling .GetMachineName
	I0918 20:50:10.478402   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) Calling .GetMachineName
	I0918 20:50:10.478581   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) Calling .DriverName
	I0918 20:50:10.478748   51397 start.go:159] libmachine.API.Create for "kubernetes-upgrade-878094" (driver="kvm2")
	I0918 20:50:10.478781   51397 client.go:168] LocalClient.Create starting
	I0918 20:50:10.478816   51397 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem
	I0918 20:50:10.478858   51397 main.go:141] libmachine: Decoding PEM data...
	I0918 20:50:10.478876   51397 main.go:141] libmachine: Parsing certificate...
	I0918 20:50:10.478924   51397 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem
	I0918 20:50:10.478945   51397 main.go:141] libmachine: Decoding PEM data...
	I0918 20:50:10.478959   51397 main.go:141] libmachine: Parsing certificate...
	I0918 20:50:10.478981   51397 main.go:141] libmachine: Running pre-create checks...
	I0918 20:50:10.478992   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) Calling .PreCreateCheck
	I0918 20:50:10.479438   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) Calling .GetConfigRaw
	I0918 20:50:10.479865   51397 main.go:141] libmachine: Creating machine...
	I0918 20:50:10.479879   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) Calling .Create
	I0918 20:50:10.480039   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) Creating KVM machine...
	I0918 20:50:10.481514   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | found existing default KVM network
	I0918 20:50:10.482989   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | I0918 20:50:10.482813   51777 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:f5:fc:f1} reservation:<nil>}
	I0918 20:50:10.484532   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | I0918 20:50:10.484430   51777 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000131f80}
	I0918 20:50:10.484592   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | created network xml: 
	I0918 20:50:10.484609   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | <network>
	I0918 20:50:10.484618   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG |   <name>mk-kubernetes-upgrade-878094</name>
	I0918 20:50:10.484625   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG |   <dns enable='no'/>
	I0918 20:50:10.484631   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG |   
	I0918 20:50:10.484637   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0918 20:50:10.484646   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG |     <dhcp>
	I0918 20:50:10.484662   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0918 20:50:10.484684   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG |     </dhcp>
	I0918 20:50:10.484699   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG |   </ip>
	I0918 20:50:10.484713   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG |   
	I0918 20:50:10.484724   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | </network>
	I0918 20:50:10.484735   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | 
	I0918 20:50:10.490165   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | trying to create private KVM network mk-kubernetes-upgrade-878094 192.168.50.0/24...
	I0918 20:50:10.563282   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | private KVM network mk-kubernetes-upgrade-878094 192.168.50.0/24 created
	I0918 20:50:10.563456   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) Setting up store path in /home/jenkins/minikube-integration/19667-7671/.minikube/machines/kubernetes-upgrade-878094 ...
	I0918 20:50:10.563515   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | I0918 20:50:10.563425   51777 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19667-7671/.minikube
	I0918 20:50:10.563528   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) Building disk image from file:///home/jenkins/minikube-integration/19667-7671/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso
	I0918 20:50:10.563707   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) Downloading /home/jenkins/minikube-integration/19667-7671/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19667-7671/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso...
	I0918 20:50:10.843295   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | I0918 20:50:10.843134   51777 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/kubernetes-upgrade-878094/id_rsa...
	I0918 20:50:10.974751   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | I0918 20:50:10.974593   51777 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/kubernetes-upgrade-878094/kubernetes-upgrade-878094.rawdisk...
	I0918 20:50:10.974786   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | Writing magic tar header
	I0918 20:50:10.974802   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | Writing SSH key tar header
	I0918 20:50:10.974813   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | I0918 20:50:10.974745   51777 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19667-7671/.minikube/machines/kubernetes-upgrade-878094 ...
	I0918 20:50:10.974854   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/kubernetes-upgrade-878094
	I0918 20:50:10.974879   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) Setting executable bit set on /home/jenkins/minikube-integration/19667-7671/.minikube/machines/kubernetes-upgrade-878094 (perms=drwx------)
	I0918 20:50:10.974921   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19667-7671/.minikube/machines
	I0918 20:50:10.974940   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19667-7671/.minikube
	I0918 20:50:10.974952   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) Setting executable bit set on /home/jenkins/minikube-integration/19667-7671/.minikube/machines (perms=drwxr-xr-x)
	I0918 20:50:10.974962   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19667-7671
	I0918 20:50:10.975035   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) Setting executable bit set on /home/jenkins/minikube-integration/19667-7671/.minikube (perms=drwxr-xr-x)
	I0918 20:50:10.975069   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) Setting executable bit set on /home/jenkins/minikube-integration/19667-7671 (perms=drwxrwxr-x)
	I0918 20:50:10.975079   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0918 20:50:10.975098   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | Checking permissions on dir: /home/jenkins
	I0918 20:50:10.975107   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | Checking permissions on dir: /home
	I0918 20:50:10.975121   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | Skipping /home - not owner
	I0918 20:50:10.975132   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0918 20:50:10.975140   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0918 20:50:10.975163   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) Creating domain...
	I0918 20:50:10.976395   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) define libvirt domain using xml: 
	I0918 20:50:10.976416   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) <domain type='kvm'>
	I0918 20:50:10.976426   51397 main.go:141] libmachine: (kubernetes-upgrade-878094)   <name>kubernetes-upgrade-878094</name>
	I0918 20:50:10.976433   51397 main.go:141] libmachine: (kubernetes-upgrade-878094)   <memory unit='MiB'>2200</memory>
	I0918 20:50:10.976441   51397 main.go:141] libmachine: (kubernetes-upgrade-878094)   <vcpu>2</vcpu>
	I0918 20:50:10.976460   51397 main.go:141] libmachine: (kubernetes-upgrade-878094)   <features>
	I0918 20:50:10.976493   51397 main.go:141] libmachine: (kubernetes-upgrade-878094)     <acpi/>
	I0918 20:50:10.976503   51397 main.go:141] libmachine: (kubernetes-upgrade-878094)     <apic/>
	I0918 20:50:10.976512   51397 main.go:141] libmachine: (kubernetes-upgrade-878094)     <pae/>
	I0918 20:50:10.976521   51397 main.go:141] libmachine: (kubernetes-upgrade-878094)     
	I0918 20:50:10.976528   51397 main.go:141] libmachine: (kubernetes-upgrade-878094)   </features>
	I0918 20:50:10.976537   51397 main.go:141] libmachine: (kubernetes-upgrade-878094)   <cpu mode='host-passthrough'>
	I0918 20:50:10.976544   51397 main.go:141] libmachine: (kubernetes-upgrade-878094)   
	I0918 20:50:10.976550   51397 main.go:141] libmachine: (kubernetes-upgrade-878094)   </cpu>
	I0918 20:50:10.976556   51397 main.go:141] libmachine: (kubernetes-upgrade-878094)   <os>
	I0918 20:50:10.976568   51397 main.go:141] libmachine: (kubernetes-upgrade-878094)     <type>hvm</type>
	I0918 20:50:10.976579   51397 main.go:141] libmachine: (kubernetes-upgrade-878094)     <boot dev='cdrom'/>
	I0918 20:50:10.976584   51397 main.go:141] libmachine: (kubernetes-upgrade-878094)     <boot dev='hd'/>
	I0918 20:50:10.976595   51397 main.go:141] libmachine: (kubernetes-upgrade-878094)     <bootmenu enable='no'/>
	I0918 20:50:10.976604   51397 main.go:141] libmachine: (kubernetes-upgrade-878094)   </os>
	I0918 20:50:10.976611   51397 main.go:141] libmachine: (kubernetes-upgrade-878094)   <devices>
	I0918 20:50:10.976622   51397 main.go:141] libmachine: (kubernetes-upgrade-878094)     <disk type='file' device='cdrom'>
	I0918 20:50:10.976634   51397 main.go:141] libmachine: (kubernetes-upgrade-878094)       <source file='/home/jenkins/minikube-integration/19667-7671/.minikube/machines/kubernetes-upgrade-878094/boot2docker.iso'/>
	I0918 20:50:10.976641   51397 main.go:141] libmachine: (kubernetes-upgrade-878094)       <target dev='hdc' bus='scsi'/>
	I0918 20:50:10.976654   51397 main.go:141] libmachine: (kubernetes-upgrade-878094)       <readonly/>
	I0918 20:50:10.976660   51397 main.go:141] libmachine: (kubernetes-upgrade-878094)     </disk>
	I0918 20:50:10.976672   51397 main.go:141] libmachine: (kubernetes-upgrade-878094)     <disk type='file' device='disk'>
	I0918 20:50:10.976683   51397 main.go:141] libmachine: (kubernetes-upgrade-878094)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0918 20:50:10.976697   51397 main.go:141] libmachine: (kubernetes-upgrade-878094)       <source file='/home/jenkins/minikube-integration/19667-7671/.minikube/machines/kubernetes-upgrade-878094/kubernetes-upgrade-878094.rawdisk'/>
	I0918 20:50:10.976703   51397 main.go:141] libmachine: (kubernetes-upgrade-878094)       <target dev='hda' bus='virtio'/>
	I0918 20:50:10.976711   51397 main.go:141] libmachine: (kubernetes-upgrade-878094)     </disk>
	I0918 20:50:10.976722   51397 main.go:141] libmachine: (kubernetes-upgrade-878094)     <interface type='network'>
	I0918 20:50:10.976730   51397 main.go:141] libmachine: (kubernetes-upgrade-878094)       <source network='mk-kubernetes-upgrade-878094'/>
	I0918 20:50:10.976740   51397 main.go:141] libmachine: (kubernetes-upgrade-878094)       <model type='virtio'/>
	I0918 20:50:10.976749   51397 main.go:141] libmachine: (kubernetes-upgrade-878094)     </interface>
	I0918 20:50:10.976759   51397 main.go:141] libmachine: (kubernetes-upgrade-878094)     <interface type='network'>
	I0918 20:50:10.976768   51397 main.go:141] libmachine: (kubernetes-upgrade-878094)       <source network='default'/>
	I0918 20:50:10.976778   51397 main.go:141] libmachine: (kubernetes-upgrade-878094)       <model type='virtio'/>
	I0918 20:50:10.976784   51397 main.go:141] libmachine: (kubernetes-upgrade-878094)     </interface>
	I0918 20:50:10.976801   51397 main.go:141] libmachine: (kubernetes-upgrade-878094)     <serial type='pty'>
	I0918 20:50:10.976813   51397 main.go:141] libmachine: (kubernetes-upgrade-878094)       <target port='0'/>
	I0918 20:50:10.976823   51397 main.go:141] libmachine: (kubernetes-upgrade-878094)     </serial>
	I0918 20:50:10.976829   51397 main.go:141] libmachine: (kubernetes-upgrade-878094)     <console type='pty'>
	I0918 20:50:10.976839   51397 main.go:141] libmachine: (kubernetes-upgrade-878094)       <target type='serial' port='0'/>
	I0918 20:50:10.976847   51397 main.go:141] libmachine: (kubernetes-upgrade-878094)     </console>
	I0918 20:50:10.976855   51397 main.go:141] libmachine: (kubernetes-upgrade-878094)     <rng model='virtio'>
	I0918 20:50:10.976865   51397 main.go:141] libmachine: (kubernetes-upgrade-878094)       <backend model='random'>/dev/random</backend>
	I0918 20:50:10.976871   51397 main.go:141] libmachine: (kubernetes-upgrade-878094)     </rng>
	I0918 20:50:10.976888   51397 main.go:141] libmachine: (kubernetes-upgrade-878094)     
	I0918 20:50:10.976900   51397 main.go:141] libmachine: (kubernetes-upgrade-878094)     
	I0918 20:50:10.976907   51397 main.go:141] libmachine: (kubernetes-upgrade-878094)   </devices>
	I0918 20:50:10.976916   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) </domain>
	I0918 20:50:10.976926   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) 
	I0918 20:50:10.985045   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | domain kubernetes-upgrade-878094 has defined MAC address 52:54:00:65:33:37 in network default
	I0918 20:50:10.985802   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) Ensuring networks are active...
	I0918 20:50:10.985830   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | domain kubernetes-upgrade-878094 has defined MAC address 52:54:00:21:00:07 in network mk-kubernetes-upgrade-878094
	I0918 20:50:10.986695   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) Ensuring network default is active
	I0918 20:50:10.987128   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) Ensuring network mk-kubernetes-upgrade-878094 is active
	I0918 20:50:10.987744   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) Getting domain xml...
	I0918 20:50:10.988761   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) Creating domain...
	I0918 20:50:12.349056   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) Waiting to get IP...
	I0918 20:50:12.349920   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | domain kubernetes-upgrade-878094 has defined MAC address 52:54:00:21:00:07 in network mk-kubernetes-upgrade-878094
	I0918 20:50:12.350509   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | unable to find current IP address of domain kubernetes-upgrade-878094 in network mk-kubernetes-upgrade-878094
	I0918 20:50:12.350537   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | I0918 20:50:12.350417   51777 retry.go:31] will retry after 239.148724ms: waiting for machine to come up
	I0918 20:50:12.591137   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | domain kubernetes-upgrade-878094 has defined MAC address 52:54:00:21:00:07 in network mk-kubernetes-upgrade-878094
	I0918 20:50:12.591739   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | unable to find current IP address of domain kubernetes-upgrade-878094 in network mk-kubernetes-upgrade-878094
	I0918 20:50:12.591761   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | I0918 20:50:12.591699   51777 retry.go:31] will retry after 309.17005ms: waiting for machine to come up
	I0918 20:50:12.902352   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | domain kubernetes-upgrade-878094 has defined MAC address 52:54:00:21:00:07 in network mk-kubernetes-upgrade-878094
	I0918 20:50:12.902895   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | unable to find current IP address of domain kubernetes-upgrade-878094 in network mk-kubernetes-upgrade-878094
	I0918 20:50:12.902916   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | I0918 20:50:12.902852   51777 retry.go:31] will retry after 312.338353ms: waiting for machine to come up
	I0918 20:50:13.216481   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | domain kubernetes-upgrade-878094 has defined MAC address 52:54:00:21:00:07 in network mk-kubernetes-upgrade-878094
	I0918 20:50:13.217163   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | unable to find current IP address of domain kubernetes-upgrade-878094 in network mk-kubernetes-upgrade-878094
	I0918 20:50:13.217193   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | I0918 20:50:13.217079   51777 retry.go:31] will retry after 530.640654ms: waiting for machine to come up
	I0918 20:50:13.749910   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | domain kubernetes-upgrade-878094 has defined MAC address 52:54:00:21:00:07 in network mk-kubernetes-upgrade-878094
	I0918 20:50:13.750458   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | unable to find current IP address of domain kubernetes-upgrade-878094 in network mk-kubernetes-upgrade-878094
	I0918 20:50:13.750484   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | I0918 20:50:13.750409   51777 retry.go:31] will retry after 676.863231ms: waiting for machine to come up
	I0918 20:50:14.429428   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | domain kubernetes-upgrade-878094 has defined MAC address 52:54:00:21:00:07 in network mk-kubernetes-upgrade-878094
	I0918 20:50:14.429992   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | unable to find current IP address of domain kubernetes-upgrade-878094 in network mk-kubernetes-upgrade-878094
	I0918 20:50:14.430024   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | I0918 20:50:14.429927   51777 retry.go:31] will retry after 949.545379ms: waiting for machine to come up
	I0918 20:50:15.380439   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | domain kubernetes-upgrade-878094 has defined MAC address 52:54:00:21:00:07 in network mk-kubernetes-upgrade-878094
	I0918 20:50:15.380863   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | unable to find current IP address of domain kubernetes-upgrade-878094 in network mk-kubernetes-upgrade-878094
	I0918 20:50:15.380888   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | I0918 20:50:15.380827   51777 retry.go:31] will retry after 896.819281ms: waiting for machine to come up
	I0918 20:50:16.278900   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | domain kubernetes-upgrade-878094 has defined MAC address 52:54:00:21:00:07 in network mk-kubernetes-upgrade-878094
	I0918 20:50:16.279355   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | unable to find current IP address of domain kubernetes-upgrade-878094 in network mk-kubernetes-upgrade-878094
	I0918 20:50:16.279384   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | I0918 20:50:16.279305   51777 retry.go:31] will retry after 1.372490065s: waiting for machine to come up
	I0918 20:50:17.653373   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | domain kubernetes-upgrade-878094 has defined MAC address 52:54:00:21:00:07 in network mk-kubernetes-upgrade-878094
	I0918 20:50:17.653765   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | unable to find current IP address of domain kubernetes-upgrade-878094 in network mk-kubernetes-upgrade-878094
	I0918 20:50:17.653792   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | I0918 20:50:17.653724   51777 retry.go:31] will retry after 1.817211963s: waiting for machine to come up
	I0918 20:50:19.473041   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | domain kubernetes-upgrade-878094 has defined MAC address 52:54:00:21:00:07 in network mk-kubernetes-upgrade-878094
	I0918 20:50:19.473506   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | unable to find current IP address of domain kubernetes-upgrade-878094 in network mk-kubernetes-upgrade-878094
	I0918 20:50:19.473533   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | I0918 20:50:19.473461   51777 retry.go:31] will retry after 2.152943633s: waiting for machine to come up
	I0918 20:50:21.627941   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | domain kubernetes-upgrade-878094 has defined MAC address 52:54:00:21:00:07 in network mk-kubernetes-upgrade-878094
	I0918 20:50:21.628381   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | unable to find current IP address of domain kubernetes-upgrade-878094 in network mk-kubernetes-upgrade-878094
	I0918 20:50:21.628410   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | I0918 20:50:21.628344   51777 retry.go:31] will retry after 1.756302465s: waiting for machine to come up
	I0918 20:50:23.387207   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | domain kubernetes-upgrade-878094 has defined MAC address 52:54:00:21:00:07 in network mk-kubernetes-upgrade-878094
	I0918 20:50:23.387604   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | unable to find current IP address of domain kubernetes-upgrade-878094 in network mk-kubernetes-upgrade-878094
	I0918 20:50:23.387661   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | I0918 20:50:23.387551   51777 retry.go:31] will retry after 2.952789953s: waiting for machine to come up
	I0918 20:50:26.342094   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | domain kubernetes-upgrade-878094 has defined MAC address 52:54:00:21:00:07 in network mk-kubernetes-upgrade-878094
	I0918 20:50:26.342753   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | unable to find current IP address of domain kubernetes-upgrade-878094 in network mk-kubernetes-upgrade-878094
	I0918 20:50:26.342778   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | I0918 20:50:26.342709   51777 retry.go:31] will retry after 3.030644017s: waiting for machine to come up
	I0918 20:50:29.375004   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | domain kubernetes-upgrade-878094 has defined MAC address 52:54:00:21:00:07 in network mk-kubernetes-upgrade-878094
	I0918 20:50:29.375477   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | unable to find current IP address of domain kubernetes-upgrade-878094 in network mk-kubernetes-upgrade-878094
	I0918 20:50:29.375522   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | I0918 20:50:29.375420   51777 retry.go:31] will retry after 5.315471428s: waiting for machine to come up
	I0918 20:50:34.693763   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | domain kubernetes-upgrade-878094 has defined MAC address 52:54:00:21:00:07 in network mk-kubernetes-upgrade-878094
	I0918 20:50:34.694262   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) Found IP for machine: 192.168.50.80
	I0918 20:50:34.694288   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) Reserving static IP address...
	I0918 20:50:34.694302   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | domain kubernetes-upgrade-878094 has current primary IP address 192.168.50.80 and MAC address 52:54:00:21:00:07 in network mk-kubernetes-upgrade-878094
	I0918 20:50:34.694760   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-878094", mac: "52:54:00:21:00:07", ip: "192.168.50.80"} in network mk-kubernetes-upgrade-878094
	I0918 20:50:34.777151   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | Getting to WaitForSSH function...
	I0918 20:50:34.777174   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) Reserved static IP address: 192.168.50.80
	I0918 20:50:34.777188   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) Waiting for SSH to be available...
	I0918 20:50:34.780238   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | domain kubernetes-upgrade-878094 has defined MAC address 52:54:00:21:00:07 in network mk-kubernetes-upgrade-878094
	I0918 20:50:34.780656   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:00:07", ip: ""} in network mk-kubernetes-upgrade-878094: {Iface:virbr4 ExpiryTime:2024-09-18 21:50:25 +0000 UTC Type:0 Mac:52:54:00:21:00:07 Iaid: IPaddr:192.168.50.80 Prefix:24 Hostname:minikube Clientid:01:52:54:00:21:00:07}
	I0918 20:50:34.780687   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | domain kubernetes-upgrade-878094 has defined IP address 192.168.50.80 and MAC address 52:54:00:21:00:07 in network mk-kubernetes-upgrade-878094
	I0918 20:50:34.780831   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | Using SSH client type: external
	I0918 20:50:34.780859   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | Using SSH private key: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/kubernetes-upgrade-878094/id_rsa (-rw-------)
	I0918 20:50:34.780898   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.80 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19667-7671/.minikube/machines/kubernetes-upgrade-878094/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0918 20:50:34.780911   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | About to run SSH command:
	I0918 20:50:34.780926   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | exit 0
	I0918 20:50:34.908424   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | SSH cmd err, output: <nil>: 
	I0918 20:50:34.908690   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) KVM machine creation complete!
	I0918 20:50:34.909017   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) Calling .GetConfigRaw
	I0918 20:50:34.909555   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) Calling .DriverName
	I0918 20:50:34.909727   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) Calling .DriverName
	I0918 20:50:34.909890   51397 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0918 20:50:34.909904   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) Calling .GetState
	I0918 20:50:34.911144   51397 main.go:141] libmachine: Detecting operating system of created instance...
	I0918 20:50:34.911161   51397 main.go:141] libmachine: Waiting for SSH to be available...
	I0918 20:50:34.911167   51397 main.go:141] libmachine: Getting to WaitForSSH function...
	I0918 20:50:34.911175   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) Calling .GetSSHHostname
	I0918 20:50:34.913274   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | domain kubernetes-upgrade-878094 has defined MAC address 52:54:00:21:00:07 in network mk-kubernetes-upgrade-878094
	I0918 20:50:34.913631   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:00:07", ip: ""} in network mk-kubernetes-upgrade-878094: {Iface:virbr4 ExpiryTime:2024-09-18 21:50:25 +0000 UTC Type:0 Mac:52:54:00:21:00:07 Iaid: IPaddr:192.168.50.80 Prefix:24 Hostname:kubernetes-upgrade-878094 Clientid:01:52:54:00:21:00:07}
	I0918 20:50:34.913658   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | domain kubernetes-upgrade-878094 has defined IP address 192.168.50.80 and MAC address 52:54:00:21:00:07 in network mk-kubernetes-upgrade-878094
	I0918 20:50:34.913767   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) Calling .GetSSHPort
	I0918 20:50:34.913914   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) Calling .GetSSHKeyPath
	I0918 20:50:34.914086   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) Calling .GetSSHKeyPath
	I0918 20:50:34.914278   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) Calling .GetSSHUsername
	I0918 20:50:34.914446   51397 main.go:141] libmachine: Using SSH client type: native
	I0918 20:50:34.914663   51397 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.80 22 <nil> <nil>}
	I0918 20:50:34.914688   51397 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0918 20:50:35.019863   51397 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0918 20:50:35.019893   51397 main.go:141] libmachine: Detecting the provisioner...
	I0918 20:50:35.019914   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) Calling .GetSSHHostname
	I0918 20:50:35.023090   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | domain kubernetes-upgrade-878094 has defined MAC address 52:54:00:21:00:07 in network mk-kubernetes-upgrade-878094
	I0918 20:50:35.023399   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:00:07", ip: ""} in network mk-kubernetes-upgrade-878094: {Iface:virbr4 ExpiryTime:2024-09-18 21:50:25 +0000 UTC Type:0 Mac:52:54:00:21:00:07 Iaid: IPaddr:192.168.50.80 Prefix:24 Hostname:kubernetes-upgrade-878094 Clientid:01:52:54:00:21:00:07}
	I0918 20:50:35.023437   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | domain kubernetes-upgrade-878094 has defined IP address 192.168.50.80 and MAC address 52:54:00:21:00:07 in network mk-kubernetes-upgrade-878094
	I0918 20:50:35.023570   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) Calling .GetSSHPort
	I0918 20:50:35.023768   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) Calling .GetSSHKeyPath
	I0918 20:50:35.023934   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) Calling .GetSSHKeyPath
	I0918 20:50:35.024178   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) Calling .GetSSHUsername
	I0918 20:50:35.024378   51397 main.go:141] libmachine: Using SSH client type: native
	I0918 20:50:35.024547   51397 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.80 22 <nil> <nil>}
	I0918 20:50:35.024557   51397 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0918 20:50:35.132904   51397 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0918 20:50:35.132980   51397 main.go:141] libmachine: found compatible host: buildroot
	I0918 20:50:35.132990   51397 main.go:141] libmachine: Provisioning with buildroot...
	I0918 20:50:35.132998   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) Calling .GetMachineName
	I0918 20:50:35.133213   51397 buildroot.go:166] provisioning hostname "kubernetes-upgrade-878094"
	I0918 20:50:35.133237   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) Calling .GetMachineName
	I0918 20:50:35.133402   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) Calling .GetSSHHostname
	I0918 20:50:35.136083   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | domain kubernetes-upgrade-878094 has defined MAC address 52:54:00:21:00:07 in network mk-kubernetes-upgrade-878094
	I0918 20:50:35.136554   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:00:07", ip: ""} in network mk-kubernetes-upgrade-878094: {Iface:virbr4 ExpiryTime:2024-09-18 21:50:25 +0000 UTC Type:0 Mac:52:54:00:21:00:07 Iaid: IPaddr:192.168.50.80 Prefix:24 Hostname:kubernetes-upgrade-878094 Clientid:01:52:54:00:21:00:07}
	I0918 20:50:35.136591   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | domain kubernetes-upgrade-878094 has defined IP address 192.168.50.80 and MAC address 52:54:00:21:00:07 in network mk-kubernetes-upgrade-878094
	I0918 20:50:35.136818   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) Calling .GetSSHPort
	I0918 20:50:35.137023   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) Calling .GetSSHKeyPath
	I0918 20:50:35.137201   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) Calling .GetSSHKeyPath
	I0918 20:50:35.137323   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) Calling .GetSSHUsername
	I0918 20:50:35.137522   51397 main.go:141] libmachine: Using SSH client type: native
	I0918 20:50:35.137706   51397 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.80 22 <nil> <nil>}
	I0918 20:50:35.137719   51397 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-878094 && echo "kubernetes-upgrade-878094" | sudo tee /etc/hostname
	I0918 20:50:35.259775   51397 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-878094
	
	I0918 20:50:35.259820   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) Calling .GetSSHHostname
	I0918 20:50:35.262540   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | domain kubernetes-upgrade-878094 has defined MAC address 52:54:00:21:00:07 in network mk-kubernetes-upgrade-878094
	I0918 20:50:35.262907   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:00:07", ip: ""} in network mk-kubernetes-upgrade-878094: {Iface:virbr4 ExpiryTime:2024-09-18 21:50:25 +0000 UTC Type:0 Mac:52:54:00:21:00:07 Iaid: IPaddr:192.168.50.80 Prefix:24 Hostname:kubernetes-upgrade-878094 Clientid:01:52:54:00:21:00:07}
	I0918 20:50:35.262940   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | domain kubernetes-upgrade-878094 has defined IP address 192.168.50.80 and MAC address 52:54:00:21:00:07 in network mk-kubernetes-upgrade-878094
	I0918 20:50:35.263184   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) Calling .GetSSHPort
	I0918 20:50:35.263458   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) Calling .GetSSHKeyPath
	I0918 20:50:35.263713   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) Calling .GetSSHKeyPath
	I0918 20:50:35.263871   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) Calling .GetSSHUsername
	I0918 20:50:35.264275   51397 main.go:141] libmachine: Using SSH client type: native
	I0918 20:50:35.264514   51397 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.80 22 <nil> <nil>}
	I0918 20:50:35.264544   51397 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-878094' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-878094/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-878094' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0918 20:50:35.381529   51397 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0918 20:50:35.381580   51397 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19667-7671/.minikube CaCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19667-7671/.minikube}
	I0918 20:50:35.381605   51397 buildroot.go:174] setting up certificates
	I0918 20:50:35.381617   51397 provision.go:84] configureAuth start
	I0918 20:50:35.381629   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) Calling .GetMachineName
	I0918 20:50:35.381920   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) Calling .GetIP
	I0918 20:50:35.384569   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | domain kubernetes-upgrade-878094 has defined MAC address 52:54:00:21:00:07 in network mk-kubernetes-upgrade-878094
	I0918 20:50:35.384963   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:00:07", ip: ""} in network mk-kubernetes-upgrade-878094: {Iface:virbr4 ExpiryTime:2024-09-18 21:50:25 +0000 UTC Type:0 Mac:52:54:00:21:00:07 Iaid: IPaddr:192.168.50.80 Prefix:24 Hostname:kubernetes-upgrade-878094 Clientid:01:52:54:00:21:00:07}
	I0918 20:50:35.384987   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | domain kubernetes-upgrade-878094 has defined IP address 192.168.50.80 and MAC address 52:54:00:21:00:07 in network mk-kubernetes-upgrade-878094
	I0918 20:50:35.385165   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) Calling .GetSSHHostname
	I0918 20:50:35.387328   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | domain kubernetes-upgrade-878094 has defined MAC address 52:54:00:21:00:07 in network mk-kubernetes-upgrade-878094
	I0918 20:50:35.387719   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:00:07", ip: ""} in network mk-kubernetes-upgrade-878094: {Iface:virbr4 ExpiryTime:2024-09-18 21:50:25 +0000 UTC Type:0 Mac:52:54:00:21:00:07 Iaid: IPaddr:192.168.50.80 Prefix:24 Hostname:kubernetes-upgrade-878094 Clientid:01:52:54:00:21:00:07}
	I0918 20:50:35.387740   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | domain kubernetes-upgrade-878094 has defined IP address 192.168.50.80 and MAC address 52:54:00:21:00:07 in network mk-kubernetes-upgrade-878094
	I0918 20:50:35.387888   51397 provision.go:143] copyHostCerts
	I0918 20:50:35.387985   51397 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem, removing ...
	I0918 20:50:35.388000   51397 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem
	I0918 20:50:35.388151   51397 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem (1082 bytes)
	I0918 20:50:35.388304   51397 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem, removing ...
	I0918 20:50:35.388316   51397 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem
	I0918 20:50:35.388350   51397 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem (1123 bytes)
	I0918 20:50:35.388431   51397 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem, removing ...
	I0918 20:50:35.388441   51397 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem
	I0918 20:50:35.388472   51397 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem (1679 bytes)
	I0918 20:50:35.388536   51397 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-878094 san=[127.0.0.1 192.168.50.80 kubernetes-upgrade-878094 localhost minikube]
	I0918 20:50:35.492346   51397 provision.go:177] copyRemoteCerts
	I0918 20:50:35.492404   51397 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0918 20:50:35.492427   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) Calling .GetSSHHostname
	I0918 20:50:35.495007   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | domain kubernetes-upgrade-878094 has defined MAC address 52:54:00:21:00:07 in network mk-kubernetes-upgrade-878094
	I0918 20:50:35.495318   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:00:07", ip: ""} in network mk-kubernetes-upgrade-878094: {Iface:virbr4 ExpiryTime:2024-09-18 21:50:25 +0000 UTC Type:0 Mac:52:54:00:21:00:07 Iaid: IPaddr:192.168.50.80 Prefix:24 Hostname:kubernetes-upgrade-878094 Clientid:01:52:54:00:21:00:07}
	I0918 20:50:35.495342   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | domain kubernetes-upgrade-878094 has defined IP address 192.168.50.80 and MAC address 52:54:00:21:00:07 in network mk-kubernetes-upgrade-878094
	I0918 20:50:35.495480   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) Calling .GetSSHPort
	I0918 20:50:35.495697   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) Calling .GetSSHKeyPath
	I0918 20:50:35.495888   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) Calling .GetSSHUsername
	I0918 20:50:35.496052   51397 sshutil.go:53] new ssh client: &{IP:192.168.50.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/kubernetes-upgrade-878094/id_rsa Username:docker}
	I0918 20:50:35.578818   51397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0918 20:50:35.604141   51397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0918 20:50:35.628921   51397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0918 20:50:35.652843   51397 provision.go:87] duration metric: took 271.214825ms to configureAuth
	I0918 20:50:35.652876   51397 buildroot.go:189] setting minikube options for container-runtime
	I0918 20:50:35.653083   51397 config.go:182] Loaded profile config "kubernetes-upgrade-878094": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0918 20:50:35.653165   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) Calling .GetSSHHostname
	I0918 20:50:35.655962   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | domain kubernetes-upgrade-878094 has defined MAC address 52:54:00:21:00:07 in network mk-kubernetes-upgrade-878094
	I0918 20:50:35.656386   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:00:07", ip: ""} in network mk-kubernetes-upgrade-878094: {Iface:virbr4 ExpiryTime:2024-09-18 21:50:25 +0000 UTC Type:0 Mac:52:54:00:21:00:07 Iaid: IPaddr:192.168.50.80 Prefix:24 Hostname:kubernetes-upgrade-878094 Clientid:01:52:54:00:21:00:07}
	I0918 20:50:35.656411   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | domain kubernetes-upgrade-878094 has defined IP address 192.168.50.80 and MAC address 52:54:00:21:00:07 in network mk-kubernetes-upgrade-878094
	I0918 20:50:35.656603   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) Calling .GetSSHPort
	I0918 20:50:35.656790   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) Calling .GetSSHKeyPath
	I0918 20:50:35.656958   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) Calling .GetSSHKeyPath
	I0918 20:50:35.657123   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) Calling .GetSSHUsername
	I0918 20:50:35.657297   51397 main.go:141] libmachine: Using SSH client type: native
	I0918 20:50:35.657492   51397 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.80 22 <nil> <nil>}
	I0918 20:50:35.657507   51397 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0918 20:50:35.893156   51397 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0918 20:50:35.893199   51397 main.go:141] libmachine: Checking connection to Docker...
	I0918 20:50:35.893207   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) Calling .GetURL
	I0918 20:50:35.894448   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | Using libvirt version 6000000
	I0918 20:50:35.896780   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | domain kubernetes-upgrade-878094 has defined MAC address 52:54:00:21:00:07 in network mk-kubernetes-upgrade-878094
	I0918 20:50:35.897218   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:00:07", ip: ""} in network mk-kubernetes-upgrade-878094: {Iface:virbr4 ExpiryTime:2024-09-18 21:50:25 +0000 UTC Type:0 Mac:52:54:00:21:00:07 Iaid: IPaddr:192.168.50.80 Prefix:24 Hostname:kubernetes-upgrade-878094 Clientid:01:52:54:00:21:00:07}
	I0918 20:50:35.897253   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | domain kubernetes-upgrade-878094 has defined IP address 192.168.50.80 and MAC address 52:54:00:21:00:07 in network mk-kubernetes-upgrade-878094
	I0918 20:50:35.897534   51397 main.go:141] libmachine: Docker is up and running!
	I0918 20:50:35.897562   51397 main.go:141] libmachine: Reticulating splines...
	I0918 20:50:35.897570   51397 client.go:171] duration metric: took 25.418780076s to LocalClient.Create
	I0918 20:50:35.897607   51397 start.go:167] duration metric: took 25.418859523s to libmachine.API.Create "kubernetes-upgrade-878094"
	I0918 20:50:35.897620   51397 start.go:293] postStartSetup for "kubernetes-upgrade-878094" (driver="kvm2")
	I0918 20:50:35.897632   51397 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0918 20:50:35.897648   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) Calling .DriverName
	I0918 20:50:35.897898   51397 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0918 20:50:35.897945   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) Calling .GetSSHHostname
	I0918 20:50:35.900313   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | domain kubernetes-upgrade-878094 has defined MAC address 52:54:00:21:00:07 in network mk-kubernetes-upgrade-878094
	I0918 20:50:35.900701   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:00:07", ip: ""} in network mk-kubernetes-upgrade-878094: {Iface:virbr4 ExpiryTime:2024-09-18 21:50:25 +0000 UTC Type:0 Mac:52:54:00:21:00:07 Iaid: IPaddr:192.168.50.80 Prefix:24 Hostname:kubernetes-upgrade-878094 Clientid:01:52:54:00:21:00:07}
	I0918 20:50:35.900731   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | domain kubernetes-upgrade-878094 has defined IP address 192.168.50.80 and MAC address 52:54:00:21:00:07 in network mk-kubernetes-upgrade-878094
	I0918 20:50:35.900897   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) Calling .GetSSHPort
	I0918 20:50:35.901065   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) Calling .GetSSHKeyPath
	I0918 20:50:35.901229   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) Calling .GetSSHUsername
	I0918 20:50:35.901352   51397 sshutil.go:53] new ssh client: &{IP:192.168.50.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/kubernetes-upgrade-878094/id_rsa Username:docker}
	I0918 20:50:35.986861   51397 ssh_runner.go:195] Run: cat /etc/os-release
	I0918 20:50:35.991567   51397 info.go:137] Remote host: Buildroot 2023.02.9
	I0918 20:50:35.991591   51397 filesync.go:126] Scanning /home/jenkins/minikube-integration/19667-7671/.minikube/addons for local assets ...
	I0918 20:50:35.991655   51397 filesync.go:126] Scanning /home/jenkins/minikube-integration/19667-7671/.minikube/files for local assets ...
	I0918 20:50:35.991754   51397 filesync.go:149] local asset: /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem -> 148782.pem in /etc/ssl/certs
	I0918 20:50:35.991870   51397 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0918 20:50:36.003299   51397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem --> /etc/ssl/certs/148782.pem (1708 bytes)
	I0918 20:50:36.027400   51397 start.go:296] duration metric: took 129.765478ms for postStartSetup
	I0918 20:50:36.027452   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) Calling .GetConfigRaw
	I0918 20:50:36.028078   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) Calling .GetIP
	I0918 20:50:36.030707   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | domain kubernetes-upgrade-878094 has defined MAC address 52:54:00:21:00:07 in network mk-kubernetes-upgrade-878094
	I0918 20:50:36.030998   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:00:07", ip: ""} in network mk-kubernetes-upgrade-878094: {Iface:virbr4 ExpiryTime:2024-09-18 21:50:25 +0000 UTC Type:0 Mac:52:54:00:21:00:07 Iaid: IPaddr:192.168.50.80 Prefix:24 Hostname:kubernetes-upgrade-878094 Clientid:01:52:54:00:21:00:07}
	I0918 20:50:36.031027   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | domain kubernetes-upgrade-878094 has defined IP address 192.168.50.80 and MAC address 52:54:00:21:00:07 in network mk-kubernetes-upgrade-878094
	I0918 20:50:36.031283   51397 profile.go:143] Saving config to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/kubernetes-upgrade-878094/config.json ...
	I0918 20:50:36.031504   51397 start.go:128] duration metric: took 25.574140684s to createHost
	I0918 20:50:36.031528   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) Calling .GetSSHHostname
	I0918 20:50:36.033855   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | domain kubernetes-upgrade-878094 has defined MAC address 52:54:00:21:00:07 in network mk-kubernetes-upgrade-878094
	I0918 20:50:36.034209   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:00:07", ip: ""} in network mk-kubernetes-upgrade-878094: {Iface:virbr4 ExpiryTime:2024-09-18 21:50:25 +0000 UTC Type:0 Mac:52:54:00:21:00:07 Iaid: IPaddr:192.168.50.80 Prefix:24 Hostname:kubernetes-upgrade-878094 Clientid:01:52:54:00:21:00:07}
	I0918 20:50:36.034235   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | domain kubernetes-upgrade-878094 has defined IP address 192.168.50.80 and MAC address 52:54:00:21:00:07 in network mk-kubernetes-upgrade-878094
	I0918 20:50:36.034398   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) Calling .GetSSHPort
	I0918 20:50:36.034581   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) Calling .GetSSHKeyPath
	I0918 20:50:36.034725   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) Calling .GetSSHKeyPath
	I0918 20:50:36.034842   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) Calling .GetSSHUsername
	I0918 20:50:36.034968   51397 main.go:141] libmachine: Using SSH client type: native
	I0918 20:50:36.035129   51397 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.80 22 <nil> <nil>}
	I0918 20:50:36.035138   51397 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0918 20:50:36.145018   51397 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726692636.121126775
	
	I0918 20:50:36.145047   51397 fix.go:216] guest clock: 1726692636.121126775
	I0918 20:50:36.145056   51397 fix.go:229] Guest: 2024-09-18 20:50:36.121126775 +0000 UTC Remote: 2024-09-18 20:50:36.031517269 +0000 UTC m=+55.045409376 (delta=89.609506ms)
	I0918 20:50:36.145083   51397 fix.go:200] guest clock delta is within tolerance: 89.609506ms
	I0918 20:50:36.145090   51397 start.go:83] releasing machines lock for "kubernetes-upgrade-878094", held for 25.687906461s
	I0918 20:50:36.145121   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) Calling .DriverName
	I0918 20:50:36.145364   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) Calling .GetIP
	I0918 20:50:36.148123   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | domain kubernetes-upgrade-878094 has defined MAC address 52:54:00:21:00:07 in network mk-kubernetes-upgrade-878094
	I0918 20:50:36.148592   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:00:07", ip: ""} in network mk-kubernetes-upgrade-878094: {Iface:virbr4 ExpiryTime:2024-09-18 21:50:25 +0000 UTC Type:0 Mac:52:54:00:21:00:07 Iaid: IPaddr:192.168.50.80 Prefix:24 Hostname:kubernetes-upgrade-878094 Clientid:01:52:54:00:21:00:07}
	I0918 20:50:36.148624   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | domain kubernetes-upgrade-878094 has defined IP address 192.168.50.80 and MAC address 52:54:00:21:00:07 in network mk-kubernetes-upgrade-878094
	I0918 20:50:36.148806   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) Calling .DriverName
	I0918 20:50:36.149756   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) Calling .DriverName
	I0918 20:50:36.149951   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) Calling .DriverName
	I0918 20:50:36.150096   51397 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0918 20:50:36.150158   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) Calling .GetSSHHostname
	I0918 20:50:36.150184   51397 ssh_runner.go:195] Run: cat /version.json
	I0918 20:50:36.150211   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) Calling .GetSSHHostname
	I0918 20:50:36.153592   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | domain kubernetes-upgrade-878094 has defined MAC address 52:54:00:21:00:07 in network mk-kubernetes-upgrade-878094
	I0918 20:50:36.153638   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | domain kubernetes-upgrade-878094 has defined MAC address 52:54:00:21:00:07 in network mk-kubernetes-upgrade-878094
	I0918 20:50:36.153965   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:00:07", ip: ""} in network mk-kubernetes-upgrade-878094: {Iface:virbr4 ExpiryTime:2024-09-18 21:50:25 +0000 UTC Type:0 Mac:52:54:00:21:00:07 Iaid: IPaddr:192.168.50.80 Prefix:24 Hostname:kubernetes-upgrade-878094 Clientid:01:52:54:00:21:00:07}
	I0918 20:50:36.154013   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:00:07", ip: ""} in network mk-kubernetes-upgrade-878094: {Iface:virbr4 ExpiryTime:2024-09-18 21:50:25 +0000 UTC Type:0 Mac:52:54:00:21:00:07 Iaid: IPaddr:192.168.50.80 Prefix:24 Hostname:kubernetes-upgrade-878094 Clientid:01:52:54:00:21:00:07}
	I0918 20:50:36.154068   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | domain kubernetes-upgrade-878094 has defined IP address 192.168.50.80 and MAC address 52:54:00:21:00:07 in network mk-kubernetes-upgrade-878094
	I0918 20:50:36.154090   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | domain kubernetes-upgrade-878094 has defined IP address 192.168.50.80 and MAC address 52:54:00:21:00:07 in network mk-kubernetes-upgrade-878094
	I0918 20:50:36.154259   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) Calling .GetSSHPort
	I0918 20:50:36.154388   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) Calling .GetSSHPort
	I0918 20:50:36.154495   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) Calling .GetSSHKeyPath
	I0918 20:50:36.154585   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) Calling .GetSSHKeyPath
	I0918 20:50:36.154665   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) Calling .GetSSHUsername
	I0918 20:50:36.154708   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) Calling .GetSSHUsername
	I0918 20:50:36.154781   51397 sshutil.go:53] new ssh client: &{IP:192.168.50.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/kubernetes-upgrade-878094/id_rsa Username:docker}
	I0918 20:50:36.154840   51397 sshutil.go:53] new ssh client: &{IP:192.168.50.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/kubernetes-upgrade-878094/id_rsa Username:docker}
	I0918 20:50:36.233130   51397 ssh_runner.go:195] Run: systemctl --version
	I0918 20:50:36.273044   51397 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0918 20:50:36.433940   51397 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0918 20:50:36.440357   51397 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0918 20:50:36.440432   51397 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0918 20:50:36.458853   51397 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0918 20:50:36.458876   51397 start.go:495] detecting cgroup driver to use...
	I0918 20:50:36.458943   51397 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0918 20:50:36.480974   51397 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0918 20:50:36.495644   51397 docker.go:217] disabling cri-docker service (if available) ...
	I0918 20:50:36.495701   51397 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0918 20:50:36.509780   51397 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0918 20:50:36.524444   51397 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0918 20:50:36.643634   51397 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0918 20:50:36.810010   51397 docker.go:233] disabling docker service ...
	I0918 20:50:36.810112   51397 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0918 20:50:36.825693   51397 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0918 20:50:36.840809   51397 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0918 20:50:36.990013   51397 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0918 20:50:37.135544   51397 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0918 20:50:37.149746   51397 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0918 20:50:37.168869   51397 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0918 20:50:37.168930   51397 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 20:50:37.179313   51397 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0918 20:50:37.179386   51397 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 20:50:37.190092   51397 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 20:50:37.200184   51397 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 20:50:37.210400   51397 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0918 20:50:37.221450   51397 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0918 20:50:37.231653   51397 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0918 20:50:37.231714   51397 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0918 20:50:37.244787   51397 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0918 20:50:37.255686   51397 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 20:50:37.384489   51397 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0918 20:50:37.475286   51397 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0918 20:50:37.475380   51397 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0918 20:50:37.480228   51397 start.go:563] Will wait 60s for crictl version
	I0918 20:50:37.480285   51397 ssh_runner.go:195] Run: which crictl
	I0918 20:50:37.484061   51397 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0918 20:50:37.529031   51397 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0918 20:50:37.529170   51397 ssh_runner.go:195] Run: crio --version
	I0918 20:50:37.557782   51397 ssh_runner.go:195] Run: crio --version
	I0918 20:50:37.587288   51397 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0918 20:50:37.588819   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) Calling .GetIP
	I0918 20:50:37.591725   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | domain kubernetes-upgrade-878094 has defined MAC address 52:54:00:21:00:07 in network mk-kubernetes-upgrade-878094
	I0918 20:50:37.592077   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:00:07", ip: ""} in network mk-kubernetes-upgrade-878094: {Iface:virbr4 ExpiryTime:2024-09-18 21:50:25 +0000 UTC Type:0 Mac:52:54:00:21:00:07 Iaid: IPaddr:192.168.50.80 Prefix:24 Hostname:kubernetes-upgrade-878094 Clientid:01:52:54:00:21:00:07}
	I0918 20:50:37.592105   51397 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | domain kubernetes-upgrade-878094 has defined IP address 192.168.50.80 and MAC address 52:54:00:21:00:07 in network mk-kubernetes-upgrade-878094
	I0918 20:50:37.592295   51397 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0918 20:50:37.596586   51397 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0918 20:50:37.608797   51397 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-878094 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-878094 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.80 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0918 20:50:37.608970   51397 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0918 20:50:37.609054   51397 ssh_runner.go:195] Run: sudo crictl images --output json
	I0918 20:50:37.641355   51397 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0918 20:50:37.641419   51397 ssh_runner.go:195] Run: which lz4
	I0918 20:50:37.645162   51397 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0918 20:50:37.649154   51397 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0918 20:50:37.649195   51397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0918 20:50:39.237580   51397 crio.go:462] duration metric: took 1.592466503s to copy over tarball
	I0918 20:50:39.237776   51397 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0918 20:50:41.822761   51397 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.584942569s)
	I0918 20:50:41.822795   51397 crio.go:469] duration metric: took 2.585082232s to extract the tarball
	I0918 20:50:41.822804   51397 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0918 20:50:41.866582   51397 ssh_runner.go:195] Run: sudo crictl images --output json
	I0918 20:50:41.912107   51397 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0918 20:50:41.912138   51397 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0918 20:50:41.912258   51397 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0918 20:50:41.912218   51397 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 20:50:41.912306   51397 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0918 20:50:41.912316   51397 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0918 20:50:41.912316   51397 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0918 20:50:41.912342   51397 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0918 20:50:41.912244   51397 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0918 20:50:41.912226   51397 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0918 20:50:41.913878   51397 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 20:50:41.913889   51397 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0918 20:50:41.913911   51397 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0918 20:50:41.913873   51397 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0918 20:50:41.913922   51397 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0918 20:50:41.913878   51397 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0918 20:50:41.913876   51397 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0918 20:50:41.913878   51397 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0918 20:50:42.238452   51397 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0918 20:50:42.262551   51397 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0918 20:50:42.264854   51397 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0918 20:50:42.264959   51397 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0918 20:50:42.266825   51397 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0918 20:50:42.290256   51397 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0918 20:50:42.319276   51397 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0918 20:50:42.319328   51397 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0918 20:50:42.319368   51397 ssh_runner.go:195] Run: which crictl
	I0918 20:50:42.324923   51397 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0918 20:50:42.404578   51397 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0918 20:50:42.404627   51397 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0918 20:50:42.404681   51397 ssh_runner.go:195] Run: which crictl
	I0918 20:50:42.422202   51397 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0918 20:50:42.422274   51397 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0918 20:50:42.422309   51397 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0918 20:50:42.422327   51397 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0918 20:50:42.422310   51397 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0918 20:50:42.422367   51397 ssh_runner.go:195] Run: which crictl
	I0918 20:50:42.422385   51397 ssh_runner.go:195] Run: which crictl
	I0918 20:50:42.422355   51397 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0918 20:50:42.422437   51397 ssh_runner.go:195] Run: which crictl
	I0918 20:50:42.440678   51397 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0918 20:50:42.440719   51397 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0918 20:50:42.440767   51397 ssh_runner.go:195] Run: which crictl
	I0918 20:50:42.440775   51397 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0918 20:50:42.458999   51397 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0918 20:50:42.459051   51397 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0918 20:50:42.459055   51397 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0918 20:50:42.459071   51397 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0918 20:50:42.459093   51397 ssh_runner.go:195] Run: which crictl
	I0918 20:50:42.459121   51397 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0918 20:50:42.459170   51397 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0918 20:50:42.503387   51397 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0918 20:50:42.503386   51397 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0918 20:50:42.581525   51397 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0918 20:50:42.581575   51397 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0918 20:50:42.581666   51397 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0918 20:50:42.581783   51397 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0918 20:50:42.581876   51397 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0918 20:50:42.649755   51397 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0918 20:50:42.649758   51397 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0918 20:50:42.743862   51397 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0918 20:50:42.747781   51397 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0918 20:50:42.747843   51397 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0918 20:50:42.747781   51397 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0918 20:50:42.747788   51397 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0918 20:50:42.823902   51397 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0918 20:50:42.824065   51397 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0918 20:50:42.885820   51397 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0918 20:50:42.911298   51397 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0918 20:50:42.920554   51397 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0918 20:50:42.920691   51397 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0918 20:50:42.920767   51397 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0918 20:50:42.938206   51397 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0918 20:50:42.968085   51397 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0918 20:50:43.078376   51397 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 20:50:43.218703   51397 cache_images.go:92] duration metric: took 1.306544974s to LoadCachedImages
	W0918 20:50:43.218788   51397 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0918 20:50:43.218803   51397 kubeadm.go:934] updating node { 192.168.50.80 8443 v1.20.0 crio true true} ...
	I0918 20:50:43.218931   51397 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-878094 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.80
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-878094 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0918 20:50:43.219009   51397 ssh_runner.go:195] Run: crio config
	I0918 20:50:43.270983   51397 cni.go:84] Creating CNI manager for ""
	I0918 20:50:43.271034   51397 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0918 20:50:43.271049   51397 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0918 20:50:43.271073   51397 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.80 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-878094 NodeName:kubernetes-upgrade-878094 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.80"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.80 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0918 20:50:43.271230   51397 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.80
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-878094"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.80
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.80"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0918 20:50:43.271297   51397 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0918 20:50:43.281121   51397 binaries.go:44] Found k8s binaries, skipping transfer
	I0918 20:50:43.281198   51397 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0918 20:50:43.295453   51397 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (432 bytes)
	I0918 20:50:43.314494   51397 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0918 20:50:43.334895   51397 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0918 20:50:43.352956   51397 ssh_runner.go:195] Run: grep 192.168.50.80	control-plane.minikube.internal$ /etc/hosts
	I0918 20:50:43.357105   51397 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.80	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0918 20:50:43.370247   51397 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 20:50:43.502730   51397 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0918 20:50:43.521197   51397 certs.go:68] Setting up /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/kubernetes-upgrade-878094 for IP: 192.168.50.80
	I0918 20:50:43.521217   51397 certs.go:194] generating shared ca certs ...
	I0918 20:50:43.521232   51397 certs.go:226] acquiring lock for ca certs: {Name:mkf54e0e71ed884c4372f9d3d4cb308ea4600185 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 20:50:43.521434   51397 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.key
	I0918 20:50:43.521493   51397 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.key
	I0918 20:50:43.521509   51397 certs.go:256] generating profile certs ...
	I0918 20:50:43.521576   51397 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/kubernetes-upgrade-878094/client.key
	I0918 20:50:43.521596   51397 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/kubernetes-upgrade-878094/client.crt with IP's: []
	I0918 20:50:43.797290   51397 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/kubernetes-upgrade-878094/client.crt ...
	I0918 20:50:43.797327   51397 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/kubernetes-upgrade-878094/client.crt: {Name:mke5c74c30ccb6f4a15f1522a7380759daa4cd2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 20:50:43.797557   51397 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/kubernetes-upgrade-878094/client.key ...
	I0918 20:50:43.797590   51397 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/kubernetes-upgrade-878094/client.key: {Name:mk0e12cce987f7b6378fcdf76b678c80d644067e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 20:50:43.797739   51397 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/kubernetes-upgrade-878094/apiserver.key.a7777c0f
	I0918 20:50:43.797766   51397 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/kubernetes-upgrade-878094/apiserver.crt.a7777c0f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.80]
	I0918 20:50:44.074553   51397 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/kubernetes-upgrade-878094/apiserver.crt.a7777c0f ...
	I0918 20:50:44.074588   51397 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/kubernetes-upgrade-878094/apiserver.crt.a7777c0f: {Name:mk35852399123bab8a0ffd8d60610c5a0894bbdb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 20:50:44.074784   51397 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/kubernetes-upgrade-878094/apiserver.key.a7777c0f ...
	I0918 20:50:44.074800   51397 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/kubernetes-upgrade-878094/apiserver.key.a7777c0f: {Name:mk4aa90cbd2c3076b069dd10502ecd29418e8a96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 20:50:44.074878   51397 certs.go:381] copying /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/kubernetes-upgrade-878094/apiserver.crt.a7777c0f -> /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/kubernetes-upgrade-878094/apiserver.crt
	I0918 20:50:44.074947   51397 certs.go:385] copying /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/kubernetes-upgrade-878094/apiserver.key.a7777c0f -> /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/kubernetes-upgrade-878094/apiserver.key
	I0918 20:50:44.074998   51397 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/kubernetes-upgrade-878094/proxy-client.key
	I0918 20:50:44.075012   51397 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/kubernetes-upgrade-878094/proxy-client.crt with IP's: []
	I0918 20:50:44.152331   51397 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/kubernetes-upgrade-878094/proxy-client.crt ...
	I0918 20:50:44.152369   51397 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/kubernetes-upgrade-878094/proxy-client.crt: {Name:mka795ae07dc4b01cec3160d5749fa3f4e356656 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 20:50:44.152564   51397 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/kubernetes-upgrade-878094/proxy-client.key ...
	I0918 20:50:44.152578   51397 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/kubernetes-upgrade-878094/proxy-client.key: {Name:mk288836cd3eb944d39b34b1e0f8d5803ec66374 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 20:50:44.152757   51397 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878.pem (1338 bytes)
	W0918 20:50:44.152800   51397 certs.go:480] ignoring /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878_empty.pem, impossibly tiny 0 bytes
	I0918 20:50:44.152809   51397 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem (1675 bytes)
	I0918 20:50:44.152834   51397 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem (1082 bytes)
	I0918 20:50:44.152861   51397 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem (1123 bytes)
	I0918 20:50:44.152887   51397 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem (1679 bytes)
	I0918 20:50:44.152929   51397 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem (1708 bytes)
	I0918 20:50:44.153606   51397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0918 20:50:44.183497   51397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0918 20:50:44.212963   51397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0918 20:50:44.242252   51397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0918 20:50:44.268520   51397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/kubernetes-upgrade-878094/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0918 20:50:44.295826   51397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/kubernetes-upgrade-878094/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0918 20:50:44.321818   51397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/kubernetes-upgrade-878094/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0918 20:50:44.349617   51397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/kubernetes-upgrade-878094/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0918 20:50:44.372913   51397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem --> /usr/share/ca-certificates/148782.pem (1708 bytes)
	I0918 20:50:44.409891   51397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0918 20:50:44.444686   51397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878.pem --> /usr/share/ca-certificates/14878.pem (1338 bytes)
	I0918 20:50:44.469870   51397 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0918 20:50:44.486941   51397 ssh_runner.go:195] Run: openssl version
	I0918 20:50:44.492859   51397 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0918 20:50:44.503815   51397 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0918 20:50:44.508334   51397 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 18 19:39 /usr/share/ca-certificates/minikubeCA.pem
	I0918 20:50:44.508398   51397 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0918 20:50:44.514269   51397 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0918 20:50:44.525432   51397 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14878.pem && ln -fs /usr/share/ca-certificates/14878.pem /etc/ssl/certs/14878.pem"
	I0918 20:50:44.537315   51397 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14878.pem
	I0918 20:50:44.541952   51397 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 18 19:57 /usr/share/ca-certificates/14878.pem
	I0918 20:50:44.542026   51397 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14878.pem
	I0918 20:50:44.548333   51397 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14878.pem /etc/ssl/certs/51391683.0"
	I0918 20:50:44.559330   51397 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148782.pem && ln -fs /usr/share/ca-certificates/148782.pem /etc/ssl/certs/148782.pem"
	I0918 20:50:44.570845   51397 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148782.pem
	I0918 20:50:44.575442   51397 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 18 19:57 /usr/share/ca-certificates/148782.pem
	I0918 20:50:44.575532   51397 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148782.pem
	I0918 20:50:44.581203   51397 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148782.pem /etc/ssl/certs/3ec20f2e.0"
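The three blocks above repeat the same pattern for each CA file: copy it under /usr/share/ca-certificates, place a symlink in /etc/ssl/certs, take its OpenSSL subject hash, and add a second symlink named <hash>.0 so OpenSSL-based clients on the node trust it. A minimal sketch of that convention follows; it is illustrative only and not taken from this log, and the file name example-ca.pem is hypothetical:

	sudo ln -fs /usr/share/ca-certificates/example-ca.pem /etc/ssl/certs/example-ca.pem
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/example-ca.pem)
	sudo ln -fs /etc/ssl/certs/example-ca.pem "/etc/ssl/certs/${HASH}.0"

In this run the resulting hash links are b5213941.0, 51391683.0 and 3ec20f2e.0, as the subsequent log lines show.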
	I0918 20:50:44.591749   51397 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0918 20:50:44.596770   51397 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0918 20:50:44.596839   51397 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-878094 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-878094 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.80 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 20:50:44.596929   51397 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0918 20:50:44.597004   51397 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0918 20:50:44.640614   51397 cri.go:89] found id: ""
	I0918 20:50:44.640677   51397 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0918 20:50:44.651601   51397 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0918 20:50:44.661827   51397 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0918 20:50:44.672286   51397 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0918 20:50:44.672310   51397 kubeadm.go:157] found existing configuration files:
	
	I0918 20:50:44.672357   51397 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0918 20:50:44.683145   51397 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0918 20:50:44.683202   51397 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0918 20:50:44.692766   51397 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0918 20:50:44.702261   51397 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0918 20:50:44.702330   51397 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0918 20:50:44.714935   51397 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0918 20:50:44.728042   51397 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0918 20:50:44.728111   51397 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0918 20:50:44.739471   51397 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0918 20:50:44.748774   51397 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0918 20:50:44.748831   51397 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0918 20:50:44.758787   51397 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0918 20:50:44.888377   51397 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0918 20:50:44.888549   51397 kubeadm.go:310] [preflight] Running pre-flight checks
	I0918 20:50:45.034173   51397 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0918 20:50:45.034396   51397 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0918 20:50:45.034585   51397 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0918 20:50:45.232808   51397 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0918 20:50:45.464673   51397 out.go:235]   - Generating certificates and keys ...
	I0918 20:50:45.464823   51397 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0918 20:50:45.464923   51397 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0918 20:50:45.590797   51397 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0918 20:50:45.834592   51397 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0918 20:50:46.097532   51397 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0918 20:50:46.271298   51397 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0918 20:50:46.571173   51397 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0918 20:50:46.571692   51397 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-878094 localhost] and IPs [192.168.50.80 127.0.0.1 ::1]
	I0918 20:50:46.686512   51397 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0918 20:50:46.686733   51397 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-878094 localhost] and IPs [192.168.50.80 127.0.0.1 ::1]
	I0918 20:50:46.761181   51397 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0918 20:50:47.012208   51397 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0918 20:50:47.101701   51397 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0918 20:50:47.101794   51397 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0918 20:50:47.298490   51397 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0918 20:50:47.604425   51397 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0918 20:50:47.779609   51397 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0918 20:50:47.920067   51397 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0918 20:50:47.935691   51397 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0918 20:50:47.937060   51397 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0918 20:50:47.937117   51397 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0918 20:50:48.088388   51397 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0918 20:50:48.090914   51397 out.go:235]   - Booting up control plane ...
	I0918 20:50:48.091042   51397 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0918 20:50:48.107956   51397 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0918 20:50:48.109361   51397 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0918 20:50:48.110153   51397 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0918 20:50:48.114422   51397 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0918 20:51:28.108868   51397 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0918 20:51:28.109439   51397 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0918 20:51:28.109645   51397 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0918 20:51:33.110029   51397 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0918 20:51:33.110201   51397 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0918 20:51:43.109697   51397 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0918 20:51:43.109871   51397 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0918 20:52:03.109320   51397 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0918 20:52:03.109581   51397 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0918 20:52:43.112067   51397 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0918 20:52:43.112317   51397 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0918 20:52:43.112330   51397 kubeadm.go:310] 
	I0918 20:52:43.112392   51397 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0918 20:52:43.112450   51397 kubeadm.go:310] 		timed out waiting for the condition
	I0918 20:52:43.112460   51397 kubeadm.go:310] 
	I0918 20:52:43.112508   51397 kubeadm.go:310] 	This error is likely caused by:
	I0918 20:52:43.112558   51397 kubeadm.go:310] 		- The kubelet is not running
	I0918 20:52:43.112674   51397 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0918 20:52:43.112685   51397 kubeadm.go:310] 
	I0918 20:52:43.112800   51397 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0918 20:52:43.112849   51397 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0918 20:52:43.112896   51397 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0918 20:52:43.112906   51397 kubeadm.go:310] 
	I0918 20:52:43.113021   51397 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0918 20:52:43.113124   51397 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0918 20:52:43.113134   51397 kubeadm.go:310] 
	I0918 20:52:43.113247   51397 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0918 20:52:43.113359   51397 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0918 20:52:43.113455   51397 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0918 20:52:43.113550   51397 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0918 20:52:43.113560   51397 kubeadm.go:310] 
	I0918 20:52:43.114296   51397 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0918 20:52:43.114443   51397 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0918 20:52:43.114546   51397 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0918 20:52:43.114692   51397 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-878094 localhost] and IPs [192.168.50.80 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-878094 localhost] and IPs [192.168.50.80 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-878094 localhost] and IPs [192.168.50.80 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-878094 localhost] and IPs [192.168.50.80 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
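Both init attempts stall in the wait-control-plane phase because nothing ever answers the kubelet health probe on 127.0.0.1:10248. When triaging a run like this by hand, the checks kubeadm's own message points at are the usual starting point; the sketch below is an assumption about how one would debug it on the node and is not part of this log:

	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet | tail -n 100
	curl -sSL http://localhost:10248/healthz
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause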
	
	I0918 20:52:43.114745   51397 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0918 20:52:43.898769   51397 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0918 20:52:43.921482   51397 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0918 20:52:43.932774   51397 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0918 20:52:43.932799   51397 kubeadm.go:157] found existing configuration files:
	
	I0918 20:52:43.932856   51397 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0918 20:52:43.944249   51397 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0918 20:52:43.944329   51397 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0918 20:52:43.956500   51397 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0918 20:52:43.968991   51397 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0918 20:52:43.969055   51397 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0918 20:52:43.979876   51397 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0918 20:52:43.991771   51397 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0918 20:52:43.991829   51397 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0918 20:52:44.003236   51397 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0918 20:52:44.013587   51397 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0918 20:52:44.013661   51397 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0918 20:52:44.024428   51397 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0918 20:52:44.110424   51397 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0918 20:52:44.110526   51397 kubeadm.go:310] [preflight] Running pre-flight checks
	I0918 20:52:44.272702   51397 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0918 20:52:44.272918   51397 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0918 20:52:44.273065   51397 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0918 20:52:44.516598   51397 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0918 20:52:44.519483   51397 out.go:235]   - Generating certificates and keys ...
	I0918 20:52:44.519617   51397 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0918 20:52:44.519709   51397 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0918 20:52:44.519835   51397 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0918 20:52:44.519928   51397 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0918 20:52:44.520035   51397 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0918 20:52:44.520112   51397 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0918 20:52:44.520200   51397 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0918 20:52:44.520291   51397 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0918 20:52:44.520396   51397 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0918 20:52:44.520493   51397 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0918 20:52:44.520545   51397 kubeadm.go:310] [certs] Using the existing "sa" key
	I0918 20:52:44.520622   51397 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0918 20:52:44.599572   51397 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0918 20:52:44.757006   51397 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0918 20:52:44.834855   51397 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0918 20:52:44.919782   51397 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0918 20:52:44.936218   51397 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0918 20:52:44.937359   51397 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0918 20:52:44.937437   51397 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0918 20:52:45.098073   51397 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0918 20:52:45.099995   51397 out.go:235]   - Booting up control plane ...
	I0918 20:52:45.100183   51397 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0918 20:52:45.114811   51397 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0918 20:52:45.116081   51397 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0918 20:52:45.117006   51397 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0918 20:52:45.119384   51397 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0918 20:53:25.122550   51397 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0918 20:53:25.123416   51397 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0918 20:53:25.123720   51397 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0918 20:53:30.124287   51397 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0918 20:53:30.124544   51397 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0918 20:53:40.125465   51397 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0918 20:53:40.125768   51397 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0918 20:54:00.124701   51397 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0918 20:54:00.125037   51397 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0918 20:54:40.124711   51397 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0918 20:54:40.124956   51397 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0918 20:54:40.124976   51397 kubeadm.go:310] 
	I0918 20:54:40.125023   51397 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0918 20:54:40.125070   51397 kubeadm.go:310] 		timed out waiting for the condition
	I0918 20:54:40.125077   51397 kubeadm.go:310] 
	I0918 20:54:40.125105   51397 kubeadm.go:310] 	This error is likely caused by:
	I0918 20:54:40.125157   51397 kubeadm.go:310] 		- The kubelet is not running
	I0918 20:54:40.125304   51397 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0918 20:54:40.125317   51397 kubeadm.go:310] 
	I0918 20:54:40.125466   51397 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0918 20:54:40.125506   51397 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0918 20:54:40.125552   51397 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0918 20:54:40.125561   51397 kubeadm.go:310] 
	I0918 20:54:40.125685   51397 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0918 20:54:40.125784   51397 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0918 20:54:40.125795   51397 kubeadm.go:310] 
	I0918 20:54:40.125919   51397 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0918 20:54:40.126005   51397 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0918 20:54:40.126070   51397 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0918 20:54:40.126186   51397 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0918 20:54:40.126208   51397 kubeadm.go:310] 
	I0918 20:54:40.126843   51397 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0918 20:54:40.126946   51397 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0918 20:54:40.127045   51397 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0918 20:54:40.127107   51397 kubeadm.go:394] duration metric: took 3m55.53027403s to StartCluster
	I0918 20:54:40.127161   51397 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 20:54:40.127233   51397 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 20:54:40.167207   51397 cri.go:89] found id: ""
	I0918 20:54:40.167253   51397 logs.go:276] 0 containers: []
	W0918 20:54:40.167264   51397 logs.go:278] No container was found matching "kube-apiserver"
	I0918 20:54:40.167273   51397 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 20:54:40.167361   51397 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 20:54:40.206177   51397 cri.go:89] found id: ""
	I0918 20:54:40.206200   51397 logs.go:276] 0 containers: []
	W0918 20:54:40.206207   51397 logs.go:278] No container was found matching "etcd"
	I0918 20:54:40.206213   51397 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 20:54:40.206275   51397 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 20:54:40.240749   51397 cri.go:89] found id: ""
	I0918 20:54:40.240777   51397 logs.go:276] 0 containers: []
	W0918 20:54:40.240785   51397 logs.go:278] No container was found matching "coredns"
	I0918 20:54:40.240791   51397 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 20:54:40.240841   51397 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 20:54:40.277427   51397 cri.go:89] found id: ""
	I0918 20:54:40.277454   51397 logs.go:276] 0 containers: []
	W0918 20:54:40.277462   51397 logs.go:278] No container was found matching "kube-scheduler"
	I0918 20:54:40.277467   51397 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 20:54:40.277518   51397 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 20:54:40.310451   51397 cri.go:89] found id: ""
	I0918 20:54:40.310478   51397 logs.go:276] 0 containers: []
	W0918 20:54:40.310486   51397 logs.go:278] No container was found matching "kube-proxy"
	I0918 20:54:40.310492   51397 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 20:54:40.310560   51397 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 20:54:40.344364   51397 cri.go:89] found id: ""
	I0918 20:54:40.344393   51397 logs.go:276] 0 containers: []
	W0918 20:54:40.344400   51397 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 20:54:40.344406   51397 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 20:54:40.344458   51397 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 20:54:40.376631   51397 cri.go:89] found id: ""
	I0918 20:54:40.376658   51397 logs.go:276] 0 containers: []
	W0918 20:54:40.376666   51397 logs.go:278] No container was found matching "kindnet"
	I0918 20:54:40.376679   51397 logs.go:123] Gathering logs for describe nodes ...
	I0918 20:54:40.376698   51397 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 20:54:40.495699   51397 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 20:54:40.495728   51397 logs.go:123] Gathering logs for CRI-O ...
	I0918 20:54:40.495742   51397 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 20:54:40.598463   51397 logs.go:123] Gathering logs for container status ...
	I0918 20:54:40.598502   51397 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 20:54:40.637701   51397 logs.go:123] Gathering logs for kubelet ...
	I0918 20:54:40.637742   51397 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 20:54:40.690566   51397 logs.go:123] Gathering logs for dmesg ...
	I0918 20:54:40.690625   51397 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0918 20:54:40.704077   51397 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0918 20:54:40.704169   51397 out.go:270] * 
	* 
	W0918 20:54:40.704232   51397 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0918 20:54:40.704250   51397 out.go:270] * 
	* 
	W0918 20:54:40.705368   51397 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0918 20:54:40.708107   51397 out.go:201] 
	W0918 20:54:40.709088   51397 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0918 20:54:40.709154   51397 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0918 20:54:40.709181   51397 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0918 20:54:40.710512   51397 out.go:201] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-878094 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
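For reference, the stderr above already names the likely follow-up: check the kubelet journal and retry with the cgroup-driver override from the printed Suggestion. The commands below are an editorial sketch, not part of the test run; the profile name, memory, driver, and runtime flags are copied from the failing invocation, and the --extra-config value comes straight from the Suggestion line in the log.
	# inspect why the kubelet never answered on :10248 (runs the command inside the VM)
	out/minikube-linux-amd64 -p kubernetes-upgrade-878094 ssh -- sudo journalctl -xeu kubelet | tail -n 50
	# retry v1.20.0 with the cgroup driver the Suggestion mentions
	out/minikube-linux-amd64 start -p kubernetes-upgrade-878094 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd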
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-878094
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-878094: (1.331050406s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-878094 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-878094 status --format={{.Host}}: exit status 7 (63.902081ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
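A minimal sketch of how a wrapper script might gate on this state, assuming (as the test does with --format={{.Host}}) that only the Host field matters; in this run the non-zero exit status 7 simply accompanied a Stopped host, which is why the harness treats it as "may be ok".
	# query only the host state, tolerating the non-zero exit the test allows
	HOST_STATE=$(out/minikube-linux-amd64 -p kubernetes-upgrade-878094 status --format='{{.Host}}' || true)
	if [ "$HOST_STATE" = "Stopped" ]; then
	  echo "host is stopped; safe to restart with a newer Kubernetes version"
	fi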
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-878094 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0918 20:54:44.358134   14878 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/functional-790989/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:55:01.286390   14878 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/functional-790989/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-878094 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m15.90770299s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-878094 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-878094 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-878094 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (91.951056ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-878094] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19667
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19667-7671/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19667-7671/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-878094
	    minikube start -p kubernetes-upgrade-878094 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-8780942 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-878094 --kubernetes-version=v1.31.1
	    

                                                
                                                
** /stderr **
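One way to decide between the three suggestions printed above is to read the running server version first, as the test itself does with 'kubectl version --output=json'. A hedged sketch follows; the jq dependency and the .serverVersion.gitVersion field path are assumptions of this example, not output from this report.
	# inspect the server version before choosing between the suggestions above
	kubectl --context kubernetes-upgrade-878094 version --output=json | jq -r '.serverVersion.gitVersion'
	# here this prints v1.31.1, so only suggestion 3 (staying on v1.31.1) avoids a delete/recreate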
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-878094 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-878094 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (27.555116652s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-09-18 20:56:25.784711416 +0000 UTC m=+4708.555052648
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-878094 -n kubernetes-upgrade-878094
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-878094 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-878094 logs -n 25: (1.784438857s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                    Args                    |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-543581 sudo cat                  | cilium-543581             | jenkins | v1.34.0 | 18 Sep 24 20:54 UTC |                     |
	|         | /usr/lib/systemd/system/cri-docker.service |                           |         |         |                     |                     |
	| ssh     | -p cilium-543581 sudo                      | cilium-543581             | jenkins | v1.34.0 | 18 Sep 24 20:54 UTC |                     |
	|         | cri-dockerd --version                      |                           |         |         |                     |                     |
	| ssh     | -p cilium-543581 sudo                      | cilium-543581             | jenkins | v1.34.0 | 18 Sep 24 20:54 UTC |                     |
	|         | systemctl status containerd                |                           |         |         |                     |                     |
	|         | --all --full --no-pager                    |                           |         |         |                     |                     |
	| ssh     | -p cilium-543581 sudo                      | cilium-543581             | jenkins | v1.34.0 | 18 Sep 24 20:54 UTC |                     |
	|         | systemctl cat containerd                   |                           |         |         |                     |                     |
	|         | --no-pager                                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-543581 sudo cat                  | cilium-543581             | jenkins | v1.34.0 | 18 Sep 24 20:54 UTC |                     |
	|         | /lib/systemd/system/containerd.service     |                           |         |         |                     |                     |
	| ssh     | -p cilium-543581 sudo cat                  | cilium-543581             | jenkins | v1.34.0 | 18 Sep 24 20:54 UTC |                     |
	|         | /etc/containerd/config.toml                |                           |         |         |                     |                     |
	| ssh     | -p cilium-543581 sudo                      | cilium-543581             | jenkins | v1.34.0 | 18 Sep 24 20:54 UTC |                     |
	|         | containerd config dump                     |                           |         |         |                     |                     |
	| ssh     | -p cilium-543581 sudo                      | cilium-543581             | jenkins | v1.34.0 | 18 Sep 24 20:54 UTC |                     |
	|         | systemctl status crio --all                |                           |         |         |                     |                     |
	|         | --full --no-pager                          |                           |         |         |                     |                     |
	| ssh     | -p cilium-543581 sudo                      | cilium-543581             | jenkins | v1.34.0 | 18 Sep 24 20:54 UTC |                     |
	|         | systemctl cat crio --no-pager              |                           |         |         |                     |                     |
	| ssh     | -p cilium-543581 sudo find                 | cilium-543581             | jenkins | v1.34.0 | 18 Sep 24 20:54 UTC |                     |
	|         | /etc/crio -type f -exec sh -c              |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                       |                           |         |         |                     |                     |
	| ssh     | -p cilium-543581 sudo crio                 | cilium-543581             | jenkins | v1.34.0 | 18 Sep 24 20:54 UTC |                     |
	|         | config                                     |                           |         |         |                     |                     |
	| delete  | -p cilium-543581                           | cilium-543581             | jenkins | v1.34.0 | 18 Sep 24 20:54 UTC | 18 Sep 24 20:54 UTC |
	| start   | -p cert-options-347585                     | cert-options-347585       | jenkins | v1.34.0 | 18 Sep 24 20:54 UTC | 18 Sep 24 20:55 UTC |
	|         | --memory=2048                              |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1                  |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15              |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost                |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com           |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                      |                           |         |         |                     |                     |
	|         | --driver=kvm2                              |                           |         |         |                     |                     |
	|         | --container-runtime=crio                   |                           |         |         |                     |                     |
	| start   | -p old-k8s-version-740194                  | old-k8s-version-740194    | jenkins | v1.34.0 | 18 Sep 24 20:54 UTC |                     |
	|         | --memory=2200                              |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true              |                           |         |         |                     |                     |
	|         | --kvm-network=default                      |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system              |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                    |                           |         |         |                     |                     |
	|         | --keep-context=false                       |                           |         |         |                     |                     |
	|         | --driver=kvm2                              |                           |         |         |                     |                     |
	|         | --container-runtime=crio                   |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0               |                           |         |         |                     |                     |
	| start   | -p pause-543700                            | pause-543700              | jenkins | v1.34.0 | 18 Sep 24 20:54 UTC | 18 Sep 24 20:55 UTC |
	|         | --alsologtostderr                          |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio                   |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-878094               | kubernetes-upgrade-878094 | jenkins | v1.34.0 | 18 Sep 24 20:54 UTC | 18 Sep 24 20:54 UTC |
	| start   | -p kubernetes-upgrade-878094               | kubernetes-upgrade-878094 | jenkins | v1.34.0 | 18 Sep 24 20:54 UTC | 18 Sep 24 20:55 UTC |
	|         | --memory=2200                              |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1               |                           |         |         |                     |                     |
	|         | --alsologtostderr                          |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio                   |                           |         |         |                     |                     |
	| ssh     | cert-options-347585 ssh                    | cert-options-347585       | jenkins | v1.34.0 | 18 Sep 24 20:55 UTC | 18 Sep 24 20:55 UTC |
	|         | openssl x509 -text -noout -in              |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt      |                           |         |         |                     |                     |
	| ssh     | -p cert-options-347585 -- sudo             | cert-options-347585       | jenkins | v1.34.0 | 18 Sep 24 20:55 UTC | 18 Sep 24 20:55 UTC |
	|         | cat /etc/kubernetes/admin.conf             |                           |         |         |                     |                     |
	| delete  | -p cert-options-347585                     | cert-options-347585       | jenkins | v1.34.0 | 18 Sep 24 20:55 UTC | 18 Sep 24 20:55 UTC |
	| start   | -p no-preload-331658                       | no-preload-331658         | jenkins | v1.34.0 | 18 Sep 24 20:55 UTC |                     |
	|         | --memory=2200                              |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true              |                           |         |         |                     |                     |
	|         | --preload=false --driver=kvm2              |                           |         |         |                     |                     |
	|         |  --container-runtime=crio                  |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1               |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-878094               | kubernetes-upgrade-878094 | jenkins | v1.34.0 | 18 Sep 24 20:55 UTC |                     |
	|         | --memory=2200                              |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0               |                           |         |         |                     |                     |
	|         | --driver=kvm2                              |                           |         |         |                     |                     |
	|         | --container-runtime=crio                   |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-878094               | kubernetes-upgrade-878094 | jenkins | v1.34.0 | 18 Sep 24 20:55 UTC | 18 Sep 24 20:56 UTC |
	|         | --memory=2200                              |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1               |                           |         |         |                     |                     |
	|         | --alsologtostderr                          |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio                   |                           |         |         |                     |                     |
	| delete  | -p pause-543700                            | pause-543700              | jenkins | v1.34.0 | 18 Sep 24 20:56 UTC | 18 Sep 24 20:56 UTC |
	| start   | -p embed-certs-255556                      | embed-certs-255556        | jenkins | v1.34.0 | 18 Sep 24 20:56 UTC |                     |
	|         | --memory=2200                              |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true              |                           |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                |                           |         |         |                     |                     |
	|         |  --container-runtime=crio                  |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1               |                           |         |         |                     |                     |
	|---------|--------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/18 20:56:03
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0918 20:56:03.869682   59271 out.go:345] Setting OutFile to fd 1 ...
	I0918 20:56:03.869792   59271 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 20:56:03.869800   59271 out.go:358] Setting ErrFile to fd 2...
	I0918 20:56:03.869804   59271 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 20:56:03.869964   59271 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19667-7671/.minikube/bin
	I0918 20:56:03.870513   59271 out.go:352] Setting JSON to false
	I0918 20:56:03.871450   59271 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5908,"bootTime":1726687056,"procs":235,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0918 20:56:03.871543   59271 start.go:139] virtualization: kvm guest
	I0918 20:56:03.873783   59271 out.go:177] * [embed-certs-255556] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0918 20:56:03.875238   59271 out.go:177]   - MINIKUBE_LOCATION=19667
	I0918 20:56:03.875285   59271 notify.go:220] Checking for updates...
	I0918 20:56:03.877727   59271 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 20:56:03.878815   59271 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19667-7671/kubeconfig
	I0918 20:56:03.879885   59271 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19667-7671/.minikube
	I0918 20:56:03.880912   59271 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0918 20:56:03.881988   59271 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0918 20:56:03.883635   59271 config.go:182] Loaded profile config "kubernetes-upgrade-878094": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0918 20:56:03.883751   59271 config.go:182] Loaded profile config "no-preload-331658": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0918 20:56:03.883864   59271 config.go:182] Loaded profile config "old-k8s-version-740194": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0918 20:56:03.883959   59271 driver.go:394] Setting default libvirt URI to qemu:///system
	I0918 20:56:03.921179   59271 out.go:177] * Using the kvm2 driver based on user configuration
	I0918 20:56:03.923038   59271 start.go:297] selected driver: kvm2
	I0918 20:56:03.923061   59271 start.go:901] validating driver "kvm2" against <nil>
	I0918 20:56:03.923073   59271 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 20:56:03.923862   59271 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 20:56:03.923954   59271 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19667-7671/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0918 20:56:03.940075   59271 install.go:137] /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I0918 20:56:03.940123   59271 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0918 20:56:03.940384   59271 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0918 20:56:03.940416   59271 cni.go:84] Creating CNI manager for ""
	I0918 20:56:03.940463   59271 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0918 20:56:03.940472   59271 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0918 20:56:03.940519   59271 start.go:340] cluster config:
	{Name:embed-certs-255556 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-255556 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 20:56:03.940631   59271 iso.go:125] acquiring lock: {Name:mkbcf4d84f3091567a39802cd5e773cb10b8c545 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 20:56:03.942453   59271 out.go:177] * Starting "embed-certs-255556" primary control-plane node in "embed-certs-255556" cluster
	I0918 20:56:01.672361   57636 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0918 20:56:01.673259   57636 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0918 20:56:01.673516   57636 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0918 20:56:05.954572   58323 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1: (2.716240545s)
	I0918 20:56:05.954609   58323 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 from cache
	I0918 20:56:05.954640   58323 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I0918 20:56:05.954662   58323 ssh_runner.go:235] Completed: which crictl: (2.716269907s)
	I0918 20:56:05.954694   58323 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I0918 20:56:05.954724   58323 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 20:56:08.036783   58323 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.082031242s)
	I0918 20:56:08.036884   58323 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 20:56:08.036893   58323 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (2.082176654s)
	I0918 20:56:08.036917   58323 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I0918 20:56:08.036949   58323 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0918 20:56:08.036999   58323 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0918 20:56:08.086470   58323 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 20:56:03.943821   59271 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0918 20:56:03.943876   59271 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0918 20:56:03.943888   59271 cache.go:56] Caching tarball of preloaded images
	I0918 20:56:03.943976   59271 preload.go:172] Found /home/jenkins/minikube-integration/19667-7671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0918 20:56:03.943990   59271 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0918 20:56:03.944130   59271 profile.go:143] Saving config to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/embed-certs-255556/config.json ...
	I0918 20:56:03.944155   59271 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/embed-certs-255556/config.json: {Name:mkc2c392ddbf155898c8eb408b3d8a19dd2c0295 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 20:56:03.944318   59271 start.go:360] acquireMachinesLock for embed-certs-255556: {Name:mk1b0d68a0b7d9d4bc8204d5d81cb9eb77222526 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 20:56:09.568868   59271 start.go:364] duration metric: took 5.624523611s to acquireMachinesLock for "embed-certs-255556"
	I0918 20:56:09.568952   59271 start.go:93] Provisioning new machine with config: &{Name:embed-certs-255556 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-255556 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0918 20:56:09.569084   59271 start.go:125] createHost starting for "" (driver="kvm2")
	I0918 20:56:06.673719   57636 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0918 20:56:06.673968   57636 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0918 20:56:09.324865   58925 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0918 20:56:09.324893   58925 machine.go:96] duration metric: took 9.086488828s to provisionDockerMachine
	I0918 20:56:09.324905   58925 start.go:293] postStartSetup for "kubernetes-upgrade-878094" (driver="kvm2")
	I0918 20:56:09.324918   58925 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0918 20:56:09.324937   58925 main.go:141] libmachine: (kubernetes-upgrade-878094) Calling .DriverName
	I0918 20:56:09.325301   58925 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0918 20:56:09.325334   58925 main.go:141] libmachine: (kubernetes-upgrade-878094) Calling .GetSSHHostname
	I0918 20:56:09.328468   58925 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | domain kubernetes-upgrade-878094 has defined MAC address 52:54:00:21:00:07 in network mk-kubernetes-upgrade-878094
	I0918 20:56:09.328880   58925 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:00:07", ip: ""} in network mk-kubernetes-upgrade-878094: {Iface:virbr4 ExpiryTime:2024-09-18 21:55:27 +0000 UTC Type:0 Mac:52:54:00:21:00:07 Iaid: IPaddr:192.168.50.80 Prefix:24 Hostname:kubernetes-upgrade-878094 Clientid:01:52:54:00:21:00:07}
	I0918 20:56:09.328918   58925 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | domain kubernetes-upgrade-878094 has defined IP address 192.168.50.80 and MAC address 52:54:00:21:00:07 in network mk-kubernetes-upgrade-878094
	I0918 20:56:09.329033   58925 main.go:141] libmachine: (kubernetes-upgrade-878094) Calling .GetSSHPort
	I0918 20:56:09.329249   58925 main.go:141] libmachine: (kubernetes-upgrade-878094) Calling .GetSSHKeyPath
	I0918 20:56:09.329423   58925 main.go:141] libmachine: (kubernetes-upgrade-878094) Calling .GetSSHUsername
	I0918 20:56:09.329573   58925 sshutil.go:53] new ssh client: &{IP:192.168.50.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/kubernetes-upgrade-878094/id_rsa Username:docker}
	I0918 20:56:09.414198   58925 ssh_runner.go:195] Run: cat /etc/os-release
	I0918 20:56:09.418545   58925 info.go:137] Remote host: Buildroot 2023.02.9
	I0918 20:56:09.418571   58925 filesync.go:126] Scanning /home/jenkins/minikube-integration/19667-7671/.minikube/addons for local assets ...
	I0918 20:56:09.418646   58925 filesync.go:126] Scanning /home/jenkins/minikube-integration/19667-7671/.minikube/files for local assets ...
	I0918 20:56:09.418744   58925 filesync.go:149] local asset: /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem -> 148782.pem in /etc/ssl/certs
	I0918 20:56:09.418988   58925 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0918 20:56:09.428134   58925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem --> /etc/ssl/certs/148782.pem (1708 bytes)
	I0918 20:56:09.452327   58925 start.go:296] duration metric: took 127.406712ms for postStartSetup
	I0918 20:56:09.452395   58925 fix.go:56] duration metric: took 9.239595864s for fixHost
	I0918 20:56:09.452424   58925 main.go:141] libmachine: (kubernetes-upgrade-878094) Calling .GetSSHHostname
	I0918 20:56:09.455614   58925 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | domain kubernetes-upgrade-878094 has defined MAC address 52:54:00:21:00:07 in network mk-kubernetes-upgrade-878094
	I0918 20:56:09.455990   58925 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:00:07", ip: ""} in network mk-kubernetes-upgrade-878094: {Iface:virbr4 ExpiryTime:2024-09-18 21:55:27 +0000 UTC Type:0 Mac:52:54:00:21:00:07 Iaid: IPaddr:192.168.50.80 Prefix:24 Hostname:kubernetes-upgrade-878094 Clientid:01:52:54:00:21:00:07}
	I0918 20:56:09.456055   58925 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | domain kubernetes-upgrade-878094 has defined IP address 192.168.50.80 and MAC address 52:54:00:21:00:07 in network mk-kubernetes-upgrade-878094
	I0918 20:56:09.456363   58925 main.go:141] libmachine: (kubernetes-upgrade-878094) Calling .GetSSHPort
	I0918 20:56:09.456626   58925 main.go:141] libmachine: (kubernetes-upgrade-878094) Calling .GetSSHKeyPath
	I0918 20:56:09.456834   58925 main.go:141] libmachine: (kubernetes-upgrade-878094) Calling .GetSSHKeyPath
	I0918 20:56:09.457011   58925 main.go:141] libmachine: (kubernetes-upgrade-878094) Calling .GetSSHUsername
	I0918 20:56:09.457232   58925 main.go:141] libmachine: Using SSH client type: native
	I0918 20:56:09.457450   58925 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.80 22 <nil> <nil>}
	I0918 20:56:09.457465   58925 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0918 20:56:09.568717   58925 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726692969.558175184
	
	I0918 20:56:09.568740   58925 fix.go:216] guest clock: 1726692969.558175184
	I0918 20:56:09.568750   58925 fix.go:229] Guest: 2024-09-18 20:56:09.558175184 +0000 UTC Remote: 2024-09-18 20:56:09.45240154 +0000 UTC m=+11.219513492 (delta=105.773644ms)
	I0918 20:56:09.568775   58925 fix.go:200] guest clock delta is within tolerance: 105.773644ms
	I0918 20:56:09.568782   58925 start.go:83] releasing machines lock for "kubernetes-upgrade-878094", held for 9.356031784s
	I0918 20:56:09.568809   58925 main.go:141] libmachine: (kubernetes-upgrade-878094) Calling .DriverName
	I0918 20:56:09.569076   58925 main.go:141] libmachine: (kubernetes-upgrade-878094) Calling .GetIP
	I0918 20:56:09.572489   58925 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | domain kubernetes-upgrade-878094 has defined MAC address 52:54:00:21:00:07 in network mk-kubernetes-upgrade-878094
	I0918 20:56:09.572845   58925 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:00:07", ip: ""} in network mk-kubernetes-upgrade-878094: {Iface:virbr4 ExpiryTime:2024-09-18 21:55:27 +0000 UTC Type:0 Mac:52:54:00:21:00:07 Iaid: IPaddr:192.168.50.80 Prefix:24 Hostname:kubernetes-upgrade-878094 Clientid:01:52:54:00:21:00:07}
	I0918 20:56:09.572868   58925 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | domain kubernetes-upgrade-878094 has defined IP address 192.168.50.80 and MAC address 52:54:00:21:00:07 in network mk-kubernetes-upgrade-878094
	I0918 20:56:09.573084   58925 main.go:141] libmachine: (kubernetes-upgrade-878094) Calling .DriverName
	I0918 20:56:09.573717   58925 main.go:141] libmachine: (kubernetes-upgrade-878094) Calling .DriverName
	I0918 20:56:09.573897   58925 main.go:141] libmachine: (kubernetes-upgrade-878094) Calling .DriverName
	I0918 20:56:09.573998   58925 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0918 20:56:09.574036   58925 main.go:141] libmachine: (kubernetes-upgrade-878094) Calling .GetSSHHostname
	I0918 20:56:09.574121   58925 ssh_runner.go:195] Run: cat /version.json
	I0918 20:56:09.574146   58925 main.go:141] libmachine: (kubernetes-upgrade-878094) Calling .GetSSHHostname
	I0918 20:56:09.576922   58925 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | domain kubernetes-upgrade-878094 has defined MAC address 52:54:00:21:00:07 in network mk-kubernetes-upgrade-878094
	I0918 20:56:09.577207   58925 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | domain kubernetes-upgrade-878094 has defined MAC address 52:54:00:21:00:07 in network mk-kubernetes-upgrade-878094
	I0918 20:56:09.577425   58925 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:00:07", ip: ""} in network mk-kubernetes-upgrade-878094: {Iface:virbr4 ExpiryTime:2024-09-18 21:55:27 +0000 UTC Type:0 Mac:52:54:00:21:00:07 Iaid: IPaddr:192.168.50.80 Prefix:24 Hostname:kubernetes-upgrade-878094 Clientid:01:52:54:00:21:00:07}
	I0918 20:56:09.577445   58925 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | domain kubernetes-upgrade-878094 has defined IP address 192.168.50.80 and MAC address 52:54:00:21:00:07 in network mk-kubernetes-upgrade-878094
	I0918 20:56:09.577631   58925 main.go:141] libmachine: (kubernetes-upgrade-878094) Calling .GetSSHPort
	I0918 20:56:09.577778   58925 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:00:07", ip: ""} in network mk-kubernetes-upgrade-878094: {Iface:virbr4 ExpiryTime:2024-09-18 21:55:27 +0000 UTC Type:0 Mac:52:54:00:21:00:07 Iaid: IPaddr:192.168.50.80 Prefix:24 Hostname:kubernetes-upgrade-878094 Clientid:01:52:54:00:21:00:07}
	I0918 20:56:09.577804   58925 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | domain kubernetes-upgrade-878094 has defined IP address 192.168.50.80 and MAC address 52:54:00:21:00:07 in network mk-kubernetes-upgrade-878094
	I0918 20:56:09.577812   58925 main.go:141] libmachine: (kubernetes-upgrade-878094) Calling .GetSSHKeyPath
	I0918 20:56:09.577986   58925 main.go:141] libmachine: (kubernetes-upgrade-878094) Calling .GetSSHUsername
	I0918 20:56:09.578032   58925 main.go:141] libmachine: (kubernetes-upgrade-878094) Calling .GetSSHPort
	I0918 20:56:09.578196   58925 main.go:141] libmachine: (kubernetes-upgrade-878094) Calling .GetSSHKeyPath
	I0918 20:56:09.578210   58925 sshutil.go:53] new ssh client: &{IP:192.168.50.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/kubernetes-upgrade-878094/id_rsa Username:docker}
	I0918 20:56:09.578326   58925 main.go:141] libmachine: (kubernetes-upgrade-878094) Calling .GetSSHUsername
	I0918 20:56:09.578452   58925 sshutil.go:53] new ssh client: &{IP:192.168.50.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/kubernetes-upgrade-878094/id_rsa Username:docker}
	I0918 20:56:09.657838   58925 ssh_runner.go:195] Run: systemctl --version
	I0918 20:56:09.698002   58925 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0918 20:56:09.857215   58925 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0918 20:56:09.864410   58925 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0918 20:56:09.864491   58925 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0918 20:56:09.876463   58925 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0918 20:56:09.876490   58925 start.go:495] detecting cgroup driver to use...
	I0918 20:56:09.876587   58925 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0918 20:56:09.895301   58925 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0918 20:56:09.910983   58925 docker.go:217] disabling cri-docker service (if available) ...
	I0918 20:56:09.911069   58925 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0918 20:56:09.932393   58925 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0918 20:56:09.953729   58925 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0918 20:56:10.112270   58925 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0918 20:56:10.299701   58925 docker.go:233] disabling docker service ...
	I0918 20:56:10.299775   58925 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0918 20:56:10.319695   58925 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0918 20:56:10.333597   58925 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0918 20:56:10.471442   58925 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0918 20:56:10.607698   58925 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0918 20:56:10.626257   58925 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0918 20:56:10.645835   58925 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0918 20:56:10.645896   58925 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 20:56:10.656209   58925 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0918 20:56:10.656288   58925 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 20:56:10.666826   58925 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 20:56:10.677661   58925 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 20:56:10.688064   58925 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0918 20:56:10.699305   58925 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 20:56:10.710070   58925 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 20:56:10.722705   58925 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 20:56:10.733142   58925 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0918 20:56:10.742377   58925 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0918 20:56:10.751611   58925 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 20:56:10.920100   58925 ssh_runner.go:195] Run: sudo systemctl restart crio
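	(Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with the following effective settings before the crio restart. This is a sketch reconstructed from the logged commands only; the file itself is not dumped in this log, and surrounding TOML section headers are omitted.)

	    # reconstructed from the sed commands above; illustrative only
	    pause_image = "registry.k8s.io/pause:3.10"
	    cgroup_manager = "cgroupfs"
	    conmon_cgroup = "pod"
	    default_sysctls = [
	      "net.ipv4.ip_unprivileged_port_start=0",
	    ]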
	I0918 20:56:10.142208   58323 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.055705574s)
	I0918 20:56:10.142273   58323 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1: (2.105254358s)
	I0918 20:56:10.142294   58323 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 from cache
	I0918 20:56:10.142307   58323 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0918 20:56:10.142319   58323 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0918 20:56:10.142362   58323 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0918 20:56:10.142380   58323 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0918 20:56:10.146849   58323 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0918 20:56:10.146884   58323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I0918 20:56:12.859034   58323 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1: (2.716636004s)
	I0918 20:56:12.859069   58323 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 from cache
	I0918 20:56:12.859096   58323 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0918 20:56:12.859149   58323 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0918 20:56:09.571683   59271 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0918 20:56:09.571867   59271 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 20:56:09.571917   59271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 20:56:09.591090   59271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33151
	I0918 20:56:09.591614   59271 main.go:141] libmachine: () Calling .GetVersion
	I0918 20:56:09.592409   59271 main.go:141] libmachine: Using API Version  1
	I0918 20:56:09.592431   59271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 20:56:09.592922   59271 main.go:141] libmachine: () Calling .GetMachineName
	I0918 20:56:09.593151   59271 main.go:141] libmachine: (embed-certs-255556) Calling .GetMachineName
	I0918 20:56:09.593313   59271 main.go:141] libmachine: (embed-certs-255556) Calling .DriverName
	I0918 20:56:09.593496   59271 start.go:159] libmachine.API.Create for "embed-certs-255556" (driver="kvm2")
	I0918 20:56:09.593533   59271 client.go:168] LocalClient.Create starting
	I0918 20:56:09.593577   59271 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem
	I0918 20:56:09.593630   59271 main.go:141] libmachine: Decoding PEM data...
	I0918 20:56:09.593656   59271 main.go:141] libmachine: Parsing certificate...
	I0918 20:56:09.593728   59271 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem
	I0918 20:56:09.593755   59271 main.go:141] libmachine: Decoding PEM data...
	I0918 20:56:09.593770   59271 main.go:141] libmachine: Parsing certificate...
	I0918 20:56:09.593789   59271 main.go:141] libmachine: Running pre-create checks...
	I0918 20:56:09.593803   59271 main.go:141] libmachine: (embed-certs-255556) Calling .PreCreateCheck
	I0918 20:56:09.594212   59271 main.go:141] libmachine: (embed-certs-255556) Calling .GetConfigRaw
	I0918 20:56:09.594620   59271 main.go:141] libmachine: Creating machine...
	I0918 20:56:09.594634   59271 main.go:141] libmachine: (embed-certs-255556) Calling .Create
	I0918 20:56:09.594790   59271 main.go:141] libmachine: (embed-certs-255556) Creating KVM machine...
	I0918 20:56:09.596281   59271 main.go:141] libmachine: (embed-certs-255556) DBG | found existing default KVM network
	I0918 20:56:09.598279   59271 main.go:141] libmachine: (embed-certs-255556) DBG | I0918 20:56:09.598119   59311 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002157c0}
	I0918 20:56:09.598330   59271 main.go:141] libmachine: (embed-certs-255556) DBG | created network xml: 
	I0918 20:56:09.598345   59271 main.go:141] libmachine: (embed-certs-255556) DBG | <network>
	I0918 20:56:09.598354   59271 main.go:141] libmachine: (embed-certs-255556) DBG |   <name>mk-embed-certs-255556</name>
	I0918 20:56:09.598384   59271 main.go:141] libmachine: (embed-certs-255556) DBG |   <dns enable='no'/>
	I0918 20:56:09.598401   59271 main.go:141] libmachine: (embed-certs-255556) DBG |   
	I0918 20:56:09.598411   59271 main.go:141] libmachine: (embed-certs-255556) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0918 20:56:09.598423   59271 main.go:141] libmachine: (embed-certs-255556) DBG |     <dhcp>
	I0918 20:56:09.598434   59271 main.go:141] libmachine: (embed-certs-255556) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0918 20:56:09.598445   59271 main.go:141] libmachine: (embed-certs-255556) DBG |     </dhcp>
	I0918 20:56:09.598453   59271 main.go:141] libmachine: (embed-certs-255556) DBG |   </ip>
	I0918 20:56:09.598460   59271 main.go:141] libmachine: (embed-certs-255556) DBG |   
	I0918 20:56:09.598478   59271 main.go:141] libmachine: (embed-certs-255556) DBG | </network>
	I0918 20:56:09.598489   59271 main.go:141] libmachine: (embed-certs-255556) DBG | 
	I0918 20:56:09.604046   59271 main.go:141] libmachine: (embed-certs-255556) DBG | trying to create private KVM network mk-embed-certs-255556 192.168.39.0/24...
	I0918 20:56:09.702052   59271 main.go:141] libmachine: (embed-certs-255556) DBG | private KVM network mk-embed-certs-255556 192.168.39.0/24 created
	I0918 20:56:09.702254   59271 main.go:141] libmachine: (embed-certs-255556) Setting up store path in /home/jenkins/minikube-integration/19667-7671/.minikube/machines/embed-certs-255556 ...
	I0918 20:56:09.702285   59271 main.go:141] libmachine: (embed-certs-255556) DBG | I0918 20:56:09.702210   59311 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19667-7671/.minikube
	I0918 20:56:09.702297   59271 main.go:141] libmachine: (embed-certs-255556) Building disk image from file:///home/jenkins/minikube-integration/19667-7671/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso
	I0918 20:56:09.702336   59271 main.go:141] libmachine: (embed-certs-255556) Downloading /home/jenkins/minikube-integration/19667-7671/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19667-7671/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso...
	I0918 20:56:09.978546   59271 main.go:141] libmachine: (embed-certs-255556) DBG | I0918 20:56:09.978411   59311 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/embed-certs-255556/id_rsa...
	I0918 20:56:10.126265   59271 main.go:141] libmachine: (embed-certs-255556) DBG | I0918 20:56:10.126116   59311 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/embed-certs-255556/embed-certs-255556.rawdisk...
	I0918 20:56:10.126309   59271 main.go:141] libmachine: (embed-certs-255556) DBG | Writing magic tar header
	I0918 20:56:10.126383   59271 main.go:141] libmachine: (embed-certs-255556) DBG | Writing SSH key tar header
	I0918 20:56:10.126410   59271 main.go:141] libmachine: (embed-certs-255556) Setting executable bit set on /home/jenkins/minikube-integration/19667-7671/.minikube/machines/embed-certs-255556 (perms=drwx------)
	I0918 20:56:10.126434   59271 main.go:141] libmachine: (embed-certs-255556) Setting executable bit set on /home/jenkins/minikube-integration/19667-7671/.minikube/machines (perms=drwxr-xr-x)
	I0918 20:56:10.126460   59271 main.go:141] libmachine: (embed-certs-255556) DBG | I0918 20:56:10.126235   59311 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19667-7671/.minikube/machines/embed-certs-255556 ...
	I0918 20:56:10.126474   59271 main.go:141] libmachine: (embed-certs-255556) Setting executable bit set on /home/jenkins/minikube-integration/19667-7671/.minikube (perms=drwxr-xr-x)
	I0918 20:56:10.126484   59271 main.go:141] libmachine: (embed-certs-255556) Setting executable bit set on /home/jenkins/minikube-integration/19667-7671 (perms=drwxrwxr-x)
	I0918 20:56:10.126492   59271 main.go:141] libmachine: (embed-certs-255556) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0918 20:56:10.126502   59271 main.go:141] libmachine: (embed-certs-255556) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0918 20:56:10.126516   59271 main.go:141] libmachine: (embed-certs-255556) Creating domain...
	I0918 20:56:10.126531   59271 main.go:141] libmachine: (embed-certs-255556) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/embed-certs-255556
	I0918 20:56:10.126555   59271 main.go:141] libmachine: (embed-certs-255556) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19667-7671/.minikube/machines
	I0918 20:56:10.126566   59271 main.go:141] libmachine: (embed-certs-255556) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19667-7671/.minikube
	I0918 20:56:10.126571   59271 main.go:141] libmachine: (embed-certs-255556) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19667-7671
	I0918 20:56:10.126589   59271 main.go:141] libmachine: (embed-certs-255556) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0918 20:56:10.126599   59271 main.go:141] libmachine: (embed-certs-255556) DBG | Checking permissions on dir: /home/jenkins
	I0918 20:56:10.126616   59271 main.go:141] libmachine: (embed-certs-255556) DBG | Checking permissions on dir: /home
	I0918 20:56:10.126629   59271 main.go:141] libmachine: (embed-certs-255556) DBG | Skipping /home - not owner
	I0918 20:56:10.127848   59271 main.go:141] libmachine: (embed-certs-255556) define libvirt domain using xml: 
	I0918 20:56:10.127870   59271 main.go:141] libmachine: (embed-certs-255556) <domain type='kvm'>
	I0918 20:56:10.127879   59271 main.go:141] libmachine: (embed-certs-255556)   <name>embed-certs-255556</name>
	I0918 20:56:10.127886   59271 main.go:141] libmachine: (embed-certs-255556)   <memory unit='MiB'>2200</memory>
	I0918 20:56:10.127894   59271 main.go:141] libmachine: (embed-certs-255556)   <vcpu>2</vcpu>
	I0918 20:56:10.127906   59271 main.go:141] libmachine: (embed-certs-255556)   <features>
	I0918 20:56:10.127917   59271 main.go:141] libmachine: (embed-certs-255556)     <acpi/>
	I0918 20:56:10.127931   59271 main.go:141] libmachine: (embed-certs-255556)     <apic/>
	I0918 20:56:10.127940   59271 main.go:141] libmachine: (embed-certs-255556)     <pae/>
	I0918 20:56:10.127954   59271 main.go:141] libmachine: (embed-certs-255556)     
	I0918 20:56:10.127979   59271 main.go:141] libmachine: (embed-certs-255556)   </features>
	I0918 20:56:10.127987   59271 main.go:141] libmachine: (embed-certs-255556)   <cpu mode='host-passthrough'>
	I0918 20:56:10.127992   59271 main.go:141] libmachine: (embed-certs-255556)   
	I0918 20:56:10.127996   59271 main.go:141] libmachine: (embed-certs-255556)   </cpu>
	I0918 20:56:10.128001   59271 main.go:141] libmachine: (embed-certs-255556)   <os>
	I0918 20:56:10.128007   59271 main.go:141] libmachine: (embed-certs-255556)     <type>hvm</type>
	I0918 20:56:10.128035   59271 main.go:141] libmachine: (embed-certs-255556)     <boot dev='cdrom'/>
	I0918 20:56:10.128046   59271 main.go:141] libmachine: (embed-certs-255556)     <boot dev='hd'/>
	I0918 20:56:10.128055   59271 main.go:141] libmachine: (embed-certs-255556)     <bootmenu enable='no'/>
	I0918 20:56:10.128063   59271 main.go:141] libmachine: (embed-certs-255556)   </os>
	I0918 20:56:10.128067   59271 main.go:141] libmachine: (embed-certs-255556)   <devices>
	I0918 20:56:10.128074   59271 main.go:141] libmachine: (embed-certs-255556)     <disk type='file' device='cdrom'>
	I0918 20:56:10.128082   59271 main.go:141] libmachine: (embed-certs-255556)       <source file='/home/jenkins/minikube-integration/19667-7671/.minikube/machines/embed-certs-255556/boot2docker.iso'/>
	I0918 20:56:10.128093   59271 main.go:141] libmachine: (embed-certs-255556)       <target dev='hdc' bus='scsi'/>
	I0918 20:56:10.128101   59271 main.go:141] libmachine: (embed-certs-255556)       <readonly/>
	I0918 20:56:10.128114   59271 main.go:141] libmachine: (embed-certs-255556)     </disk>
	I0918 20:56:10.128135   59271 main.go:141] libmachine: (embed-certs-255556)     <disk type='file' device='disk'>
	I0918 20:56:10.128147   59271 main.go:141] libmachine: (embed-certs-255556)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0918 20:56:10.128162   59271 main.go:141] libmachine: (embed-certs-255556)       <source file='/home/jenkins/minikube-integration/19667-7671/.minikube/machines/embed-certs-255556/embed-certs-255556.rawdisk'/>
	I0918 20:56:10.128172   59271 main.go:141] libmachine: (embed-certs-255556)       <target dev='hda' bus='virtio'/>
	I0918 20:56:10.128182   59271 main.go:141] libmachine: (embed-certs-255556)     </disk>
	I0918 20:56:10.128197   59271 main.go:141] libmachine: (embed-certs-255556)     <interface type='network'>
	I0918 20:56:10.128206   59271 main.go:141] libmachine: (embed-certs-255556)       <source network='mk-embed-certs-255556'/>
	I0918 20:56:10.128216   59271 main.go:141] libmachine: (embed-certs-255556)       <model type='virtio'/>
	I0918 20:56:10.128228   59271 main.go:141] libmachine: (embed-certs-255556)     </interface>
	I0918 20:56:10.128237   59271 main.go:141] libmachine: (embed-certs-255556)     <interface type='network'>
	I0918 20:56:10.128246   59271 main.go:141] libmachine: (embed-certs-255556)       <source network='default'/>
	I0918 20:56:10.128255   59271 main.go:141] libmachine: (embed-certs-255556)       <model type='virtio'/>
	I0918 20:56:10.128277   59271 main.go:141] libmachine: (embed-certs-255556)     </interface>
	I0918 20:56:10.128295   59271 main.go:141] libmachine: (embed-certs-255556)     <serial type='pty'>
	I0918 20:56:10.128304   59271 main.go:141] libmachine: (embed-certs-255556)       <target port='0'/>
	I0918 20:56:10.128312   59271 main.go:141] libmachine: (embed-certs-255556)     </serial>
	I0918 20:56:10.128319   59271 main.go:141] libmachine: (embed-certs-255556)     <console type='pty'>
	I0918 20:56:10.128326   59271 main.go:141] libmachine: (embed-certs-255556)       <target type='serial' port='0'/>
	I0918 20:56:10.128331   59271 main.go:141] libmachine: (embed-certs-255556)     </console>
	I0918 20:56:10.128338   59271 main.go:141] libmachine: (embed-certs-255556)     <rng model='virtio'>
	I0918 20:56:10.128344   59271 main.go:141] libmachine: (embed-certs-255556)       <backend model='random'>/dev/random</backend>
	I0918 20:56:10.128350   59271 main.go:141] libmachine: (embed-certs-255556)     </rng>
	I0918 20:56:10.128355   59271 main.go:141] libmachine: (embed-certs-255556)     
	I0918 20:56:10.128361   59271 main.go:141] libmachine: (embed-certs-255556)     
	I0918 20:56:10.128365   59271 main.go:141] libmachine: (embed-certs-255556)   </devices>
	I0918 20:56:10.128375   59271 main.go:141] libmachine: (embed-certs-255556) </domain>
	I0918 20:56:10.128384   59271 main.go:141] libmachine: (embed-certs-255556) 
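	(The kvm2 driver defines and boots this domain through the libvirt API. A rough manual equivalent of the network and domain creation logged above, assuming the two XML documents printed earlier were saved to local files, would be the following; this is illustrative only and not the driver's actual calls.)

	    virsh net-define mk-embed-certs-255556-net.xml && virsh net-start mk-embed-certs-255556
	    virsh define embed-certs-255556.xml
	    virsh start embed-certs-255556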
	I0918 20:56:10.133353   59271 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:d8:58:e8 in network default
	I0918 20:56:10.133982   59271 main.go:141] libmachine: (embed-certs-255556) Ensuring networks are active...
	I0918 20:56:10.134009   59271 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 20:56:10.134952   59271 main.go:141] libmachine: (embed-certs-255556) Ensuring network default is active
	I0918 20:56:10.135392   59271 main.go:141] libmachine: (embed-certs-255556) Ensuring network mk-embed-certs-255556 is active
	I0918 20:56:10.136088   59271 main.go:141] libmachine: (embed-certs-255556) Getting domain xml...
	I0918 20:56:10.137034   59271 main.go:141] libmachine: (embed-certs-255556) Creating domain...
	I0918 20:56:11.461742   59271 main.go:141] libmachine: (embed-certs-255556) Waiting to get IP...
	I0918 20:56:11.462558   59271 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 20:56:11.463009   59271 main.go:141] libmachine: (embed-certs-255556) DBG | unable to find current IP address of domain embed-certs-255556 in network mk-embed-certs-255556
	I0918 20:56:11.463055   59271 main.go:141] libmachine: (embed-certs-255556) DBG | I0918 20:56:11.463013   59311 retry.go:31] will retry after 230.5679ms: waiting for machine to come up
	I0918 20:56:11.695413   59271 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 20:56:11.695992   59271 main.go:141] libmachine: (embed-certs-255556) DBG | unable to find current IP address of domain embed-certs-255556 in network mk-embed-certs-255556
	I0918 20:56:11.696037   59271 main.go:141] libmachine: (embed-certs-255556) DBG | I0918 20:56:11.695954   59311 retry.go:31] will retry after 378.14275ms: waiting for machine to come up
	I0918 20:56:12.075305   59271 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 20:56:12.075994   59271 main.go:141] libmachine: (embed-certs-255556) DBG | unable to find current IP address of domain embed-certs-255556 in network mk-embed-certs-255556
	I0918 20:56:12.076047   59271 main.go:141] libmachine: (embed-certs-255556) DBG | I0918 20:56:12.075938   59311 retry.go:31] will retry after 478.022031ms: waiting for machine to come up
	I0918 20:56:12.555603   59271 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 20:56:12.556216   59271 main.go:141] libmachine: (embed-certs-255556) DBG | unable to find current IP address of domain embed-certs-255556 in network mk-embed-certs-255556
	I0918 20:56:12.556246   59271 main.go:141] libmachine: (embed-certs-255556) DBG | I0918 20:56:12.556171   59311 retry.go:31] will retry after 386.222527ms: waiting for machine to come up
	I0918 20:56:12.943767   59271 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 20:56:12.944310   59271 main.go:141] libmachine: (embed-certs-255556) DBG | unable to find current IP address of domain embed-certs-255556 in network mk-embed-certs-255556
	I0918 20:56:12.944336   59271 main.go:141] libmachine: (embed-certs-255556) DBG | I0918 20:56:12.944251   59311 retry.go:31] will retry after 527.977382ms: waiting for machine to come up
	I0918 20:56:13.474263   59271 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 20:56:13.474810   59271 main.go:141] libmachine: (embed-certs-255556) DBG | unable to find current IP address of domain embed-certs-255556 in network mk-embed-certs-255556
	I0918 20:56:13.474838   59271 main.go:141] libmachine: (embed-certs-255556) DBG | I0918 20:56:13.474760   59311 retry.go:31] will retry after 869.191634ms: waiting for machine to come up
	I0918 20:56:15.466455   58925 ssh_runner.go:235] Completed: sudo systemctl restart crio: (4.546309139s)
	I0918 20:56:15.466490   58925 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0918 20:56:15.466567   58925 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0918 20:56:15.472037   58925 start.go:563] Will wait 60s for crictl version
	I0918 20:56:15.472111   58925 ssh_runner.go:195] Run: which crictl
	I0918 20:56:15.476891   58925 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0918 20:56:15.511763   58925 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0918 20:56:15.511878   58925 ssh_runner.go:195] Run: crio --version
	I0918 20:56:15.541273   58925 ssh_runner.go:195] Run: crio --version
	I0918 20:56:15.577805   58925 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0918 20:56:15.579457   58925 main.go:141] libmachine: (kubernetes-upgrade-878094) Calling .GetIP
	I0918 20:56:15.582554   58925 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | domain kubernetes-upgrade-878094 has defined MAC address 52:54:00:21:00:07 in network mk-kubernetes-upgrade-878094
	I0918 20:56:15.582968   58925 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:00:07", ip: ""} in network mk-kubernetes-upgrade-878094: {Iface:virbr4 ExpiryTime:2024-09-18 21:55:27 +0000 UTC Type:0 Mac:52:54:00:21:00:07 Iaid: IPaddr:192.168.50.80 Prefix:24 Hostname:kubernetes-upgrade-878094 Clientid:01:52:54:00:21:00:07}
	I0918 20:56:15.582999   58925 main.go:141] libmachine: (kubernetes-upgrade-878094) DBG | domain kubernetes-upgrade-878094 has defined IP address 192.168.50.80 and MAC address 52:54:00:21:00:07 in network mk-kubernetes-upgrade-878094
	I0918 20:56:15.583241   58925 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0918 20:56:15.587509   58925 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-878094 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.31.1 ClusterName:kubernetes-upgrade-878094 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.80 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountTy
pe:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0918 20:56:15.587638   58925 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0918 20:56:15.587687   58925 ssh_runner.go:195] Run: sudo crictl images --output json
	I0918 20:56:15.640490   58925 crio.go:514] all images are preloaded for cri-o runtime.
	I0918 20:56:15.640518   58925 crio.go:433] Images already preloaded, skipping extraction
	I0918 20:56:15.640574   58925 ssh_runner.go:195] Run: sudo crictl images --output json
	I0918 20:56:15.686403   58925 crio.go:514] all images are preloaded for cri-o runtime.
	I0918 20:56:15.686428   58925 cache_images.go:84] Images are preloaded, skipping loading
	I0918 20:56:15.686435   58925 kubeadm.go:934] updating node { 192.168.50.80 8443 v1.31.1 crio true true} ...
	I0918 20:56:15.686577   58925 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-878094 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.80
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:kubernetes-upgrade-878094 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0918 20:56:15.686666   58925 ssh_runner.go:195] Run: crio config
	I0918 20:56:15.741453   58925 cni.go:84] Creating CNI manager for ""
	I0918 20:56:15.741483   58925 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0918 20:56:15.741494   58925 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0918 20:56:15.741528   58925 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.80 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-878094 NodeName:kubernetes-upgrade-878094 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.80"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.80 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt
StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0918 20:56:15.741696   58925 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.80
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-878094"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.80
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.80"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0918 20:56:15.741776   58925 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0918 20:56:15.754455   58925 binaries.go:44] Found k8s binaries, skipping transfer
	I0918 20:56:15.754543   58925 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0918 20:56:15.767814   58925 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0918 20:56:15.789075   58925 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0918 20:56:15.810684   58925 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
	I0918 20:56:15.831579   58925 ssh_runner.go:195] Run: grep 192.168.50.80	control-plane.minikube.internal$ /etc/hosts
	I0918 20:56:15.836094   58925 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 20:56:15.990860   58925 ssh_runner.go:195] Run: sudo systemctl start kubelet
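	(The kubeadm.yaml.new staged above is consumed by a later kubeadm step that falls outside this excerpt. A minimal sketch of how such a config file is used on a fresh control plane, assuming kubeadm sits alongside kubelet in the versioned binaries directory, is shown below; the actual invocation and flags are not captured here.)

	    sudo /var/lib/minikube/binaries/v1.31.1/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new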
	I0918 20:56:16.011180   58925 certs.go:68] Setting up /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/kubernetes-upgrade-878094 for IP: 192.168.50.80
	I0918 20:56:16.011208   58925 certs.go:194] generating shared ca certs ...
	I0918 20:56:16.011226   58925 certs.go:226] acquiring lock for ca certs: {Name:mkf54e0e71ed884c4372f9d3d4cb308ea4600185 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 20:56:16.011430   58925 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.key
	I0918 20:56:16.011513   58925 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.key
	I0918 20:56:16.011530   58925 certs.go:256] generating profile certs ...
	I0918 20:56:16.011652   58925 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/kubernetes-upgrade-878094/client.key
	I0918 20:56:16.011716   58925 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/kubernetes-upgrade-878094/apiserver.key.a7777c0f
	I0918 20:56:16.011763   58925 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/kubernetes-upgrade-878094/proxy-client.key
	I0918 20:56:16.011905   58925 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878.pem (1338 bytes)
	W0918 20:56:16.011945   58925 certs.go:480] ignoring /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878_empty.pem, impossibly tiny 0 bytes
	I0918 20:56:16.011958   58925 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem (1675 bytes)
	I0918 20:56:16.011993   58925 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem (1082 bytes)
	I0918 20:56:16.012065   58925 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem (1123 bytes)
	I0918 20:56:16.012097   58925 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem (1679 bytes)
	I0918 20:56:16.012158   58925 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem (1708 bytes)
	I0918 20:56:16.013452   58925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0918 20:56:16.047494   58925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0918 20:56:16.077164   58925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0918 20:56:16.109192   58925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0918 20:56:16.140316   58925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/kubernetes-upgrade-878094/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0918 20:56:16.172824   58925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/kubernetes-upgrade-878094/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0918 20:56:16.199035   58925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/kubernetes-upgrade-878094/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0918 20:56:16.232195   58925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/kubernetes-upgrade-878094/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0918 20:56:16.261456   58925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0918 20:56:16.293068   58925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878.pem --> /usr/share/ca-certificates/14878.pem (1338 bytes)
	I0918 20:56:16.322517   58925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem --> /usr/share/ca-certificates/148782.pem (1708 bytes)
	I0918 20:56:16.349786   58925 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0918 20:56:16.367717   58925 ssh_runner.go:195] Run: openssl version
	I0918 20:56:16.374026   58925 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0918 20:56:16.386187   58925 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0918 20:56:16.390981   58925 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 18 19:39 /usr/share/ca-certificates/minikubeCA.pem
	I0918 20:56:16.391061   58925 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0918 20:56:16.399199   58925 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0918 20:56:16.413213   58925 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14878.pem && ln -fs /usr/share/ca-certificates/14878.pem /etc/ssl/certs/14878.pem"
	I0918 20:56:16.427979   58925 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14878.pem
	I0918 20:56:16.433336   58925 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 18 19:57 /usr/share/ca-certificates/14878.pem
	I0918 20:56:16.433405   58925 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14878.pem
	I0918 20:56:16.444923   58925 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14878.pem /etc/ssl/certs/51391683.0"
	I0918 20:56:16.458796   58925 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148782.pem && ln -fs /usr/share/ca-certificates/148782.pem /etc/ssl/certs/148782.pem"
	I0918 20:56:16.474636   58925 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148782.pem
	I0918 20:56:16.480635   58925 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 18 19:57 /usr/share/ca-certificates/148782.pem
	I0918 20:56:16.480713   58925 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148782.pem
	I0918 20:56:16.488020   58925 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148782.pem /etc/ssl/certs/3ec20f2e.0"
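	(The <hash>.0 symlinks created above follow OpenSSL's c_rehash convention: the link name is the certificate's subject hash, which is how TLS clients resolve CAs placed in /etc/ssl/certs. A sketch of the pattern, using the minikubeCA example from the log where the hash resolves to b5213941:)

	    h=$(openssl x509 -hash -noout -in /etc/ssl/certs/minikubeCA.pem)      # prints b5213941 for this CA
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"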
	I0918 20:56:16.497933   58925 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0918 20:56:16.502709   58925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0918 20:56:16.508849   58925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0918 20:56:16.516750   58925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0918 20:56:16.524360   58925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0918 20:56:16.530364   58925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0918 20:56:16.538113   58925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
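	(The -checkend 86400 probes above exit 0 only if the certificate remains valid for at least another 86400 seconds, i.e. 24 hours, and exit non-zero if it would expire within that window. For example:)

	    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	      && echo "valid for >= 24h" || echo "expires within 24h"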
	I0918 20:56:16.545882   58925 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-878094 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.31.1 ClusterName:kubernetes-upgrade-878094 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.80 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 20:56:16.545987   58925 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0918 20:56:16.546061   58925 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0918 20:56:16.590304   58925 cri.go:89] found id: "417ec9725c157000881aa4cccdf27e36741d8024b6849236501da39b266c5d0c"
	I0918 20:56:16.590337   58925 cri.go:89] found id: "a8c466fc91d12b771ff9f5957804a2a955926fa42c4d0f54d0f441df9b88fe56"
	I0918 20:56:16.590344   58925 cri.go:89] found id: "b1d2fd20aecf579d5fc14c7a36d46a56c5713cf858980933bc1c8634c22506d5"
	I0918 20:56:16.590349   58925 cri.go:89] found id: "09494b2f8d41671e9678f8c6baea0da05f8f2c573b84069f23ae5cf4890431ea"
	I0918 20:56:16.590366   58925 cri.go:89] found id: "867079e75c3d4175271bb5641b5ea2c3cb5a7cf5b319ffa751dee65fce2def40"
	I0918 20:56:16.590371   58925 cri.go:89] found id: "8ee7225c32cdfd5e4125e6c14836f34c6e1b052ff2fd354b720d255a7bdb907d"
	I0918 20:56:16.590376   58925 cri.go:89] found id: "b686f4e4b38a759f86022717daf5d25453243af6ba20116a2fde797e8a82cbe8"
	I0918 20:56:16.590382   58925 cri.go:89] found id: "c3d0dbe08c50c46df3970decdaacb02bc9d72e2df614302c196fd6ec34e0c82d"
	I0918 20:56:16.590385   58925 cri.go:89] found id: ""
	I0918 20:56:16.590440   58925 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Sep 18 20:56:26 kubernetes-upgrade-878094 crio[2239]: time="2024-09-18 20:56:26.646090138Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726692986646061088,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=714d7f93-7f78-496e-b788-01fa1b488114 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 20:56:26 kubernetes-upgrade-878094 crio[2239]: time="2024-09-18 20:56:26.647566529Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dfc70624-b088-433c-8e12-864f2e950528 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 20:56:26 kubernetes-upgrade-878094 crio[2239]: time="2024-09-18 20:56:26.647629133Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dfc70624-b088-433c-8e12-864f2e950528 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 20:56:26 kubernetes-upgrade-878094 crio[2239]: time="2024-09-18 20:56:26.648042919Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f4d63f77a08eb5eff71aad1cca06b4204d83119e853736cc3865b9a627a3de14,PodSandboxId:e1f71d0c45120e37e9e0d77c08ac8ed9ddf56a7afab217d1c1b0cbe54858a729,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726692984256171427,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-2mgnd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4e294bc-06bd-4372-8086-17b388ea6706,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:443796a749b9f9a9309cc093e9d8a09639e2f1f17601e9ce369fd824186a3f06,PodSandboxId:5ff553b94f0df020523b2cc4a1d62cece49f272f9e1a601f1fc0d8777a4a5502,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726692984203930528,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-vd6ds,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: bb3a3972-0df8-46e2-b923-cbe591bd7d9f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c871810f19eb6325a70584cc5dc48abe43eb206689d31056706a9950a7e3ba3f,PodSandboxId:986f47f96f3e7408964406e912df6677ba4b291ed885f80786041b3c7096220d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Stat
e:CONTAINER_RUNNING,CreatedAt:1726692983798990936,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 801fa934-504a-47ea-8673-2cccf5a64c56,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a970bc5dd08730ce482b07e6ad759434af4cf93841cf1215fd205a01ad7f17a4,PodSandboxId:e0287701d17fb3721ab97efba7d172b2cf48d0023356e84f47a81ee18c864ce5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,C
reatedAt:1726692983766316490,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pn8zl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15bea14f-20ab-493b-aafd-89a94011300d,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:602211e9810a2d0c4072e9533f40287225d66dcd44706fee93b3ec147f8d9073,PodSandboxId:d28be4f5dd732155d2c67cf854bec1778ea27add1dc20df95a6d5a86bfc6b71a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726692978908304564,
Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-878094,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f72830116fbd9f05d6f2a37b5a7cfa72,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cda2d3bf7f58406568da6d179b04244127169d0c8a7769946c35c80d00701683,PodSandboxId:ac467d101f2b422d589dd8a4854e4126f5947e636dc00d57f1f4d69840963f30,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726692978887181570,Labels:map[stri
ng]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-878094,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0a008276fd21582066215a4859f9244,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:147cf6a30342dc10c1236f9568e86548109ddf20c9cf7ee78a3d33ee56a39016,PodSandboxId:62021c4e559029fb1b24bd5554b6b83460efa22dd8c4f8a6552f079c5e2a6336,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726692978801961959,Labels:map[string]string{io.kub
ernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-878094,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45492333b012fd419f83201fe0cce69c,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:176ee0a7648ab1599c6ad2b379c9aab6bba4b7dc5e9784e966b5eea01f733f68,PodSandboxId:0861229837d5873901b1dfc4029bdf1b3739f563256942dbaf5862741c51bc5c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726692978783871202,Labels:map[string]
string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-878094,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d3c67842eba1d8a2fbcf61a52eba8dc,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:417ec9725c157000881aa4cccdf27e36741d8024b6849236501da39b266c5d0c,PodSandboxId:2105c00562cd9f317c174be872cbaf8fb21300cac3f3877c5e99f234895b4dcf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726692958517988704,Labels:map[string]s
tring{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 801fa934-504a-47ea-8673-2cccf5a64c56,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8c466fc91d12b771ff9f5957804a2a955926fa42c4d0f54d0f441df9b88fe56,PodSandboxId:62618f6a2d4856a07b144ad9c0fa70799cbe908c01781918e38507aea2f35579,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726692958076465309,Labels:map[string]string{io.kubernetes.conta
iner.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-2mgnd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4e294bc-06bd-4372-8086-17b388ea6706,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1d2fd20aecf579d5fc14c7a36d46a56c5713cf858980933bc1c8634c22506d5,PodSandboxId:f4f4a19a4dd59e47f34d81a6b3077a55fb85016d2bf6d7c23a3ac084737ad679,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedIm
age:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726692958016043589,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-vd6ds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb3a3972-0df8-46e2-b923-cbe591bd7d9f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09494b2f8d41671e9678f8c6baea0da05f8f2c573b84069f23ae5cf4890431ea,PodSandboxId:f75f5ee44940ba465ac91320025019e67ef1380d0e92bea087194a47120
5198e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726692957629719179,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pn8zl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15bea14f-20ab-493b-aafd-89a94011300d,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ee7225c32cdfd5e4125e6c14836f34c6e1b052ff2fd354b720d255a7bdb907d,PodSandboxId:90b86bd9700423e74de21f6dcb04330e1fd7b2a417e37f759240590d6241499e,Metadata:&ContainerMetadata{
Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726692946531268468,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-878094,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0a008276fd21582066215a4859f9244,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:867079e75c3d4175271bb5641b5ea2c3cb5a7cf5b319ffa751dee65fce2def40,PodSandboxId:b2024a5f5b030bdefdce4d3c59b37adaaa284aac2bb8074b670362f0e1e9be54,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Imag
e:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726692946554910965,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-878094,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f72830116fbd9f05d6f2a37b5a7cfa72,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b686f4e4b38a759f86022717daf5d25453243af6ba20116a2fde797e8a82cbe8,PodSandboxId:16c620c35b747111ee1b0a8ce12e3d826f9d11831bc4fe8d7ba7875586dae905,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},I
mage:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726692946508110738,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-878094,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45492333b012fd419f83201fe0cce69c,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3d0dbe08c50c46df3970decdaacb02bc9d72e2df614302c196fd6ec34e0c82d,PodSandboxId:d3a5a0fba39cff232367c9fcd3526d9e9e42931f373b682847a777e9bf66e7e5,Metadata:&ContainerMetadata{Name:kube-apiserver,A
ttempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726692946467077469,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-878094,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d3c67842eba1d8a2fbcf61a52eba8dc,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=dfc70624-b088-433c-8e12-864f2e950528 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 20:56:26 kubernetes-upgrade-878094 crio[2239]: time="2024-09-18 20:56:26.706698411Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=92e1b1d5-13f0-4714-8a9f-a12617e93cb9 name=/runtime.v1.RuntimeService/Version
	Sep 18 20:56:26 kubernetes-upgrade-878094 crio[2239]: time="2024-09-18 20:56:26.706784464Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=92e1b1d5-13f0-4714-8a9f-a12617e93cb9 name=/runtime.v1.RuntimeService/Version
	Sep 18 20:56:26 kubernetes-upgrade-878094 crio[2239]: time="2024-09-18 20:56:26.707818105Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=21b9b9ac-6210-4a54-9187-7e9cea28d580 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 20:56:26 kubernetes-upgrade-878094 crio[2239]: time="2024-09-18 20:56:26.708402838Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726692986708374316,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=21b9b9ac-6210-4a54-9187-7e9cea28d580 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 20:56:26 kubernetes-upgrade-878094 crio[2239]: time="2024-09-18 20:56:26.708935651Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3d943109-e819-41cc-9877-4b514bc6ae37 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 20:56:26 kubernetes-upgrade-878094 crio[2239]: time="2024-09-18 20:56:26.709010533Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3d943109-e819-41cc-9877-4b514bc6ae37 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 20:56:26 kubernetes-upgrade-878094 crio[2239]: time="2024-09-18 20:56:26.709557484Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f4d63f77a08eb5eff71aad1cca06b4204d83119e853736cc3865b9a627a3de14,PodSandboxId:e1f71d0c45120e37e9e0d77c08ac8ed9ddf56a7afab217d1c1b0cbe54858a729,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726692984256171427,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-2mgnd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4e294bc-06bd-4372-8086-17b388ea6706,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:443796a749b9f9a9309cc093e9d8a09639e2f1f17601e9ce369fd824186a3f06,PodSandboxId:5ff553b94f0df020523b2cc4a1d62cece49f272f9e1a601f1fc0d8777a4a5502,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726692984203930528,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-vd6ds,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: bb3a3972-0df8-46e2-b923-cbe591bd7d9f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c871810f19eb6325a70584cc5dc48abe43eb206689d31056706a9950a7e3ba3f,PodSandboxId:986f47f96f3e7408964406e912df6677ba4b291ed885f80786041b3c7096220d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Stat
e:CONTAINER_RUNNING,CreatedAt:1726692983798990936,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 801fa934-504a-47ea-8673-2cccf5a64c56,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a970bc5dd08730ce482b07e6ad759434af4cf93841cf1215fd205a01ad7f17a4,PodSandboxId:e0287701d17fb3721ab97efba7d172b2cf48d0023356e84f47a81ee18c864ce5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,C
reatedAt:1726692983766316490,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pn8zl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15bea14f-20ab-493b-aafd-89a94011300d,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:602211e9810a2d0c4072e9533f40287225d66dcd44706fee93b3ec147f8d9073,PodSandboxId:d28be4f5dd732155d2c67cf854bec1778ea27add1dc20df95a6d5a86bfc6b71a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726692978908304564,
Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-878094,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f72830116fbd9f05d6f2a37b5a7cfa72,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cda2d3bf7f58406568da6d179b04244127169d0c8a7769946c35c80d00701683,PodSandboxId:ac467d101f2b422d589dd8a4854e4126f5947e636dc00d57f1f4d69840963f30,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726692978887181570,Labels:map[stri
ng]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-878094,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0a008276fd21582066215a4859f9244,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:147cf6a30342dc10c1236f9568e86548109ddf20c9cf7ee78a3d33ee56a39016,PodSandboxId:62021c4e559029fb1b24bd5554b6b83460efa22dd8c4f8a6552f079c5e2a6336,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726692978801961959,Labels:map[string]string{io.kub
ernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-878094,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45492333b012fd419f83201fe0cce69c,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:176ee0a7648ab1599c6ad2b379c9aab6bba4b7dc5e9784e966b5eea01f733f68,PodSandboxId:0861229837d5873901b1dfc4029bdf1b3739f563256942dbaf5862741c51bc5c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726692978783871202,Labels:map[string]
string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-878094,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d3c67842eba1d8a2fbcf61a52eba8dc,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:417ec9725c157000881aa4cccdf27e36741d8024b6849236501da39b266c5d0c,PodSandboxId:2105c00562cd9f317c174be872cbaf8fb21300cac3f3877c5e99f234895b4dcf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726692958517988704,Labels:map[string]s
tring{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 801fa934-504a-47ea-8673-2cccf5a64c56,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8c466fc91d12b771ff9f5957804a2a955926fa42c4d0f54d0f441df9b88fe56,PodSandboxId:62618f6a2d4856a07b144ad9c0fa70799cbe908c01781918e38507aea2f35579,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726692958076465309,Labels:map[string]string{io.kubernetes.conta
iner.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-2mgnd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4e294bc-06bd-4372-8086-17b388ea6706,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1d2fd20aecf579d5fc14c7a36d46a56c5713cf858980933bc1c8634c22506d5,PodSandboxId:f4f4a19a4dd59e47f34d81a6b3077a55fb85016d2bf6d7c23a3ac084737ad679,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedIm
age:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726692958016043589,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-vd6ds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb3a3972-0df8-46e2-b923-cbe591bd7d9f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09494b2f8d41671e9678f8c6baea0da05f8f2c573b84069f23ae5cf4890431ea,PodSandboxId:f75f5ee44940ba465ac91320025019e67ef1380d0e92bea087194a47120
5198e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726692957629719179,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pn8zl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15bea14f-20ab-493b-aafd-89a94011300d,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ee7225c32cdfd5e4125e6c14836f34c6e1b052ff2fd354b720d255a7bdb907d,PodSandboxId:90b86bd9700423e74de21f6dcb04330e1fd7b2a417e37f759240590d6241499e,Metadata:&ContainerMetadata{
Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726692946531268468,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-878094,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0a008276fd21582066215a4859f9244,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:867079e75c3d4175271bb5641b5ea2c3cb5a7cf5b319ffa751dee65fce2def40,PodSandboxId:b2024a5f5b030bdefdce4d3c59b37adaaa284aac2bb8074b670362f0e1e9be54,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Imag
e:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726692946554910965,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-878094,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f72830116fbd9f05d6f2a37b5a7cfa72,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b686f4e4b38a759f86022717daf5d25453243af6ba20116a2fde797e8a82cbe8,PodSandboxId:16c620c35b747111ee1b0a8ce12e3d826f9d11831bc4fe8d7ba7875586dae905,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},I
mage:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726692946508110738,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-878094,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45492333b012fd419f83201fe0cce69c,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3d0dbe08c50c46df3970decdaacb02bc9d72e2df614302c196fd6ec34e0c82d,PodSandboxId:d3a5a0fba39cff232367c9fcd3526d9e9e42931f373b682847a777e9bf66e7e5,Metadata:&ContainerMetadata{Name:kube-apiserver,A
ttempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726692946467077469,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-878094,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d3c67842eba1d8a2fbcf61a52eba8dc,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3d943109-e819-41cc-9877-4b514bc6ae37 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 20:56:26 kubernetes-upgrade-878094 crio[2239]: time="2024-09-18 20:56:26.788431384Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bd74b0d0-e102-4baa-9b1d-c2a52dffe931 name=/runtime.v1.RuntimeService/Version
	Sep 18 20:56:26 kubernetes-upgrade-878094 crio[2239]: time="2024-09-18 20:56:26.788556814Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bd74b0d0-e102-4baa-9b1d-c2a52dffe931 name=/runtime.v1.RuntimeService/Version
	Sep 18 20:56:26 kubernetes-upgrade-878094 crio[2239]: time="2024-09-18 20:56:26.790612363Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8eba3402-dab3-4cb3-86bc-786a78a4373b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 20:56:26 kubernetes-upgrade-878094 crio[2239]: time="2024-09-18 20:56:26.791010217Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726692986790985702,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8eba3402-dab3-4cb3-86bc-786a78a4373b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 20:56:26 kubernetes-upgrade-878094 crio[2239]: time="2024-09-18 20:56:26.791791688Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=59b8dddb-4ae5-4c3e-9cec-7d1569d6a3f5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 20:56:26 kubernetes-upgrade-878094 crio[2239]: time="2024-09-18 20:56:26.791893561Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=59b8dddb-4ae5-4c3e-9cec-7d1569d6a3f5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 20:56:26 kubernetes-upgrade-878094 crio[2239]: time="2024-09-18 20:56:26.792517791Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f4d63f77a08eb5eff71aad1cca06b4204d83119e853736cc3865b9a627a3de14,PodSandboxId:e1f71d0c45120e37e9e0d77c08ac8ed9ddf56a7afab217d1c1b0cbe54858a729,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726692984256171427,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-2mgnd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4e294bc-06bd-4372-8086-17b388ea6706,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:443796a749b9f9a9309cc093e9d8a09639e2f1f17601e9ce369fd824186a3f06,PodSandboxId:5ff553b94f0df020523b2cc4a1d62cece49f272f9e1a601f1fc0d8777a4a5502,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726692984203930528,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-vd6ds,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: bb3a3972-0df8-46e2-b923-cbe591bd7d9f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c871810f19eb6325a70584cc5dc48abe43eb206689d31056706a9950a7e3ba3f,PodSandboxId:986f47f96f3e7408964406e912df6677ba4b291ed885f80786041b3c7096220d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Stat
e:CONTAINER_RUNNING,CreatedAt:1726692983798990936,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 801fa934-504a-47ea-8673-2cccf5a64c56,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a970bc5dd08730ce482b07e6ad759434af4cf93841cf1215fd205a01ad7f17a4,PodSandboxId:e0287701d17fb3721ab97efba7d172b2cf48d0023356e84f47a81ee18c864ce5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,C
reatedAt:1726692983766316490,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pn8zl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15bea14f-20ab-493b-aafd-89a94011300d,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:602211e9810a2d0c4072e9533f40287225d66dcd44706fee93b3ec147f8d9073,PodSandboxId:d28be4f5dd732155d2c67cf854bec1778ea27add1dc20df95a6d5a86bfc6b71a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726692978908304564,
Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-878094,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f72830116fbd9f05d6f2a37b5a7cfa72,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cda2d3bf7f58406568da6d179b04244127169d0c8a7769946c35c80d00701683,PodSandboxId:ac467d101f2b422d589dd8a4854e4126f5947e636dc00d57f1f4d69840963f30,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726692978887181570,Labels:map[stri
ng]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-878094,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0a008276fd21582066215a4859f9244,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:147cf6a30342dc10c1236f9568e86548109ddf20c9cf7ee78a3d33ee56a39016,PodSandboxId:62021c4e559029fb1b24bd5554b6b83460efa22dd8c4f8a6552f079c5e2a6336,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726692978801961959,Labels:map[string]string{io.kub
ernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-878094,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45492333b012fd419f83201fe0cce69c,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:176ee0a7648ab1599c6ad2b379c9aab6bba4b7dc5e9784e966b5eea01f733f68,PodSandboxId:0861229837d5873901b1dfc4029bdf1b3739f563256942dbaf5862741c51bc5c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726692978783871202,Labels:map[string]
string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-878094,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d3c67842eba1d8a2fbcf61a52eba8dc,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:417ec9725c157000881aa4cccdf27e36741d8024b6849236501da39b266c5d0c,PodSandboxId:2105c00562cd9f317c174be872cbaf8fb21300cac3f3877c5e99f234895b4dcf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726692958517988704,Labels:map[string]s
tring{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 801fa934-504a-47ea-8673-2cccf5a64c56,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8c466fc91d12b771ff9f5957804a2a955926fa42c4d0f54d0f441df9b88fe56,PodSandboxId:62618f6a2d4856a07b144ad9c0fa70799cbe908c01781918e38507aea2f35579,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726692958076465309,Labels:map[string]string{io.kubernetes.conta
iner.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-2mgnd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4e294bc-06bd-4372-8086-17b388ea6706,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1d2fd20aecf579d5fc14c7a36d46a56c5713cf858980933bc1c8634c22506d5,PodSandboxId:f4f4a19a4dd59e47f34d81a6b3077a55fb85016d2bf6d7c23a3ac084737ad679,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedIm
age:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726692958016043589,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-vd6ds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb3a3972-0df8-46e2-b923-cbe591bd7d9f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09494b2f8d41671e9678f8c6baea0da05f8f2c573b84069f23ae5cf4890431ea,PodSandboxId:f75f5ee44940ba465ac91320025019e67ef1380d0e92bea087194a47120
5198e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726692957629719179,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pn8zl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15bea14f-20ab-493b-aafd-89a94011300d,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ee7225c32cdfd5e4125e6c14836f34c6e1b052ff2fd354b720d255a7bdb907d,PodSandboxId:90b86bd9700423e74de21f6dcb04330e1fd7b2a417e37f759240590d6241499e,Metadata:&ContainerMetadata{
Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726692946531268468,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-878094,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0a008276fd21582066215a4859f9244,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:867079e75c3d4175271bb5641b5ea2c3cb5a7cf5b319ffa751dee65fce2def40,PodSandboxId:b2024a5f5b030bdefdce4d3c59b37adaaa284aac2bb8074b670362f0e1e9be54,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Imag
e:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726692946554910965,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-878094,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f72830116fbd9f05d6f2a37b5a7cfa72,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b686f4e4b38a759f86022717daf5d25453243af6ba20116a2fde797e8a82cbe8,PodSandboxId:16c620c35b747111ee1b0a8ce12e3d826f9d11831bc4fe8d7ba7875586dae905,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},I
mage:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726692946508110738,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-878094,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45492333b012fd419f83201fe0cce69c,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3d0dbe08c50c46df3970decdaacb02bc9d72e2df614302c196fd6ec34e0c82d,PodSandboxId:d3a5a0fba39cff232367c9fcd3526d9e9e42931f373b682847a777e9bf66e7e5,Metadata:&ContainerMetadata{Name:kube-apiserver,A
ttempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726692946467077469,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-878094,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d3c67842eba1d8a2fbcf61a52eba8dc,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=59b8dddb-4ae5-4c3e-9cec-7d1569d6a3f5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 20:56:26 kubernetes-upgrade-878094 crio[2239]: time="2024-09-18 20:56:26.855685934Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8d945e2a-e2ac-40f3-a690-8883b674a60b name=/runtime.v1.RuntimeService/Version
	Sep 18 20:56:26 kubernetes-upgrade-878094 crio[2239]: time="2024-09-18 20:56:26.855823105Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8d945e2a-e2ac-40f3-a690-8883b674a60b name=/runtime.v1.RuntimeService/Version
	Sep 18 20:56:26 kubernetes-upgrade-878094 crio[2239]: time="2024-09-18 20:56:26.858296842Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=876474b6-8a8f-421c-b23e-bc16ac21ebc8 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 20:56:26 kubernetes-upgrade-878094 crio[2239]: time="2024-09-18 20:56:26.859310040Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726692986858884656,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=876474b6-8a8f-421c-b23e-bc16ac21ebc8 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 20:56:26 kubernetes-upgrade-878094 crio[2239]: time="2024-09-18 20:56:26.860064584Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b57db8ea-228a-4815-b8bf-da0378159143 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 20:56:26 kubernetes-upgrade-878094 crio[2239]: time="2024-09-18 20:56:26.860193511Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b57db8ea-228a-4815-b8bf-da0378159143 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 20:56:26 kubernetes-upgrade-878094 crio[2239]: time="2024-09-18 20:56:26.860666447Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f4d63f77a08eb5eff71aad1cca06b4204d83119e853736cc3865b9a627a3de14,PodSandboxId:e1f71d0c45120e37e9e0d77c08ac8ed9ddf56a7afab217d1c1b0cbe54858a729,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726692984256171427,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-2mgnd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4e294bc-06bd-4372-8086-17b388ea6706,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:443796a749b9f9a9309cc093e9d8a09639e2f1f17601e9ce369fd824186a3f06,PodSandboxId:5ff553b94f0df020523b2cc4a1d62cece49f272f9e1a601f1fc0d8777a4a5502,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726692984203930528,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-vd6ds,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: bb3a3972-0df8-46e2-b923-cbe591bd7d9f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c871810f19eb6325a70584cc5dc48abe43eb206689d31056706a9950a7e3ba3f,PodSandboxId:986f47f96f3e7408964406e912df6677ba4b291ed885f80786041b3c7096220d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Stat
e:CONTAINER_RUNNING,CreatedAt:1726692983798990936,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 801fa934-504a-47ea-8673-2cccf5a64c56,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a970bc5dd08730ce482b07e6ad759434af4cf93841cf1215fd205a01ad7f17a4,PodSandboxId:e0287701d17fb3721ab97efba7d172b2cf48d0023356e84f47a81ee18c864ce5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,C
reatedAt:1726692983766316490,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pn8zl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15bea14f-20ab-493b-aafd-89a94011300d,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:602211e9810a2d0c4072e9533f40287225d66dcd44706fee93b3ec147f8d9073,PodSandboxId:d28be4f5dd732155d2c67cf854bec1778ea27add1dc20df95a6d5a86bfc6b71a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726692978908304564,
Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-878094,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f72830116fbd9f05d6f2a37b5a7cfa72,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cda2d3bf7f58406568da6d179b04244127169d0c8a7769946c35c80d00701683,PodSandboxId:ac467d101f2b422d589dd8a4854e4126f5947e636dc00d57f1f4d69840963f30,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726692978887181570,Labels:map[stri
ng]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-878094,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0a008276fd21582066215a4859f9244,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:147cf6a30342dc10c1236f9568e86548109ddf20c9cf7ee78a3d33ee56a39016,PodSandboxId:62021c4e559029fb1b24bd5554b6b83460efa22dd8c4f8a6552f079c5e2a6336,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726692978801961959,Labels:map[string]string{io.kub
ernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-878094,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45492333b012fd419f83201fe0cce69c,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:176ee0a7648ab1599c6ad2b379c9aab6bba4b7dc5e9784e966b5eea01f733f68,PodSandboxId:0861229837d5873901b1dfc4029bdf1b3739f563256942dbaf5862741c51bc5c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726692978783871202,Labels:map[string]
string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-878094,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d3c67842eba1d8a2fbcf61a52eba8dc,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:417ec9725c157000881aa4cccdf27e36741d8024b6849236501da39b266c5d0c,PodSandboxId:2105c00562cd9f317c174be872cbaf8fb21300cac3f3877c5e99f234895b4dcf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726692958517988704,Labels:map[string]s
tring{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 801fa934-504a-47ea-8673-2cccf5a64c56,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8c466fc91d12b771ff9f5957804a2a955926fa42c4d0f54d0f441df9b88fe56,PodSandboxId:62618f6a2d4856a07b144ad9c0fa70799cbe908c01781918e38507aea2f35579,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726692958076465309,Labels:map[string]string{io.kubernetes.conta
iner.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-2mgnd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4e294bc-06bd-4372-8086-17b388ea6706,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1d2fd20aecf579d5fc14c7a36d46a56c5713cf858980933bc1c8634c22506d5,PodSandboxId:f4f4a19a4dd59e47f34d81a6b3077a55fb85016d2bf6d7c23a3ac084737ad679,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedIm
age:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726692958016043589,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-vd6ds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb3a3972-0df8-46e2-b923-cbe591bd7d9f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09494b2f8d41671e9678f8c6baea0da05f8f2c573b84069f23ae5cf4890431ea,PodSandboxId:f75f5ee44940ba465ac91320025019e67ef1380d0e92bea087194a47120
5198e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726692957629719179,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pn8zl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15bea14f-20ab-493b-aafd-89a94011300d,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ee7225c32cdfd5e4125e6c14836f34c6e1b052ff2fd354b720d255a7bdb907d,PodSandboxId:90b86bd9700423e74de21f6dcb04330e1fd7b2a417e37f759240590d6241499e,Metadata:&ContainerMetadata{
Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726692946531268468,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-878094,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0a008276fd21582066215a4859f9244,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:867079e75c3d4175271bb5641b5ea2c3cb5a7cf5b319ffa751dee65fce2def40,PodSandboxId:b2024a5f5b030bdefdce4d3c59b37adaaa284aac2bb8074b670362f0e1e9be54,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Imag
e:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726692946554910965,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-878094,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f72830116fbd9f05d6f2a37b5a7cfa72,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b686f4e4b38a759f86022717daf5d25453243af6ba20116a2fde797e8a82cbe8,PodSandboxId:16c620c35b747111ee1b0a8ce12e3d826f9d11831bc4fe8d7ba7875586dae905,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},I
mage:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726692946508110738,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-878094,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45492333b012fd419f83201fe0cce69c,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3d0dbe08c50c46df3970decdaacb02bc9d72e2df614302c196fd6ec34e0c82d,PodSandboxId:d3a5a0fba39cff232367c9fcd3526d9e9e42931f373b682847a777e9bf66e7e5,Metadata:&ContainerMetadata{Name:kube-apiserver,A
ttempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726692946467077469,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-878094,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d3c67842eba1d8a2fbcf61a52eba8dc,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b57db8ea-228a-4815-b8bf-da0378159143 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f4d63f77a08eb       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   2 seconds ago       Running             coredns                   1                   e1f71d0c45120       coredns-7c65d6cfc9-2mgnd
	443796a749b9f       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   2 seconds ago       Running             coredns                   1                   5ff553b94f0df       coredns-7c65d6cfc9-vd6ds
	c871810f19eb6       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   3 seconds ago       Running             storage-provisioner       1                   986f47f96f3e7       storage-provisioner
	a970bc5dd0873       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   3 seconds ago       Running             kube-proxy                1                   e0287701d17fb       kube-proxy-pn8zl
	602211e9810a2       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   8 seconds ago       Running             kube-scheduler            1                   d28be4f5dd732       kube-scheduler-kubernetes-upgrade-878094
	cda2d3bf7f584       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   8 seconds ago       Running             etcd                      1                   ac467d101f2b4       etcd-kubernetes-upgrade-878094
	147cf6a30342d       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   8 seconds ago       Running             kube-controller-manager   1                   62021c4e55902       kube-controller-manager-kubernetes-upgrade-878094
	176ee0a7648ab       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   8 seconds ago       Running             kube-apiserver            1                   0861229837d58       kube-apiserver-kubernetes-upgrade-878094
	417ec9725c157       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   28 seconds ago      Exited              storage-provisioner       0                   2105c00562cd9       storage-provisioner
	a8c466fc91d12       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   28 seconds ago      Exited              coredns                   0                   62618f6a2d485       coredns-7c65d6cfc9-2mgnd
	b1d2fd20aecf5       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   28 seconds ago      Exited              coredns                   0                   f4f4a19a4dd59       coredns-7c65d6cfc9-vd6ds
	09494b2f8d416       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   29 seconds ago      Exited              kube-proxy                0                   f75f5ee44940b       kube-proxy-pn8zl
	867079e75c3d4       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   40 seconds ago      Exited              kube-scheduler            0                   b2024a5f5b030       kube-scheduler-kubernetes-upgrade-878094
	8ee7225c32cdf       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   40 seconds ago      Exited              etcd                      0                   90b86bd970042       etcd-kubernetes-upgrade-878094
	b686f4e4b38a7       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   40 seconds ago      Exited              kube-controller-manager   0                   16c620c35b747       kube-controller-manager-kubernetes-upgrade-878094
	c3d0dbe08c50c       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   40 seconds ago      Exited              kube-apiserver            0                   d3a5a0fba39cf       kube-apiserver-kubernetes-upgrade-878094
	
	
	==> coredns [443796a749b9f9a9309cc093e9d8a09639e2f1f17601e9ce369fd824186a3f06] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [a8c466fc91d12b771ff9f5957804a2a955926fa42c4d0f54d0f441df9b88fe56] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [b1d2fd20aecf579d5fc14c7a36d46a56c5713cf858980933bc1c8634c22506d5] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [f4d63f77a08eb5eff71aad1cca06b4204d83119e853736cc3865b9a627a3de14] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-878094
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-878094
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 18 Sep 2024 20:55:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-878094
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 18 Sep 2024 20:56:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 18 Sep 2024 20:56:22 +0000   Wed, 18 Sep 2024 20:55:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 18 Sep 2024 20:56:22 +0000   Wed, 18 Sep 2024 20:55:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 18 Sep 2024 20:56:22 +0000   Wed, 18 Sep 2024 20:55:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 18 Sep 2024 20:56:22 +0000   Wed, 18 Sep 2024 20:55:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.80
	  Hostname:    kubernetes-upgrade-878094
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 46fee29df063401b89474afff7fd3fcb
	  System UUID:                46fee29d-f063-401b-8947-4afff7fd3fcb
	  Boot ID:                    79f43a24-606e-4aa4-8637-f81c682a5e4f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-2mgnd                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     31s
	  kube-system                 coredns-7c65d6cfc9-vd6ds                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     31s
	  kube-system                 etcd-kubernetes-upgrade-878094                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         32s
	  kube-system                 kube-apiserver-kubernetes-upgrade-878094             250m (12%)    0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-878094    200m (10%)    0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-proxy-pn8zl                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-scheduler-kubernetes-upgrade-878094             100m (5%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 28s                kube-proxy       
	  Normal  Starting                 2s                 kube-proxy       
	  Normal  NodeAllocatableEnforced  42s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  42s (x8 over 42s)  kubelet          Node kubernetes-upgrade-878094 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    42s (x8 over 42s)  kubelet          Node kubernetes-upgrade-878094 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     42s (x7 over 42s)  kubelet          Node kubernetes-upgrade-878094 status is now: NodeHasSufficientPID
	  Normal  Starting                 42s                kubelet          Starting kubelet.
	  Normal  RegisteredNode           31s                node-controller  Node kubernetes-upgrade-878094 event: Registered Node kubernetes-upgrade-878094 in Controller
	  Normal  Starting                 9s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9s (x8 over 9s)    kubelet          Node kubernetes-upgrade-878094 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9s (x8 over 9s)    kubelet          Node kubernetes-upgrade-878094 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9s (x7 over 9s)    kubelet          Node kubernetes-upgrade-878094 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2s                 node-controller  Node kubernetes-upgrade-878094 event: Registered Node kubernetes-upgrade-878094 in Controller
	
	
	==> dmesg <==
	[  +1.563821] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.862647] systemd-fstab-generator[547]: Ignoring "noauto" option for root device
	[  +0.057060] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.059378] systemd-fstab-generator[560]: Ignoring "noauto" option for root device
	[  +0.218735] systemd-fstab-generator[573]: Ignoring "noauto" option for root device
	[  +0.150819] systemd-fstab-generator[586]: Ignoring "noauto" option for root device
	[  +0.326947] systemd-fstab-generator[615]: Ignoring "noauto" option for root device
	[  +4.282946] systemd-fstab-generator[706]: Ignoring "noauto" option for root device
	[  +0.064203] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.141134] systemd-fstab-generator[828]: Ignoring "noauto" option for root device
	[ +11.909728] systemd-fstab-generator[1216]: Ignoring "noauto" option for root device
	[  +0.104451] kauditd_printk_skb: 97 callbacks suppressed
	[Sep18 20:56] systemd-fstab-generator[2164]: Ignoring "noauto" option for root device
	[  +0.086698] kauditd_printk_skb: 108 callbacks suppressed
	[  +0.070687] systemd-fstab-generator[2177]: Ignoring "noauto" option for root device
	[  +0.206186] systemd-fstab-generator[2190]: Ignoring "noauto" option for root device
	[  +0.143525] systemd-fstab-generator[2202]: Ignoring "noauto" option for root device
	[  +0.280157] systemd-fstab-generator[2230]: Ignoring "noauto" option for root device
	[  +5.099052] systemd-fstab-generator[2374]: Ignoring "noauto" option for root device
	[  +0.080983] kauditd_printk_skb: 100 callbacks suppressed
	[  +1.907146] systemd-fstab-generator[2494]: Ignoring "noauto" option for root device
	[  +5.573768] kauditd_printk_skb: 74 callbacks suppressed
	[  +1.149240] systemd-fstab-generator[3419]: Ignoring "noauto" option for root device
	
	
	==> etcd [8ee7225c32cdfd5e4125e6c14836f34c6e1b052ff2fd354b720d255a7bdb907d] <==
	{"level":"info","ts":"2024-09-18T20:55:47.430462Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e6ffce7719c7f133 became leader at term 2"}
	{"level":"info","ts":"2024-09-18T20:55:47.430487Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: e6ffce7719c7f133 elected leader e6ffce7719c7f133 at term 2"}
	{"level":"info","ts":"2024-09-18T20:55:47.435377Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"e6ffce7719c7f133","local-member-attributes":"{Name:kubernetes-upgrade-878094 ClientURLs:[https://192.168.50.80:2379]}","request-path":"/0/members/e6ffce7719c7f133/attributes","cluster-id":"c89fb47a3b7a6c85","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-18T20:55:47.435482Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-18T20:55:47.435555Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-18T20:55:47.439205Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-18T20:55:47.439244Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-18T20:55:47.435575Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-18T20:55:47.441483Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"c89fb47a3b7a6c85","local-member-id":"e6ffce7719c7f133","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-18T20:55:47.444238Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-18T20:55:47.444799Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-18T20:55:47.449993Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.80:2379"}
	{"level":"info","ts":"2024-09-18T20:55:47.446323Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-18T20:55:47.441827Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-18T20:55:47.457021Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-18T20:56:01.104157Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-18T20:56:01.104215Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"kubernetes-upgrade-878094","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.80:2380"],"advertise-client-urls":["https://192.168.50.80:2379"]}
	{"level":"warn","ts":"2024-09-18T20:56:01.104295Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-18T20:56:01.104381Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-18T20:56:01.164690Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.50.80:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-18T20:56:01.164753Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.50.80:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-18T20:56:01.164818Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"e6ffce7719c7f133","current-leader-member-id":"e6ffce7719c7f133"}
	{"level":"info","ts":"2024-09-18T20:56:01.194953Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.50.80:2380"}
	{"level":"info","ts":"2024-09-18T20:56:01.195184Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.50.80:2380"}
	{"level":"info","ts":"2024-09-18T20:56:01.195210Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"kubernetes-upgrade-878094","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.80:2380"],"advertise-client-urls":["https://192.168.50.80:2379"]}
	
	
	==> etcd [cda2d3bf7f58406568da6d179b04244127169d0c8a7769946c35c80d00701683] <==
	{"level":"info","ts":"2024-09-18T20:56:19.228161Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"c89fb47a3b7a6c85","local-member-id":"e6ffce7719c7f133","added-peer-id":"e6ffce7719c7f133","added-peer-peer-urls":["https://192.168.50.80:2380"]}
	{"level":"info","ts":"2024-09-18T20:56:19.228450Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"c89fb47a3b7a6c85","local-member-id":"e6ffce7719c7f133","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-18T20:56:19.228475Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-18T20:56:19.235679Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-18T20:56:19.238056Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-18T20:56:19.238316Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.50.80:2380"}
	{"level":"info","ts":"2024-09-18T20:56:19.238341Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.50.80:2380"}
	{"level":"info","ts":"2024-09-18T20:56:19.238497Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"e6ffce7719c7f133","initial-advertise-peer-urls":["https://192.168.50.80:2380"],"listen-peer-urls":["https://192.168.50.80:2380"],"advertise-client-urls":["https://192.168.50.80:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.80:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-18T20:56:19.238529Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-18T20:56:20.699524Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e6ffce7719c7f133 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-18T20:56:20.699664Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e6ffce7719c7f133 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-18T20:56:20.699736Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e6ffce7719c7f133 received MsgPreVoteResp from e6ffce7719c7f133 at term 2"}
	{"level":"info","ts":"2024-09-18T20:56:20.699788Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e6ffce7719c7f133 became candidate at term 3"}
	{"level":"info","ts":"2024-09-18T20:56:20.699832Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e6ffce7719c7f133 received MsgVoteResp from e6ffce7719c7f133 at term 3"}
	{"level":"info","ts":"2024-09-18T20:56:20.699867Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e6ffce7719c7f133 became leader at term 3"}
	{"level":"info","ts":"2024-09-18T20:56:20.699904Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: e6ffce7719c7f133 elected leader e6ffce7719c7f133 at term 3"}
	{"level":"info","ts":"2024-09-18T20:56:20.705064Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"e6ffce7719c7f133","local-member-attributes":"{Name:kubernetes-upgrade-878094 ClientURLs:[https://192.168.50.80:2379]}","request-path":"/0/members/e6ffce7719c7f133/attributes","cluster-id":"c89fb47a3b7a6c85","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-18T20:56:20.705507Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-18T20:56:20.705851Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-18T20:56:20.705996Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-18T20:56:20.706034Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-18T20:56:20.706903Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-18T20:56:20.708781Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.80:2379"}
	{"level":"info","ts":"2024-09-18T20:56:20.710603Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-18T20:56:20.714417Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 20:56:27 up 1 min,  0 users,  load average: 1.93, 0.52, 0.18
	Linux kubernetes-upgrade-878094 5.10.207 #1 SMP Mon Sep 16 15:00:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [176ee0a7648ab1599c6ad2b379c9aab6bba4b7dc5e9784e966b5eea01f733f68] <==
	I0918 20:56:22.330473       1 shared_informer.go:320] Caches are synced for configmaps
	I0918 20:56:22.332553       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0918 20:56:22.335298       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0918 20:56:22.331863       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0918 20:56:22.357719       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0918 20:56:22.357815       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0918 20:56:22.357927       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0918 20:56:22.357994       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0918 20:56:22.358844       1 aggregator.go:171] initial CRD sync complete...
	I0918 20:56:22.358900       1 autoregister_controller.go:144] Starting autoregister controller
	I0918 20:56:22.358931       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0918 20:56:22.358954       1 cache.go:39] Caches are synced for autoregister controller
	I0918 20:56:22.359206       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0918 20:56:22.374573       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0918 20:56:22.374696       1 policy_source.go:224] refreshing policies
	E0918 20:56:22.382791       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0918 20:56:22.392945       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0918 20:56:23.238955       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0918 20:56:24.121389       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0918 20:56:24.148042       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0918 20:56:24.260332       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0918 20:56:24.450684       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0918 20:56:24.463684       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0918 20:56:25.808117       1 controller.go:615] quota admission added evaluator for: endpoints
	I0918 20:56:25.910039       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [c3d0dbe08c50c46df3970decdaacb02bc9d72e2df614302c196fd6ec34e0c82d] <==
	W0918 20:56:01.149203       1 logging.go:55] [core] [Channel #106 SubChannel #107]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0918 20:56:01.149280       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0918 20:56:01.149349       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0918 20:56:01.149418       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0918 20:56:01.149495       1 logging.go:55] [core] [Channel #10 SubChannel #11]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0918 20:56:01.149566       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0918 20:56:01.150923       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0918 20:56:01.151549       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0918 20:56:01.151701       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0918 20:56:01.151781       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0918 20:56:01.151897       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0918 20:56:01.151988       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0918 20:56:01.152097       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0918 20:56:01.152277       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0918 20:56:01.152356       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0918 20:56:01.152460       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0918 20:56:01.152535       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0918 20:56:01.152643       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0918 20:56:01.152728       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0918 20:56:01.152820       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0918 20:56:01.152903       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0918 20:56:01.152969       1 logging.go:55] [core] [Channel #17 SubChannel #18]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0918 20:56:01.153026       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0918 20:56:01.153081       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0918 20:56:01.153185       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [147cf6a30342dc10c1236f9568e86548109ddf20c9cf7ee78a3d33ee56a39016] <==
	I0918 20:56:25.666198       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0918 20:56:25.666299       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0918 20:56:25.666463       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="106.334µs"
	I0918 20:56:25.667190       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0918 20:56:25.669660       1 shared_informer.go:320] Caches are synced for expand
	I0918 20:56:25.669735       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0918 20:56:25.669803       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="kubernetes-upgrade-878094"
	I0918 20:56:25.671211       1 shared_informer.go:320] Caches are synced for service account
	I0918 20:56:25.683271       1 shared_informer.go:320] Caches are synced for job
	I0918 20:56:25.683544       1 shared_informer.go:320] Caches are synced for deployment
	I0918 20:56:25.683676       1 shared_informer.go:320] Caches are synced for namespace
	I0918 20:56:25.692966       1 shared_informer.go:320] Caches are synced for PVC protection
	I0918 20:56:25.704811       1 shared_informer.go:320] Caches are synced for ephemeral
	I0918 20:56:25.713221       1 shared_informer.go:320] Caches are synced for persistent volume
	I0918 20:56:25.742761       1 shared_informer.go:320] Caches are synced for attach detach
	I0918 20:56:25.804926       1 shared_informer.go:320] Caches are synced for daemon sets
	I0918 20:56:25.806179       1 shared_informer.go:320] Caches are synced for stateful set
	I0918 20:56:25.823390       1 shared_informer.go:320] Caches are synced for HPA
	I0918 20:56:25.847865       1 shared_informer.go:320] Caches are synced for resource quota
	I0918 20:56:25.862918       1 shared_informer.go:320] Caches are synced for resource quota
	I0918 20:56:26.293398       1 shared_informer.go:320] Caches are synced for garbage collector
	I0918 20:56:26.293422       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0918 20:56:26.328413       1 shared_informer.go:320] Caches are synced for garbage collector
	I0918 20:56:26.684495       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="96.510246ms"
	I0918 20:56:26.685663       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="39.795µs"
	
	
	==> kube-controller-manager [b686f4e4b38a759f86022717daf5d25453243af6ba20116a2fde797e8a82cbe8] <==
	I0918 20:55:56.093289       1 shared_informer.go:320] Caches are synced for cronjob
	I0918 20:55:56.093783       1 shared_informer.go:320] Caches are synced for stateful set
	I0918 20:55:56.095817       1 shared_informer.go:320] Caches are synced for namespace
	I0918 20:55:56.107794       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="kubernetes-upgrade-878094"
	I0918 20:55:56.113475       1 shared_informer.go:320] Caches are synced for job
	I0918 20:55:56.143243       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0918 20:55:56.241967       1 shared_informer.go:320] Caches are synced for attach detach
	I0918 20:55:56.241979       1 shared_informer.go:320] Caches are synced for endpoint
	I0918 20:55:56.300059       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0918 20:55:56.300445       1 shared_informer.go:320] Caches are synced for resource quota
	I0918 20:55:56.301738       1 shared_informer.go:320] Caches are synced for resource quota
	I0918 20:55:56.342287       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0918 20:55:56.342651       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="kubernetes-upgrade-878094"
	I0918 20:55:56.452908       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="kubernetes-upgrade-878094"
	I0918 20:55:56.727969       1 shared_informer.go:320] Caches are synced for garbage collector
	I0918 20:55:56.792351       1 shared_informer.go:320] Caches are synced for garbage collector
	I0918 20:55:56.792452       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0918 20:55:56.986926       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="203.661179ms"
	I0918 20:55:57.055471       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="68.30376ms"
	I0918 20:55:57.139318       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="83.786368ms"
	I0918 20:55:57.139422       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="55.429µs"
	I0918 20:55:57.162726       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="96.77µs"
	I0918 20:55:58.469699       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="48.05µs"
	I0918 20:55:58.491629       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="102.577µs"
	I0918 20:55:59.813759       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="kubernetes-upgrade-878094"
	
	
	==> kube-proxy [09494b2f8d41671e9678f8c6baea0da05f8f2c573b84069f23ae5cf4890431ea] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0918 20:55:58.139611       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0918 20:55:58.185091       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.80"]
	E0918 20:55:58.185274       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0918 20:55:58.262644       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0918 20:55:58.262712       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0918 20:55:58.262738       1 server_linux.go:169] "Using iptables Proxier"
	I0918 20:55:58.270464       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0918 20:55:58.270822       1 server.go:483] "Version info" version="v1.31.1"
	I0918 20:55:58.270850       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0918 20:55:58.272477       1 config.go:199] "Starting service config controller"
	I0918 20:55:58.272517       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0918 20:55:58.272591       1 config.go:105] "Starting endpoint slice config controller"
	I0918 20:55:58.272599       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0918 20:55:58.278397       1 config.go:328] "Starting node config controller"
	I0918 20:55:58.278428       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0918 20:55:58.373508       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0918 20:55:58.373590       1 shared_informer.go:320] Caches are synced for service config
	I0918 20:55:58.378899       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [a970bc5dd08730ce482b07e6ad759434af4cf93841cf1215fd205a01ad7f17a4] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0918 20:56:24.180115       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0918 20:56:24.203364       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.80"]
	E0918 20:56:24.203443       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0918 20:56:24.264772       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0918 20:56:24.264819       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0918 20:56:24.264845       1 server_linux.go:169] "Using iptables Proxier"
	I0918 20:56:24.267836       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0918 20:56:24.268944       1 server.go:483] "Version info" version="v1.31.1"
	I0918 20:56:24.268974       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0918 20:56:24.272643       1 config.go:199] "Starting service config controller"
	I0918 20:56:24.272712       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0918 20:56:24.272758       1 config.go:105] "Starting endpoint slice config controller"
	I0918 20:56:24.272773       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0918 20:56:24.273769       1 config.go:328] "Starting node config controller"
	I0918 20:56:24.273778       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0918 20:56:24.373331       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0918 20:56:24.373391       1 shared_informer.go:320] Caches are synced for service config
	I0918 20:56:24.374517       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [602211e9810a2d0c4072e9533f40287225d66dcd44706fee93b3ec147f8d9073] <==
	I0918 20:56:20.465321       1 serving.go:386] Generated self-signed cert in-memory
	W0918 20:56:22.268553       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0918 20:56:22.271255       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0918 20:56:22.271367       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0918 20:56:22.271403       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0918 20:56:22.323726       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0918 20:56:22.327223       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0918 20:56:22.333392       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0918 20:56:22.333465       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0918 20:56:22.335936       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0918 20:56:22.336208       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0918 20:56:22.433885       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [867079e75c3d4175271bb5641b5ea2c3cb5a7cf5b319ffa751dee65fce2def40] <==
	E0918 20:55:49.403618       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0918 20:55:49.402935       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0918 20:55:49.403670       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0918 20:55:49.402944       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0918 20:55:49.403729       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0918 20:55:50.246248       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0918 20:55:50.246327       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0918 20:55:50.349760       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0918 20:55:50.349835       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0918 20:55:50.400430       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0918 20:55:50.400464       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0918 20:55:50.475902       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0918 20:55:50.475966       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0918 20:55:50.506865       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0918 20:55:50.506913       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0918 20:55:50.612322       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0918 20:55:50.612403       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0918 20:55:50.616342       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0918 20:55:50.616468       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0918 20:55:50.702287       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0918 20:55:50.702392       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0918 20:55:50.833346       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0918 20:55:50.833392       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0918 20:55:53.784965       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0918 20:56:01.109313       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 18 20:56:18 kubernetes-upgrade-878094 kubelet[2502]: I0918 20:56:18.364086    2502 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/45492333b012fd419f83201fe0cce69c-ca-certs\") pod \"kube-controller-manager-kubernetes-upgrade-878094\" (UID: \"45492333b012fd419f83201fe0cce69c\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-878094"
	Sep 18 20:56:18 kubernetes-upgrade-878094 kubelet[2502]: I0918 20:56:18.364115    2502 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/45492333b012fd419f83201fe0cce69c-usr-share-ca-certificates\") pod \"kube-controller-manager-kubernetes-upgrade-878094\" (UID: \"45492333b012fd419f83201fe0cce69c\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-878094"
	Sep 18 20:56:18 kubernetes-upgrade-878094 kubelet[2502]: I0918 20:56:18.364255    2502 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f72830116fbd9f05d6f2a37b5a7cfa72-kubeconfig\") pod \"kube-scheduler-kubernetes-upgrade-878094\" (UID: \"f72830116fbd9f05d6f2a37b5a7cfa72\") " pod="kube-system/kube-scheduler-kubernetes-upgrade-878094"
	Sep 18 20:56:18 kubernetes-upgrade-878094 kubelet[2502]: I0918 20:56:18.364273    2502 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/f0a008276fd21582066215a4859f9244-etcd-data\") pod \"etcd-kubernetes-upgrade-878094\" (UID: \"f0a008276fd21582066215a4859f9244\") " pod="kube-system/etcd-kubernetes-upgrade-878094"
	Sep 18 20:56:18 kubernetes-upgrade-878094 kubelet[2502]: I0918 20:56:18.364288    2502 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9d3c67842eba1d8a2fbcf61a52eba8dc-usr-share-ca-certificates\") pod \"kube-apiserver-kubernetes-upgrade-878094\" (UID: \"9d3c67842eba1d8a2fbcf61a52eba8dc\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-878094"
	Sep 18 20:56:18 kubernetes-upgrade-878094 kubelet[2502]: I0918 20:56:18.364305    2502 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/45492333b012fd419f83201fe0cce69c-flexvolume-dir\") pod \"kube-controller-manager-kubernetes-upgrade-878094\" (UID: \"45492333b012fd419f83201fe0cce69c\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-878094"
	Sep 18 20:56:18 kubernetes-upgrade-878094 kubelet[2502]: I0918 20:56:18.364318    2502 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/45492333b012fd419f83201fe0cce69c-k8s-certs\") pod \"kube-controller-manager-kubernetes-upgrade-878094\" (UID: \"45492333b012fd419f83201fe0cce69c\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-878094"
	Sep 18 20:56:18 kubernetes-upgrade-878094 kubelet[2502]: I0918 20:56:18.364331    2502 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/45492333b012fd419f83201fe0cce69c-kubeconfig\") pod \"kube-controller-manager-kubernetes-upgrade-878094\" (UID: \"45492333b012fd419f83201fe0cce69c\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-878094"
	Sep 18 20:56:18 kubernetes-upgrade-878094 kubelet[2502]: I0918 20:56:18.517595    2502 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-878094"
	Sep 18 20:56:18 kubernetes-upgrade-878094 kubelet[2502]: E0918 20:56:18.518728    2502 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.80:8443: connect: connection refused" node="kubernetes-upgrade-878094"
	Sep 18 20:56:18 kubernetes-upgrade-878094 kubelet[2502]: E0918 20:56:18.767763    2502 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-878094?timeout=10s\": dial tcp 192.168.50.80:8443: connect: connection refused" interval="800ms"
	Sep 18 20:56:18 kubernetes-upgrade-878094 kubelet[2502]: I0918 20:56:18.920419    2502 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-878094"
	Sep 18 20:56:18 kubernetes-upgrade-878094 kubelet[2502]: E0918 20:56:18.925377    2502 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.80:8443: connect: connection refused" node="kubernetes-upgrade-878094"
	Sep 18 20:56:19 kubernetes-upgrade-878094 kubelet[2502]: I0918 20:56:19.728026    2502 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-878094"
	Sep 18 20:56:22 kubernetes-upgrade-878094 kubelet[2502]: I0918 20:56:22.477554    2502 kubelet_node_status.go:111] "Node was previously registered" node="kubernetes-upgrade-878094"
	Sep 18 20:56:22 kubernetes-upgrade-878094 kubelet[2502]: I0918 20:56:22.478075    2502 kubelet_node_status.go:75] "Successfully registered node" node="kubernetes-upgrade-878094"
	Sep 18 20:56:22 kubernetes-upgrade-878094 kubelet[2502]: I0918 20:56:22.478246    2502 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 18 20:56:22 kubernetes-upgrade-878094 kubelet[2502]: I0918 20:56:22.480209    2502 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 18 20:56:23 kubernetes-upgrade-878094 kubelet[2502]: I0918 20:56:23.112192    2502 apiserver.go:52] "Watching apiserver"
	Sep 18 20:56:23 kubernetes-upgrade-878094 kubelet[2502]: I0918 20:56:23.144730    2502 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Sep 18 20:56:23 kubernetes-upgrade-878094 kubelet[2502]: I0918 20:56:23.185974    2502 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/15bea14f-20ab-493b-aafd-89a94011300d-lib-modules\") pod \"kube-proxy-pn8zl\" (UID: \"15bea14f-20ab-493b-aafd-89a94011300d\") " pod="kube-system/kube-proxy-pn8zl"
	Sep 18 20:56:23 kubernetes-upgrade-878094 kubelet[2502]: I0918 20:56:23.186234    2502 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/801fa934-504a-47ea-8673-2cccf5a64c56-tmp\") pod \"storage-provisioner\" (UID: \"801fa934-504a-47ea-8673-2cccf5a64c56\") " pod="kube-system/storage-provisioner"
	Sep 18 20:56:23 kubernetes-upgrade-878094 kubelet[2502]: I0918 20:56:23.186351    2502 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/15bea14f-20ab-493b-aafd-89a94011300d-xtables-lock\") pod \"kube-proxy-pn8zl\" (UID: \"15bea14f-20ab-493b-aafd-89a94011300d\") " pod="kube-system/kube-proxy-pn8zl"
	Sep 18 20:56:26 kubernetes-upgrade-878094 kubelet[2502]: I0918 20:56:26.348763    2502 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Sep 18 20:56:26 kubernetes-upgrade-878094 kubelet[2502]: I0918 20:56:26.349287    2502 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	
	
	==> storage-provisioner [417ec9725c157000881aa4cccdf27e36741d8024b6849236501da39b266c5d0c] <==
	I0918 20:55:58.626635       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0918 20:55:58.636793       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0918 20:55:58.636969       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0918 20:55:58.650871       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0918 20:55:58.651253       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-878094_fce7baf8-c494-4913-88aa-15c4f90fa3f5!
	I0918 20:55:58.656946       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"7339fa54-72f5-4bc3-8f79-857c86bfc7a8", APIVersion:"v1", ResourceVersion:"371", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kubernetes-upgrade-878094_fce7baf8-c494-4913-88aa-15c4f90fa3f5 became leader
	I0918 20:55:58.751472       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-878094_fce7baf8-c494-4913-88aa-15c4f90fa3f5!
	
	
	==> storage-provisioner [c871810f19eb6325a70584cc5dc48abe43eb206689d31056706a9950a7e3ba3f] <==
	I0918 20:56:24.010637       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0918 20:56:24.043028       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0918 20:56:24.044375       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0918 20:56:26.138669   59554 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19667-7671/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
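Note: the "bufio.Scanner: token too long" error in the stderr block above is Go's bufio.Scanner hitting its default 64 KiB per-token limit on an overlong line in lastStart.txt. A minimal sketch of reading such a file with an enlarged scanner buffer follows; the file path and buffer sizes are illustrative, not minikube's actual logs.go implementation.

	package main

	import (
		"bufio"
		"fmt"
		"log"
		"os"
	)

	func main() {
		// Hypothetical path; minikube reads its own lastStart.txt here.
		f, err := os.Open("lastStart.txt")
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()

		scanner := bufio.NewScanner(f)
		// Default max token size is 64 KiB; allow lines up to 1 MiB instead.
		scanner.Buffer(make([]byte, 0, 64*1024), 1024*1024)
		for scanner.Scan() {
			fmt.Println(scanner.Text())
		}
		if err := scanner.Err(); err != nil {
			log.Fatal(err) // "token too long" would surface here without the larger buffer
		}
	}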
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-878094 -n kubernetes-upgrade-878094
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-878094 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
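For reference, the post-mortem pod query above (kubectl get po -A --field-selector=status.phase!=Running) lists every pod not in the Running phase. A minimal client-go sketch of the same query, assuming the kubeconfig path from this run's environment and using the current context rather than the profile's --context flag:

	package main

	import (
		"context"
		"fmt"
		"log"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Kubeconfig path taken from the KUBECONFIG value logged in this report.
		config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19667-7671/kubeconfig")
		if err != nil {
			log.Fatal(err)
		}
		clientset, err := kubernetes.NewForConfig(config)
		if err != nil {
			log.Fatal(err)
		}
		// List pods in all namespaces that are not in the Running phase.
		pods, err := clientset.CoreV1().Pods("").List(context.TODO(),
			metav1.ListOptions{FieldSelector: "status.phase!=Running"})
		if err != nil {
			log.Fatal(err)
		}
		for _, p := range pods.Items {
			fmt.Println(p.Namespace, p.Name)
		}
	}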
helpers_test.go:175: Cleaning up "kubernetes-upgrade-878094" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-878094
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-878094: (1.123017221s)
--- FAIL: TestKubernetesUpgrade (408.39s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (290.74s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-740194 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-740194 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (4m50.447536901s)

                                                
                                                
-- stdout --
	* [old-k8s-version-740194] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19667
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19667-7671/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19667-7671/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-740194" primary control-plane node in "old-k8s-version-740194" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0918 20:54:25.217162   57636 out.go:345] Setting OutFile to fd 1 ...
	I0918 20:54:25.217316   57636 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 20:54:25.217329   57636 out.go:358] Setting ErrFile to fd 2...
	I0918 20:54:25.217342   57636 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 20:54:25.217647   57636 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19667-7671/.minikube/bin
	I0918 20:54:25.218443   57636 out.go:352] Setting JSON to false
	I0918 20:54:25.219781   57636 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5809,"bootTime":1726687056,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0918 20:54:25.219925   57636 start.go:139] virtualization: kvm guest
	I0918 20:54:25.223213   57636 out.go:177] * [old-k8s-version-740194] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0918 20:54:25.225154   57636 notify.go:220] Checking for updates...
	I0918 20:54:25.225206   57636 out.go:177]   - MINIKUBE_LOCATION=19667
	I0918 20:54:25.226962   57636 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 20:54:25.228651   57636 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19667-7671/kubeconfig
	I0918 20:54:25.230373   57636 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19667-7671/.minikube
	I0918 20:54:25.231981   57636 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0918 20:54:25.233429   57636 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0918 20:54:25.235533   57636 config.go:182] Loaded profile config "cert-options-347585": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0918 20:54:25.235678   57636 config.go:182] Loaded profile config "kubernetes-upgrade-878094": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0918 20:54:25.235796   57636 config.go:182] Loaded profile config "pause-543700": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0918 20:54:25.235925   57636 driver.go:394] Setting default libvirt URI to qemu:///system
	I0918 20:54:25.276500   57636 out.go:177] * Using the kvm2 driver based on user configuration
	I0918 20:54:25.277785   57636 start.go:297] selected driver: kvm2
	I0918 20:54:25.277808   57636 start.go:901] validating driver "kvm2" against <nil>
	I0918 20:54:25.277823   57636 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 20:54:25.278901   57636 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 20:54:25.278982   57636 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19667-7671/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0918 20:54:25.296763   57636 install.go:137] /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I0918 20:54:25.296842   57636 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0918 20:54:25.297196   57636 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0918 20:54:25.297253   57636 cni.go:84] Creating CNI manager for ""
	I0918 20:54:25.297322   57636 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0918 20:54:25.297337   57636 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0918 20:54:25.297406   57636 start.go:340] cluster config:
	{Name:old-k8s-version-740194 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-740194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 20:54:25.297559   57636 iso.go:125] acquiring lock: {Name:mkbcf4d84f3091567a39802cd5e773cb10b8c545 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 20:54:25.299637   57636 out.go:177] * Starting "old-k8s-version-740194" primary control-plane node in "old-k8s-version-740194" cluster
	I0918 20:54:25.301007   57636 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0918 20:54:25.301067   57636 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0918 20:54:25.301083   57636 cache.go:56] Caching tarball of preloaded images
	I0918 20:54:25.301208   57636 preload.go:172] Found /home/jenkins/minikube-integration/19667-7671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0918 20:54:25.301222   57636 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0918 20:54:25.301372   57636 profile.go:143] Saving config to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/old-k8s-version-740194/config.json ...
	I0918 20:54:25.301402   57636 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/old-k8s-version-740194/config.json: {Name:mk01958a732544452df4228f15c108da27c14f58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 20:54:25.301596   57636 start.go:360] acquireMachinesLock for old-k8s-version-740194: {Name:mk1b0d68a0b7d9d4bc8204d5d81cb9eb77222526 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 20:54:44.012911   57636 start.go:364] duration metric: took 18.711233892s to acquireMachinesLock for "old-k8s-version-740194"
	I0918 20:54:44.013008   57636 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-740194 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-740194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0918 20:54:44.013113   57636 start.go:125] createHost starting for "" (driver="kvm2")
	I0918 20:54:44.015315   57636 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0918 20:54:44.015526   57636 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 20:54:44.015577   57636 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 20:54:44.032565   57636 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42795
	I0918 20:54:44.033031   57636 main.go:141] libmachine: () Calling .GetVersion
	I0918 20:54:44.033579   57636 main.go:141] libmachine: Using API Version  1
	I0918 20:54:44.033625   57636 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 20:54:44.033985   57636 main.go:141] libmachine: () Calling .GetMachineName
	I0918 20:54:44.034220   57636 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetMachineName
	I0918 20:54:44.034374   57636 main.go:141] libmachine: (old-k8s-version-740194) Calling .DriverName
	I0918 20:54:44.034564   57636 start.go:159] libmachine.API.Create for "old-k8s-version-740194" (driver="kvm2")
	I0918 20:54:44.034604   57636 client.go:168] LocalClient.Create starting
	I0918 20:54:44.034639   57636 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem
	I0918 20:54:44.034678   57636 main.go:141] libmachine: Decoding PEM data...
	I0918 20:54:44.034702   57636 main.go:141] libmachine: Parsing certificate...
	I0918 20:54:44.034765   57636 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem
	I0918 20:54:44.034791   57636 main.go:141] libmachine: Decoding PEM data...
	I0918 20:54:44.034813   57636 main.go:141] libmachine: Parsing certificate...
	I0918 20:54:44.034840   57636 main.go:141] libmachine: Running pre-create checks...
	I0918 20:54:44.034852   57636 main.go:141] libmachine: (old-k8s-version-740194) Calling .PreCreateCheck
	I0918 20:54:44.035275   57636 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetConfigRaw
	I0918 20:54:44.035729   57636 main.go:141] libmachine: Creating machine...
	I0918 20:54:44.035747   57636 main.go:141] libmachine: (old-k8s-version-740194) Calling .Create
	I0918 20:54:44.035897   57636 main.go:141] libmachine: (old-k8s-version-740194) Creating KVM machine...
	I0918 20:54:44.037334   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG | found existing default KVM network
	I0918 20:54:44.038818   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG | I0918 20:54:44.038656   57927 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:f9:46:d3} reservation:<nil>}
	I0918 20:54:44.039551   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG | I0918 20:54:44.039470   57927 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:3f:b5:c1} reservation:<nil>}
	I0918 20:54:44.040619   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG | I0918 20:54:44.040550   57927 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:b8:c8:5a} reservation:<nil>}
	I0918 20:54:44.041693   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG | I0918 20:54:44.041582   57927 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002856e0}
	I0918 20:54:44.041720   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG | created network xml: 
	I0918 20:54:44.041769   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG | <network>
	I0918 20:54:44.041793   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG |   <name>mk-old-k8s-version-740194</name>
	I0918 20:54:44.041848   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG |   <dns enable='no'/>
	I0918 20:54:44.041869   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG |   
	I0918 20:54:44.041882   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0918 20:54:44.041893   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG |     <dhcp>
	I0918 20:54:44.041907   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0918 20:54:44.041917   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG |     </dhcp>
	I0918 20:54:44.041928   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG |   </ip>
	I0918 20:54:44.041940   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG |   
	I0918 20:54:44.041949   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG | </network>
	I0918 20:54:44.041958   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG | 
	I0918 20:54:44.047193   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG | trying to create private KVM network mk-old-k8s-version-740194 192.168.72.0/24...
	I0918 20:54:44.123496   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG | private KVM network mk-old-k8s-version-740194 192.168.72.0/24 created
	I0918 20:54:44.123531   57636 main.go:141] libmachine: (old-k8s-version-740194) Setting up store path in /home/jenkins/minikube-integration/19667-7671/.minikube/machines/old-k8s-version-740194 ...
	I0918 20:54:44.123556   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG | I0918 20:54:44.123449   57927 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19667-7671/.minikube
	I0918 20:54:44.123568   57636 main.go:141] libmachine: (old-k8s-version-740194) Building disk image from file:///home/jenkins/minikube-integration/19667-7671/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso
	I0918 20:54:44.123608   57636 main.go:141] libmachine: (old-k8s-version-740194) Downloading /home/jenkins/minikube-integration/19667-7671/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19667-7671/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso...
	I0918 20:54:44.376922   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG | I0918 20:54:44.376788   57927 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/old-k8s-version-740194/id_rsa...
	I0918 20:54:44.739886   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG | I0918 20:54:44.739749   57927 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/old-k8s-version-740194/old-k8s-version-740194.rawdisk...
	I0918 20:54:44.739941   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG | Writing magic tar header
	I0918 20:54:44.739960   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG | Writing SSH key tar header
	I0918 20:54:44.739981   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG | I0918 20:54:44.739857   57927 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19667-7671/.minikube/machines/old-k8s-version-740194 ...
	I0918 20:54:44.740042   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/old-k8s-version-740194
	I0918 20:54:44.740063   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19667-7671/.minikube/machines
	I0918 20:54:44.740081   57636 main.go:141] libmachine: (old-k8s-version-740194) Setting executable bit set on /home/jenkins/minikube-integration/19667-7671/.minikube/machines/old-k8s-version-740194 (perms=drwx------)
	I0918 20:54:44.740097   57636 main.go:141] libmachine: (old-k8s-version-740194) Setting executable bit set on /home/jenkins/minikube-integration/19667-7671/.minikube/machines (perms=drwxr-xr-x)
	I0918 20:54:44.740111   57636 main.go:141] libmachine: (old-k8s-version-740194) Setting executable bit set on /home/jenkins/minikube-integration/19667-7671/.minikube (perms=drwxr-xr-x)
	I0918 20:54:44.740122   57636 main.go:141] libmachine: (old-k8s-version-740194) Setting executable bit set on /home/jenkins/minikube-integration/19667-7671 (perms=drwxrwxr-x)
	I0918 20:54:44.740138   57636 main.go:141] libmachine: (old-k8s-version-740194) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0918 20:54:44.740152   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19667-7671/.minikube
	I0918 20:54:44.740162   57636 main.go:141] libmachine: (old-k8s-version-740194) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0918 20:54:44.740176   57636 main.go:141] libmachine: (old-k8s-version-740194) Creating domain...
	I0918 20:54:44.740188   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19667-7671
	I0918 20:54:44.740198   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0918 20:54:44.740208   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG | Checking permissions on dir: /home/jenkins
	I0918 20:54:44.740218   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG | Checking permissions on dir: /home
	I0918 20:54:44.740226   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG | Skipping /home - not owner
	I0918 20:54:44.741491   57636 main.go:141] libmachine: (old-k8s-version-740194) define libvirt domain using xml: 
	I0918 20:54:44.741520   57636 main.go:141] libmachine: (old-k8s-version-740194) <domain type='kvm'>
	I0918 20:54:44.741559   57636 main.go:141] libmachine: (old-k8s-version-740194)   <name>old-k8s-version-740194</name>
	I0918 20:54:44.741584   57636 main.go:141] libmachine: (old-k8s-version-740194)   <memory unit='MiB'>2200</memory>
	I0918 20:54:44.741594   57636 main.go:141] libmachine: (old-k8s-version-740194)   <vcpu>2</vcpu>
	I0918 20:54:44.741604   57636 main.go:141] libmachine: (old-k8s-version-740194)   <features>
	I0918 20:54:44.741629   57636 main.go:141] libmachine: (old-k8s-version-740194)     <acpi/>
	I0918 20:54:44.741638   57636 main.go:141] libmachine: (old-k8s-version-740194)     <apic/>
	I0918 20:54:44.741645   57636 main.go:141] libmachine: (old-k8s-version-740194)     <pae/>
	I0918 20:54:44.741654   57636 main.go:141] libmachine: (old-k8s-version-740194)     
	I0918 20:54:44.741662   57636 main.go:141] libmachine: (old-k8s-version-740194)   </features>
	I0918 20:54:44.741677   57636 main.go:141] libmachine: (old-k8s-version-740194)   <cpu mode='host-passthrough'>
	I0918 20:54:44.741688   57636 main.go:141] libmachine: (old-k8s-version-740194)   
	I0918 20:54:44.741697   57636 main.go:141] libmachine: (old-k8s-version-740194)   </cpu>
	I0918 20:54:44.741705   57636 main.go:141] libmachine: (old-k8s-version-740194)   <os>
	I0918 20:54:44.741714   57636 main.go:141] libmachine: (old-k8s-version-740194)     <type>hvm</type>
	I0918 20:54:44.741724   57636 main.go:141] libmachine: (old-k8s-version-740194)     <boot dev='cdrom'/>
	I0918 20:54:44.741734   57636 main.go:141] libmachine: (old-k8s-version-740194)     <boot dev='hd'/>
	I0918 20:54:44.741742   57636 main.go:141] libmachine: (old-k8s-version-740194)     <bootmenu enable='no'/>
	I0918 20:54:44.741760   57636 main.go:141] libmachine: (old-k8s-version-740194)   </os>
	I0918 20:54:44.741772   57636 main.go:141] libmachine: (old-k8s-version-740194)   <devices>
	I0918 20:54:44.741795   57636 main.go:141] libmachine: (old-k8s-version-740194)     <disk type='file' device='cdrom'>
	I0918 20:54:44.741820   57636 main.go:141] libmachine: (old-k8s-version-740194)       <source file='/home/jenkins/minikube-integration/19667-7671/.minikube/machines/old-k8s-version-740194/boot2docker.iso'/>
	I0918 20:54:44.741829   57636 main.go:141] libmachine: (old-k8s-version-740194)       <target dev='hdc' bus='scsi'/>
	I0918 20:54:44.741834   57636 main.go:141] libmachine: (old-k8s-version-740194)       <readonly/>
	I0918 20:54:44.741841   57636 main.go:141] libmachine: (old-k8s-version-740194)     </disk>
	I0918 20:54:44.741846   57636 main.go:141] libmachine: (old-k8s-version-740194)     <disk type='file' device='disk'>
	I0918 20:54:44.741856   57636 main.go:141] libmachine: (old-k8s-version-740194)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0918 20:54:44.741865   57636 main.go:141] libmachine: (old-k8s-version-740194)       <source file='/home/jenkins/minikube-integration/19667-7671/.minikube/machines/old-k8s-version-740194/old-k8s-version-740194.rawdisk'/>
	I0918 20:54:44.741871   57636 main.go:141] libmachine: (old-k8s-version-740194)       <target dev='hda' bus='virtio'/>
	I0918 20:54:44.741877   57636 main.go:141] libmachine: (old-k8s-version-740194)     </disk>
	I0918 20:54:44.741883   57636 main.go:141] libmachine: (old-k8s-version-740194)     <interface type='network'>
	I0918 20:54:44.741893   57636 main.go:141] libmachine: (old-k8s-version-740194)       <source network='mk-old-k8s-version-740194'/>
	I0918 20:54:44.741901   57636 main.go:141] libmachine: (old-k8s-version-740194)       <model type='virtio'/>
	I0918 20:54:44.741908   57636 main.go:141] libmachine: (old-k8s-version-740194)     </interface>
	I0918 20:54:44.741934   57636 main.go:141] libmachine: (old-k8s-version-740194)     <interface type='network'>
	I0918 20:54:44.741961   57636 main.go:141] libmachine: (old-k8s-version-740194)       <source network='default'/>
	I0918 20:54:44.741973   57636 main.go:141] libmachine: (old-k8s-version-740194)       <model type='virtio'/>
	I0918 20:54:44.741984   57636 main.go:141] libmachine: (old-k8s-version-740194)     </interface>
	I0918 20:54:44.741992   57636 main.go:141] libmachine: (old-k8s-version-740194)     <serial type='pty'>
	I0918 20:54:44.742002   57636 main.go:141] libmachine: (old-k8s-version-740194)       <target port='0'/>
	I0918 20:54:44.742012   57636 main.go:141] libmachine: (old-k8s-version-740194)     </serial>
	I0918 20:54:44.742023   57636 main.go:141] libmachine: (old-k8s-version-740194)     <console type='pty'>
	I0918 20:54:44.742034   57636 main.go:141] libmachine: (old-k8s-version-740194)       <target type='serial' port='0'/>
	I0918 20:54:44.742048   57636 main.go:141] libmachine: (old-k8s-version-740194)     </console>
	I0918 20:54:44.742071   57636 main.go:141] libmachine: (old-k8s-version-740194)     <rng model='virtio'>
	I0918 20:54:44.742083   57636 main.go:141] libmachine: (old-k8s-version-740194)       <backend model='random'>/dev/random</backend>
	I0918 20:54:44.742095   57636 main.go:141] libmachine: (old-k8s-version-740194)     </rng>
	I0918 20:54:44.742103   57636 main.go:141] libmachine: (old-k8s-version-740194)     
	I0918 20:54:44.742110   57636 main.go:141] libmachine: (old-k8s-version-740194)     
	I0918 20:54:44.742119   57636 main.go:141] libmachine: (old-k8s-version-740194)   </devices>
	I0918 20:54:44.742130   57636 main.go:141] libmachine: (old-k8s-version-740194) </domain>
	I0918 20:54:44.742138   57636 main.go:141] libmachine: (old-k8s-version-740194) 
	I0918 20:54:44.747273   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:46:43:78 in network default
	I0918 20:54:44.747937   57636 main.go:141] libmachine: (old-k8s-version-740194) Ensuring networks are active...
	I0918 20:54:44.747956   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 20:54:44.748973   57636 main.go:141] libmachine: (old-k8s-version-740194) Ensuring network default is active
	I0918 20:54:44.749587   57636 main.go:141] libmachine: (old-k8s-version-740194) Ensuring network mk-old-k8s-version-740194 is active
	I0918 20:54:44.750143   57636 main.go:141] libmachine: (old-k8s-version-740194) Getting domain xml...
	I0918 20:54:44.751026   57636 main.go:141] libmachine: (old-k8s-version-740194) Creating domain...
	I0918 20:54:46.186742   57636 main.go:141] libmachine: (old-k8s-version-740194) Waiting to get IP...
	I0918 20:54:46.187704   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 20:54:46.188244   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG | unable to find current IP address of domain old-k8s-version-740194 in network mk-old-k8s-version-740194
	I0918 20:54:46.188297   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG | I0918 20:54:46.188233   57927 retry.go:31] will retry after 228.869302ms: waiting for machine to come up
	I0918 20:54:46.419198   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 20:54:46.419743   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG | unable to find current IP address of domain old-k8s-version-740194 in network mk-old-k8s-version-740194
	I0918 20:54:46.419821   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG | I0918 20:54:46.419713   57927 retry.go:31] will retry after 389.854092ms: waiting for machine to come up
	I0918 20:54:46.811651   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 20:54:46.812259   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG | unable to find current IP address of domain old-k8s-version-740194 in network mk-old-k8s-version-740194
	I0918 20:54:46.812284   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG | I0918 20:54:46.812212   57927 retry.go:31] will retry after 383.050394ms: waiting for machine to come up
	I0918 20:54:47.196937   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 20:54:47.197463   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG | unable to find current IP address of domain old-k8s-version-740194 in network mk-old-k8s-version-740194
	I0918 20:54:47.197491   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG | I0918 20:54:47.197411   57927 retry.go:31] will retry after 556.442033ms: waiting for machine to come up
	I0918 20:54:47.755788   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 20:54:47.756306   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG | unable to find current IP address of domain old-k8s-version-740194 in network mk-old-k8s-version-740194
	I0918 20:54:47.756331   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG | I0918 20:54:47.756268   57927 retry.go:31] will retry after 491.387205ms: waiting for machine to come up
	I0918 20:54:48.248880   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 20:54:48.249407   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG | unable to find current IP address of domain old-k8s-version-740194 in network mk-old-k8s-version-740194
	I0918 20:54:48.249438   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG | I0918 20:54:48.249337   57927 retry.go:31] will retry after 832.286861ms: waiting for machine to come up
	I0918 20:54:49.083217   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 20:54:49.083746   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG | unable to find current IP address of domain old-k8s-version-740194 in network mk-old-k8s-version-740194
	I0918 20:54:49.083775   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG | I0918 20:54:49.083701   57927 retry.go:31] will retry after 779.02114ms: waiting for machine to come up
	I0918 20:54:49.864702   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 20:54:49.865178   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG | unable to find current IP address of domain old-k8s-version-740194 in network mk-old-k8s-version-740194
	I0918 20:54:49.865202   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG | I0918 20:54:49.865148   57927 retry.go:31] will retry after 1.461684769s: waiting for machine to come up
	I0918 20:54:51.328223   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 20:54:51.328729   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG | unable to find current IP address of domain old-k8s-version-740194 in network mk-old-k8s-version-740194
	I0918 20:54:51.328755   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG | I0918 20:54:51.328672   57927 retry.go:31] will retry after 1.227657824s: waiting for machine to come up
	I0918 20:54:52.558390   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 20:54:52.558995   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG | unable to find current IP address of domain old-k8s-version-740194 in network mk-old-k8s-version-740194
	I0918 20:54:52.559026   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG | I0918 20:54:52.558935   57927 retry.go:31] will retry after 2.030269752s: waiting for machine to come up
	I0918 20:54:54.591505   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 20:54:54.592128   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG | unable to find current IP address of domain old-k8s-version-740194 in network mk-old-k8s-version-740194
	I0918 20:54:54.592160   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG | I0918 20:54:54.592072   57927 retry.go:31] will retry after 2.759759247s: waiting for machine to come up
	I0918 20:54:57.355323   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 20:54:57.355888   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG | unable to find current IP address of domain old-k8s-version-740194 in network mk-old-k8s-version-740194
	I0918 20:54:57.355917   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG | I0918 20:54:57.355848   57927 retry.go:31] will retry after 2.975347857s: waiting for machine to come up
	I0918 20:55:00.333439   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 20:55:00.334163   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG | unable to find current IP address of domain old-k8s-version-740194 in network mk-old-k8s-version-740194
	I0918 20:55:00.334186   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG | I0918 20:55:00.334108   57927 retry.go:31] will retry after 3.419652423s: waiting for machine to come up
	I0918 20:55:03.757482   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 20:55:03.758295   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG | unable to find current IP address of domain old-k8s-version-740194 in network mk-old-k8s-version-740194
	I0918 20:55:03.758346   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG | I0918 20:55:03.758262   57927 retry.go:31] will retry after 4.65098527s: waiting for machine to come up
	I0918 20:55:08.410525   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 20:55:08.411016   57636 main.go:141] libmachine: (old-k8s-version-740194) Found IP for machine: 192.168.72.53
	I0918 20:55:08.411044   57636 main.go:141] libmachine: (old-k8s-version-740194) Reserving static IP address...
	I0918 20:55:08.411069   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has current primary IP address 192.168.72.53 and MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 20:55:08.411464   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-740194", mac: "52:54:00:3f:a7:11", ip: "192.168.72.53"} in network mk-old-k8s-version-740194
	I0918 20:55:08.493115   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG | Getting to WaitForSSH function...
	I0918 20:55:08.493147   57636 main.go:141] libmachine: (old-k8s-version-740194) Reserved static IP address: 192.168.72.53
	I0918 20:55:08.493160   57636 main.go:141] libmachine: (old-k8s-version-740194) Waiting for SSH to be available...
	I0918 20:55:08.496193   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 20:55:08.496581   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a7:11", ip: ""} in network mk-old-k8s-version-740194: {Iface:virbr3 ExpiryTime:2024-09-18 21:54:58 +0000 UTC Type:0 Mac:52:54:00:3f:a7:11 Iaid: IPaddr:192.168.72.53 Prefix:24 Hostname:minikube Clientid:01:52:54:00:3f:a7:11}
	I0918 20:55:08.496607   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined IP address 192.168.72.53 and MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 20:55:08.496736   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG | Using SSH client type: external
	I0918 20:55:08.496764   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG | Using SSH private key: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/old-k8s-version-740194/id_rsa (-rw-------)
	I0918 20:55:08.496795   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.53 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19667-7671/.minikube/machines/old-k8s-version-740194/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0918 20:55:08.496807   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG | About to run SSH command:
	I0918 20:55:08.496855   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG | exit 0
	I0918 20:55:08.624642   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG | SSH cmd err, output: <nil>: 
	I0918 20:55:08.624933   57636 main.go:141] libmachine: (old-k8s-version-740194) KVM machine creation complete!
	I0918 20:55:08.625332   57636 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetConfigRaw
	I0918 20:55:08.625885   57636 main.go:141] libmachine: (old-k8s-version-740194) Calling .DriverName
	I0918 20:55:08.626081   57636 main.go:141] libmachine: (old-k8s-version-740194) Calling .DriverName
	I0918 20:55:08.626267   57636 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0918 20:55:08.626299   57636 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetState
	I0918 20:55:08.627920   57636 main.go:141] libmachine: Detecting operating system of created instance...
	I0918 20:55:08.627939   57636 main.go:141] libmachine: Waiting for SSH to be available...
	I0918 20:55:08.627947   57636 main.go:141] libmachine: Getting to WaitForSSH function...
	I0918 20:55:08.627957   57636 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHHostname
	I0918 20:55:08.630628   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 20:55:08.631008   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a7:11", ip: ""} in network mk-old-k8s-version-740194: {Iface:virbr3 ExpiryTime:2024-09-18 21:54:58 +0000 UTC Type:0 Mac:52:54:00:3f:a7:11 Iaid: IPaddr:192.168.72.53 Prefix:24 Hostname:old-k8s-version-740194 Clientid:01:52:54:00:3f:a7:11}
	I0918 20:55:08.631039   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined IP address 192.168.72.53 and MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 20:55:08.631162   57636 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHPort
	I0918 20:55:08.631322   57636 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHKeyPath
	I0918 20:55:08.631484   57636 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHKeyPath
	I0918 20:55:08.631608   57636 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHUsername
	I0918 20:55:08.631802   57636 main.go:141] libmachine: Using SSH client type: native
	I0918 20:55:08.632049   57636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.53 22 <nil> <nil>}
	I0918 20:55:08.632065   57636 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0918 20:55:08.739160   57636 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0918 20:55:08.739189   57636 main.go:141] libmachine: Detecting the provisioner...
	I0918 20:55:08.739199   57636 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHHostname
	I0918 20:55:08.742019   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 20:55:08.742459   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a7:11", ip: ""} in network mk-old-k8s-version-740194: {Iface:virbr3 ExpiryTime:2024-09-18 21:54:58 +0000 UTC Type:0 Mac:52:54:00:3f:a7:11 Iaid: IPaddr:192.168.72.53 Prefix:24 Hostname:old-k8s-version-740194 Clientid:01:52:54:00:3f:a7:11}
	I0918 20:55:08.742501   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined IP address 192.168.72.53 and MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 20:55:08.742675   57636 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHPort
	I0918 20:55:08.742871   57636 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHKeyPath
	I0918 20:55:08.743064   57636 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHKeyPath
	I0918 20:55:08.743173   57636 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHUsername
	I0918 20:55:08.743363   57636 main.go:141] libmachine: Using SSH client type: native
	I0918 20:55:08.743521   57636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.53 22 <nil> <nil>}
	I0918 20:55:08.743530   57636 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0918 20:55:08.852594   57636 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0918 20:55:08.852666   57636 main.go:141] libmachine: found compatible host: buildroot
	I0918 20:55:08.852672   57636 main.go:141] libmachine: Provisioning with buildroot...
	I0918 20:55:08.852680   57636 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetMachineName
	I0918 20:55:08.852940   57636 buildroot.go:166] provisioning hostname "old-k8s-version-740194"
	I0918 20:55:08.852989   57636 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetMachineName
	I0918 20:55:08.853205   57636 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHHostname
	I0918 20:55:08.855759   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 20:55:08.856224   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a7:11", ip: ""} in network mk-old-k8s-version-740194: {Iface:virbr3 ExpiryTime:2024-09-18 21:54:58 +0000 UTC Type:0 Mac:52:54:00:3f:a7:11 Iaid: IPaddr:192.168.72.53 Prefix:24 Hostname:old-k8s-version-740194 Clientid:01:52:54:00:3f:a7:11}
	I0918 20:55:08.856254   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined IP address 192.168.72.53 and MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 20:55:08.856447   57636 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHPort
	I0918 20:55:08.856661   57636 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHKeyPath
	I0918 20:55:08.856855   57636 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHKeyPath
	I0918 20:55:08.857035   57636 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHUsername
	I0918 20:55:08.857174   57636 main.go:141] libmachine: Using SSH client type: native
	I0918 20:55:08.857356   57636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.53 22 <nil> <nil>}
	I0918 20:55:08.857377   57636 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-740194 && echo "old-k8s-version-740194" | sudo tee /etc/hostname
	I0918 20:55:08.978698   57636 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-740194
	
	I0918 20:55:08.978734   57636 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHHostname
	I0918 20:55:08.981586   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 20:55:08.981955   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a7:11", ip: ""} in network mk-old-k8s-version-740194: {Iface:virbr3 ExpiryTime:2024-09-18 21:54:58 +0000 UTC Type:0 Mac:52:54:00:3f:a7:11 Iaid: IPaddr:192.168.72.53 Prefix:24 Hostname:old-k8s-version-740194 Clientid:01:52:54:00:3f:a7:11}
	I0918 20:55:08.982001   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined IP address 192.168.72.53 and MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 20:55:08.982134   57636 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHPort
	I0918 20:55:08.982336   57636 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHKeyPath
	I0918 20:55:08.982535   57636 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHKeyPath
	I0918 20:55:08.982680   57636 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHUsername
	I0918 20:55:08.982891   57636 main.go:141] libmachine: Using SSH client type: native
	I0918 20:55:08.983069   57636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.53 22 <nil> <nil>}
	I0918 20:55:08.983087   57636 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-740194' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-740194/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-740194' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0918 20:55:09.100431   57636 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0918 20:55:09.100460   57636 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19667-7671/.minikube CaCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19667-7671/.minikube}
	I0918 20:55:09.100501   57636 buildroot.go:174] setting up certificates
	I0918 20:55:09.100515   57636 provision.go:84] configureAuth start
	I0918 20:55:09.100526   57636 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetMachineName
	I0918 20:55:09.100847   57636 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetIP
	I0918 20:55:09.103511   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 20:55:09.103894   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a7:11", ip: ""} in network mk-old-k8s-version-740194: {Iface:virbr3 ExpiryTime:2024-09-18 21:54:58 +0000 UTC Type:0 Mac:52:54:00:3f:a7:11 Iaid: IPaddr:192.168.72.53 Prefix:24 Hostname:old-k8s-version-740194 Clientid:01:52:54:00:3f:a7:11}
	I0918 20:55:09.103926   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined IP address 192.168.72.53 and MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 20:55:09.104138   57636 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHHostname
	I0918 20:55:09.106349   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 20:55:09.106702   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a7:11", ip: ""} in network mk-old-k8s-version-740194: {Iface:virbr3 ExpiryTime:2024-09-18 21:54:58 +0000 UTC Type:0 Mac:52:54:00:3f:a7:11 Iaid: IPaddr:192.168.72.53 Prefix:24 Hostname:old-k8s-version-740194 Clientid:01:52:54:00:3f:a7:11}
	I0918 20:55:09.106744   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined IP address 192.168.72.53 and MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 20:55:09.106875   57636 provision.go:143] copyHostCerts
	I0918 20:55:09.106942   57636 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem, removing ...
	I0918 20:55:09.106952   57636 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem
	I0918 20:55:09.107006   57636 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem (1082 bytes)
	I0918 20:55:09.107102   57636 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem, removing ...
	I0918 20:55:09.107109   57636 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem
	I0918 20:55:09.107130   57636 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem (1123 bytes)
	I0918 20:55:09.107192   57636 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem, removing ...
	I0918 20:55:09.107199   57636 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem
	I0918 20:55:09.107219   57636 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem (1679 bytes)
	I0918 20:55:09.107311   57636 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-740194 san=[127.0.0.1 192.168.72.53 localhost minikube old-k8s-version-740194]
	I0918 20:55:09.278662   57636 provision.go:177] copyRemoteCerts
	I0918 20:55:09.278726   57636 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0918 20:55:09.278748   57636 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHHostname
	I0918 20:55:09.281784   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 20:55:09.282111   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a7:11", ip: ""} in network mk-old-k8s-version-740194: {Iface:virbr3 ExpiryTime:2024-09-18 21:54:58 +0000 UTC Type:0 Mac:52:54:00:3f:a7:11 Iaid: IPaddr:192.168.72.53 Prefix:24 Hostname:old-k8s-version-740194 Clientid:01:52:54:00:3f:a7:11}
	I0918 20:55:09.282146   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined IP address 192.168.72.53 and MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 20:55:09.282332   57636 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHPort
	I0918 20:55:09.282609   57636 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHKeyPath
	I0918 20:55:09.282793   57636 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHUsername
	I0918 20:55:09.282936   57636 sshutil.go:53] new ssh client: &{IP:192.168.72.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/old-k8s-version-740194/id_rsa Username:docker}
	I0918 20:55:09.366798   57636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0918 20:55:09.392263   57636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0918 20:55:09.417073   57636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0918 20:55:09.441344   57636 provision.go:87] duration metric: took 340.817597ms to configureAuth
	I0918 20:55:09.441380   57636 buildroot.go:189] setting minikube options for container-runtime
	I0918 20:55:09.441559   57636 config.go:182] Loaded profile config "old-k8s-version-740194": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0918 20:55:09.441646   57636 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHHostname
	I0918 20:55:09.444803   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 20:55:09.445187   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a7:11", ip: ""} in network mk-old-k8s-version-740194: {Iface:virbr3 ExpiryTime:2024-09-18 21:54:58 +0000 UTC Type:0 Mac:52:54:00:3f:a7:11 Iaid: IPaddr:192.168.72.53 Prefix:24 Hostname:old-k8s-version-740194 Clientid:01:52:54:00:3f:a7:11}
	I0918 20:55:09.445219   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined IP address 192.168.72.53 and MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 20:55:09.445405   57636 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHPort
	I0918 20:55:09.445884   57636 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHKeyPath
	I0918 20:55:09.447554   57636 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHKeyPath
	I0918 20:55:09.447849   57636 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHUsername
	I0918 20:55:09.448101   57636 main.go:141] libmachine: Using SSH client type: native
	I0918 20:55:09.448272   57636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.53 22 <nil> <nil>}
	I0918 20:55:09.448287   57636 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0918 20:55:09.682989   57636 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0918 20:55:09.683023   57636 main.go:141] libmachine: Checking connection to Docker...
	I0918 20:55:09.683031   57636 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetURL
	I0918 20:55:09.684566   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG | Using libvirt version 6000000
	I0918 20:55:09.687010   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 20:55:09.687327   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a7:11", ip: ""} in network mk-old-k8s-version-740194: {Iface:virbr3 ExpiryTime:2024-09-18 21:54:58 +0000 UTC Type:0 Mac:52:54:00:3f:a7:11 Iaid: IPaddr:192.168.72.53 Prefix:24 Hostname:old-k8s-version-740194 Clientid:01:52:54:00:3f:a7:11}
	I0918 20:55:09.687353   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined IP address 192.168.72.53 and MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 20:55:09.687721   57636 main.go:141] libmachine: Docker is up and running!
	I0918 20:55:09.687738   57636 main.go:141] libmachine: Reticulating splines...
	I0918 20:55:09.687745   57636 client.go:171] duration metric: took 25.653131026s to LocalClient.Create
	I0918 20:55:09.687778   57636 start.go:167] duration metric: took 25.653216108s to libmachine.API.Create "old-k8s-version-740194"
	I0918 20:55:09.687792   57636 start.go:293] postStartSetup for "old-k8s-version-740194" (driver="kvm2")
	I0918 20:55:09.687810   57636 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0918 20:55:09.687836   57636 main.go:141] libmachine: (old-k8s-version-740194) Calling .DriverName
	I0918 20:55:09.688117   57636 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0918 20:55:09.688145   57636 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHHostname
	I0918 20:55:09.690471   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 20:55:09.690832   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a7:11", ip: ""} in network mk-old-k8s-version-740194: {Iface:virbr3 ExpiryTime:2024-09-18 21:54:58 +0000 UTC Type:0 Mac:52:54:00:3f:a7:11 Iaid: IPaddr:192.168.72.53 Prefix:24 Hostname:old-k8s-version-740194 Clientid:01:52:54:00:3f:a7:11}
	I0918 20:55:09.690863   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined IP address 192.168.72.53 and MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 20:55:09.690993   57636 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHPort
	I0918 20:55:09.691180   57636 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHKeyPath
	I0918 20:55:09.691381   57636 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHUsername
	I0918 20:55:09.691507   57636 sshutil.go:53] new ssh client: &{IP:192.168.72.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/old-k8s-version-740194/id_rsa Username:docker}
	I0918 20:55:09.774814   57636 ssh_runner.go:195] Run: cat /etc/os-release
	I0918 20:55:09.779012   57636 info.go:137] Remote host: Buildroot 2023.02.9
	I0918 20:55:09.779038   57636 filesync.go:126] Scanning /home/jenkins/minikube-integration/19667-7671/.minikube/addons for local assets ...
	I0918 20:55:09.779109   57636 filesync.go:126] Scanning /home/jenkins/minikube-integration/19667-7671/.minikube/files for local assets ...
	I0918 20:55:09.779217   57636 filesync.go:149] local asset: /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem -> 148782.pem in /etc/ssl/certs
	I0918 20:55:09.779353   57636 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0918 20:55:09.789687   57636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem --> /etc/ssl/certs/148782.pem (1708 bytes)
	I0918 20:55:09.814308   57636 start.go:296] duration metric: took 126.497475ms for postStartSetup
	I0918 20:55:09.814377   57636 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetConfigRaw
	I0918 20:55:09.814993   57636 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetIP
	I0918 20:55:09.817812   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 20:55:09.818126   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a7:11", ip: ""} in network mk-old-k8s-version-740194: {Iface:virbr3 ExpiryTime:2024-09-18 21:54:58 +0000 UTC Type:0 Mac:52:54:00:3f:a7:11 Iaid: IPaddr:192.168.72.53 Prefix:24 Hostname:old-k8s-version-740194 Clientid:01:52:54:00:3f:a7:11}
	I0918 20:55:09.818153   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined IP address 192.168.72.53 and MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 20:55:09.818428   57636 profile.go:143] Saving config to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/old-k8s-version-740194/config.json ...
	I0918 20:55:09.818694   57636 start.go:128] duration metric: took 25.80556582s to createHost
	I0918 20:55:09.818747   57636 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHHostname
	I0918 20:55:09.821252   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 20:55:09.821606   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a7:11", ip: ""} in network mk-old-k8s-version-740194: {Iface:virbr3 ExpiryTime:2024-09-18 21:54:58 +0000 UTC Type:0 Mac:52:54:00:3f:a7:11 Iaid: IPaddr:192.168.72.53 Prefix:24 Hostname:old-k8s-version-740194 Clientid:01:52:54:00:3f:a7:11}
	I0918 20:55:09.821649   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined IP address 192.168.72.53 and MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 20:55:09.821820   57636 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHPort
	I0918 20:55:09.821997   57636 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHKeyPath
	I0918 20:55:09.822150   57636 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHKeyPath
	I0918 20:55:09.822286   57636 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHUsername
	I0918 20:55:09.822439   57636 main.go:141] libmachine: Using SSH client type: native
	I0918 20:55:09.822606   57636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.53 22 <nil> <nil>}
	I0918 20:55:09.822619   57636 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0918 20:55:09.932740   57636 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726692909.909985434
	
	I0918 20:55:09.932772   57636 fix.go:216] guest clock: 1726692909.909985434
	I0918 20:55:09.932781   57636 fix.go:229] Guest: 2024-09-18 20:55:09.909985434 +0000 UTC Remote: 2024-09-18 20:55:09.818706944 +0000 UTC m=+44.644157749 (delta=91.27849ms)
	I0918 20:55:09.932813   57636 fix.go:200] guest clock delta is within tolerance: 91.27849ms
	I0918 20:55:09.932817   57636 start.go:83] releasing machines lock for "old-k8s-version-740194", held for 25.919872088s
	I0918 20:55:09.932840   57636 main.go:141] libmachine: (old-k8s-version-740194) Calling .DriverName
	I0918 20:55:09.933145   57636 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetIP
	I0918 20:55:09.936372   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 20:55:09.936826   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a7:11", ip: ""} in network mk-old-k8s-version-740194: {Iface:virbr3 ExpiryTime:2024-09-18 21:54:58 +0000 UTC Type:0 Mac:52:54:00:3f:a7:11 Iaid: IPaddr:192.168.72.53 Prefix:24 Hostname:old-k8s-version-740194 Clientid:01:52:54:00:3f:a7:11}
	I0918 20:55:09.936856   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined IP address 192.168.72.53 and MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 20:55:09.937012   57636 main.go:141] libmachine: (old-k8s-version-740194) Calling .DriverName
	I0918 20:55:09.937568   57636 main.go:141] libmachine: (old-k8s-version-740194) Calling .DriverName
	I0918 20:55:09.937775   57636 main.go:141] libmachine: (old-k8s-version-740194) Calling .DriverName
	I0918 20:55:09.937842   57636 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0918 20:55:09.937894   57636 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHHostname
	I0918 20:55:09.938020   57636 ssh_runner.go:195] Run: cat /version.json
	I0918 20:55:09.938045   57636 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHHostname
	I0918 20:55:09.940641   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 20:55:09.941522   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a7:11", ip: ""} in network mk-old-k8s-version-740194: {Iface:virbr3 ExpiryTime:2024-09-18 21:54:58 +0000 UTC Type:0 Mac:52:54:00:3f:a7:11 Iaid: IPaddr:192.168.72.53 Prefix:24 Hostname:old-k8s-version-740194 Clientid:01:52:54:00:3f:a7:11}
	I0918 20:55:09.941554   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 20:55:09.941578   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined IP address 192.168.72.53 and MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 20:55:09.941922   57636 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHPort
	I0918 20:55:09.942136   57636 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHKeyPath
	I0918 20:55:09.942288   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a7:11", ip: ""} in network mk-old-k8s-version-740194: {Iface:virbr3 ExpiryTime:2024-09-18 21:54:58 +0000 UTC Type:0 Mac:52:54:00:3f:a7:11 Iaid: IPaddr:192.168.72.53 Prefix:24 Hostname:old-k8s-version-740194 Clientid:01:52:54:00:3f:a7:11}
	I0918 20:55:09.942289   57636 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHUsername
	I0918 20:55:09.942312   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined IP address 192.168.72.53 and MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 20:55:09.942501   57636 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHPort
	I0918 20:55:09.942501   57636 sshutil.go:53] new ssh client: &{IP:192.168.72.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/old-k8s-version-740194/id_rsa Username:docker}
	I0918 20:55:09.942705   57636 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHKeyPath
	I0918 20:55:09.942862   57636 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHUsername
	I0918 20:55:09.942987   57636 sshutil.go:53] new ssh client: &{IP:192.168.72.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/old-k8s-version-740194/id_rsa Username:docker}
	I0918 20:55:10.054479   57636 ssh_runner.go:195] Run: systemctl --version
	I0918 20:55:10.060868   57636 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0918 20:55:10.226147   57636 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0918 20:55:10.232494   57636 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0918 20:55:10.232573   57636 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0918 20:55:10.250626   57636 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0918 20:55:10.250658   57636 start.go:495] detecting cgroup driver to use...
	I0918 20:55:10.250721   57636 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0918 20:55:10.266867   57636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0918 20:55:10.282481   57636 docker.go:217] disabling cri-docker service (if available) ...
	I0918 20:55:10.282548   57636 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0918 20:55:10.297209   57636 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0918 20:55:10.311601   57636 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0918 20:55:10.423924   57636 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0918 20:55:10.588448   57636 docker.go:233] disabling docker service ...
	I0918 20:55:10.588541   57636 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0918 20:55:10.603979   57636 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0918 20:55:10.619540   57636 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0918 20:55:10.762470   57636 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0918 20:55:10.889355   57636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0918 20:55:10.903278   57636 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0918 20:55:10.921603   57636 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0918 20:55:10.921673   57636 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 20:55:10.933104   57636 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0918 20:55:10.933180   57636 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 20:55:10.944268   57636 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 20:55:10.955712   57636 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 20:55:10.966952   57636 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0918 20:55:10.979202   57636 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0918 20:55:10.989755   57636 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0918 20:55:10.989830   57636 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0918 20:55:11.004599   57636 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0918 20:55:11.015051   57636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 20:55:11.130786   57636 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0918 20:55:11.224382   57636 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0918 20:55:11.224461   57636 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0918 20:55:11.229024   57636 start.go:563] Will wait 60s for crictl version
	I0918 20:55:11.229082   57636 ssh_runner.go:195] Run: which crictl
	I0918 20:55:11.232661   57636 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0918 20:55:11.268431   57636 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0918 20:55:11.268519   57636 ssh_runner.go:195] Run: crio --version
	I0918 20:55:11.295080   57636 ssh_runner.go:195] Run: crio --version
	I0918 20:55:11.324593   57636 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0918 20:55:11.326125   57636 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetIP
	I0918 20:55:11.329075   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 20:55:11.329447   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a7:11", ip: ""} in network mk-old-k8s-version-740194: {Iface:virbr3 ExpiryTime:2024-09-18 21:54:58 +0000 UTC Type:0 Mac:52:54:00:3f:a7:11 Iaid: IPaddr:192.168.72.53 Prefix:24 Hostname:old-k8s-version-740194 Clientid:01:52:54:00:3f:a7:11}
	I0918 20:55:11.329477   57636 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined IP address 192.168.72.53 and MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 20:55:11.329722   57636 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0918 20:55:11.333758   57636 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0918 20:55:11.345820   57636 kubeadm.go:883] updating cluster {Name:old-k8s-version-740194 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-740194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.53 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0918 20:55:11.345941   57636 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0918 20:55:11.345994   57636 ssh_runner.go:195] Run: sudo crictl images --output json
	I0918 20:55:11.377097   57636 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0918 20:55:11.377175   57636 ssh_runner.go:195] Run: which lz4
	I0918 20:55:11.380770   57636 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0918 20:55:11.384664   57636 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0918 20:55:11.384697   57636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0918 20:55:12.923532   57636 crio.go:462] duration metric: took 1.542787572s to copy over tarball
	I0918 20:55:12.923606   57636 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0918 20:55:15.493562   57636 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.569925133s)
	I0918 20:55:15.493611   57636 crio.go:469] duration metric: took 2.57005265s to extract the tarball
	I0918 20:55:15.493621   57636 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0918 20:55:15.535424   57636 ssh_runner.go:195] Run: sudo crictl images --output json
	I0918 20:55:15.584361   57636 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0918 20:55:15.584389   57636 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0918 20:55:15.584456   57636 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 20:55:15.584477   57636 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0918 20:55:15.584490   57636 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0918 20:55:15.584525   57636 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0918 20:55:15.584548   57636 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0918 20:55:15.584572   57636 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0918 20:55:15.584476   57636 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0918 20:55:15.584455   57636 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0918 20:55:15.585878   57636 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0918 20:55:15.585914   57636 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0918 20:55:15.585878   57636 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0918 20:55:15.585972   57636 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0918 20:55:15.586004   57636 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 20:55:15.585879   57636 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0918 20:55:15.585882   57636 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0918 20:55:15.585887   57636 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0918 20:55:15.842727   57636 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0918 20:55:15.870921   57636 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0918 20:55:15.880322   57636 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0918 20:55:15.895373   57636 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0918 20:55:15.896681   57636 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0918 20:55:15.896746   57636 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0918 20:55:15.896796   57636 ssh_runner.go:195] Run: which crictl
	I0918 20:55:15.897630   57636 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0918 20:55:15.901148   57636 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0918 20:55:15.916439   57636 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0918 20:55:15.985111   57636 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0918 20:55:15.985161   57636 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0918 20:55:15.985219   57636 ssh_runner.go:195] Run: which crictl
	I0918 20:55:16.039560   57636 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0918 20:55:16.039618   57636 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0918 20:55:16.039636   57636 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0918 20:55:16.039670   57636 ssh_runner.go:195] Run: which crictl
	I0918 20:55:16.039697   57636 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0918 20:55:16.039671   57636 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0918 20:55:16.039749   57636 ssh_runner.go:195] Run: which crictl
	I0918 20:55:16.074791   57636 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0918 20:55:16.074837   57636 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0918 20:55:16.074845   57636 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0918 20:55:16.074858   57636 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0918 20:55:16.074896   57636 ssh_runner.go:195] Run: which crictl
	I0918 20:55:16.074903   57636 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0918 20:55:16.074938   57636 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0918 20:55:16.074896   57636 ssh_runner.go:195] Run: which crictl
	I0918 20:55:16.074981   57636 ssh_runner.go:195] Run: which crictl
	I0918 20:55:16.074994   57636 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0918 20:55:16.075090   57636 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0918 20:55:16.075094   57636 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0918 20:55:16.140474   57636 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0918 20:55:16.140591   57636 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0918 20:55:16.140629   57636 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0918 20:55:16.182345   57636 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0918 20:55:16.182542   57636 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0918 20:55:16.195664   57636 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0918 20:55:16.195911   57636 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0918 20:55:16.218831   57636 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0918 20:55:16.349049   57636 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0918 20:55:16.349085   57636 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0918 20:55:16.349143   57636 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0918 20:55:16.349207   57636 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0918 20:55:16.351578   57636 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0918 20:55:16.370453   57636 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0918 20:55:16.370458   57636 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0918 20:55:16.482871   57636 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0918 20:55:16.482911   57636 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0918 20:55:16.492420   57636 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0918 20:55:16.494024   57636 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0918 20:55:16.494231   57636 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0918 20:55:16.503450   57636 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0918 20:55:16.556922   57636 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0918 20:55:16.556922   57636 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0918 20:55:16.562490   57636 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0918 20:55:16.745862   57636 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 20:55:16.892118   57636 cache_images.go:92] duration metric: took 1.307712475s to LoadCachedImages
	W0918 20:55:16.892210   57636 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
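The warning above means the cached image archive for registry.k8s.io/pause:3.2 was never written under the jenkins cache directory, so LoadCachedImages gives up and kubeadm will have to pull everything from the network. A minimal sketch of how that cache could be checked and repopulated by hand, assuming the same MINIKUBE_HOME this job uses (illustrative only, not something the test performs at this point):

    # check which image archives actually exist in the shared cache
    ls -l /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/
    # repopulate the missing entry so a later start can load it from cache
    minikube cache add registry.k8s.io/pause:3.2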
	I0918 20:55:16.892229   57636 kubeadm.go:934] updating node { 192.168.72.53 8443 v1.20.0 crio true true} ...
	I0918 20:55:16.892452   57636 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-740194 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.53
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-740194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0918 20:55:16.892581   57636 ssh_runner.go:195] Run: crio config
	I0918 20:55:16.942196   57636 cni.go:84] Creating CNI manager for ""
	I0918 20:55:16.942227   57636 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0918 20:55:16.942238   57636 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0918 20:55:16.942263   57636 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.53 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-740194 NodeName:old-k8s-version-740194 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.53"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.53 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0918 20:55:16.942447   57636 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.53
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-740194"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.53
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.53"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0918 20:55:16.942527   57636 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0918 20:55:16.952779   57636 binaries.go:44] Found k8s binaries, skipping transfer
	I0918 20:55:16.952862   57636 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0918 20:55:16.962332   57636 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0918 20:55:16.981875   57636 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0918 20:55:17.004242   57636 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
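Both files written above end up on the node: the kubelet drop-in at /etc/systemd/system/kubelet.service.d/10-kubeadm.conf and the kubeadm config at /var/tmp/minikube/kubeadm.yaml.new (copied over to kubeadm.yaml shortly before init). A short sketch of pulling the rendered files back off the VM to compare them with this log, assuming the profile name from this run; these commands are illustrative and not part of the test flow:

    # inspect the rendered kubelet drop-in and kubeadm config on the node
    minikube -p old-k8s-version-740194 ssh -- sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    minikube -p old-k8s-version-740194 ssh -- sudo cat /var/tmp/minikube/kubeadm.yaml.new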
	I0918 20:55:17.022865   57636 ssh_runner.go:195] Run: grep 192.168.72.53	control-plane.minikube.internal$ /etc/hosts
	I0918 20:55:17.026842   57636 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.53	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0918 20:55:17.042388   57636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 20:55:17.171035   57636 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0918 20:55:17.189352   57636 certs.go:68] Setting up /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/old-k8s-version-740194 for IP: 192.168.72.53
	I0918 20:55:17.189381   57636 certs.go:194] generating shared ca certs ...
	I0918 20:55:17.189401   57636 certs.go:226] acquiring lock for ca certs: {Name:mkf54e0e71ed884c4372f9d3d4cb308ea4600185 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 20:55:17.189602   57636 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.key
	I0918 20:55:17.189660   57636 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.key
	I0918 20:55:17.189676   57636 certs.go:256] generating profile certs ...
	I0918 20:55:17.189746   57636 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/old-k8s-version-740194/client.key
	I0918 20:55:17.189777   57636 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/old-k8s-version-740194/client.crt with IP's: []
	I0918 20:55:17.326815   57636 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/old-k8s-version-740194/client.crt ...
	I0918 20:55:17.326849   57636 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/old-k8s-version-740194/client.crt: {Name:mkbb010070226d055a77716a97cda1707537eddc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 20:55:17.347721   57636 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/old-k8s-version-740194/client.key ...
	I0918 20:55:17.347768   57636 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/old-k8s-version-740194/client.key: {Name:mkc2b90fb15edb238d44c85d26279306419d060f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 20:55:17.347922   57636 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/old-k8s-version-740194/apiserver.key.424b07d9
	I0918 20:55:17.347947   57636 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/old-k8s-version-740194/apiserver.crt.424b07d9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.53]
	I0918 20:55:17.559343   57636 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/old-k8s-version-740194/apiserver.crt.424b07d9 ...
	I0918 20:55:17.559380   57636 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/old-k8s-version-740194/apiserver.crt.424b07d9: {Name:mke5e3e6951aaa1a48948d99429f2ec714ee1720 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 20:55:17.559571   57636 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/old-k8s-version-740194/apiserver.key.424b07d9 ...
	I0918 20:55:17.559587   57636 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/old-k8s-version-740194/apiserver.key.424b07d9: {Name:mkefc7668c386d2a7c0ea0401c6ac467c79807be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 20:55:17.559695   57636 certs.go:381] copying /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/old-k8s-version-740194/apiserver.crt.424b07d9 -> /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/old-k8s-version-740194/apiserver.crt
	I0918 20:55:17.559770   57636 certs.go:385] copying /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/old-k8s-version-740194/apiserver.key.424b07d9 -> /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/old-k8s-version-740194/apiserver.key
	I0918 20:55:17.559823   57636 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/old-k8s-version-740194/proxy-client.key
	I0918 20:55:17.559838   57636 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/old-k8s-version-740194/proxy-client.crt with IP's: []
	I0918 20:55:17.802201   57636 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/old-k8s-version-740194/proxy-client.crt ...
	I0918 20:55:17.802235   57636 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/old-k8s-version-740194/proxy-client.crt: {Name:mkb6f1dcbeee9315c20c455baab06377c3b43d5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 20:55:17.802425   57636 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/old-k8s-version-740194/proxy-client.key ...
	I0918 20:55:17.802442   57636 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/old-k8s-version-740194/proxy-client.key: {Name:mkf3d462232fdbc2cc7af80bb1ac3b6a4be0b835 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 20:55:17.802645   57636 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878.pem (1338 bytes)
	W0918 20:55:17.802699   57636 certs.go:480] ignoring /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878_empty.pem, impossibly tiny 0 bytes
	I0918 20:55:17.802712   57636 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem (1675 bytes)
	I0918 20:55:17.802741   57636 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem (1082 bytes)
	I0918 20:55:17.802766   57636 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem (1123 bytes)
	I0918 20:55:17.802787   57636 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem (1679 bytes)
	I0918 20:55:17.802831   57636 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem (1708 bytes)
	I0918 20:55:17.803581   57636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0918 20:55:17.830931   57636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0918 20:55:17.857184   57636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0918 20:55:17.884559   57636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0918 20:55:17.910628   57636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/old-k8s-version-740194/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0918 20:55:17.945495   57636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/old-k8s-version-740194/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0918 20:55:17.977502   57636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/old-k8s-version-740194/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0918 20:55:18.008922   57636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/old-k8s-version-740194/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0918 20:55:18.037316   57636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem --> /usr/share/ca-certificates/148782.pem (1708 bytes)
	I0918 20:55:18.084740   57636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0918 20:55:18.123750   57636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878.pem --> /usr/share/ca-certificates/14878.pem (1338 bytes)
	I0918 20:55:18.147967   57636 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0918 20:55:18.164614   57636 ssh_runner.go:195] Run: openssl version
	I0918 20:55:18.170999   57636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148782.pem && ln -fs /usr/share/ca-certificates/148782.pem /etc/ssl/certs/148782.pem"
	I0918 20:55:18.182029   57636 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148782.pem
	I0918 20:55:18.186747   57636 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 18 19:57 /usr/share/ca-certificates/148782.pem
	I0918 20:55:18.186826   57636 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148782.pem
	I0918 20:55:18.192865   57636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148782.pem /etc/ssl/certs/3ec20f2e.0"
	I0918 20:55:18.203725   57636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0918 20:55:18.215107   57636 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0918 20:55:18.219493   57636 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 18 19:39 /usr/share/ca-certificates/minikubeCA.pem
	I0918 20:55:18.219552   57636 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0918 20:55:18.225615   57636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0918 20:55:18.238100   57636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14878.pem && ln -fs /usr/share/ca-certificates/14878.pem /etc/ssl/certs/14878.pem"
	I0918 20:55:18.250040   57636 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14878.pem
	I0918 20:55:18.254995   57636 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 18 19:57 /usr/share/ca-certificates/14878.pem
	I0918 20:55:18.255071   57636 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14878.pem
	I0918 20:55:18.260925   57636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14878.pem /etc/ssl/certs/51391683.0"
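The openssl/ln pairs above follow the standard subject-hash (c_rehash) convention: each CA certificate that minikube places under /usr/share/ca-certificates gets a symlink in /etc/ssl/certs named <hash>.0, so TLS clients that scan that directory can find it. The same two steps done by hand look like this (the b5213941 hash is the one already visible in the log for minikubeCA.pem):

    # compute the subject hash, then create the hash-named symlink that OpenSSL expects
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0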
	I0918 20:55:18.271600   57636 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0918 20:55:18.275666   57636 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0918 20:55:18.275725   57636 kubeadm.go:392] StartCluster: {Name:old-k8s-version-740194 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-740194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.53 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 20:55:18.275805   57636 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0918 20:55:18.275861   57636 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0918 20:55:18.320938   57636 cri.go:89] found id: ""
	I0918 20:55:18.321002   57636 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0918 20:55:18.332487   57636 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0918 20:55:18.345115   57636 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0918 20:55:18.358424   57636 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0918 20:55:18.358458   57636 kubeadm.go:157] found existing configuration files:
	
	I0918 20:55:18.358511   57636 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0918 20:55:18.370163   57636 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0918 20:55:18.370240   57636 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0918 20:55:18.383434   57636 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0918 20:55:18.396260   57636 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0918 20:55:18.396349   57636 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0918 20:55:18.409721   57636 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0918 20:55:18.422421   57636 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0918 20:55:18.422491   57636 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0918 20:55:18.435647   57636 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0918 20:55:18.446133   57636 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0918 20:55:18.446200   57636 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0918 20:55:18.457811   57636 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0918 20:55:18.574170   57636 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0918 20:55:18.574286   57636 kubeadm.go:310] [preflight] Running pre-flight checks
	I0918 20:55:18.731732   57636 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0918 20:55:18.731890   57636 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0918 20:55:18.732059   57636 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0918 20:55:18.943964   57636 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0918 20:55:19.129180   57636 out.go:235]   - Generating certificates and keys ...
	I0918 20:55:19.129310   57636 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0918 20:55:19.129406   57636 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0918 20:55:19.177825   57636 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0918 20:55:19.276131   57636 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0918 20:55:19.576307   57636 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0918 20:55:20.071570   57636 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0918 20:55:20.130790   57636 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0918 20:55:20.131142   57636 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-740194] and IPs [192.168.72.53 127.0.0.1 ::1]
	I0918 20:55:20.429324   57636 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0918 20:55:20.429538   57636 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-740194] and IPs [192.168.72.53 127.0.0.1 ::1]
	I0918 20:55:20.607036   57636 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0918 20:55:20.788906   57636 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0918 20:55:20.936236   57636 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0918 20:55:20.936534   57636 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0918 20:55:21.147533   57636 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0918 20:55:21.242447   57636 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0918 20:55:21.350636   57636 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0918 20:55:21.487245   57636 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0918 20:55:21.508149   57636 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0918 20:55:21.509578   57636 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0918 20:55:21.510063   57636 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0918 20:55:21.646954   57636 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0918 20:55:21.649151   57636 out.go:235]   - Booting up control plane ...
	I0918 20:55:21.649280   57636 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0918 20:55:21.667665   57636 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0918 20:55:21.667774   57636 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0918 20:55:21.668188   57636 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0918 20:55:21.677377   57636 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0918 20:56:01.672361   57636 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0918 20:56:01.673259   57636 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0918 20:56:01.673516   57636 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0918 20:56:06.673719   57636 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0918 20:56:06.673968   57636 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0918 20:56:16.672974   57636 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0918 20:56:16.673270   57636 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0918 20:56:36.672842   57636 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0918 20:56:36.673115   57636 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0918 20:57:16.674480   57636 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0918 20:57:16.674756   57636 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0918 20:57:16.674789   57636 kubeadm.go:310] 
	I0918 20:57:16.674839   57636 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0918 20:57:16.674901   57636 kubeadm.go:310] 		timed out waiting for the condition
	I0918 20:57:16.674912   57636 kubeadm.go:310] 
	I0918 20:57:16.674973   57636 kubeadm.go:310] 	This error is likely caused by:
	I0918 20:57:16.675022   57636 kubeadm.go:310] 		- The kubelet is not running
	I0918 20:57:16.675178   57636 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0918 20:57:16.675201   57636 kubeadm.go:310] 
	I0918 20:57:16.675319   57636 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0918 20:57:16.675387   57636 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0918 20:57:16.675431   57636 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0918 20:57:16.675442   57636 kubeadm.go:310] 
	I0918 20:57:16.675542   57636 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0918 20:57:16.675621   57636 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0918 20:57:16.675631   57636 kubeadm.go:310] 
	I0918 20:57:16.675720   57636 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0918 20:57:16.675808   57636 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0918 20:57:16.675921   57636 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0918 20:57:16.675991   57636 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0918 20:57:16.675999   57636 kubeadm.go:310] 
	I0918 20:57:16.676696   57636 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0918 20:57:16.676808   57636 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0918 20:57:16.676869   57636 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0918 20:57:16.677005   57636 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-740194] and IPs [192.168.72.53 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-740194] and IPs [192.168.72.53 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-740194] and IPs [192.168.72.53 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-740194] and IPs [192.168.72.53 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
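Both init attempts fail the same way: the kubelet never answers its local health endpoint on 127.0.0.1:10248, so kubeadm's wait-control-plane phase gives up with "timed out waiting for the condition". The hints kubeadm prints can be followed on the VM through minikube; a sketch assuming the profile from this run (purely diagnostic, the harness does not run these here):

    # follow kubeadm's own troubleshooting hints on the node
    minikube -p old-k8s-version-740194 ssh -- sudo systemctl status kubelet
    minikube -p old-k8s-version-740194 ssh -- sudo journalctl -xeu kubelet --no-pager
    minikube -p old-k8s-version-740194 ssh -- sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a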
	I0918 20:57:16.677049   57636 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0918 20:57:18.660716   57636 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.983637838s)
	I0918 20:57:18.660808   57636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0918 20:57:18.676937   57636 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0918 20:57:18.690993   57636 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0918 20:57:18.691017   57636 kubeadm.go:157] found existing configuration files:
	
	I0918 20:57:18.691072   57636 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0918 20:57:18.705235   57636 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0918 20:57:18.705321   57636 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0918 20:57:18.718804   57636 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0918 20:57:18.732166   57636 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0918 20:57:18.732241   57636 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0918 20:57:18.745577   57636 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0918 20:57:18.758624   57636 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0918 20:57:18.758695   57636 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0918 20:57:18.769185   57636 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0918 20:57:18.782607   57636 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0918 20:57:18.782679   57636 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0918 20:57:18.792849   57636 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0918 20:57:18.869283   57636 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0918 20:57:18.869396   57636 kubeadm.go:310] [preflight] Running pre-flight checks
	I0918 20:57:19.032714   57636 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0918 20:57:19.032891   57636 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0918 20:57:19.033041   57636 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0918 20:57:19.230705   57636 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0918 20:57:19.232667   57636 out.go:235]   - Generating certificates and keys ...
	I0918 20:57:19.232774   57636 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0918 20:57:19.232859   57636 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0918 20:57:19.232975   57636 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0918 20:57:19.233082   57636 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0918 20:57:19.233188   57636 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0918 20:57:19.233287   57636 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0918 20:57:19.233374   57636 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0918 20:57:19.233470   57636 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0918 20:57:19.233596   57636 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0918 20:57:19.233712   57636 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0918 20:57:19.233772   57636 kubeadm.go:310] [certs] Using the existing "sa" key
	I0918 20:57:19.233846   57636 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0918 20:57:19.285829   57636 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0918 20:57:19.352792   57636 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0918 20:57:19.509110   57636 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0918 20:57:19.807064   57636 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0918 20:57:19.824364   57636 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0918 20:57:19.825647   57636 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0918 20:57:19.825728   57636 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0918 20:57:19.988597   57636 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0918 20:57:19.990336   57636 out.go:235]   - Booting up control plane ...
	I0918 20:57:19.990479   57636 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0918 20:57:19.997517   57636 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0918 20:57:19.998636   57636 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0918 20:57:19.999389   57636 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0918 20:57:20.001591   57636 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0918 20:58:00.004643   57636 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0918 20:58:00.004746   57636 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0918 20:58:00.004934   57636 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0918 20:58:05.005433   57636 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0918 20:58:05.005724   57636 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0918 20:58:15.007364   57636 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0918 20:58:15.010753   57636 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0918 20:58:35.007799   57636 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0918 20:58:35.008116   57636 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0918 20:59:15.007913   57636 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0918 20:59:15.008115   57636 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0918 20:59:15.008157   57636 kubeadm.go:310] 
	I0918 20:59:15.008254   57636 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0918 20:59:15.008330   57636 kubeadm.go:310] 		timed out waiting for the condition
	I0918 20:59:15.008346   57636 kubeadm.go:310] 
	I0918 20:59:15.008387   57636 kubeadm.go:310] 	This error is likely caused by:
	I0918 20:59:15.008453   57636 kubeadm.go:310] 		- The kubelet is not running
	I0918 20:59:15.008587   57636 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0918 20:59:15.008610   57636 kubeadm.go:310] 
	I0918 20:59:15.008755   57636 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0918 20:59:15.008805   57636 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0918 20:59:15.008853   57636 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0918 20:59:15.008863   57636 kubeadm.go:310] 
	I0918 20:59:15.009005   57636 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0918 20:59:15.009107   57636 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0918 20:59:15.009119   57636 kubeadm.go:310] 
	I0918 20:59:15.009254   57636 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0918 20:59:15.009362   57636 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0918 20:59:15.009470   57636 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0918 20:59:15.009588   57636 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0918 20:59:15.009606   57636 kubeadm.go:310] 
	I0918 20:59:15.010021   57636 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0918 20:59:15.010118   57636 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0918 20:59:15.010192   57636 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0918 20:59:15.010264   57636 kubeadm.go:394] duration metric: took 3m56.73454432s to StartCluster
	I0918 20:59:15.010317   57636 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 20:59:15.010370   57636 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 20:59:15.051228   57636 cri.go:89] found id: ""
	I0918 20:59:15.051256   57636 logs.go:276] 0 containers: []
	W0918 20:59:15.051264   57636 logs.go:278] No container was found matching "kube-apiserver"
	I0918 20:59:15.051271   57636 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 20:59:15.051325   57636 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 20:59:15.095700   57636 cri.go:89] found id: ""
	I0918 20:59:15.095734   57636 logs.go:276] 0 containers: []
	W0918 20:59:15.095747   57636 logs.go:278] No container was found matching "etcd"
	I0918 20:59:15.095755   57636 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 20:59:15.095822   57636 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 20:59:15.130172   57636 cri.go:89] found id: ""
	I0918 20:59:15.130205   57636 logs.go:276] 0 containers: []
	W0918 20:59:15.130213   57636 logs.go:278] No container was found matching "coredns"
	I0918 20:59:15.130219   57636 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 20:59:15.130268   57636 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 20:59:15.162779   57636 cri.go:89] found id: ""
	I0918 20:59:15.162813   57636 logs.go:276] 0 containers: []
	W0918 20:59:15.162828   57636 logs.go:278] No container was found matching "kube-scheduler"
	I0918 20:59:15.162836   57636 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 20:59:15.162895   57636 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 20:59:15.195904   57636 cri.go:89] found id: ""
	I0918 20:59:15.195934   57636 logs.go:276] 0 containers: []
	W0918 20:59:15.195944   57636 logs.go:278] No container was found matching "kube-proxy"
	I0918 20:59:15.195952   57636 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 20:59:15.196008   57636 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 20:59:15.236095   57636 cri.go:89] found id: ""
	I0918 20:59:15.236126   57636 logs.go:276] 0 containers: []
	W0918 20:59:15.236135   57636 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 20:59:15.236141   57636 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 20:59:15.236191   57636 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 20:59:15.271718   57636 cri.go:89] found id: ""
	I0918 20:59:15.271743   57636 logs.go:276] 0 containers: []
	W0918 20:59:15.271751   57636 logs.go:278] No container was found matching "kindnet"
	I0918 20:59:15.271766   57636 logs.go:123] Gathering logs for kubelet ...
	I0918 20:59:15.271777   57636 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 20:59:15.325280   57636 logs.go:123] Gathering logs for dmesg ...
	I0918 20:59:15.325334   57636 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 20:59:15.344079   57636 logs.go:123] Gathering logs for describe nodes ...
	I0918 20:59:15.344111   57636 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 20:59:15.467386   57636 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 20:59:15.467411   57636 logs.go:123] Gathering logs for CRI-O ...
	I0918 20:59:15.467432   57636 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 20:59:15.567448   57636 logs.go:123] Gathering logs for container status ...
	I0918 20:59:15.567492   57636 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0918 20:59:15.605905   57636 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0918 20:59:15.605960   57636 out.go:270] * 
	* 
	W0918 20:59:15.606010   57636 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0918 20:59:15.606022   57636 out.go:270] * 
	* 
	W0918 20:59:15.606922   57636 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0918 20:59:15.609965   57636 out.go:201] 
	W0918 20:59:15.611055   57636 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0918 20:59:15.611113   57636 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0918 20:59:15.611158   57636 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0918 20:59:15.612879   57636 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-740194 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-740194 -n old-k8s-version-740194
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-740194 -n old-k8s-version-740194: exit status 6 (234.497565ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0918 20:59:15.899755   60998 status.go:451] kubeconfig endpoint: get endpoint: "old-k8s-version-740194" does not appear in /home/jenkins/minikube-integration/19667-7671/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-740194" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (290.74s)
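Note: the failure above is the kubelet never answering its /healthz check during 'kubeadm init', so the control-plane static pods are never created. As a minimal follow-up sketch (not part of the recorded test run), the commands that the kubeadm hints and the minikube suggestion above already name could be run against this profile; the profile name old-k8s-version-740194, the binary path out/minikube-linux-amd64, and the start flags are taken from the failing invocation above, while the use of 'minikube ssh' to reach the VM is an assumption:

	# Inspect the kubelet inside the VM (commands quoted from the kubeadm hints above)
	minikube -p old-k8s-version-740194 ssh -- sudo systemctl status kubelet
	minikube -p old-k8s-version-740194 ssh -- sudo journalctl -xeu kubelet --no-pager | tail -n 100
	# Check whether CRI-O started any control-plane containers at all
	minikube -p old-k8s-version-740194 ssh -- "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
	# Retry the start with the cgroup driver named in the suggestion
	out/minikube-linux-amd64 start -p old-k8s-version-740194 --driver=kvm2 --container-runtime=crio \
	  --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd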

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (91.86s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-543700 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-543700 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m27.860995205s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-543700] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19667
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19667-7671/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19667-7671/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-543700" primary control-plane node in "pause-543700" cluster
	* Updating the running kvm2 "pause-543700" VM ...
	* Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	* Done! kubectl is now configured to use "pause-543700" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I0918 20:54:30.838891   57703 out.go:345] Setting OutFile to fd 1 ...
	I0918 20:54:30.839159   57703 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 20:54:30.839168   57703 out.go:358] Setting ErrFile to fd 2...
	I0918 20:54:30.839188   57703 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 20:54:30.839372   57703 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19667-7671/.minikube/bin
	I0918 20:54:30.839979   57703 out.go:352] Setting JSON to false
	I0918 20:54:30.841019   57703 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5815,"bootTime":1726687056,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0918 20:54:30.841130   57703 start.go:139] virtualization: kvm guest
	I0918 20:54:30.843422   57703 out.go:177] * [pause-543700] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0918 20:54:30.844720   57703 out.go:177]   - MINIKUBE_LOCATION=19667
	I0918 20:54:30.844783   57703 notify.go:220] Checking for updates...
	I0918 20:54:30.847249   57703 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 20:54:30.848404   57703 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19667-7671/kubeconfig
	I0918 20:54:30.850033   57703 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19667-7671/.minikube
	I0918 20:54:30.851203   57703 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0918 20:54:30.852271   57703 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0918 20:54:30.854017   57703 config.go:182] Loaded profile config "pause-543700": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0918 20:54:30.854446   57703 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 20:54:30.854511   57703 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 20:54:30.869982   57703 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44359
	I0918 20:54:30.870483   57703 main.go:141] libmachine: () Calling .GetVersion
	I0918 20:54:30.871049   57703 main.go:141] libmachine: Using API Version  1
	I0918 20:54:30.871078   57703 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 20:54:30.871440   57703 main.go:141] libmachine: () Calling .GetMachineName
	I0918 20:54:30.871614   57703 main.go:141] libmachine: (pause-543700) Calling .DriverName
	I0918 20:54:30.871861   57703 driver.go:394] Setting default libvirt URI to qemu:///system
	I0918 20:54:30.872239   57703 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 20:54:30.872290   57703 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 20:54:30.887553   57703 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43161
	I0918 20:54:30.887985   57703 main.go:141] libmachine: () Calling .GetVersion
	I0918 20:54:30.888534   57703 main.go:141] libmachine: Using API Version  1
	I0918 20:54:30.888566   57703 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 20:54:30.888890   57703 main.go:141] libmachine: () Calling .GetMachineName
	I0918 20:54:30.889100   57703 main.go:141] libmachine: (pause-543700) Calling .DriverName
	I0918 20:54:30.925235   57703 out.go:177] * Using the kvm2 driver based on existing profile
	I0918 20:54:30.926562   57703 start.go:297] selected driver: kvm2
	I0918 20:54:30.926587   57703 start.go:901] validating driver "kvm2" against &{Name:pause-543700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:pause-543700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.184 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 20:54:30.926788   57703 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 20:54:30.927257   57703 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 20:54:30.927371   57703 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19667-7671/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0918 20:54:30.943849   57703 install.go:137] /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I0918 20:54:30.944666   57703 cni.go:84] Creating CNI manager for ""
	I0918 20:54:30.944721   57703 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0918 20:54:30.944783   57703 start.go:340] cluster config:
	{Name:pause-543700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:pause-543700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.184 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 20:54:30.944918   57703 iso.go:125] acquiring lock: {Name:mkbcf4d84f3091567a39802cd5e773cb10b8c545 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 20:54:30.946819   57703 out.go:177] * Starting "pause-543700" primary control-plane node in "pause-543700" cluster
	I0918 20:54:30.948369   57703 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0918 20:54:30.948424   57703 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0918 20:54:30.948435   57703 cache.go:56] Caching tarball of preloaded images
	I0918 20:54:30.948553   57703 preload.go:172] Found /home/jenkins/minikube-integration/19667-7671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0918 20:54:30.948568   57703 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0918 20:54:30.948688   57703 profile.go:143] Saving config to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/pause-543700/config.json ...
	I0918 20:54:30.948916   57703 start.go:360] acquireMachinesLock for pause-543700: {Name:mk1b0d68a0b7d9d4bc8204d5d81cb9eb77222526 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 20:55:09.932900   57703 start.go:364] duration metric: took 38.983941572s to acquireMachinesLock for "pause-543700"
	I0918 20:55:09.932955   57703 start.go:96] Skipping create...Using existing machine configuration
	I0918 20:55:09.932966   57703 fix.go:54] fixHost starting: 
	I0918 20:55:09.933368   57703 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 20:55:09.933421   57703 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 20:55:09.953279   57703 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40521
	I0918 20:55:09.953717   57703 main.go:141] libmachine: () Calling .GetVersion
	I0918 20:55:09.954226   57703 main.go:141] libmachine: Using API Version  1
	I0918 20:55:09.954245   57703 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 20:55:09.954673   57703 main.go:141] libmachine: () Calling .GetMachineName
	I0918 20:55:09.954963   57703 main.go:141] libmachine: (pause-543700) Calling .DriverName
	I0918 20:55:09.955170   57703 main.go:141] libmachine: (pause-543700) Calling .GetState
	I0918 20:55:09.957064   57703 fix.go:112] recreateIfNeeded on pause-543700: state=Running err=<nil>
	W0918 20:55:09.957099   57703 fix.go:138] unexpected machine state, will restart: <nil>
	I0918 20:55:09.959524   57703 out.go:177] * Updating the running kvm2 "pause-543700" VM ...
	I0918 20:55:09.961021   57703 machine.go:93] provisionDockerMachine start ...
	I0918 20:55:09.961062   57703 main.go:141] libmachine: (pause-543700) Calling .DriverName
	I0918 20:55:09.961369   57703 main.go:141] libmachine: (pause-543700) Calling .GetSSHHostname
	I0918 20:55:09.964163   57703 main.go:141] libmachine: (pause-543700) DBG | domain pause-543700 has defined MAC address 52:54:00:d2:f3:81 in network mk-pause-543700
	I0918 20:55:09.964762   57703 main.go:141] libmachine: (pause-543700) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:f3:81", ip: ""} in network mk-pause-543700: {Iface:virbr1 ExpiryTime:2024-09-18 21:53:20 +0000 UTC Type:0 Mac:52:54:00:d2:f3:81 Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:pause-543700 Clientid:01:52:54:00:d2:f3:81}
	I0918 20:55:09.964784   57703 main.go:141] libmachine: (pause-543700) DBG | domain pause-543700 has defined IP address 192.168.39.184 and MAC address 52:54:00:d2:f3:81 in network mk-pause-543700
	I0918 20:55:09.964937   57703 main.go:141] libmachine: (pause-543700) Calling .GetSSHPort
	I0918 20:55:09.965157   57703 main.go:141] libmachine: (pause-543700) Calling .GetSSHKeyPath
	I0918 20:55:09.965355   57703 main.go:141] libmachine: (pause-543700) Calling .GetSSHKeyPath
	I0918 20:55:09.965568   57703 main.go:141] libmachine: (pause-543700) Calling .GetSSHUsername
	I0918 20:55:09.965769   57703 main.go:141] libmachine: Using SSH client type: native
	I0918 20:55:09.965962   57703 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.184 22 <nil> <nil>}
	I0918 20:55:09.965974   57703 main.go:141] libmachine: About to run SSH command:
	hostname
	I0918 20:55:10.081950   57703 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-543700
	
	I0918 20:55:10.081981   57703 main.go:141] libmachine: (pause-543700) Calling .GetMachineName
	I0918 20:55:10.082257   57703 buildroot.go:166] provisioning hostname "pause-543700"
	I0918 20:55:10.082284   57703 main.go:141] libmachine: (pause-543700) Calling .GetMachineName
	I0918 20:55:10.082493   57703 main.go:141] libmachine: (pause-543700) Calling .GetSSHHostname
	I0918 20:55:10.085604   57703 main.go:141] libmachine: (pause-543700) DBG | domain pause-543700 has defined MAC address 52:54:00:d2:f3:81 in network mk-pause-543700
	I0918 20:55:10.085974   57703 main.go:141] libmachine: (pause-543700) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:f3:81", ip: ""} in network mk-pause-543700: {Iface:virbr1 ExpiryTime:2024-09-18 21:53:20 +0000 UTC Type:0 Mac:52:54:00:d2:f3:81 Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:pause-543700 Clientid:01:52:54:00:d2:f3:81}
	I0918 20:55:10.086010   57703 main.go:141] libmachine: (pause-543700) DBG | domain pause-543700 has defined IP address 192.168.39.184 and MAC address 52:54:00:d2:f3:81 in network mk-pause-543700
	I0918 20:55:10.086232   57703 main.go:141] libmachine: (pause-543700) Calling .GetSSHPort
	I0918 20:55:10.086458   57703 main.go:141] libmachine: (pause-543700) Calling .GetSSHKeyPath
	I0918 20:55:10.086677   57703 main.go:141] libmachine: (pause-543700) Calling .GetSSHKeyPath
	I0918 20:55:10.086829   57703 main.go:141] libmachine: (pause-543700) Calling .GetSSHUsername
	I0918 20:55:10.087068   57703 main.go:141] libmachine: Using SSH client type: native
	I0918 20:55:10.087286   57703 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.184 22 <nil> <nil>}
	I0918 20:55:10.087305   57703 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-543700 && echo "pause-543700" | sudo tee /etc/hostname
	I0918 20:55:10.218787   57703 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-543700
	
	I0918 20:55:10.218817   57703 main.go:141] libmachine: (pause-543700) Calling .GetSSHHostname
	I0918 20:55:10.221976   57703 main.go:141] libmachine: (pause-543700) DBG | domain pause-543700 has defined MAC address 52:54:00:d2:f3:81 in network mk-pause-543700
	I0918 20:55:10.222678   57703 main.go:141] libmachine: (pause-543700) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:f3:81", ip: ""} in network mk-pause-543700: {Iface:virbr1 ExpiryTime:2024-09-18 21:53:20 +0000 UTC Type:0 Mac:52:54:00:d2:f3:81 Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:pause-543700 Clientid:01:52:54:00:d2:f3:81}
	I0918 20:55:10.222707   57703 main.go:141] libmachine: (pause-543700) DBG | domain pause-543700 has defined IP address 192.168.39.184 and MAC address 52:54:00:d2:f3:81 in network mk-pause-543700
	I0918 20:55:10.222927   57703 main.go:141] libmachine: (pause-543700) Calling .GetSSHPort
	I0918 20:55:10.223203   57703 main.go:141] libmachine: (pause-543700) Calling .GetSSHKeyPath
	I0918 20:55:10.223427   57703 main.go:141] libmachine: (pause-543700) Calling .GetSSHKeyPath
	I0918 20:55:10.223590   57703 main.go:141] libmachine: (pause-543700) Calling .GetSSHUsername
	I0918 20:55:10.223858   57703 main.go:141] libmachine: Using SSH client type: native
	I0918 20:55:10.224067   57703 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.184 22 <nil> <nil>}
	I0918 20:55:10.224093   57703 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-543700' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-543700/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-543700' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0918 20:55:10.337271   57703 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0918 20:55:10.337304   57703 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19667-7671/.minikube CaCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19667-7671/.minikube}
	I0918 20:55:10.337405   57703 buildroot.go:174] setting up certificates
	I0918 20:55:10.337418   57703 provision.go:84] configureAuth start
	I0918 20:55:10.337435   57703 main.go:141] libmachine: (pause-543700) Calling .GetMachineName
	I0918 20:55:10.337733   57703 main.go:141] libmachine: (pause-543700) Calling .GetIP
	I0918 20:55:10.340980   57703 main.go:141] libmachine: (pause-543700) DBG | domain pause-543700 has defined MAC address 52:54:00:d2:f3:81 in network mk-pause-543700
	I0918 20:55:10.341389   57703 main.go:141] libmachine: (pause-543700) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:f3:81", ip: ""} in network mk-pause-543700: {Iface:virbr1 ExpiryTime:2024-09-18 21:53:20 +0000 UTC Type:0 Mac:52:54:00:d2:f3:81 Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:pause-543700 Clientid:01:52:54:00:d2:f3:81}
	I0918 20:55:10.341424   57703 main.go:141] libmachine: (pause-543700) DBG | domain pause-543700 has defined IP address 192.168.39.184 and MAC address 52:54:00:d2:f3:81 in network mk-pause-543700
	I0918 20:55:10.341620   57703 main.go:141] libmachine: (pause-543700) Calling .GetSSHHostname
	I0918 20:55:10.344557   57703 main.go:141] libmachine: (pause-543700) DBG | domain pause-543700 has defined MAC address 52:54:00:d2:f3:81 in network mk-pause-543700
	I0918 20:55:10.344959   57703 main.go:141] libmachine: (pause-543700) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:f3:81", ip: ""} in network mk-pause-543700: {Iface:virbr1 ExpiryTime:2024-09-18 21:53:20 +0000 UTC Type:0 Mac:52:54:00:d2:f3:81 Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:pause-543700 Clientid:01:52:54:00:d2:f3:81}
	I0918 20:55:10.344987   57703 main.go:141] libmachine: (pause-543700) DBG | domain pause-543700 has defined IP address 192.168.39.184 and MAC address 52:54:00:d2:f3:81 in network mk-pause-543700
	I0918 20:55:10.345128   57703 provision.go:143] copyHostCerts
	I0918 20:55:10.345212   57703 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem, removing ...
	I0918 20:55:10.345226   57703 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem
	I0918 20:55:10.345294   57703 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem (1082 bytes)
	I0918 20:55:10.345415   57703 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem, removing ...
	I0918 20:55:10.345425   57703 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem
	I0918 20:55:10.345458   57703 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem (1123 bytes)
	I0918 20:55:10.345540   57703 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem, removing ...
	I0918 20:55:10.345550   57703 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem
	I0918 20:55:10.345581   57703 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem (1679 bytes)
	I0918 20:55:10.345648   57703 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem org=jenkins.pause-543700 san=[127.0.0.1 192.168.39.184 localhost minikube pause-543700]
	I0918 20:55:10.425893   57703 provision.go:177] copyRemoteCerts
	I0918 20:55:10.425943   57703 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0918 20:55:10.425966   57703 main.go:141] libmachine: (pause-543700) Calling .GetSSHHostname
	I0918 20:55:10.428742   57703 main.go:141] libmachine: (pause-543700) DBG | domain pause-543700 has defined MAC address 52:54:00:d2:f3:81 in network mk-pause-543700
	I0918 20:55:10.429137   57703 main.go:141] libmachine: (pause-543700) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:f3:81", ip: ""} in network mk-pause-543700: {Iface:virbr1 ExpiryTime:2024-09-18 21:53:20 +0000 UTC Type:0 Mac:52:54:00:d2:f3:81 Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:pause-543700 Clientid:01:52:54:00:d2:f3:81}
	I0918 20:55:10.429173   57703 main.go:141] libmachine: (pause-543700) DBG | domain pause-543700 has defined IP address 192.168.39.184 and MAC address 52:54:00:d2:f3:81 in network mk-pause-543700
	I0918 20:55:10.429360   57703 main.go:141] libmachine: (pause-543700) Calling .GetSSHPort
	I0918 20:55:10.429576   57703 main.go:141] libmachine: (pause-543700) Calling .GetSSHKeyPath
	I0918 20:55:10.429749   57703 main.go:141] libmachine: (pause-543700) Calling .GetSSHUsername
	I0918 20:55:10.429920   57703 sshutil.go:53] new ssh client: &{IP:192.168.39.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/pause-543700/id_rsa Username:docker}
	I0918 20:55:10.514809   57703 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0918 20:55:10.541374   57703 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0918 20:55:10.570640   57703 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0918 20:55:10.595543   57703 provision.go:87] duration metric: took 258.108111ms to configureAuth
	I0918 20:55:10.595584   57703 buildroot.go:189] setting minikube options for container-runtime
	I0918 20:55:10.595916   57703 config.go:182] Loaded profile config "pause-543700": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0918 20:55:10.596046   57703 main.go:141] libmachine: (pause-543700) Calling .GetSSHHostname
	I0918 20:55:10.598954   57703 main.go:141] libmachine: (pause-543700) DBG | domain pause-543700 has defined MAC address 52:54:00:d2:f3:81 in network mk-pause-543700
	I0918 20:55:10.599349   57703 main.go:141] libmachine: (pause-543700) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:f3:81", ip: ""} in network mk-pause-543700: {Iface:virbr1 ExpiryTime:2024-09-18 21:53:20 +0000 UTC Type:0 Mac:52:54:00:d2:f3:81 Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:pause-543700 Clientid:01:52:54:00:d2:f3:81}
	I0918 20:55:10.599378   57703 main.go:141] libmachine: (pause-543700) DBG | domain pause-543700 has defined IP address 192.168.39.184 and MAC address 52:54:00:d2:f3:81 in network mk-pause-543700
	I0918 20:55:10.599604   57703 main.go:141] libmachine: (pause-543700) Calling .GetSSHPort
	I0918 20:55:10.599855   57703 main.go:141] libmachine: (pause-543700) Calling .GetSSHKeyPath
	I0918 20:55:10.600058   57703 main.go:141] libmachine: (pause-543700) Calling .GetSSHKeyPath
	I0918 20:55:10.600232   57703 main.go:141] libmachine: (pause-543700) Calling .GetSSHUsername
	I0918 20:55:10.600446   57703 main.go:141] libmachine: Using SSH client type: native
	I0918 20:55:10.600622   57703 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.184 22 <nil> <nil>}
	I0918 20:55:10.600635   57703 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0918 20:55:16.181084   57703 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0918 20:55:16.181112   57703 machine.go:96] duration metric: took 6.220065786s to provisionDockerMachine
	I0918 20:55:16.181126   57703 start.go:293] postStartSetup for "pause-543700" (driver="kvm2")
	I0918 20:55:16.181137   57703 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0918 20:55:16.181196   57703 main.go:141] libmachine: (pause-543700) Calling .DriverName
	I0918 20:55:16.181634   57703 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0918 20:55:16.181694   57703 main.go:141] libmachine: (pause-543700) Calling .GetSSHHostname
	I0918 20:55:16.185383   57703 main.go:141] libmachine: (pause-543700) DBG | domain pause-543700 has defined MAC address 52:54:00:d2:f3:81 in network mk-pause-543700
	I0918 20:55:16.185954   57703 main.go:141] libmachine: (pause-543700) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:f3:81", ip: ""} in network mk-pause-543700: {Iface:virbr1 ExpiryTime:2024-09-18 21:53:20 +0000 UTC Type:0 Mac:52:54:00:d2:f3:81 Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:pause-543700 Clientid:01:52:54:00:d2:f3:81}
	I0918 20:55:16.185982   57703 main.go:141] libmachine: (pause-543700) DBG | domain pause-543700 has defined IP address 192.168.39.184 and MAC address 52:54:00:d2:f3:81 in network mk-pause-543700
	I0918 20:55:16.186218   57703 main.go:141] libmachine: (pause-543700) Calling .GetSSHPort
	I0918 20:55:16.186449   57703 main.go:141] libmachine: (pause-543700) Calling .GetSSHKeyPath
	I0918 20:55:16.186636   57703 main.go:141] libmachine: (pause-543700) Calling .GetSSHUsername
	I0918 20:55:16.186815   57703 sshutil.go:53] new ssh client: &{IP:192.168.39.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/pause-543700/id_rsa Username:docker}
	I0918 20:55:16.284136   57703 ssh_runner.go:195] Run: cat /etc/os-release
	I0918 20:55:16.288570   57703 info.go:137] Remote host: Buildroot 2023.02.9
	I0918 20:55:16.288603   57703 filesync.go:126] Scanning /home/jenkins/minikube-integration/19667-7671/.minikube/addons for local assets ...
	I0918 20:55:16.288687   57703 filesync.go:126] Scanning /home/jenkins/minikube-integration/19667-7671/.minikube/files for local assets ...
	I0918 20:55:16.288783   57703 filesync.go:149] local asset: /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem -> 148782.pem in /etc/ssl/certs
	I0918 20:55:16.288903   57703 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0918 20:55:16.300684   57703 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem --> /etc/ssl/certs/148782.pem (1708 bytes)
	I0918 20:55:16.330339   57703 start.go:296] duration metric: took 149.196249ms for postStartSetup
	I0918 20:55:16.330390   57703 fix.go:56] duration metric: took 6.397424565s for fixHost
	I0918 20:55:16.330418   57703 main.go:141] libmachine: (pause-543700) Calling .GetSSHHostname
	I0918 20:55:16.333770   57703 main.go:141] libmachine: (pause-543700) DBG | domain pause-543700 has defined MAC address 52:54:00:d2:f3:81 in network mk-pause-543700
	I0918 20:55:16.334134   57703 main.go:141] libmachine: (pause-543700) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:f3:81", ip: ""} in network mk-pause-543700: {Iface:virbr1 ExpiryTime:2024-09-18 21:53:20 +0000 UTC Type:0 Mac:52:54:00:d2:f3:81 Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:pause-543700 Clientid:01:52:54:00:d2:f3:81}
	I0918 20:55:16.334161   57703 main.go:141] libmachine: (pause-543700) DBG | domain pause-543700 has defined IP address 192.168.39.184 and MAC address 52:54:00:d2:f3:81 in network mk-pause-543700
	I0918 20:55:16.334368   57703 main.go:141] libmachine: (pause-543700) Calling .GetSSHPort
	I0918 20:55:16.334615   57703 main.go:141] libmachine: (pause-543700) Calling .GetSSHKeyPath
	I0918 20:55:16.334821   57703 main.go:141] libmachine: (pause-543700) Calling .GetSSHKeyPath
	I0918 20:55:16.334982   57703 main.go:141] libmachine: (pause-543700) Calling .GetSSHUsername
	I0918 20:55:16.335179   57703 main.go:141] libmachine: Using SSH client type: native
	I0918 20:55:16.335426   57703 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.184 22 <nil> <nil>}
	I0918 20:55:16.335441   57703 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0918 20:55:16.449511   57703 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726692916.441077876
	
	I0918 20:55:16.449538   57703 fix.go:216] guest clock: 1726692916.441077876
	I0918 20:55:16.449548   57703 fix.go:229] Guest: 2024-09-18 20:55:16.441077876 +0000 UTC Remote: 2024-09-18 20:55:16.330395711 +0000 UTC m=+45.529431563 (delta=110.682165ms)
	I0918 20:55:16.449574   57703 fix.go:200] guest clock delta is within tolerance: 110.682165ms
	I0918 20:55:16.449580   57703 start.go:83] releasing machines lock for "pause-543700", held for 6.516648945s
	I0918 20:55:16.449628   57703 main.go:141] libmachine: (pause-543700) Calling .DriverName
	I0918 20:55:16.449919   57703 main.go:141] libmachine: (pause-543700) Calling .GetIP
	I0918 20:55:16.452989   57703 main.go:141] libmachine: (pause-543700) DBG | domain pause-543700 has defined MAC address 52:54:00:d2:f3:81 in network mk-pause-543700
	I0918 20:55:16.453414   57703 main.go:141] libmachine: (pause-543700) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:f3:81", ip: ""} in network mk-pause-543700: {Iface:virbr1 ExpiryTime:2024-09-18 21:53:20 +0000 UTC Type:0 Mac:52:54:00:d2:f3:81 Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:pause-543700 Clientid:01:52:54:00:d2:f3:81}
	I0918 20:55:16.453445   57703 main.go:141] libmachine: (pause-543700) DBG | domain pause-543700 has defined IP address 192.168.39.184 and MAC address 52:54:00:d2:f3:81 in network mk-pause-543700
	I0918 20:55:16.453678   57703 main.go:141] libmachine: (pause-543700) Calling .DriverName
	I0918 20:55:16.454269   57703 main.go:141] libmachine: (pause-543700) Calling .DriverName
	I0918 20:55:16.454467   57703 main.go:141] libmachine: (pause-543700) Calling .DriverName
	I0918 20:55:16.454605   57703 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0918 20:55:16.454648   57703 main.go:141] libmachine: (pause-543700) Calling .GetSSHHostname
	I0918 20:55:16.454695   57703 ssh_runner.go:195] Run: cat /version.json
	I0918 20:55:16.454722   57703 main.go:141] libmachine: (pause-543700) Calling .GetSSHHostname
	I0918 20:55:16.457828   57703 main.go:141] libmachine: (pause-543700) DBG | domain pause-543700 has defined MAC address 52:54:00:d2:f3:81 in network mk-pause-543700
	I0918 20:55:16.458074   57703 main.go:141] libmachine: (pause-543700) DBG | domain pause-543700 has defined MAC address 52:54:00:d2:f3:81 in network mk-pause-543700
	I0918 20:55:16.458233   57703 main.go:141] libmachine: (pause-543700) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:f3:81", ip: ""} in network mk-pause-543700: {Iface:virbr1 ExpiryTime:2024-09-18 21:53:20 +0000 UTC Type:0 Mac:52:54:00:d2:f3:81 Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:pause-543700 Clientid:01:52:54:00:d2:f3:81}
	I0918 20:55:16.458251   57703 main.go:141] libmachine: (pause-543700) DBG | domain pause-543700 has defined IP address 192.168.39.184 and MAC address 52:54:00:d2:f3:81 in network mk-pause-543700
	I0918 20:55:16.458446   57703 main.go:141] libmachine: (pause-543700) Calling .GetSSHPort
	I0918 20:55:16.458504   57703 main.go:141] libmachine: (pause-543700) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:f3:81", ip: ""} in network mk-pause-543700: {Iface:virbr1 ExpiryTime:2024-09-18 21:53:20 +0000 UTC Type:0 Mac:52:54:00:d2:f3:81 Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:pause-543700 Clientid:01:52:54:00:d2:f3:81}
	I0918 20:55:16.458530   57703 main.go:141] libmachine: (pause-543700) DBG | domain pause-543700 has defined IP address 192.168.39.184 and MAC address 52:54:00:d2:f3:81 in network mk-pause-543700
	I0918 20:55:16.458701   57703 main.go:141] libmachine: (pause-543700) Calling .GetSSHPort
	I0918 20:55:16.458748   57703 main.go:141] libmachine: (pause-543700) Calling .GetSSHKeyPath
	I0918 20:55:16.458917   57703 main.go:141] libmachine: (pause-543700) Calling .GetSSHKeyPath
	I0918 20:55:16.458921   57703 main.go:141] libmachine: (pause-543700) Calling .GetSSHUsername
	I0918 20:55:16.459048   57703 main.go:141] libmachine: (pause-543700) Calling .GetSSHUsername
	I0918 20:55:16.459107   57703 sshutil.go:53] new ssh client: &{IP:192.168.39.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/pause-543700/id_rsa Username:docker}
	I0918 20:55:16.459168   57703 sshutil.go:53] new ssh client: &{IP:192.168.39.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/pause-543700/id_rsa Username:docker}
	I0918 20:55:16.581650   57703 ssh_runner.go:195] Run: systemctl --version
	I0918 20:55:16.588241   57703 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0918 20:55:16.752361   57703 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0918 20:55:16.760215   57703 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0918 20:55:16.760306   57703 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0918 20:55:16.770551   57703 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0918 20:55:16.770585   57703 start.go:495] detecting cgroup driver to use...
	I0918 20:55:16.770660   57703 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0918 20:55:16.792189   57703 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0918 20:55:16.809001   57703 docker.go:217] disabling cri-docker service (if available) ...
	I0918 20:55:16.809062   57703 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0918 20:55:16.826708   57703 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0918 20:55:16.842534   57703 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0918 20:55:16.984395   57703 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0918 20:55:17.134816   57703 docker.go:233] disabling docker service ...
	I0918 20:55:17.134907   57703 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0918 20:55:17.151778   57703 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0918 20:55:17.168288   57703 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0918 20:55:17.314891   57703 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0918 20:55:17.464816   57703 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0918 20:55:17.480790   57703 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0918 20:55:17.505237   57703 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0918 20:55:17.505324   57703 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 20:55:17.516791   57703 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0918 20:55:17.516870   57703 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 20:55:17.529170   57703 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 20:55:17.540736   57703 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 20:55:17.551571   57703 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0918 20:55:17.562576   57703 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 20:55:17.573545   57703 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 20:55:17.588376   57703 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 20:55:17.603925   57703 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0918 20:55:17.619689   57703 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0918 20:55:17.633351   57703 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 20:55:17.773968   57703 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0918 20:55:20.111700   57703 ssh_runner.go:235] Completed: sudo systemctl restart crio: (2.33769034s)
	I0918 20:55:20.111749   57703 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0918 20:55:20.111809   57703 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0918 20:55:20.116726   57703 start.go:563] Will wait 60s for crictl version
	I0918 20:55:20.116794   57703 ssh_runner.go:195] Run: which crictl
	I0918 20:55:20.120693   57703 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0918 20:55:20.159235   57703 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0918 20:55:20.159325   57703 ssh_runner.go:195] Run: crio --version
	I0918 20:55:20.187612   57703 ssh_runner.go:195] Run: crio --version
	I0918 20:55:20.217764   57703 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0918 20:55:20.219421   57703 main.go:141] libmachine: (pause-543700) Calling .GetIP
	I0918 20:55:20.222605   57703 main.go:141] libmachine: (pause-543700) DBG | domain pause-543700 has defined MAC address 52:54:00:d2:f3:81 in network mk-pause-543700
	I0918 20:55:20.223061   57703 main.go:141] libmachine: (pause-543700) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:f3:81", ip: ""} in network mk-pause-543700: {Iface:virbr1 ExpiryTime:2024-09-18 21:53:20 +0000 UTC Type:0 Mac:52:54:00:d2:f3:81 Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:pause-543700 Clientid:01:52:54:00:d2:f3:81}
	I0918 20:55:20.223093   57703 main.go:141] libmachine: (pause-543700) DBG | domain pause-543700 has defined IP address 192.168.39.184 and MAC address 52:54:00:d2:f3:81 in network mk-pause-543700
	I0918 20:55:20.223375   57703 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0918 20:55:20.227794   57703 kubeadm.go:883] updating cluster {Name:pause-543700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:pause-543700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.184 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0918 20:55:20.227921   57703 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0918 20:55:20.227971   57703 ssh_runner.go:195] Run: sudo crictl images --output json
	I0918 20:55:20.380101   57703 crio.go:514] all images are preloaded for cri-o runtime.
	I0918 20:55:20.380145   57703 crio.go:433] Images already preloaded, skipping extraction
	I0918 20:55:20.380214   57703 ssh_runner.go:195] Run: sudo crictl images --output json
	I0918 20:55:20.564513   57703 crio.go:514] all images are preloaded for cri-o runtime.
	I0918 20:55:20.564542   57703 cache_images.go:84] Images are preloaded, skipping loading
	I0918 20:55:20.564552   57703 kubeadm.go:934] updating node { 192.168.39.184 8443 v1.31.1 crio true true} ...
	I0918 20:55:20.564711   57703 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-543700 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.184
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:pause-543700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0918 20:55:20.564795   57703 ssh_runner.go:195] Run: crio config
	I0918 20:55:20.810142   57703 cni.go:84] Creating CNI manager for ""
	I0918 20:55:20.810177   57703 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0918 20:55:20.810190   57703 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0918 20:55:20.810219   57703 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.184 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-543700 NodeName:pause-543700 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.184"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.184 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0918 20:55:20.810436   57703 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.184
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-543700"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.184
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.184"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0918 20:55:20.810511   57703 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0918 20:55:20.863740   57703 binaries.go:44] Found k8s binaries, skipping transfer
	I0918 20:55:20.863830   57703 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0918 20:55:20.893115   57703 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0918 20:55:20.957652   57703 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0918 20:55:21.041418   57703 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2156 bytes)
	I0918 20:55:21.107918   57703 ssh_runner.go:195] Run: grep 192.168.39.184	control-plane.minikube.internal$ /etc/hosts
	I0918 20:55:21.128727   57703 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 20:55:21.374662   57703 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0918 20:55:21.419215   57703 certs.go:68] Setting up /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/pause-543700 for IP: 192.168.39.184
	I0918 20:55:21.419242   57703 certs.go:194] generating shared ca certs ...
	I0918 20:55:21.419264   57703 certs.go:226] acquiring lock for ca certs: {Name:mkf54e0e71ed884c4372f9d3d4cb308ea4600185 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 20:55:21.419459   57703 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.key
	I0918 20:55:21.419515   57703 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.key
	I0918 20:55:21.419533   57703 certs.go:256] generating profile certs ...
	I0918 20:55:21.419631   57703 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/pause-543700/client.key
	I0918 20:55:21.419711   57703 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/pause-543700/apiserver.key.091414ba
	I0918 20:55:21.419766   57703 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/pause-543700/proxy-client.key
	I0918 20:55:21.419905   57703 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878.pem (1338 bytes)
	W0918 20:55:21.419942   57703 certs.go:480] ignoring /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878_empty.pem, impossibly tiny 0 bytes
	I0918 20:55:21.419976   57703 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem (1675 bytes)
	I0918 20:55:21.420035   57703 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem (1082 bytes)
	I0918 20:55:21.420070   57703 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem (1123 bytes)
	I0918 20:55:21.420103   57703 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem (1679 bytes)
	I0918 20:55:21.420156   57703 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem (1708 bytes)
	I0918 20:55:21.421046   57703 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0918 20:55:21.506016   57703 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0918 20:55:21.561759   57703 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0918 20:55:21.592363   57703 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0918 20:55:21.622730   57703 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/pause-543700/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0918 20:55:21.674726   57703 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/pause-543700/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0918 20:55:21.745268   57703 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/pause-543700/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0918 20:55:21.788867   57703 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/pause-543700/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0918 20:55:21.819363   57703 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0918 20:55:21.845268   57703 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878.pem --> /usr/share/ca-certificates/14878.pem (1338 bytes)
	I0918 20:55:21.878230   57703 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem --> /usr/share/ca-certificates/148782.pem (1708 bytes)
	I0918 20:55:21.915666   57703 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0918 20:55:21.938114   57703 ssh_runner.go:195] Run: openssl version
	I0918 20:55:21.946264   57703 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0918 20:55:21.962097   57703 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0918 20:55:21.966839   57703 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 18 19:39 /usr/share/ca-certificates/minikubeCA.pem
	I0918 20:55:21.966911   57703 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0918 20:55:21.973869   57703 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0918 20:55:21.984673   57703 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14878.pem && ln -fs /usr/share/ca-certificates/14878.pem /etc/ssl/certs/14878.pem"
	I0918 20:55:21.995445   57703 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14878.pem
	I0918 20:55:22.001157   57703 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 18 19:57 /usr/share/ca-certificates/14878.pem
	I0918 20:55:22.001242   57703 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14878.pem
	I0918 20:55:22.010794   57703 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14878.pem /etc/ssl/certs/51391683.0"
	I0918 20:55:22.024668   57703 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148782.pem && ln -fs /usr/share/ca-certificates/148782.pem /etc/ssl/certs/148782.pem"
	I0918 20:55:22.039212   57703 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148782.pem
	I0918 20:55:22.047136   57703 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 18 19:57 /usr/share/ca-certificates/148782.pem
	I0918 20:55:22.047209   57703 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148782.pem
	I0918 20:55:22.058704   57703 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148782.pem /etc/ssl/certs/3ec20f2e.0"
	I0918 20:55:22.070660   57703 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0918 20:55:22.076727   57703 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0918 20:55:22.084627   57703 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0918 20:55:22.094377   57703 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0918 20:55:22.103860   57703 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0918 20:55:22.111231   57703 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0918 20:55:22.119386   57703 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0918 20:55:22.127467   57703 kubeadm.go:392] StartCluster: {Name:pause-543700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:pause-543700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.184 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 20:55:22.127613   57703 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0918 20:55:22.127703   57703 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0918 20:55:22.189258   57703 cri.go:89] found id: "e35c2076fdbf6ba34585d4dc1c830959915dd9b715305df7b4f1c7cb6e2320b4"
	I0918 20:55:22.189288   57703 cri.go:89] found id: "5b4875ac82adf72eda0bef747b563eb403ecc70cb56d0275307e4e04b468565c"
	I0918 20:55:22.189295   57703 cri.go:89] found id: "fdc9fb94be86655beb10880c7d5faaad8c9c29dd88fa8972743f97c12bc85a24"
	I0918 20:55:22.189303   57703 cri.go:89] found id: "1bffe6da49dd38ba92455bd869a9f7f9031ef436230cf2a9f7a0d79101a735c6"
	I0918 20:55:22.189308   57703 cri.go:89] found id: "57c211488dfbc6dfdb474ae7fc32a15deeb7f4d5a1d19bd672b04875a60c5eb1"
	I0918 20:55:22.189313   57703 cri.go:89] found id: "67554b57c9cc456c2dc6453a8b3cc56e3a300ae0d3ee59fc138b73a75e288bdc"
	I0918 20:55:22.189317   57703 cri.go:89] found id: "a769338517a5c2023c358cc4eaf84f3ae2d38cbcf0bb5191299ffe0be79db0e3"
	I0918 20:55:22.189321   57703 cri.go:89] found id: "64e3057dbc1fed39a2675e5b774a5ec5278c43dfe981ab372badd571fb3f66ad"
	I0918 20:55:22.189325   57703 cri.go:89] found id: "32619104ba41860caa364c46299d66b28cf96c7f1114a3c6fe27243243c4fa99"
	I0918 20:55:22.189335   57703 cri.go:89] found id: "0a03cc82754816d2b169f33fabd61884a9a852bafd460b9470facc4a928cad10"
	I0918 20:55:22.189340   57703 cri.go:89] found id: "bfc843ebf10ea9fcb5b1515d6338c35a43412477ece8866d276076fa6e733d3a"
	I0918 20:55:22.189344   57703 cri.go:89] found id: "56223a9754185846819b7db4eb1a47c622f2471e90ca041ae09377cf75b19b42"
	I0918 20:55:22.189352   57703 cri.go:89] found id: ""
	I0918 20:55:22.189405   57703 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-543700 -n pause-543700
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-543700 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-543700 logs -n 25: (1.389914604s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-543581 sudo                                | cilium-543581             | jenkins | v1.34.0 | 18 Sep 24 20:54 UTC |                     |
	|         | systemctl cat cri-docker                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-543581 sudo cat                            | cilium-543581             | jenkins | v1.34.0 | 18 Sep 24 20:54 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p cilium-543581 sudo cat                            | cilium-543581             | jenkins | v1.34.0 | 18 Sep 24 20:54 UTC |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p cilium-543581 sudo                                | cilium-543581             | jenkins | v1.34.0 | 18 Sep 24 20:54 UTC |                     |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p cilium-543581 sudo                                | cilium-543581             | jenkins | v1.34.0 | 18 Sep 24 20:54 UTC |                     |
	|         | systemctl status containerd                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-543581 sudo                                | cilium-543581             | jenkins | v1.34.0 | 18 Sep 24 20:54 UTC |                     |
	|         | systemctl cat containerd                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-543581 sudo cat                            | cilium-543581             | jenkins | v1.34.0 | 18 Sep 24 20:54 UTC |                     |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p cilium-543581 sudo cat                            | cilium-543581             | jenkins | v1.34.0 | 18 Sep 24 20:54 UTC |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p cilium-543581 sudo                                | cilium-543581             | jenkins | v1.34.0 | 18 Sep 24 20:54 UTC |                     |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| ssh     | -p cilium-543581 sudo                                | cilium-543581             | jenkins | v1.34.0 | 18 Sep 24 20:54 UTC |                     |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p cilium-543581 sudo                                | cilium-543581             | jenkins | v1.34.0 | 18 Sep 24 20:54 UTC |                     |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p cilium-543581 sudo find                           | cilium-543581             | jenkins | v1.34.0 | 18 Sep 24 20:54 UTC |                     |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-543581 sudo crio                           | cilium-543581             | jenkins | v1.34.0 | 18 Sep 24 20:54 UTC |                     |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p cilium-543581                                     | cilium-543581             | jenkins | v1.34.0 | 18 Sep 24 20:54 UTC | 18 Sep 24 20:54 UTC |
	| start   | -p cert-options-347585                               | cert-options-347585       | jenkins | v1.34.0 | 18 Sep 24 20:54 UTC | 18 Sep 24 20:55 UTC |
	|         | --memory=2048                                        |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1                            |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15                        |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost                          |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com                     |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                                |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| start   | -p old-k8s-version-740194                            | old-k8s-version-740194    | jenkins | v1.34.0 | 18 Sep 24 20:54 UTC |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                           |         |         |                     |                     |
	|         | --kvm-network=default                                |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                        |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                              |                           |         |         |                     |                     |
	|         | --keep-context=false                                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                         |                           |         |         |                     |                     |
	| start   | -p pause-543700                                      | pause-543700              | jenkins | v1.34.0 | 18 Sep 24 20:54 UTC | 18 Sep 24 20:55 UTC |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-878094                         | kubernetes-upgrade-878094 | jenkins | v1.34.0 | 18 Sep 24 20:54 UTC | 18 Sep 24 20:54 UTC |
	| start   | -p kubernetes-upgrade-878094                         | kubernetes-upgrade-878094 | jenkins | v1.34.0 | 18 Sep 24 20:54 UTC | 18 Sep 24 20:55 UTC |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| ssh     | cert-options-347585 ssh                              | cert-options-347585       | jenkins | v1.34.0 | 18 Sep 24 20:55 UTC | 18 Sep 24 20:55 UTC |
	|         | openssl x509 -text -noout -in                        |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                |                           |         |         |                     |                     |
	| ssh     | -p cert-options-347585 -- sudo                       | cert-options-347585       | jenkins | v1.34.0 | 18 Sep 24 20:55 UTC | 18 Sep 24 20:55 UTC |
	|         | cat /etc/kubernetes/admin.conf                       |                           |         |         |                     |                     |
	| delete  | -p cert-options-347585                               | cert-options-347585       | jenkins | v1.34.0 | 18 Sep 24 20:55 UTC | 18 Sep 24 20:55 UTC |
	| start   | -p no-preload-331658                                 | no-preload-331658         | jenkins | v1.34.0 | 18 Sep 24 20:55 UTC |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                           |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                        |                           |         |         |                     |                     |
	|         |  --container-runtime=crio                            |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                         |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-878094                         | kubernetes-upgrade-878094 | jenkins | v1.34.0 | 18 Sep 24 20:55 UTC |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                         |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-878094                         | kubernetes-upgrade-878094 | jenkins | v1.34.0 | 18 Sep 24 20:55 UTC |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/18 20:55:58
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0918 20:55:58.271646   58925 out.go:345] Setting OutFile to fd 1 ...
	I0918 20:55:58.271782   58925 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 20:55:58.271790   58925 out.go:358] Setting ErrFile to fd 2...
	I0918 20:55:58.271795   58925 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 20:55:58.272006   58925 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19667-7671/.minikube/bin
	I0918 20:55:58.272667   58925 out.go:352] Setting JSON to false
	I0918 20:55:58.273767   58925 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5902,"bootTime":1726687056,"procs":220,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0918 20:55:58.273902   58925 start.go:139] virtualization: kvm guest
	I0918 20:55:58.276204   58925 out.go:177] * [kubernetes-upgrade-878094] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0918 20:55:58.277808   58925 out.go:177]   - MINIKUBE_LOCATION=19667
	I0918 20:55:58.277837   58925 notify.go:220] Checking for updates...
	I0918 20:55:58.280496   58925 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 20:55:58.281756   58925 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19667-7671/kubeconfig
	I0918 20:55:58.282899   58925 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19667-7671/.minikube
	I0918 20:55:58.284154   58925 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0918 20:55:58.285362   58925 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0918 20:55:58.286906   58925 config.go:182] Loaded profile config "kubernetes-upgrade-878094": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0918 20:55:58.287350   58925 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 20:55:58.287425   58925 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 20:55:58.303557   58925 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46393
	I0918 20:55:58.303969   58925 main.go:141] libmachine: () Calling .GetVersion
	I0918 20:55:58.304552   58925 main.go:141] libmachine: Using API Version  1
	I0918 20:55:58.304572   58925 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 20:55:58.304997   58925 main.go:141] libmachine: () Calling .GetMachineName
	I0918 20:55:58.305216   58925 main.go:141] libmachine: (kubernetes-upgrade-878094) Calling .DriverName
	I0918 20:55:58.305443   58925 driver.go:394] Setting default libvirt URI to qemu:///system
	I0918 20:55:58.305725   58925 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 20:55:58.305766   58925 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 20:55:58.321657   58925 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40315
	I0918 20:55:58.322207   58925 main.go:141] libmachine: () Calling .GetVersion
	I0918 20:55:58.322760   58925 main.go:141] libmachine: Using API Version  1
	I0918 20:55:58.322785   58925 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 20:55:58.323142   58925 main.go:141] libmachine: () Calling .GetMachineName
	I0918 20:55:58.323360   58925 main.go:141] libmachine: (kubernetes-upgrade-878094) Calling .DriverName
	I0918 20:55:58.364072   58925 out.go:177] * Using the kvm2 driver based on existing profile
	I0918 20:55:58.365846   58925 start.go:297] selected driver: kvm2
	I0918 20:55:58.365873   58925 start.go:901] validating driver "kvm2" against &{Name:kubernetes-upgrade-878094 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.31.1 ClusterName:kubernetes-upgrade-878094 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.80 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 20:55:58.366039   58925 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 20:55:58.367102   58925 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 20:55:58.367217   58925 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19667-7671/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0918 20:55:58.386643   58925 install.go:137] /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I0918 20:55:58.387101   58925 cni.go:84] Creating CNI manager for ""
	I0918 20:55:58.387153   58925 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0918 20:55:58.387189   58925 start.go:340] cluster config:
	{Name:kubernetes-upgrade-878094 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubernetes-upgrade-878094 Namespace:d
efault APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.80 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 20:55:58.387297   58925 iso.go:125] acquiring lock: {Name:mkbcf4d84f3091567a39802cd5e773cb10b8c545 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 20:55:58.389323   58925 out.go:177] * Starting "kubernetes-upgrade-878094" primary control-plane node in "kubernetes-upgrade-878094" cluster
	I0918 20:55:55.978929   57703 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0918 20:55:55.994876   57703 node_ready.go:35] waiting up to 6m0s for node "pause-543700" to be "Ready" ...
	I0918 20:55:56.000605   57703 node_ready.go:49] node "pause-543700" has status "Ready":"True"
	I0918 20:55:56.000645   57703 node_ready.go:38] duration metric: took 5.733906ms for node "pause-543700" to be "Ready" ...
	I0918 20:55:56.000659   57703 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0918 20:55:56.009140   57703 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-8qv9r" in "kube-system" namespace to be "Ready" ...
	I0918 20:55:56.020400   57703 pod_ready.go:93] pod "coredns-7c65d6cfc9-8qv9r" in "kube-system" namespace has status "Ready":"True"
	I0918 20:55:56.020435   57703 pod_ready.go:82] duration metric: took 11.258341ms for pod "coredns-7c65d6cfc9-8qv9r" in "kube-system" namespace to be "Ready" ...
	I0918 20:55:56.020445   57703 pod_ready.go:79] waiting up to 6m0s for pod "etcd-pause-543700" in "kube-system" namespace to be "Ready" ...
	I0918 20:55:56.178594   57703 pod_ready.go:93] pod "etcd-pause-543700" in "kube-system" namespace has status "Ready":"True"
	I0918 20:55:56.178618   57703 pod_ready.go:82] duration metric: took 158.167599ms for pod "etcd-pause-543700" in "kube-system" namespace to be "Ready" ...
	I0918 20:55:56.178628   57703 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-pause-543700" in "kube-system" namespace to be "Ready" ...
	I0918 20:55:56.576937   57703 pod_ready.go:93] pod "kube-apiserver-pause-543700" in "kube-system" namespace has status "Ready":"True"
	I0918 20:55:56.576963   57703 pod_ready.go:82] duration metric: took 398.329476ms for pod "kube-apiserver-pause-543700" in "kube-system" namespace to be "Ready" ...
	I0918 20:55:56.576972   57703 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-pause-543700" in "kube-system" namespace to be "Ready" ...
	I0918 20:55:56.978465   57703 pod_ready.go:93] pod "kube-controller-manager-pause-543700" in "kube-system" namespace has status "Ready":"True"
	I0918 20:55:56.978489   57703 pod_ready.go:82] duration metric: took 401.510515ms for pod "kube-controller-manager-pause-543700" in "kube-system" namespace to be "Ready" ...
	I0918 20:55:56.978499   57703 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-h544n" in "kube-system" namespace to be "Ready" ...
	I0918 20:55:57.377906   57703 pod_ready.go:93] pod "kube-proxy-h544n" in "kube-system" namespace has status "Ready":"True"
	I0918 20:55:57.377943   57703 pod_ready.go:82] duration metric: took 399.436541ms for pod "kube-proxy-h544n" in "kube-system" namespace to be "Ready" ...
	I0918 20:55:57.377959   57703 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-pause-543700" in "kube-system" namespace to be "Ready" ...
	I0918 20:55:57.778344   57703 pod_ready.go:93] pod "kube-scheduler-pause-543700" in "kube-system" namespace has status "Ready":"True"
	I0918 20:55:57.778382   57703 pod_ready.go:82] duration metric: took 400.414919ms for pod "kube-scheduler-pause-543700" in "kube-system" namespace to be "Ready" ...
	I0918 20:55:57.778395   57703 pod_ready.go:39] duration metric: took 1.777723336s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0918 20:55:57.778424   57703 api_server.go:52] waiting for apiserver process to appear ...
	I0918 20:55:57.778491   57703 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 20:55:57.795772   57703 api_server.go:72] duration metric: took 1.989492011s to wait for apiserver process to appear ...
	I0918 20:55:57.795804   57703 api_server.go:88] waiting for apiserver healthz status ...
	I0918 20:55:57.795825   57703 api_server.go:253] Checking apiserver healthz at https://192.168.39.184:8443/healthz ...
	I0918 20:55:57.801854   57703 api_server.go:279] https://192.168.39.184:8443/healthz returned 200:
	ok
	I0918 20:55:57.803211   57703 api_server.go:141] control plane version: v1.31.1
	I0918 20:55:57.803239   57703 api_server.go:131] duration metric: took 7.427712ms to wait for apiserver health ...
	I0918 20:55:57.803247   57703 system_pods.go:43] waiting for kube-system pods to appear ...
	I0918 20:55:57.979711   57703 system_pods.go:59] 6 kube-system pods found
	I0918 20:55:57.979740   57703 system_pods.go:61] "coredns-7c65d6cfc9-8qv9r" [5fb13b99-1337-4d7a-bb0d-c1da599a389c] Running
	I0918 20:55:57.979746   57703 system_pods.go:61] "etcd-pause-543700" [febf26c7-e1b3-4ed2-97ad-0d136f67624f] Running
	I0918 20:55:57.979750   57703 system_pods.go:61] "kube-apiserver-pause-543700" [1e445516-ef06-4b13-84d9-f4ddc56c3bca] Running
	I0918 20:55:57.979754   57703 system_pods.go:61] "kube-controller-manager-pause-543700" [3397cfb0-3200-4ffb-82e6-58c57886cc51] Running
	I0918 20:55:57.979757   57703 system_pods.go:61] "kube-proxy-h544n" [68d913c9-0656-4875-9450-f80bf77bbfd7] Running
	I0918 20:55:57.979760   57703 system_pods.go:61] "kube-scheduler-pause-543700" [ec47d44f-2f93-4269-835c-c51e7a33de01] Running
	I0918 20:55:57.979766   57703 system_pods.go:74] duration metric: took 176.513779ms to wait for pod list to return data ...
	I0918 20:55:57.979773   57703 default_sa.go:34] waiting for default service account to be created ...
	I0918 20:55:58.178995   57703 default_sa.go:45] found service account: "default"
	I0918 20:55:58.179035   57703 default_sa.go:55] duration metric: took 199.255488ms for default service account to be created ...
	I0918 20:55:58.179050   57703 system_pods.go:116] waiting for k8s-apps to be running ...
	I0918 20:55:58.381450   57703 system_pods.go:86] 6 kube-system pods found
	I0918 20:55:58.381495   57703 system_pods.go:89] "coredns-7c65d6cfc9-8qv9r" [5fb13b99-1337-4d7a-bb0d-c1da599a389c] Running
	I0918 20:55:58.381503   57703 system_pods.go:89] "etcd-pause-543700" [febf26c7-e1b3-4ed2-97ad-0d136f67624f] Running
	I0918 20:55:58.381510   57703 system_pods.go:89] "kube-apiserver-pause-543700" [1e445516-ef06-4b13-84d9-f4ddc56c3bca] Running
	I0918 20:55:58.381515   57703 system_pods.go:89] "kube-controller-manager-pause-543700" [3397cfb0-3200-4ffb-82e6-58c57886cc51] Running
	I0918 20:55:58.381522   57703 system_pods.go:89] "kube-proxy-h544n" [68d913c9-0656-4875-9450-f80bf77bbfd7] Running
	I0918 20:55:58.381527   57703 system_pods.go:89] "kube-scheduler-pause-543700" [ec47d44f-2f93-4269-835c-c51e7a33de01] Running
	I0918 20:55:58.381537   57703 system_pods.go:126] duration metric: took 202.478669ms to wait for k8s-apps to be running ...
	I0918 20:55:58.381546   57703 system_svc.go:44] waiting for kubelet service to be running ....
	I0918 20:55:58.381604   57703 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0918 20:55:58.399071   57703 system_svc.go:56] duration metric: took 17.51297ms WaitForService to wait for kubelet
	I0918 20:55:58.399117   57703 kubeadm.go:582] duration metric: took 2.592836484s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0918 20:55:58.399142   57703 node_conditions.go:102] verifying NodePressure condition ...
	I0918 20:55:58.578009   57703 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0918 20:55:58.578038   57703 node_conditions.go:123] node cpu capacity is 2
	I0918 20:55:58.578053   57703 node_conditions.go:105] duration metric: took 178.904465ms to run NodePressure ...
	I0918 20:55:58.578066   57703 start.go:241] waiting for startup goroutines ...
	I0918 20:55:58.578074   57703 start.go:246] waiting for cluster config update ...
	I0918 20:55:58.578083   57703 start.go:255] writing updated cluster config ...
	I0918 20:55:58.578388   57703 ssh_runner.go:195] Run: rm -f paused
	I0918 20:55:58.639017   57703 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0918 20:55:58.641010   57703 out.go:177] * Done! kubectl is now configured to use "pause-543700" cluster and "default" namespace by default
	I0918 20:55:54.592956   58323 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 20:55:54.593467   58323 main.go:141] libmachine: (no-preload-331658) DBG | unable to find current IP address of domain no-preload-331658 in network mk-no-preload-331658
	I0918 20:55:54.593488   58323 main.go:141] libmachine: (no-preload-331658) DBG | I0918 20:55:54.593436   58561 retry.go:31] will retry after 4.147707562s: waiting for machine to come up
	
	
	==> CRI-O <==
	Sep 18 20:55:59 pause-543700 crio[2298]: time="2024-09-18 20:55:59.313508004Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726692959313481064,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=641d6205-1187-4335-aa41-75a2737bafbe name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 20:55:59 pause-543700 crio[2298]: time="2024-09-18 20:55:59.314399192Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4d4238bc-eb24-4679-accb-bf44c264a759 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 20:55:59 pause-543700 crio[2298]: time="2024-09-18 20:55:59.314458160Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4d4238bc-eb24-4679-accb-bf44c264a759 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 20:55:59 pause-543700 crio[2298]: time="2024-09-18 20:55:59.314724554Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2d5a6a593e1fc59ac994439221a6778bc74417e3417c59d843c4763ac6e0a960,PodSandboxId:0430d57f1fea2f9d4054004182fa26b2ac6b9892be8113fa9e03dde7972f81b7,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726692939982123003,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8qv9r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fb13b99-1337-4d7a-bb0d-c1da599a389c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:898cc4ece731f92e0dd2201b2e41f746d0337ec9f659051b5eb8528f194bc91e,PodSandboxId:729e5b5637c2ededea2bb02d9c37a7f4ea67904a93b80b57f6a3f8348f5bf4f6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726692939984189095,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h544n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 68d913c9-0656-4875-9450-f80bf77bbfd7,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc9feef647787abc2cb1b5ab3c0e0b76b70089edf815386900484342ded37a04,PodSandboxId:c55c5f664a6b6f62b13eac5e16fb4215d3babb63cc632fddeb56bdd0b48f42e6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726692936156663889,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-543700,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: ae9f7d68597963a7ac2bd6aef48092d4,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8907210c9344d37955a4bf0618118949966882169d72f1706cc5c425f339b153,PodSandboxId:9160b6511023f8ed69c10cb2ad633b1d0181bc41143e217f2e7846c2ce9a739b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726692936140717855,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-543700,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7
25cba61f20ce903bc2d727db0f6fdb6,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee3ad2aede523ba2e7a7101f3f85754cc71496eac6f9b1721d29755ce9aaea45,PodSandboxId:6408a7f020aa6b4dda2b84de91f8b112d11305f638ace0627ac1d1efd847006c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726692936121315154,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-543700,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 670c115f809258a1e39
e14f1c9f6e518,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cda771d471c29dc3cd640a3adf8c05602a24ffc5b7fe4fd453d69e03194c145,PodSandboxId:fb9c9c7eece6d2691561f6df6e245203150d9ba796c6f15e1083d01d543cbbd6,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726692936113659912,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-543700,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82feebc7c19ce7c0a88861b7c89cdb64,},Annotations:map[string]string{io
.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e35c2076fdbf6ba34585d4dc1c830959915dd9b715305df7b4f1c7cb6e2320b4,PodSandboxId:0430d57f1fea2f9d4054004182fa26b2ac6b9892be8113fa9e03dde7972f81b7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726692921491241258,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8qv9r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fb13b99-1337-4d7a-bb0d-c1da599a389c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a
204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b4875ac82adf72eda0bef747b563eb403ecc70cb56d0275307e4e04b468565c,PodSandboxId:729e5b5637c2ededea2bb02d9c37a7f4ea67904a93b80b57f6a3f8348f5bf4f6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726692920818874326,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-h544n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68d913c9-0656-4875-9450-f80bf77bbfd7,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdc9fb94be86655beb10880c7d5faaad8c9c29dd88fa8972743f97c12bc85a24,PodSandboxId:9160b6511023f8ed69c10cb2ad633b1d0181bc41143e217f2e7846c2ce9a739b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726692920788892141,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pau
se-543700,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 725cba61f20ce903bc2d727db0f6fdb6,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bffe6da49dd38ba92455bd869a9f7f9031ef436230cf2a9f7a0d79101a735c6,PodSandboxId:c55c5f664a6b6f62b13eac5e16fb4215d3babb63cc632fddeb56bdd0b48f42e6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726692920748214407,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-man
ager-pause-543700,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae9f7d68597963a7ac2bd6aef48092d4,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57c211488dfbc6dfdb474ae7fc32a15deeb7f4d5a1d19bd672b04875a60c5eb1,PodSandboxId:6408a7f020aa6b4dda2b84de91f8b112d11305f638ace0627ac1d1efd847006c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726692920661862670,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-543700,i
o.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 670c115f809258a1e39e14f1c9f6e518,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67554b57c9cc456c2dc6453a8b3cc56e3a300ae0d3ee59fc138b73a75e288bdc,PodSandboxId:fb9c9c7eece6d2691561f6df6e245203150d9ba796c6f15e1083d01d543cbbd6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726692920656567187,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-543700,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 82feebc7c19ce7c0a88861b7c89cdb64,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4d4238bc-eb24-4679-accb-bf44c264a759 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 20:55:59 pause-543700 crio[2298]: time="2024-09-18 20:55:59.356146667Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f105e49b-fdfe-4f8c-a63a-27010df187fb name=/runtime.v1.RuntimeService/Version
	Sep 18 20:55:59 pause-543700 crio[2298]: time="2024-09-18 20:55:59.356223686Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f105e49b-fdfe-4f8c-a63a-27010df187fb name=/runtime.v1.RuntimeService/Version
	Sep 18 20:55:59 pause-543700 crio[2298]: time="2024-09-18 20:55:59.357252489Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5b5ffed8-6abb-4932-93fe-2422db12d2a9 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 20:55:59 pause-543700 crio[2298]: time="2024-09-18 20:55:59.357625614Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726692959357603525,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5b5ffed8-6abb-4932-93fe-2422db12d2a9 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 20:55:59 pause-543700 crio[2298]: time="2024-09-18 20:55:59.358241042Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8ff4c298-3ee4-4241-b213-fc5750fc27f4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 20:55:59 pause-543700 crio[2298]: time="2024-09-18 20:55:59.358307063Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8ff4c298-3ee4-4241-b213-fc5750fc27f4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 20:55:59 pause-543700 crio[2298]: time="2024-09-18 20:55:59.358581416Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2d5a6a593e1fc59ac994439221a6778bc74417e3417c59d843c4763ac6e0a960,PodSandboxId:0430d57f1fea2f9d4054004182fa26b2ac6b9892be8113fa9e03dde7972f81b7,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726692939982123003,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8qv9r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fb13b99-1337-4d7a-bb0d-c1da599a389c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:898cc4ece731f92e0dd2201b2e41f746d0337ec9f659051b5eb8528f194bc91e,PodSandboxId:729e5b5637c2ededea2bb02d9c37a7f4ea67904a93b80b57f6a3f8348f5bf4f6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726692939984189095,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h544n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 68d913c9-0656-4875-9450-f80bf77bbfd7,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc9feef647787abc2cb1b5ab3c0e0b76b70089edf815386900484342ded37a04,PodSandboxId:c55c5f664a6b6f62b13eac5e16fb4215d3babb63cc632fddeb56bdd0b48f42e6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726692936156663889,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-543700,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: ae9f7d68597963a7ac2bd6aef48092d4,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8907210c9344d37955a4bf0618118949966882169d72f1706cc5c425f339b153,PodSandboxId:9160b6511023f8ed69c10cb2ad633b1d0181bc41143e217f2e7846c2ce9a739b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726692936140717855,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-543700,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7
25cba61f20ce903bc2d727db0f6fdb6,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee3ad2aede523ba2e7a7101f3f85754cc71496eac6f9b1721d29755ce9aaea45,PodSandboxId:6408a7f020aa6b4dda2b84de91f8b112d11305f638ace0627ac1d1efd847006c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726692936121315154,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-543700,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 670c115f809258a1e39
e14f1c9f6e518,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cda771d471c29dc3cd640a3adf8c05602a24ffc5b7fe4fd453d69e03194c145,PodSandboxId:fb9c9c7eece6d2691561f6df6e245203150d9ba796c6f15e1083d01d543cbbd6,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726692936113659912,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-543700,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82feebc7c19ce7c0a88861b7c89cdb64,},Annotations:map[string]string{io
.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e35c2076fdbf6ba34585d4dc1c830959915dd9b715305df7b4f1c7cb6e2320b4,PodSandboxId:0430d57f1fea2f9d4054004182fa26b2ac6b9892be8113fa9e03dde7972f81b7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726692921491241258,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8qv9r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fb13b99-1337-4d7a-bb0d-c1da599a389c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a
204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b4875ac82adf72eda0bef747b563eb403ecc70cb56d0275307e4e04b468565c,PodSandboxId:729e5b5637c2ededea2bb02d9c37a7f4ea67904a93b80b57f6a3f8348f5bf4f6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726692920818874326,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-h544n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68d913c9-0656-4875-9450-f80bf77bbfd7,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdc9fb94be86655beb10880c7d5faaad8c9c29dd88fa8972743f97c12bc85a24,PodSandboxId:9160b6511023f8ed69c10cb2ad633b1d0181bc41143e217f2e7846c2ce9a739b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726692920788892141,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pau
se-543700,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 725cba61f20ce903bc2d727db0f6fdb6,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bffe6da49dd38ba92455bd869a9f7f9031ef436230cf2a9f7a0d79101a735c6,PodSandboxId:c55c5f664a6b6f62b13eac5e16fb4215d3babb63cc632fddeb56bdd0b48f42e6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726692920748214407,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-man
ager-pause-543700,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae9f7d68597963a7ac2bd6aef48092d4,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57c211488dfbc6dfdb474ae7fc32a15deeb7f4d5a1d19bd672b04875a60c5eb1,PodSandboxId:6408a7f020aa6b4dda2b84de91f8b112d11305f638ace0627ac1d1efd847006c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726692920661862670,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-543700,i
o.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 670c115f809258a1e39e14f1c9f6e518,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67554b57c9cc456c2dc6453a8b3cc56e3a300ae0d3ee59fc138b73a75e288bdc,PodSandboxId:fb9c9c7eece6d2691561f6df6e245203150d9ba796c6f15e1083d01d543cbbd6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726692920656567187,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-543700,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 82feebc7c19ce7c0a88861b7c89cdb64,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8ff4c298-3ee4-4241-b213-fc5750fc27f4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 20:55:59 pause-543700 crio[2298]: time="2024-09-18 20:55:59.405355620Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2cb65ac0-23d2-4bab-9551-b82ad0520fac name=/runtime.v1.RuntimeService/Version
	Sep 18 20:55:59 pause-543700 crio[2298]: time="2024-09-18 20:55:59.405450739Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2cb65ac0-23d2-4bab-9551-b82ad0520fac name=/runtime.v1.RuntimeService/Version
	Sep 18 20:55:59 pause-543700 crio[2298]: time="2024-09-18 20:55:59.406753673Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c83267de-f84c-4ef1-9070-00c1ce6e36d4 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 20:55:59 pause-543700 crio[2298]: time="2024-09-18 20:55:59.407252073Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726692959407222296,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c83267de-f84c-4ef1-9070-00c1ce6e36d4 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 20:55:59 pause-543700 crio[2298]: time="2024-09-18 20:55:59.407777926Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8b8a2896-4353-49c3-a3ec-0cd5dca80b1f name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 20:55:59 pause-543700 crio[2298]: time="2024-09-18 20:55:59.407860373Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8b8a2896-4353-49c3-a3ec-0cd5dca80b1f name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 20:55:59 pause-543700 crio[2298]: time="2024-09-18 20:55:59.408284644Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2d5a6a593e1fc59ac994439221a6778bc74417e3417c59d843c4763ac6e0a960,PodSandboxId:0430d57f1fea2f9d4054004182fa26b2ac6b9892be8113fa9e03dde7972f81b7,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726692939982123003,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8qv9r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fb13b99-1337-4d7a-bb0d-c1da599a389c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:898cc4ece731f92e0dd2201b2e41f746d0337ec9f659051b5eb8528f194bc91e,PodSandboxId:729e5b5637c2ededea2bb02d9c37a7f4ea67904a93b80b57f6a3f8348f5bf4f6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726692939984189095,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h544n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 68d913c9-0656-4875-9450-f80bf77bbfd7,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc9feef647787abc2cb1b5ab3c0e0b76b70089edf815386900484342ded37a04,PodSandboxId:c55c5f664a6b6f62b13eac5e16fb4215d3babb63cc632fddeb56bdd0b48f42e6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726692936156663889,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-543700,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: ae9f7d68597963a7ac2bd6aef48092d4,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8907210c9344d37955a4bf0618118949966882169d72f1706cc5c425f339b153,PodSandboxId:9160b6511023f8ed69c10cb2ad633b1d0181bc41143e217f2e7846c2ce9a739b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726692936140717855,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-543700,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7
25cba61f20ce903bc2d727db0f6fdb6,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee3ad2aede523ba2e7a7101f3f85754cc71496eac6f9b1721d29755ce9aaea45,PodSandboxId:6408a7f020aa6b4dda2b84de91f8b112d11305f638ace0627ac1d1efd847006c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726692936121315154,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-543700,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 670c115f809258a1e39
e14f1c9f6e518,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cda771d471c29dc3cd640a3adf8c05602a24ffc5b7fe4fd453d69e03194c145,PodSandboxId:fb9c9c7eece6d2691561f6df6e245203150d9ba796c6f15e1083d01d543cbbd6,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726692936113659912,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-543700,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82feebc7c19ce7c0a88861b7c89cdb64,},Annotations:map[string]string{io
.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e35c2076fdbf6ba34585d4dc1c830959915dd9b715305df7b4f1c7cb6e2320b4,PodSandboxId:0430d57f1fea2f9d4054004182fa26b2ac6b9892be8113fa9e03dde7972f81b7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726692921491241258,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8qv9r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fb13b99-1337-4d7a-bb0d-c1da599a389c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a
204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b4875ac82adf72eda0bef747b563eb403ecc70cb56d0275307e4e04b468565c,PodSandboxId:729e5b5637c2ededea2bb02d9c37a7f4ea67904a93b80b57f6a3f8348f5bf4f6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726692920818874326,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-h544n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68d913c9-0656-4875-9450-f80bf77bbfd7,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdc9fb94be86655beb10880c7d5faaad8c9c29dd88fa8972743f97c12bc85a24,PodSandboxId:9160b6511023f8ed69c10cb2ad633b1d0181bc41143e217f2e7846c2ce9a739b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726692920788892141,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pau
se-543700,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 725cba61f20ce903bc2d727db0f6fdb6,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bffe6da49dd38ba92455bd869a9f7f9031ef436230cf2a9f7a0d79101a735c6,PodSandboxId:c55c5f664a6b6f62b13eac5e16fb4215d3babb63cc632fddeb56bdd0b48f42e6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726692920748214407,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-man
ager-pause-543700,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae9f7d68597963a7ac2bd6aef48092d4,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57c211488dfbc6dfdb474ae7fc32a15deeb7f4d5a1d19bd672b04875a60c5eb1,PodSandboxId:6408a7f020aa6b4dda2b84de91f8b112d11305f638ace0627ac1d1efd847006c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726692920661862670,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-543700,i
o.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 670c115f809258a1e39e14f1c9f6e518,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67554b57c9cc456c2dc6453a8b3cc56e3a300ae0d3ee59fc138b73a75e288bdc,PodSandboxId:fb9c9c7eece6d2691561f6df6e245203150d9ba796c6f15e1083d01d543cbbd6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726692920656567187,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-543700,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 82feebc7c19ce7c0a88861b7c89cdb64,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8b8a2896-4353-49c3-a3ec-0cd5dca80b1f name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 20:55:59 pause-543700 crio[2298]: time="2024-09-18 20:55:59.454693591Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=71204af0-6017-486a-829f-e6335819aa9f name=/runtime.v1.RuntimeService/Version
	Sep 18 20:55:59 pause-543700 crio[2298]: time="2024-09-18 20:55:59.454788539Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=71204af0-6017-486a-829f-e6335819aa9f name=/runtime.v1.RuntimeService/Version
	Sep 18 20:55:59 pause-543700 crio[2298]: time="2024-09-18 20:55:59.455962510Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3a1d1371-12d9-4c58-9349-436659c00ef4 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 20:55:59 pause-543700 crio[2298]: time="2024-09-18 20:55:59.456525652Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726692959456500703,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3a1d1371-12d9-4c58-9349-436659c00ef4 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 20:55:59 pause-543700 crio[2298]: time="2024-09-18 20:55:59.456973327Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e017e511-3b39-4bb6-9b51-77298912b02c name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 20:55:59 pause-543700 crio[2298]: time="2024-09-18 20:55:59.457074219Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e017e511-3b39-4bb6-9b51-77298912b02c name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 20:55:59 pause-543700 crio[2298]: time="2024-09-18 20:55:59.457377377Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2d5a6a593e1fc59ac994439221a6778bc74417e3417c59d843c4763ac6e0a960,PodSandboxId:0430d57f1fea2f9d4054004182fa26b2ac6b9892be8113fa9e03dde7972f81b7,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726692939982123003,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8qv9r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fb13b99-1337-4d7a-bb0d-c1da599a389c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:898cc4ece731f92e0dd2201b2e41f746d0337ec9f659051b5eb8528f194bc91e,PodSandboxId:729e5b5637c2ededea2bb02d9c37a7f4ea67904a93b80b57f6a3f8348f5bf4f6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726692939984189095,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h544n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 68d913c9-0656-4875-9450-f80bf77bbfd7,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc9feef647787abc2cb1b5ab3c0e0b76b70089edf815386900484342ded37a04,PodSandboxId:c55c5f664a6b6f62b13eac5e16fb4215d3babb63cc632fddeb56bdd0b48f42e6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726692936156663889,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-543700,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: ae9f7d68597963a7ac2bd6aef48092d4,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8907210c9344d37955a4bf0618118949966882169d72f1706cc5c425f339b153,PodSandboxId:9160b6511023f8ed69c10cb2ad633b1d0181bc41143e217f2e7846c2ce9a739b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726692936140717855,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-543700,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7
25cba61f20ce903bc2d727db0f6fdb6,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee3ad2aede523ba2e7a7101f3f85754cc71496eac6f9b1721d29755ce9aaea45,PodSandboxId:6408a7f020aa6b4dda2b84de91f8b112d11305f638ace0627ac1d1efd847006c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726692936121315154,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-543700,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 670c115f809258a1e39
e14f1c9f6e518,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cda771d471c29dc3cd640a3adf8c05602a24ffc5b7fe4fd453d69e03194c145,PodSandboxId:fb9c9c7eece6d2691561f6df6e245203150d9ba796c6f15e1083d01d543cbbd6,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726692936113659912,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-543700,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82feebc7c19ce7c0a88861b7c89cdb64,},Annotations:map[string]string{io
.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e35c2076fdbf6ba34585d4dc1c830959915dd9b715305df7b4f1c7cb6e2320b4,PodSandboxId:0430d57f1fea2f9d4054004182fa26b2ac6b9892be8113fa9e03dde7972f81b7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726692921491241258,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8qv9r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fb13b99-1337-4d7a-bb0d-c1da599a389c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a
204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b4875ac82adf72eda0bef747b563eb403ecc70cb56d0275307e4e04b468565c,PodSandboxId:729e5b5637c2ededea2bb02d9c37a7f4ea67904a93b80b57f6a3f8348f5bf4f6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726692920818874326,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-h544n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68d913c9-0656-4875-9450-f80bf77bbfd7,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdc9fb94be86655beb10880c7d5faaad8c9c29dd88fa8972743f97c12bc85a24,PodSandboxId:9160b6511023f8ed69c10cb2ad633b1d0181bc41143e217f2e7846c2ce9a739b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726692920788892141,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pau
se-543700,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 725cba61f20ce903bc2d727db0f6fdb6,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bffe6da49dd38ba92455bd869a9f7f9031ef436230cf2a9f7a0d79101a735c6,PodSandboxId:c55c5f664a6b6f62b13eac5e16fb4215d3babb63cc632fddeb56bdd0b48f42e6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726692920748214407,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-man
ager-pause-543700,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae9f7d68597963a7ac2bd6aef48092d4,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57c211488dfbc6dfdb474ae7fc32a15deeb7f4d5a1d19bd672b04875a60c5eb1,PodSandboxId:6408a7f020aa6b4dda2b84de91f8b112d11305f638ace0627ac1d1efd847006c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726692920661862670,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-543700,i
o.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 670c115f809258a1e39e14f1c9f6e518,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67554b57c9cc456c2dc6453a8b3cc56e3a300ae0d3ee59fc138b73a75e288bdc,PodSandboxId:fb9c9c7eece6d2691561f6df6e245203150d9ba796c6f15e1083d01d543cbbd6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726692920656567187,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-543700,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 82feebc7c19ce7c0a88861b7c89cdb64,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e017e511-3b39-4bb6-9b51-77298912b02c name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	898cc4ece731f       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   19 seconds ago      Running             kube-proxy                2                   729e5b5637c2e       kube-proxy-h544n
	2d5a6a593e1fc       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   19 seconds ago      Running             coredns                   2                   0430d57f1fea2       coredns-7c65d6cfc9-8qv9r
	bc9feef647787       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   23 seconds ago      Running             kube-controller-manager   2                   c55c5f664a6b6       kube-controller-manager-pause-543700
	8907210c9344d       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   23 seconds ago      Running             kube-scheduler            2                   9160b6511023f       kube-scheduler-pause-543700
	ee3ad2aede523       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   23 seconds ago      Running             kube-apiserver            2                   6408a7f020aa6       kube-apiserver-pause-543700
	7cda771d471c2       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   23 seconds ago      Running             etcd                      2                   fb9c9c7eece6d       etcd-pause-543700
	e35c2076fdbf6       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   38 seconds ago      Exited              coredns                   1                   0430d57f1fea2       coredns-7c65d6cfc9-8qv9r
	5b4875ac82adf       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   38 seconds ago      Exited              kube-proxy                1                   729e5b5637c2e       kube-proxy-h544n
	fdc9fb94be866       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   38 seconds ago      Exited              kube-scheduler            1                   9160b6511023f       kube-scheduler-pause-543700
	1bffe6da49dd3       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   38 seconds ago      Exited              kube-controller-manager   1                   c55c5f664a6b6       kube-controller-manager-pause-543700
	57c211488dfbc       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   38 seconds ago      Exited              kube-apiserver            1                   6408a7f020aa6       kube-apiserver-pause-543700
	67554b57c9cc4       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   38 seconds ago      Exited              etcd                      1                   fb9c9c7eece6d       etcd-pause-543700
	
	
	==> coredns [2d5a6a593e1fc59ac994439221a6778bc74417e3417c59d843c4763ac6e0a960] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:45796 - 27964 "HINFO IN 110522732956144026.4696447689420735325. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.011056281s
	
	
	==> coredns [e35c2076fdbf6ba34585d4dc1c830959915dd9b715305df7b4f1c7cb6e2320b4] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:40595 - 64061 "HINFO IN 5458279004038995788.7331210818426585796. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.01697225s
	
	
	==> describe nodes <==
	Name:               pause-543700
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-543700
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=85073601a832bd4bbda5d11fa91feafff6ec6b91
	                    minikube.k8s.io/name=pause-543700
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_18T20_53_46_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 18 Sep 2024 20:53:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-543700
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 18 Sep 2024 20:55:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 18 Sep 2024 20:55:39 +0000   Wed, 18 Sep 2024 20:53:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 18 Sep 2024 20:55:39 +0000   Wed, 18 Sep 2024 20:53:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 18 Sep 2024 20:55:39 +0000   Wed, 18 Sep 2024 20:53:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 18 Sep 2024 20:55:39 +0000   Wed, 18 Sep 2024 20:53:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.184
	  Hostname:    pause-543700
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 1fd8658137a443899f27658e9977c486
	  System UUID:                1fd86581-37a4-4389-9f27-658e9977c486
	  Boot ID:                    fb68cf0a-e7a2-43f2-aadd-50d9055f0a19
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-8qv9r                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     2m9s
	  kube-system                 etcd-pause-543700                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         2m14s
	  kube-system                 kube-apiserver-pause-543700             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m14s
	  kube-system                 kube-controller-manager-pause-543700    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m14s
	  kube-system                 kube-proxy-h544n                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m9s
	  kube-system                 kube-scheduler-pause-543700             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m14s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m8s                   kube-proxy       
	  Normal  Starting                 19s                    kube-proxy       
	  Normal  Starting                 35s                    kube-proxy       
	  Normal  NodeHasNoDiskPressure    2m20s (x8 over 2m20s)  kubelet          Node pause-543700 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m20s (x8 over 2m20s)  kubelet          Node pause-543700 status is now: NodeHasSufficientMemory
	  Normal  Starting                 2m20s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     2m20s (x7 over 2m20s)  kubelet          Node pause-543700 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m20s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     2m14s                  kubelet          Node pause-543700 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m14s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m14s                  kubelet          Node pause-543700 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m14s                  kubelet          Node pause-543700 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 2m14s                  kubelet          Starting kubelet.
	  Normal  NodeReady                2m13s                  kubelet          Node pause-543700 status is now: NodeReady
	  Normal  RegisteredNode           2m10s                  node-controller  Node pause-543700 event: Registered Node pause-543700 in Controller
	  Normal  RegisteredNode           32s                    node-controller  Node pause-543700 event: Registered Node pause-543700 in Controller
	  Normal  NodeHasNoDiskPressure    24s (x8 over 24s)      kubelet          Node pause-543700 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  24s (x8 over 24s)      kubelet          Node pause-543700 status is now: NodeHasSufficientMemory
	  Normal  Starting                 24s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     24s (x7 over 24s)      kubelet          Node pause-543700 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  24s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           17s                    node-controller  Node pause-543700 event: Registered Node pause-543700 in Controller
	
	
	==> dmesg <==
	[  +0.063538] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.067388] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.184044] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.163492] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.286934] systemd-fstab-generator[651]: Ignoring "noauto" option for root device
	[  +4.257421] systemd-fstab-generator[742]: Ignoring "noauto" option for root device
	[  +0.065641] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.081569] systemd-fstab-generator[878]: Ignoring "noauto" option for root device
	[  +1.015647] kauditd_printk_skb: 57 callbacks suppressed
	[  +5.551264] systemd-fstab-generator[1212]: Ignoring "noauto" option for root device
	[  +0.086860] kauditd_printk_skb: 30 callbacks suppressed
	[  +4.805801] systemd-fstab-generator[1327]: Ignoring "noauto" option for root device
	[  +0.545384] kauditd_printk_skb: 49 callbacks suppressed
	[Sep18 20:54] kauditd_printk_skb: 66 callbacks suppressed
	[Sep18 20:55] systemd-fstab-generator[2225]: Ignoring "noauto" option for root device
	[  +0.145124] systemd-fstab-generator[2237]: Ignoring "noauto" option for root device
	[  +0.193353] systemd-fstab-generator[2251]: Ignoring "noauto" option for root device
	[  +0.144261] systemd-fstab-generator[2263]: Ignoring "noauto" option for root device
	[  +0.317012] systemd-fstab-generator[2291]: Ignoring "noauto" option for root device
	[  +3.536186] systemd-fstab-generator[2831]: Ignoring "noauto" option for root device
	[  +3.341904] kauditd_printk_skb: 195 callbacks suppressed
	[ +10.853405] systemd-fstab-generator[3272]: Ignoring "noauto" option for root device
	[  +4.658248] kauditd_printk_skb: 44 callbacks suppressed
	[ +15.788544] systemd-fstab-generator[3725]: Ignoring "noauto" option for root device
	[  +0.099194] kauditd_printk_skb: 6 callbacks suppressed
	
	
	==> etcd [67554b57c9cc456c2dc6453a8b3cc56e3a300ae0d3ee59fc138b73a75e288bdc] <==
	{"level":"info","ts":"2024-09-18T20:55:22.745261Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"989272a6374482ea became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-18T20:55:22.745314Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"989272a6374482ea received MsgPreVoteResp from 989272a6374482ea at term 2"}
	{"level":"info","ts":"2024-09-18T20:55:22.745356Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"989272a6374482ea became candidate at term 3"}
	{"level":"info","ts":"2024-09-18T20:55:22.745381Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"989272a6374482ea received MsgVoteResp from 989272a6374482ea at term 3"}
	{"level":"info","ts":"2024-09-18T20:55:22.745409Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"989272a6374482ea became leader at term 3"}
	{"level":"info","ts":"2024-09-18T20:55:22.745435Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 989272a6374482ea elected leader 989272a6374482ea at term 3"}
	{"level":"info","ts":"2024-09-18T20:55:22.750463Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-18T20:55:22.751574Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-18T20:55:22.754937Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.184:2379"}
	{"level":"info","ts":"2024-09-18T20:55:22.756723Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-18T20:55:22.759789Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-18T20:55:22.762654Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-18T20:55:22.750424Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"989272a6374482ea","local-member-attributes":"{Name:pause-543700 ClientURLs:[https://192.168.39.184:2379]}","request-path":"/0/members/989272a6374482ea/attributes","cluster-id":"e6ef3f762f24aa4a","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-18T20:55:22.767040Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-18T20:55:22.767082Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-18T20:55:33.678768Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-18T20:55:33.678837Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"pause-543700","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.184:2380"],"advertise-client-urls":["https://192.168.39.184:2379"]}
	{"level":"warn","ts":"2024-09-18T20:55:33.678963Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-18T20:55:33.678989Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-18T20:55:33.680569Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.184:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-18T20:55:33.680617Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.184:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-18T20:55:33.682108Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"989272a6374482ea","current-leader-member-id":"989272a6374482ea"}
	{"level":"info","ts":"2024-09-18T20:55:33.686041Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.184:2380"}
	{"level":"info","ts":"2024-09-18T20:55:33.686152Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.184:2380"}
	{"level":"info","ts":"2024-09-18T20:55:33.686176Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"pause-543700","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.184:2380"],"advertise-client-urls":["https://192.168.39.184:2379"]}
	
	
	==> etcd [7cda771d471c29dc3cd640a3adf8c05602a24ffc5b7fe4fd453d69e03194c145] <==
	{"level":"info","ts":"2024-09-18T20:55:36.535352Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"989272a6374482ea switched to configuration voters=(10993975698582176490)"}
	{"level":"info","ts":"2024-09-18T20:55:36.535501Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"e6ef3f762f24aa4a","local-member-id":"989272a6374482ea","added-peer-id":"989272a6374482ea","added-peer-peer-urls":["https://192.168.39.184:2380"]}
	{"level":"info","ts":"2024-09-18T20:55:36.535634Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"e6ef3f762f24aa4a","local-member-id":"989272a6374482ea","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-18T20:55:36.535685Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-18T20:55:36.544742Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-18T20:55:36.545394Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.184:2380"}
	{"level":"info","ts":"2024-09-18T20:55:36.545521Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.184:2380"}
	{"level":"info","ts":"2024-09-18T20:55:36.559632Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"989272a6374482ea","initial-advertise-peer-urls":["https://192.168.39.184:2380"],"listen-peer-urls":["https://192.168.39.184:2380"],"advertise-client-urls":["https://192.168.39.184:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.184:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-18T20:55:36.559712Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-18T20:55:37.687693Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"989272a6374482ea is starting a new election at term 3"}
	{"level":"info","ts":"2024-09-18T20:55:37.687754Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"989272a6374482ea became pre-candidate at term 3"}
	{"level":"info","ts":"2024-09-18T20:55:37.687771Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"989272a6374482ea received MsgPreVoteResp from 989272a6374482ea at term 3"}
	{"level":"info","ts":"2024-09-18T20:55:37.687783Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"989272a6374482ea became candidate at term 4"}
	{"level":"info","ts":"2024-09-18T20:55:37.687789Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"989272a6374482ea received MsgVoteResp from 989272a6374482ea at term 4"}
	{"level":"info","ts":"2024-09-18T20:55:37.687798Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"989272a6374482ea became leader at term 4"}
	{"level":"info","ts":"2024-09-18T20:55:37.687823Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 989272a6374482ea elected leader 989272a6374482ea at term 4"}
	{"level":"info","ts":"2024-09-18T20:55:37.695060Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"989272a6374482ea","local-member-attributes":"{Name:pause-543700 ClientURLs:[https://192.168.39.184:2379]}","request-path":"/0/members/989272a6374482ea/attributes","cluster-id":"e6ef3f762f24aa4a","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-18T20:55:37.695158Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-18T20:55:37.695838Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-18T20:55:37.697132Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-18T20:55:37.698422Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-18T20:55:37.699611Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-18T20:55:37.699940Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-18T20:55:37.699967Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-18T20:55:37.701333Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.184:2379"}
	
	
	==> kernel <==
	 20:55:59 up 2 min,  0 users,  load average: 0.82, 0.29, 0.11
	Linux pause-543700 5.10.207 #1 SMP Mon Sep 16 15:00:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [57c211488dfbc6dfdb474ae7fc32a15deeb7f4d5a1d19bd672b04875a60c5eb1] <==
	I0918 20:55:32.561289       1 storage_flowcontrol.go:186] APF bootstrap ensurer is exiting
	I0918 20:55:32.561299       1 gc_controller.go:91] Shutting down apiserver lease garbage collector
	I0918 20:55:32.561306       1 system_namespaces_controller.go:76] Shutting down system namespaces controller
	I0918 20:55:32.561321       1 crdregistration_controller.go:145] Shutting down crd-autoregister controller
	I0918 20:55:32.561353       1 customresource_discovery_controller.go:328] Shutting down DiscoveryController
	I0918 20:55:32.561648       1 dynamic_cafile_content.go:174] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0918 20:55:32.561757       1 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0918 20:55:32.561933       1 controller.go:157] Shutting down quota evaluator
	I0918 20:55:32.561957       1 controller.go:176] quota evaluator worker shutdown
	I0918 20:55:32.561160       1 establishing_controller.go:92] Shutting down EstablishingController
	I0918 20:55:32.561165       1 naming_controller.go:305] Shutting down NamingConditionController
	I0918 20:55:32.562416       1 dynamic_serving_content.go:149] "Shutting down controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0918 20:55:32.562507       1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting"
	I0918 20:55:32.562582       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0918 20:55:32.562833       1 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0918 20:55:32.563084       1 dynamic_serving_content.go:149] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0918 20:55:32.561264       1 controller.go:170] Shutting down OpenAPI controller
	I0918 20:55:32.562487       1 controller.go:86] Shutting down OpenAPI V3 AggregationController
	I0918 20:55:32.562493       1 controller.go:84] Shutting down OpenAPI AggregationController
	I0918 20:55:32.562568       1 secure_serving.go:258] Stopped listening on [::]:8443
	I0918 20:55:32.562821       1 controller.go:176] quota evaluator worker shutdown
	I0918 20:55:32.562830       1 controller.go:176] quota evaluator worker shutdown
	I0918 20:55:32.562836       1 controller.go:176] quota evaluator worker shutdown
	I0918 20:55:32.562839       1 controller.go:176] quota evaluator worker shutdown
	I0918 20:55:32.561132       1 autoregister_controller.go:168] Shutting down autoregister controller
	
	
	==> kube-apiserver [ee3ad2aede523ba2e7a7101f3f85754cc71496eac6f9b1721d29755ce9aaea45] <==
	I0918 20:55:39.203460       1 policy_source.go:224] refreshing policies
	I0918 20:55:39.205508       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0918 20:55:39.221461       1 shared_informer.go:320] Caches are synced for configmaps
	I0918 20:55:39.221554       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0918 20:55:39.221954       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0918 20:55:39.222458       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0918 20:55:39.223782       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0918 20:55:39.241807       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0918 20:55:39.222344       1 aggregator.go:171] initial CRD sync complete...
	I0918 20:55:39.244454       1 autoregister_controller.go:144] Starting autoregister controller
	I0918 20:55:39.244464       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0918 20:55:39.244472       1 cache.go:39] Caches are synced for autoregister controller
	I0918 20:55:39.245240       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0918 20:55:39.258513       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0918 20:55:39.282989       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0918 20:55:39.285740       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0918 20:55:40.038943       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0918 20:55:40.574761       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.184]
	I0918 20:55:40.576606       1 controller.go:615] quota admission added evaluator for: endpoints
	I0918 20:55:40.583469       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0918 20:55:40.954751       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0918 20:55:40.979420       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0918 20:55:41.047934       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0918 20:55:41.101501       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0918 20:55:41.115767       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-controller-manager [1bffe6da49dd38ba92455bd869a9f7f9031ef436230cf2a9f7a0d79101a735c6] <==
	I0918 20:55:27.565633       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0918 20:55:27.565788       1 shared_informer.go:320] Caches are synced for HPA
	I0918 20:55:27.565746       1 shared_informer.go:320] Caches are synced for job
	I0918 20:55:27.567118       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0918 20:55:27.567179       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0918 20:55:27.567192       1 shared_informer.go:320] Caches are synced for ephemeral
	I0918 20:55:27.567209       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0918 20:55:27.567219       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0918 20:55:27.568418       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0918 20:55:27.574348       1 shared_informer.go:320] Caches are synced for PVC protection
	I0918 20:55:27.576731       1 shared_informer.go:320] Caches are synced for daemon sets
	I0918 20:55:27.617045       1 shared_informer.go:320] Caches are synced for disruption
	I0918 20:55:27.624864       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0918 20:55:27.627335       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0918 20:55:27.635266       1 shared_informer.go:320] Caches are synced for resource quota
	I0918 20:55:27.665915       1 shared_informer.go:320] Caches are synced for deployment
	I0918 20:55:27.675846       1 shared_informer.go:320] Caches are synced for resource quota
	I0918 20:55:27.713316       1 shared_informer.go:320] Caches are synced for endpoint
	I0918 20:55:27.714638       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0918 20:55:27.733750       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="108.562325ms"
	I0918 20:55:27.733839       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="41.198µs"
	I0918 20:55:27.772228       1 shared_informer.go:320] Caches are synced for attach detach
	I0918 20:55:28.206366       1 shared_informer.go:320] Caches are synced for garbage collector
	I0918 20:55:28.206544       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0918 20:55:28.217358       1 shared_informer.go:320] Caches are synced for garbage collector
	
	
	==> kube-controller-manager [bc9feef647787abc2cb1b5ab3c0e0b76b70089edf815386900484342ded37a04] <==
	I0918 20:55:42.566085       1 shared_informer.go:320] Caches are synced for disruption
	I0918 20:55:42.592127       1 shared_informer.go:320] Caches are synced for ephemeral
	I0918 20:55:42.595686       1 shared_informer.go:320] Caches are synced for GC
	I0918 20:55:42.614553       1 shared_informer.go:320] Caches are synced for persistent volume
	I0918 20:55:42.635527       1 shared_informer.go:320] Caches are synced for deployment
	I0918 20:55:42.644356       1 shared_informer.go:320] Caches are synced for HPA
	I0918 20:55:42.644524       1 shared_informer.go:320] Caches are synced for job
	I0918 20:55:42.645618       1 shared_informer.go:320] Caches are synced for attach detach
	I0918 20:55:42.645912       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0918 20:55:42.645984       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0918 20:55:42.646605       1 shared_informer.go:320] Caches are synced for PVC protection
	I0918 20:55:42.649102       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0918 20:55:42.652416       1 shared_informer.go:320] Caches are synced for endpoint
	I0918 20:55:42.670766       1 shared_informer.go:320] Caches are synced for daemon sets
	I0918 20:55:42.697036       1 shared_informer.go:320] Caches are synced for taint
	I0918 20:55:42.697194       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0918 20:55:42.697283       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-543700"
	I0918 20:55:42.697333       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0918 20:55:42.707844       1 shared_informer.go:320] Caches are synced for resource quota
	I0918 20:55:42.722575       1 shared_informer.go:320] Caches are synced for resource quota
	I0918 20:55:42.761822       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="115.678232ms"
	I0918 20:55:42.762505       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="219.951µs"
	I0918 20:55:43.132391       1 shared_informer.go:320] Caches are synced for garbage collector
	I0918 20:55:43.198798       1 shared_informer.go:320] Caches are synced for garbage collector
	I0918 20:55:43.198896       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [5b4875ac82adf72eda0bef747b563eb403ecc70cb56d0275307e4e04b468565c] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0918 20:55:22.697305       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0918 20:55:24.301370       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.184"]
	E0918 20:55:24.301890       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0918 20:55:24.362842       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0918 20:55:24.362887       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0918 20:55:24.362915       1 server_linux.go:169] "Using iptables Proxier"
	I0918 20:55:24.365573       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0918 20:55:24.365874       1 server.go:483] "Version info" version="v1.31.1"
	I0918 20:55:24.365898       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0918 20:55:24.367519       1 config.go:199] "Starting service config controller"
	I0918 20:55:24.367572       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0918 20:55:24.367605       1 config.go:105] "Starting endpoint slice config controller"
	I0918 20:55:24.367625       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0918 20:55:24.368321       1 config.go:328] "Starting node config controller"
	I0918 20:55:24.368351       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0918 20:55:24.469124       1 shared_informer.go:320] Caches are synced for node config
	I0918 20:55:24.469130       1 shared_informer.go:320] Caches are synced for service config
	I0918 20:55:24.469146       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [898cc4ece731f92e0dd2201b2e41f746d0337ec9f659051b5eb8528f194bc91e] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0918 20:55:40.305279       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0918 20:55:40.326530       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.184"]
	E0918 20:55:40.326716       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0918 20:55:40.377904       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0918 20:55:40.378052       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0918 20:55:40.378118       1 server_linux.go:169] "Using iptables Proxier"
	I0918 20:55:40.381891       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0918 20:55:40.382282       1 server.go:483] "Version info" version="v1.31.1"
	I0918 20:55:40.382347       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0918 20:55:40.383640       1 config.go:199] "Starting service config controller"
	I0918 20:55:40.385386       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0918 20:55:40.385519       1 config.go:105] "Starting endpoint slice config controller"
	I0918 20:55:40.385582       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0918 20:55:40.386352       1 config.go:328] "Starting node config controller"
	I0918 20:55:40.388073       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0918 20:55:40.485794       1 shared_informer.go:320] Caches are synced for service config
	I0918 20:55:40.486116       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0918 20:55:40.488301       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [8907210c9344d37955a4bf0618118949966882169d72f1706cc5c425f339b153] <==
	I0918 20:55:37.266672       1 serving.go:386] Generated self-signed cert in-memory
	W0918 20:55:39.131845       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0918 20:55:39.133156       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0918 20:55:39.133966       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0918 20:55:39.134973       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0918 20:55:39.204704       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0918 20:55:39.208409       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0918 20:55:39.219171       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0918 20:55:39.220284       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0918 20:55:39.220329       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0918 20:55:39.227713       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0918 20:55:39.328881       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [fdc9fb94be86655beb10880c7d5faaad8c9c29dd88fa8972743f97c12bc85a24] <==
	I0918 20:55:22.871517       1 serving.go:386] Generated self-signed cert in-memory
	W0918 20:55:24.168732       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0918 20:55:24.170157       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0918 20:55:24.170193       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0918 20:55:24.170249       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0918 20:55:24.294165       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0918 20:55:24.294207       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0918 20:55:24.304463       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0918 20:55:24.304568       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0918 20:55:24.305845       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0918 20:55:24.311282       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0918 20:55:24.405535       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0918 20:55:32.349080       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0918 20:55:32.349350       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	E0918 20:55:32.349535       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 18 20:55:36 pause-543700 kubelet[3279]: E0918 20:55:36.034973    3279 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.184:8443: connect: connection refused" node="pause-543700"
	Sep 18 20:55:36 pause-543700 kubelet[3279]: I0918 20:55:36.094342    3279 scope.go:117] "RemoveContainer" containerID="67554b57c9cc456c2dc6453a8b3cc56e3a300ae0d3ee59fc138b73a75e288bdc"
	Sep 18 20:55:36 pause-543700 kubelet[3279]: I0918 20:55:36.094730    3279 scope.go:117] "RemoveContainer" containerID="57c211488dfbc6dfdb474ae7fc32a15deeb7f4d5a1d19bd672b04875a60c5eb1"
	Sep 18 20:55:36 pause-543700 kubelet[3279]: I0918 20:55:36.095814    3279 scope.go:117] "RemoveContainer" containerID="1bffe6da49dd38ba92455bd869a9f7f9031ef436230cf2a9f7a0d79101a735c6"
	Sep 18 20:55:36 pause-543700 kubelet[3279]: I0918 20:55:36.096670    3279 scope.go:117] "RemoveContainer" containerID="fdc9fb94be86655beb10880c7d5faaad8c9c29dd88fa8972743f97c12bc85a24"
	Sep 18 20:55:36 pause-543700 kubelet[3279]: E0918 20:55:36.264937    3279 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-543700?timeout=10s\": dial tcp 192.168.39.184:8443: connect: connection refused" interval="800ms"
	Sep 18 20:55:36 pause-543700 kubelet[3279]: I0918 20:55:36.436408    3279 kubelet_node_status.go:72] "Attempting to register node" node="pause-543700"
	Sep 18 20:55:36 pause-543700 kubelet[3279]: E0918 20:55:36.437214    3279 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.184:8443: connect: connection refused" node="pause-543700"
	Sep 18 20:55:37 pause-543700 kubelet[3279]: I0918 20:55:37.239537    3279 kubelet_node_status.go:72] "Attempting to register node" node="pause-543700"
	Sep 18 20:55:39 pause-543700 kubelet[3279]: I0918 20:55:39.278582    3279 kubelet_node_status.go:111] "Node was previously registered" node="pause-543700"
	Sep 18 20:55:39 pause-543700 kubelet[3279]: I0918 20:55:39.279297    3279 kubelet_node_status.go:75] "Successfully registered node" node="pause-543700"
	Sep 18 20:55:39 pause-543700 kubelet[3279]: I0918 20:55:39.279495    3279 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 18 20:55:39 pause-543700 kubelet[3279]: I0918 20:55:39.281584    3279 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 18 20:55:39 pause-543700 kubelet[3279]: I0918 20:55:39.639837    3279 apiserver.go:52] "Watching apiserver"
	Sep 18 20:55:39 pause-543700 kubelet[3279]: I0918 20:55:39.657483    3279 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Sep 18 20:55:39 pause-543700 kubelet[3279]: I0918 20:55:39.756345    3279 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/68d913c9-0656-4875-9450-f80bf77bbfd7-xtables-lock\") pod \"kube-proxy-h544n\" (UID: \"68d913c9-0656-4875-9450-f80bf77bbfd7\") " pod="kube-system/kube-proxy-h544n"
	Sep 18 20:55:39 pause-543700 kubelet[3279]: I0918 20:55:39.756387    3279 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/68d913c9-0656-4875-9450-f80bf77bbfd7-lib-modules\") pod \"kube-proxy-h544n\" (UID: \"68d913c9-0656-4875-9450-f80bf77bbfd7\") " pod="kube-system/kube-proxy-h544n"
	Sep 18 20:55:39 pause-543700 kubelet[3279]: E0918 20:55:39.831874    3279 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-pause-543700\" already exists" pod="kube-system/kube-apiserver-pause-543700"
	Sep 18 20:55:39 pause-543700 kubelet[3279]: E0918 20:55:39.839649    3279 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"etcd-pause-543700\" already exists" pod="kube-system/etcd-pause-543700"
	Sep 18 20:55:39 pause-543700 kubelet[3279]: I0918 20:55:39.947123    3279 scope.go:117] "RemoveContainer" containerID="e35c2076fdbf6ba34585d4dc1c830959915dd9b715305df7b4f1c7cb6e2320b4"
	Sep 18 20:55:39 pause-543700 kubelet[3279]: I0918 20:55:39.947442    3279 scope.go:117] "RemoveContainer" containerID="5b4875ac82adf72eda0bef747b563eb403ecc70cb56d0275307e4e04b468565c"
	Sep 18 20:55:45 pause-543700 kubelet[3279]: E0918 20:55:45.734917    3279 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726692945734431853,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 20:55:45 pause-543700 kubelet[3279]: E0918 20:55:45.734937    3279 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726692945734431853,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 20:55:55 pause-543700 kubelet[3279]: E0918 20:55:55.736703    3279 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726692955736381361,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 20:55:55 pause-543700 kubelet[3279]: E0918 20:55:55.736730    3279 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726692955736381361,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-543700 -n pause-543700
helpers_test.go:261: (dbg) Run:  kubectl --context pause-543700 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-543700 -n pause-543700
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-543700 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-543700 logs -n 25: (1.33790918s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-543581 sudo                                | cilium-543581             | jenkins | v1.34.0 | 18 Sep 24 20:54 UTC |                     |
	|         | systemctl cat cri-docker                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-543581 sudo cat                            | cilium-543581             | jenkins | v1.34.0 | 18 Sep 24 20:54 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p cilium-543581 sudo cat                            | cilium-543581             | jenkins | v1.34.0 | 18 Sep 24 20:54 UTC |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p cilium-543581 sudo                                | cilium-543581             | jenkins | v1.34.0 | 18 Sep 24 20:54 UTC |                     |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p cilium-543581 sudo                                | cilium-543581             | jenkins | v1.34.0 | 18 Sep 24 20:54 UTC |                     |
	|         | systemctl status containerd                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-543581 sudo                                | cilium-543581             | jenkins | v1.34.0 | 18 Sep 24 20:54 UTC |                     |
	|         | systemctl cat containerd                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-543581 sudo cat                            | cilium-543581             | jenkins | v1.34.0 | 18 Sep 24 20:54 UTC |                     |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p cilium-543581 sudo cat                            | cilium-543581             | jenkins | v1.34.0 | 18 Sep 24 20:54 UTC |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p cilium-543581 sudo                                | cilium-543581             | jenkins | v1.34.0 | 18 Sep 24 20:54 UTC |                     |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| ssh     | -p cilium-543581 sudo                                | cilium-543581             | jenkins | v1.34.0 | 18 Sep 24 20:54 UTC |                     |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p cilium-543581 sudo                                | cilium-543581             | jenkins | v1.34.0 | 18 Sep 24 20:54 UTC |                     |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p cilium-543581 sudo find                           | cilium-543581             | jenkins | v1.34.0 | 18 Sep 24 20:54 UTC |                     |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-543581 sudo crio                           | cilium-543581             | jenkins | v1.34.0 | 18 Sep 24 20:54 UTC |                     |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p cilium-543581                                     | cilium-543581             | jenkins | v1.34.0 | 18 Sep 24 20:54 UTC | 18 Sep 24 20:54 UTC |
	| start   | -p cert-options-347585                               | cert-options-347585       | jenkins | v1.34.0 | 18 Sep 24 20:54 UTC | 18 Sep 24 20:55 UTC |
	|         | --memory=2048                                        |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1                            |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15                        |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost                          |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com                     |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                                |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| start   | -p old-k8s-version-740194                            | old-k8s-version-740194    | jenkins | v1.34.0 | 18 Sep 24 20:54 UTC |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                           |         |         |                     |                     |
	|         | --kvm-network=default                                |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                        |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                              |                           |         |         |                     |                     |
	|         | --keep-context=false                                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                         |                           |         |         |                     |                     |
	| start   | -p pause-543700                                      | pause-543700              | jenkins | v1.34.0 | 18 Sep 24 20:54 UTC | 18 Sep 24 20:55 UTC |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-878094                         | kubernetes-upgrade-878094 | jenkins | v1.34.0 | 18 Sep 24 20:54 UTC | 18 Sep 24 20:54 UTC |
	| start   | -p kubernetes-upgrade-878094                         | kubernetes-upgrade-878094 | jenkins | v1.34.0 | 18 Sep 24 20:54 UTC | 18 Sep 24 20:55 UTC |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| ssh     | cert-options-347585 ssh                              | cert-options-347585       | jenkins | v1.34.0 | 18 Sep 24 20:55 UTC | 18 Sep 24 20:55 UTC |
	|         | openssl x509 -text -noout -in                        |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                |                           |         |         |                     |                     |
	| ssh     | -p cert-options-347585 -- sudo                       | cert-options-347585       | jenkins | v1.34.0 | 18 Sep 24 20:55 UTC | 18 Sep 24 20:55 UTC |
	|         | cat /etc/kubernetes/admin.conf                       |                           |         |         |                     |                     |
	| delete  | -p cert-options-347585                               | cert-options-347585       | jenkins | v1.34.0 | 18 Sep 24 20:55 UTC | 18 Sep 24 20:55 UTC |
	| start   | -p no-preload-331658                                 | no-preload-331658         | jenkins | v1.34.0 | 18 Sep 24 20:55 UTC |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                           |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                        |                           |         |         |                     |                     |
	|         |  --container-runtime=crio                            |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                         |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-878094                         | kubernetes-upgrade-878094 | jenkins | v1.34.0 | 18 Sep 24 20:55 UTC |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                         |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-878094                         | kubernetes-upgrade-878094 | jenkins | v1.34.0 | 18 Sep 24 20:55 UTC |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/18 20:55:58
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0918 20:55:58.271646   58925 out.go:345] Setting OutFile to fd 1 ...
	I0918 20:55:58.271782   58925 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 20:55:58.271790   58925 out.go:358] Setting ErrFile to fd 2...
	I0918 20:55:58.271795   58925 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 20:55:58.272006   58925 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19667-7671/.minikube/bin
	I0918 20:55:58.272667   58925 out.go:352] Setting JSON to false
	I0918 20:55:58.273767   58925 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5902,"bootTime":1726687056,"procs":220,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0918 20:55:58.273902   58925 start.go:139] virtualization: kvm guest
	I0918 20:55:58.276204   58925 out.go:177] * [kubernetes-upgrade-878094] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0918 20:55:58.277808   58925 out.go:177]   - MINIKUBE_LOCATION=19667
	I0918 20:55:58.277837   58925 notify.go:220] Checking for updates...
	I0918 20:55:58.280496   58925 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 20:55:58.281756   58925 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19667-7671/kubeconfig
	I0918 20:55:58.282899   58925 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19667-7671/.minikube
	I0918 20:55:58.284154   58925 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0918 20:55:58.285362   58925 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0918 20:55:58.286906   58925 config.go:182] Loaded profile config "kubernetes-upgrade-878094": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0918 20:55:58.287350   58925 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 20:55:58.287425   58925 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 20:55:58.303557   58925 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46393
	I0918 20:55:58.303969   58925 main.go:141] libmachine: () Calling .GetVersion
	I0918 20:55:58.304552   58925 main.go:141] libmachine: Using API Version  1
	I0918 20:55:58.304572   58925 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 20:55:58.304997   58925 main.go:141] libmachine: () Calling .GetMachineName
	I0918 20:55:58.305216   58925 main.go:141] libmachine: (kubernetes-upgrade-878094) Calling .DriverName
	I0918 20:55:58.305443   58925 driver.go:394] Setting default libvirt URI to qemu:///system
	I0918 20:55:58.305725   58925 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 20:55:58.305766   58925 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 20:55:58.321657   58925 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40315
	I0918 20:55:58.322207   58925 main.go:141] libmachine: () Calling .GetVersion
	I0918 20:55:58.322760   58925 main.go:141] libmachine: Using API Version  1
	I0918 20:55:58.322785   58925 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 20:55:58.323142   58925 main.go:141] libmachine: () Calling .GetMachineName
	I0918 20:55:58.323360   58925 main.go:141] libmachine: (kubernetes-upgrade-878094) Calling .DriverName
	I0918 20:55:58.364072   58925 out.go:177] * Using the kvm2 driver based on existing profile
	I0918 20:55:58.365846   58925 start.go:297] selected driver: kvm2
	I0918 20:55:58.365873   58925 start.go:901] validating driver "kvm2" against &{Name:kubernetes-upgrade-878094 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.31.1 ClusterName:kubernetes-upgrade-878094 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.80 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 20:55:58.366039   58925 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 20:55:58.367102   58925 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 20:55:58.367217   58925 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19667-7671/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0918 20:55:58.386643   58925 install.go:137] /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I0918 20:55:58.387101   58925 cni.go:84] Creating CNI manager for ""
	I0918 20:55:58.387153   58925 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0918 20:55:58.387189   58925 start.go:340] cluster config:
	{Name:kubernetes-upgrade-878094 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubernetes-upgrade-878094 Namespace:d
efault APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.80 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 20:55:58.387297   58925 iso.go:125] acquiring lock: {Name:mkbcf4d84f3091567a39802cd5e773cb10b8c545 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 20:55:58.389323   58925 out.go:177] * Starting "kubernetes-upgrade-878094" primary control-plane node in "kubernetes-upgrade-878094" cluster
	I0918 20:55:55.978929   57703 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0918 20:55:55.994876   57703 node_ready.go:35] waiting up to 6m0s for node "pause-543700" to be "Ready" ...
	I0918 20:55:56.000605   57703 node_ready.go:49] node "pause-543700" has status "Ready":"True"
	I0918 20:55:56.000645   57703 node_ready.go:38] duration metric: took 5.733906ms for node "pause-543700" to be "Ready" ...
	I0918 20:55:56.000659   57703 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0918 20:55:56.009140   57703 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-8qv9r" in "kube-system" namespace to be "Ready" ...
	I0918 20:55:56.020400   57703 pod_ready.go:93] pod "coredns-7c65d6cfc9-8qv9r" in "kube-system" namespace has status "Ready":"True"
	I0918 20:55:56.020435   57703 pod_ready.go:82] duration metric: took 11.258341ms for pod "coredns-7c65d6cfc9-8qv9r" in "kube-system" namespace to be "Ready" ...
	I0918 20:55:56.020445   57703 pod_ready.go:79] waiting up to 6m0s for pod "etcd-pause-543700" in "kube-system" namespace to be "Ready" ...
	I0918 20:55:56.178594   57703 pod_ready.go:93] pod "etcd-pause-543700" in "kube-system" namespace has status "Ready":"True"
	I0918 20:55:56.178618   57703 pod_ready.go:82] duration metric: took 158.167599ms for pod "etcd-pause-543700" in "kube-system" namespace to be "Ready" ...
	I0918 20:55:56.178628   57703 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-pause-543700" in "kube-system" namespace to be "Ready" ...
	I0918 20:55:56.576937   57703 pod_ready.go:93] pod "kube-apiserver-pause-543700" in "kube-system" namespace has status "Ready":"True"
	I0918 20:55:56.576963   57703 pod_ready.go:82] duration metric: took 398.329476ms for pod "kube-apiserver-pause-543700" in "kube-system" namespace to be "Ready" ...
	I0918 20:55:56.576972   57703 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-pause-543700" in "kube-system" namespace to be "Ready" ...
	I0918 20:55:56.978465   57703 pod_ready.go:93] pod "kube-controller-manager-pause-543700" in "kube-system" namespace has status "Ready":"True"
	I0918 20:55:56.978489   57703 pod_ready.go:82] duration metric: took 401.510515ms for pod "kube-controller-manager-pause-543700" in "kube-system" namespace to be "Ready" ...
	I0918 20:55:56.978499   57703 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-h544n" in "kube-system" namespace to be "Ready" ...
	I0918 20:55:57.377906   57703 pod_ready.go:93] pod "kube-proxy-h544n" in "kube-system" namespace has status "Ready":"True"
	I0918 20:55:57.377943   57703 pod_ready.go:82] duration metric: took 399.436541ms for pod "kube-proxy-h544n" in "kube-system" namespace to be "Ready" ...
	I0918 20:55:57.377959   57703 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-pause-543700" in "kube-system" namespace to be "Ready" ...
	I0918 20:55:57.778344   57703 pod_ready.go:93] pod "kube-scheduler-pause-543700" in "kube-system" namespace has status "Ready":"True"
	I0918 20:55:57.778382   57703 pod_ready.go:82] duration metric: took 400.414919ms for pod "kube-scheduler-pause-543700" in "kube-system" namespace to be "Ready" ...
	I0918 20:55:57.778395   57703 pod_ready.go:39] duration metric: took 1.777723336s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0918 20:55:57.778424   57703 api_server.go:52] waiting for apiserver process to appear ...
	I0918 20:55:57.778491   57703 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 20:55:57.795772   57703 api_server.go:72] duration metric: took 1.989492011s to wait for apiserver process to appear ...
	I0918 20:55:57.795804   57703 api_server.go:88] waiting for apiserver healthz status ...
	I0918 20:55:57.795825   57703 api_server.go:253] Checking apiserver healthz at https://192.168.39.184:8443/healthz ...
	I0918 20:55:57.801854   57703 api_server.go:279] https://192.168.39.184:8443/healthz returned 200:
	ok
	I0918 20:55:57.803211   57703 api_server.go:141] control plane version: v1.31.1
	I0918 20:55:57.803239   57703 api_server.go:131] duration metric: took 7.427712ms to wait for apiserver health ...
	I0918 20:55:57.803247   57703 system_pods.go:43] waiting for kube-system pods to appear ...
	I0918 20:55:57.979711   57703 system_pods.go:59] 6 kube-system pods found
	I0918 20:55:57.979740   57703 system_pods.go:61] "coredns-7c65d6cfc9-8qv9r" [5fb13b99-1337-4d7a-bb0d-c1da599a389c] Running
	I0918 20:55:57.979746   57703 system_pods.go:61] "etcd-pause-543700" [febf26c7-e1b3-4ed2-97ad-0d136f67624f] Running
	I0918 20:55:57.979750   57703 system_pods.go:61] "kube-apiserver-pause-543700" [1e445516-ef06-4b13-84d9-f4ddc56c3bca] Running
	I0918 20:55:57.979754   57703 system_pods.go:61] "kube-controller-manager-pause-543700" [3397cfb0-3200-4ffb-82e6-58c57886cc51] Running
	I0918 20:55:57.979757   57703 system_pods.go:61] "kube-proxy-h544n" [68d913c9-0656-4875-9450-f80bf77bbfd7] Running
	I0918 20:55:57.979760   57703 system_pods.go:61] "kube-scheduler-pause-543700" [ec47d44f-2f93-4269-835c-c51e7a33de01] Running
	I0918 20:55:57.979766   57703 system_pods.go:74] duration metric: took 176.513779ms to wait for pod list to return data ...
	I0918 20:55:57.979773   57703 default_sa.go:34] waiting for default service account to be created ...
	I0918 20:55:58.178995   57703 default_sa.go:45] found service account: "default"
	I0918 20:55:58.179035   57703 default_sa.go:55] duration metric: took 199.255488ms for default service account to be created ...
	I0918 20:55:58.179050   57703 system_pods.go:116] waiting for k8s-apps to be running ...
	I0918 20:55:58.381450   57703 system_pods.go:86] 6 kube-system pods found
	I0918 20:55:58.381495   57703 system_pods.go:89] "coredns-7c65d6cfc9-8qv9r" [5fb13b99-1337-4d7a-bb0d-c1da599a389c] Running
	I0918 20:55:58.381503   57703 system_pods.go:89] "etcd-pause-543700" [febf26c7-e1b3-4ed2-97ad-0d136f67624f] Running
	I0918 20:55:58.381510   57703 system_pods.go:89] "kube-apiserver-pause-543700" [1e445516-ef06-4b13-84d9-f4ddc56c3bca] Running
	I0918 20:55:58.381515   57703 system_pods.go:89] "kube-controller-manager-pause-543700" [3397cfb0-3200-4ffb-82e6-58c57886cc51] Running
	I0918 20:55:58.381522   57703 system_pods.go:89] "kube-proxy-h544n" [68d913c9-0656-4875-9450-f80bf77bbfd7] Running
	I0918 20:55:58.381527   57703 system_pods.go:89] "kube-scheduler-pause-543700" [ec47d44f-2f93-4269-835c-c51e7a33de01] Running
	I0918 20:55:58.381537   57703 system_pods.go:126] duration metric: took 202.478669ms to wait for k8s-apps to be running ...
	I0918 20:55:58.381546   57703 system_svc.go:44] waiting for kubelet service to be running ....
	I0918 20:55:58.381604   57703 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0918 20:55:58.399071   57703 system_svc.go:56] duration metric: took 17.51297ms WaitForService to wait for kubelet
	I0918 20:55:58.399117   57703 kubeadm.go:582] duration metric: took 2.592836484s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0918 20:55:58.399142   57703 node_conditions.go:102] verifying NodePressure condition ...
	I0918 20:55:58.578009   57703 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0918 20:55:58.578038   57703 node_conditions.go:123] node cpu capacity is 2
	I0918 20:55:58.578053   57703 node_conditions.go:105] duration metric: took 178.904465ms to run NodePressure ...
	I0918 20:55:58.578066   57703 start.go:241] waiting for startup goroutines ...
	I0918 20:55:58.578074   57703 start.go:246] waiting for cluster config update ...
	I0918 20:55:58.578083   57703 start.go:255] writing updated cluster config ...
	I0918 20:55:58.578388   57703 ssh_runner.go:195] Run: rm -f paused
	I0918 20:55:58.639017   57703 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0918 20:55:58.641010   57703 out.go:177] * Done! kubectl is now configured to use "pause-543700" cluster and "default" namespace by default
	I0918 20:55:54.592956   58323 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 20:55:54.593467   58323 main.go:141] libmachine: (no-preload-331658) DBG | unable to find current IP address of domain no-preload-331658 in network mk-no-preload-331658
	I0918 20:55:54.593488   58323 main.go:141] libmachine: (no-preload-331658) DBG | I0918 20:55:54.593436   58561 retry.go:31] will retry after 4.147707562s: waiting for machine to come up
	I0918 20:55:58.390547   58925 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0918 20:55:58.390603   58925 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0918 20:55:58.390616   58925 cache.go:56] Caching tarball of preloaded images
	I0918 20:55:58.390732   58925 preload.go:172] Found /home/jenkins/minikube-integration/19667-7671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0918 20:55:58.390747   58925 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0918 20:55:58.390890   58925 profile.go:143] Saving config to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/kubernetes-upgrade-878094/config.json ...
	I0918 20:55:58.391155   58925 start.go:360] acquireMachinesLock for kubernetes-upgrade-878094: {Name:mk1b0d68a0b7d9d4bc8204d5d81cb9eb77222526 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 20:56:00.212721   58925 start.go:364] duration metric: took 1.821512564s to acquireMachinesLock for "kubernetes-upgrade-878094"
	I0918 20:56:00.212768   58925 start.go:96] Skipping create...Using existing machine configuration
	I0918 20:56:00.212775   58925 fix.go:54] fixHost starting: 
	I0918 20:56:00.213238   58925 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 20:56:00.213293   58925 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 20:56:00.232299   58925 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43623
	I0918 20:56:00.232735   58925 main.go:141] libmachine: () Calling .GetVersion
	I0918 20:56:00.233344   58925 main.go:141] libmachine: Using API Version  1
	I0918 20:56:00.233372   58925 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 20:56:00.233733   58925 main.go:141] libmachine: () Calling .GetMachineName
	I0918 20:56:00.233876   58925 main.go:141] libmachine: (kubernetes-upgrade-878094) Calling .DriverName
	I0918 20:56:00.233980   58925 main.go:141] libmachine: (kubernetes-upgrade-878094) Calling .GetState
	I0918 20:56:00.235743   58925 fix.go:112] recreateIfNeeded on kubernetes-upgrade-878094: state=Running err=<nil>
	W0918 20:56:00.235768   58925 fix.go:138] unexpected machine state, will restart: <nil>
	I0918 20:56:00.237374   58925 out.go:177] * Updating the running kvm2 "kubernetes-upgrade-878094" VM ...
	
	
	==> CRI-O <==
	Sep 18 20:56:01 pause-543700 crio[2298]: time="2024-09-18 20:56:01.339247360Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726692961339217284,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ba1a696a-e9de-4e38-82ae-eaf361555276 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 20:56:01 pause-543700 crio[2298]: time="2024-09-18 20:56:01.339857904Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6b8dac8f-c21c-404e-a5a2-515bea739436 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 20:56:01 pause-543700 crio[2298]: time="2024-09-18 20:56:01.339932052Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6b8dac8f-c21c-404e-a5a2-515bea739436 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 20:56:01 pause-543700 crio[2298]: time="2024-09-18 20:56:01.340572388Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2d5a6a593e1fc59ac994439221a6778bc74417e3417c59d843c4763ac6e0a960,PodSandboxId:0430d57f1fea2f9d4054004182fa26b2ac6b9892be8113fa9e03dde7972f81b7,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726692939982123003,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8qv9r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fb13b99-1337-4d7a-bb0d-c1da599a389c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:898cc4ece731f92e0dd2201b2e41f746d0337ec9f659051b5eb8528f194bc91e,PodSandboxId:729e5b5637c2ededea2bb02d9c37a7f4ea67904a93b80b57f6a3f8348f5bf4f6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726692939984189095,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h544n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 68d913c9-0656-4875-9450-f80bf77bbfd7,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc9feef647787abc2cb1b5ab3c0e0b76b70089edf815386900484342ded37a04,PodSandboxId:c55c5f664a6b6f62b13eac5e16fb4215d3babb63cc632fddeb56bdd0b48f42e6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726692936156663889,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-543700,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: ae9f7d68597963a7ac2bd6aef48092d4,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8907210c9344d37955a4bf0618118949966882169d72f1706cc5c425f339b153,PodSandboxId:9160b6511023f8ed69c10cb2ad633b1d0181bc41143e217f2e7846c2ce9a739b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726692936140717855,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-543700,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7
25cba61f20ce903bc2d727db0f6fdb6,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee3ad2aede523ba2e7a7101f3f85754cc71496eac6f9b1721d29755ce9aaea45,PodSandboxId:6408a7f020aa6b4dda2b84de91f8b112d11305f638ace0627ac1d1efd847006c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726692936121315154,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-543700,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 670c115f809258a1e39
e14f1c9f6e518,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cda771d471c29dc3cd640a3adf8c05602a24ffc5b7fe4fd453d69e03194c145,PodSandboxId:fb9c9c7eece6d2691561f6df6e245203150d9ba796c6f15e1083d01d543cbbd6,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726692936113659912,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-543700,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82feebc7c19ce7c0a88861b7c89cdb64,},Annotations:map[string]string{io
.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e35c2076fdbf6ba34585d4dc1c830959915dd9b715305df7b4f1c7cb6e2320b4,PodSandboxId:0430d57f1fea2f9d4054004182fa26b2ac6b9892be8113fa9e03dde7972f81b7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726692921491241258,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8qv9r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fb13b99-1337-4d7a-bb0d-c1da599a389c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a
204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b4875ac82adf72eda0bef747b563eb403ecc70cb56d0275307e4e04b468565c,PodSandboxId:729e5b5637c2ededea2bb02d9c37a7f4ea67904a93b80b57f6a3f8348f5bf4f6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726692920818874326,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-h544n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68d913c9-0656-4875-9450-f80bf77bbfd7,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdc9fb94be86655beb10880c7d5faaad8c9c29dd88fa8972743f97c12bc85a24,PodSandboxId:9160b6511023f8ed69c10cb2ad633b1d0181bc41143e217f2e7846c2ce9a739b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726692920788892141,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pau
se-543700,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 725cba61f20ce903bc2d727db0f6fdb6,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bffe6da49dd38ba92455bd869a9f7f9031ef436230cf2a9f7a0d79101a735c6,PodSandboxId:c55c5f664a6b6f62b13eac5e16fb4215d3babb63cc632fddeb56bdd0b48f42e6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726692920748214407,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-man
ager-pause-543700,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae9f7d68597963a7ac2bd6aef48092d4,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57c211488dfbc6dfdb474ae7fc32a15deeb7f4d5a1d19bd672b04875a60c5eb1,PodSandboxId:6408a7f020aa6b4dda2b84de91f8b112d11305f638ace0627ac1d1efd847006c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726692920661862670,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-543700,i
o.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 670c115f809258a1e39e14f1c9f6e518,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67554b57c9cc456c2dc6453a8b3cc56e3a300ae0d3ee59fc138b73a75e288bdc,PodSandboxId:fb9c9c7eece6d2691561f6df6e245203150d9ba796c6f15e1083d01d543cbbd6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726692920656567187,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-543700,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 82feebc7c19ce7c0a88861b7c89cdb64,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6b8dac8f-c21c-404e-a5a2-515bea739436 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 20:56:01 pause-543700 crio[2298]: time="2024-09-18 20:56:01.384690041Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f4eeca4f-4882-472e-8897-8500c9d1d558 name=/runtime.v1.RuntimeService/Version
	Sep 18 20:56:01 pause-543700 crio[2298]: time="2024-09-18 20:56:01.384796158Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f4eeca4f-4882-472e-8897-8500c9d1d558 name=/runtime.v1.RuntimeService/Version
	Sep 18 20:56:01 pause-543700 crio[2298]: time="2024-09-18 20:56:01.386068066Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5d323450-9cc2-4014-a779-edd6496adc5d name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 20:56:01 pause-543700 crio[2298]: time="2024-09-18 20:56:01.386446067Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726692961386417518,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5d323450-9cc2-4014-a779-edd6496adc5d name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 20:56:01 pause-543700 crio[2298]: time="2024-09-18 20:56:01.386911825Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0700d09f-6b33-43a4-9b57-f210affe9b85 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 20:56:01 pause-543700 crio[2298]: time="2024-09-18 20:56:01.386963961Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0700d09f-6b33-43a4-9b57-f210affe9b85 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 20:56:01 pause-543700 crio[2298]: time="2024-09-18 20:56:01.387255723Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2d5a6a593e1fc59ac994439221a6778bc74417e3417c59d843c4763ac6e0a960,PodSandboxId:0430d57f1fea2f9d4054004182fa26b2ac6b9892be8113fa9e03dde7972f81b7,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726692939982123003,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8qv9r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fb13b99-1337-4d7a-bb0d-c1da599a389c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:898cc4ece731f92e0dd2201b2e41f746d0337ec9f659051b5eb8528f194bc91e,PodSandboxId:729e5b5637c2ededea2bb02d9c37a7f4ea67904a93b80b57f6a3f8348f5bf4f6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726692939984189095,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h544n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 68d913c9-0656-4875-9450-f80bf77bbfd7,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc9feef647787abc2cb1b5ab3c0e0b76b70089edf815386900484342ded37a04,PodSandboxId:c55c5f664a6b6f62b13eac5e16fb4215d3babb63cc632fddeb56bdd0b48f42e6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726692936156663889,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-543700,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: ae9f7d68597963a7ac2bd6aef48092d4,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8907210c9344d37955a4bf0618118949966882169d72f1706cc5c425f339b153,PodSandboxId:9160b6511023f8ed69c10cb2ad633b1d0181bc41143e217f2e7846c2ce9a739b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726692936140717855,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-543700,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7
25cba61f20ce903bc2d727db0f6fdb6,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee3ad2aede523ba2e7a7101f3f85754cc71496eac6f9b1721d29755ce9aaea45,PodSandboxId:6408a7f020aa6b4dda2b84de91f8b112d11305f638ace0627ac1d1efd847006c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726692936121315154,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-543700,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 670c115f809258a1e39
e14f1c9f6e518,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cda771d471c29dc3cd640a3adf8c05602a24ffc5b7fe4fd453d69e03194c145,PodSandboxId:fb9c9c7eece6d2691561f6df6e245203150d9ba796c6f15e1083d01d543cbbd6,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726692936113659912,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-543700,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82feebc7c19ce7c0a88861b7c89cdb64,},Annotations:map[string]string{io
.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e35c2076fdbf6ba34585d4dc1c830959915dd9b715305df7b4f1c7cb6e2320b4,PodSandboxId:0430d57f1fea2f9d4054004182fa26b2ac6b9892be8113fa9e03dde7972f81b7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726692921491241258,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8qv9r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fb13b99-1337-4d7a-bb0d-c1da599a389c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a
204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b4875ac82adf72eda0bef747b563eb403ecc70cb56d0275307e4e04b468565c,PodSandboxId:729e5b5637c2ededea2bb02d9c37a7f4ea67904a93b80b57f6a3f8348f5bf4f6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726692920818874326,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-h544n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68d913c9-0656-4875-9450-f80bf77bbfd7,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdc9fb94be86655beb10880c7d5faaad8c9c29dd88fa8972743f97c12bc85a24,PodSandboxId:9160b6511023f8ed69c10cb2ad633b1d0181bc41143e217f2e7846c2ce9a739b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726692920788892141,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pau
se-543700,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 725cba61f20ce903bc2d727db0f6fdb6,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bffe6da49dd38ba92455bd869a9f7f9031ef436230cf2a9f7a0d79101a735c6,PodSandboxId:c55c5f664a6b6f62b13eac5e16fb4215d3babb63cc632fddeb56bdd0b48f42e6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726692920748214407,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-man
ager-pause-543700,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae9f7d68597963a7ac2bd6aef48092d4,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57c211488dfbc6dfdb474ae7fc32a15deeb7f4d5a1d19bd672b04875a60c5eb1,PodSandboxId:6408a7f020aa6b4dda2b84de91f8b112d11305f638ace0627ac1d1efd847006c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726692920661862670,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-543700,i
o.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 670c115f809258a1e39e14f1c9f6e518,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67554b57c9cc456c2dc6453a8b3cc56e3a300ae0d3ee59fc138b73a75e288bdc,PodSandboxId:fb9c9c7eece6d2691561f6df6e245203150d9ba796c6f15e1083d01d543cbbd6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726692920656567187,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-543700,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 82feebc7c19ce7c0a88861b7c89cdb64,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0700d09f-6b33-43a4-9b57-f210affe9b85 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 20:56:01 pause-543700 crio[2298]: time="2024-09-18 20:56:01.430042610Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1b6840c7-7fcc-4b22-96a2-b86c0b5a752c name=/runtime.v1.RuntimeService/Version
	Sep 18 20:56:01 pause-543700 crio[2298]: time="2024-09-18 20:56:01.430125923Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1b6840c7-7fcc-4b22-96a2-b86c0b5a752c name=/runtime.v1.RuntimeService/Version
	Sep 18 20:56:01 pause-543700 crio[2298]: time="2024-09-18 20:56:01.431537031Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=519cf734-44a2-454e-a336-e9da473f163f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 20:56:01 pause-543700 crio[2298]: time="2024-09-18 20:56:01.432271762Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726692961431889732,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=519cf734-44a2-454e-a336-e9da473f163f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 20:56:01 pause-543700 crio[2298]: time="2024-09-18 20:56:01.435669803Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ae4bc3c7-743e-45d2-836a-b9c08d9bb5d8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 20:56:01 pause-543700 crio[2298]: time="2024-09-18 20:56:01.435806702Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ae4bc3c7-743e-45d2-836a-b9c08d9bb5d8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 20:56:01 pause-543700 crio[2298]: time="2024-09-18 20:56:01.436132742Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2d5a6a593e1fc59ac994439221a6778bc74417e3417c59d843c4763ac6e0a960,PodSandboxId:0430d57f1fea2f9d4054004182fa26b2ac6b9892be8113fa9e03dde7972f81b7,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726692939982123003,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8qv9r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fb13b99-1337-4d7a-bb0d-c1da599a389c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:898cc4ece731f92e0dd2201b2e41f746d0337ec9f659051b5eb8528f194bc91e,PodSandboxId:729e5b5637c2ededea2bb02d9c37a7f4ea67904a93b80b57f6a3f8348f5bf4f6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726692939984189095,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h544n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 68d913c9-0656-4875-9450-f80bf77bbfd7,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc9feef647787abc2cb1b5ab3c0e0b76b70089edf815386900484342ded37a04,PodSandboxId:c55c5f664a6b6f62b13eac5e16fb4215d3babb63cc632fddeb56bdd0b48f42e6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726692936156663889,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-543700,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: ae9f7d68597963a7ac2bd6aef48092d4,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8907210c9344d37955a4bf0618118949966882169d72f1706cc5c425f339b153,PodSandboxId:9160b6511023f8ed69c10cb2ad633b1d0181bc41143e217f2e7846c2ce9a739b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726692936140717855,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-543700,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7
25cba61f20ce903bc2d727db0f6fdb6,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee3ad2aede523ba2e7a7101f3f85754cc71496eac6f9b1721d29755ce9aaea45,PodSandboxId:6408a7f020aa6b4dda2b84de91f8b112d11305f638ace0627ac1d1efd847006c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726692936121315154,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-543700,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 670c115f809258a1e39
e14f1c9f6e518,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cda771d471c29dc3cd640a3adf8c05602a24ffc5b7fe4fd453d69e03194c145,PodSandboxId:fb9c9c7eece6d2691561f6df6e245203150d9ba796c6f15e1083d01d543cbbd6,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726692936113659912,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-543700,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82feebc7c19ce7c0a88861b7c89cdb64,},Annotations:map[string]string{io
.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e35c2076fdbf6ba34585d4dc1c830959915dd9b715305df7b4f1c7cb6e2320b4,PodSandboxId:0430d57f1fea2f9d4054004182fa26b2ac6b9892be8113fa9e03dde7972f81b7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726692921491241258,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8qv9r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fb13b99-1337-4d7a-bb0d-c1da599a389c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a
204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b4875ac82adf72eda0bef747b563eb403ecc70cb56d0275307e4e04b468565c,PodSandboxId:729e5b5637c2ededea2bb02d9c37a7f4ea67904a93b80b57f6a3f8348f5bf4f6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726692920818874326,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-h544n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68d913c9-0656-4875-9450-f80bf77bbfd7,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdc9fb94be86655beb10880c7d5faaad8c9c29dd88fa8972743f97c12bc85a24,PodSandboxId:9160b6511023f8ed69c10cb2ad633b1d0181bc41143e217f2e7846c2ce9a739b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726692920788892141,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pau
se-543700,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 725cba61f20ce903bc2d727db0f6fdb6,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bffe6da49dd38ba92455bd869a9f7f9031ef436230cf2a9f7a0d79101a735c6,PodSandboxId:c55c5f664a6b6f62b13eac5e16fb4215d3babb63cc632fddeb56bdd0b48f42e6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726692920748214407,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-man
ager-pause-543700,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae9f7d68597963a7ac2bd6aef48092d4,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57c211488dfbc6dfdb474ae7fc32a15deeb7f4d5a1d19bd672b04875a60c5eb1,PodSandboxId:6408a7f020aa6b4dda2b84de91f8b112d11305f638ace0627ac1d1efd847006c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726692920661862670,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-543700,i
o.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 670c115f809258a1e39e14f1c9f6e518,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67554b57c9cc456c2dc6453a8b3cc56e3a300ae0d3ee59fc138b73a75e288bdc,PodSandboxId:fb9c9c7eece6d2691561f6df6e245203150d9ba796c6f15e1083d01d543cbbd6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726692920656567187,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-543700,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 82feebc7c19ce7c0a88861b7c89cdb64,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ae4bc3c7-743e-45d2-836a-b9c08d9bb5d8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 20:56:01 pause-543700 crio[2298]: time="2024-09-18 20:56:01.483923721Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=26f02ae1-070c-4748-940a-62cb8d392ecd name=/runtime.v1.RuntimeService/Version
	Sep 18 20:56:01 pause-543700 crio[2298]: time="2024-09-18 20:56:01.484092787Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=26f02ae1-070c-4748-940a-62cb8d392ecd name=/runtime.v1.RuntimeService/Version
	Sep 18 20:56:01 pause-543700 crio[2298]: time="2024-09-18 20:56:01.485348066Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=21ef150d-eb14-4a0f-b684-e4873df9417e name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 20:56:01 pause-543700 crio[2298]: time="2024-09-18 20:56:01.485741534Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726692961485720672,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=21ef150d-eb14-4a0f-b684-e4873df9417e name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 20:56:01 pause-543700 crio[2298]: time="2024-09-18 20:56:01.486375651Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fe7d0eea-4498-4a5d-b700-28a949668117 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 20:56:01 pause-543700 crio[2298]: time="2024-09-18 20:56:01.486434635Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fe7d0eea-4498-4a5d-b700-28a949668117 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 20:56:01 pause-543700 crio[2298]: time="2024-09-18 20:56:01.486695093Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2d5a6a593e1fc59ac994439221a6778bc74417e3417c59d843c4763ac6e0a960,PodSandboxId:0430d57f1fea2f9d4054004182fa26b2ac6b9892be8113fa9e03dde7972f81b7,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726692939982123003,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8qv9r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fb13b99-1337-4d7a-bb0d-c1da599a389c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:898cc4ece731f92e0dd2201b2e41f746d0337ec9f659051b5eb8528f194bc91e,PodSandboxId:729e5b5637c2ededea2bb02d9c37a7f4ea67904a93b80b57f6a3f8348f5bf4f6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726692939984189095,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h544n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 68d913c9-0656-4875-9450-f80bf77bbfd7,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc9feef647787abc2cb1b5ab3c0e0b76b70089edf815386900484342ded37a04,PodSandboxId:c55c5f664a6b6f62b13eac5e16fb4215d3babb63cc632fddeb56bdd0b48f42e6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726692936156663889,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-543700,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: ae9f7d68597963a7ac2bd6aef48092d4,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8907210c9344d37955a4bf0618118949966882169d72f1706cc5c425f339b153,PodSandboxId:9160b6511023f8ed69c10cb2ad633b1d0181bc41143e217f2e7846c2ce9a739b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726692936140717855,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-543700,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7
25cba61f20ce903bc2d727db0f6fdb6,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee3ad2aede523ba2e7a7101f3f85754cc71496eac6f9b1721d29755ce9aaea45,PodSandboxId:6408a7f020aa6b4dda2b84de91f8b112d11305f638ace0627ac1d1efd847006c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726692936121315154,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-543700,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 670c115f809258a1e39
e14f1c9f6e518,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cda771d471c29dc3cd640a3adf8c05602a24ffc5b7fe4fd453d69e03194c145,PodSandboxId:fb9c9c7eece6d2691561f6df6e245203150d9ba796c6f15e1083d01d543cbbd6,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726692936113659912,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-543700,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82feebc7c19ce7c0a88861b7c89cdb64,},Annotations:map[string]string{io
.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e35c2076fdbf6ba34585d4dc1c830959915dd9b715305df7b4f1c7cb6e2320b4,PodSandboxId:0430d57f1fea2f9d4054004182fa26b2ac6b9892be8113fa9e03dde7972f81b7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726692921491241258,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8qv9r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fb13b99-1337-4d7a-bb0d-c1da599a389c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a
204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b4875ac82adf72eda0bef747b563eb403ecc70cb56d0275307e4e04b468565c,PodSandboxId:729e5b5637c2ededea2bb02d9c37a7f4ea67904a93b80b57f6a3f8348f5bf4f6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726692920818874326,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-h544n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68d913c9-0656-4875-9450-f80bf77bbfd7,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdc9fb94be86655beb10880c7d5faaad8c9c29dd88fa8972743f97c12bc85a24,PodSandboxId:9160b6511023f8ed69c10cb2ad633b1d0181bc41143e217f2e7846c2ce9a739b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726692920788892141,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pau
se-543700,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 725cba61f20ce903bc2d727db0f6fdb6,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bffe6da49dd38ba92455bd869a9f7f9031ef436230cf2a9f7a0d79101a735c6,PodSandboxId:c55c5f664a6b6f62b13eac5e16fb4215d3babb63cc632fddeb56bdd0b48f42e6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726692920748214407,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-man
ager-pause-543700,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae9f7d68597963a7ac2bd6aef48092d4,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57c211488dfbc6dfdb474ae7fc32a15deeb7f4d5a1d19bd672b04875a60c5eb1,PodSandboxId:6408a7f020aa6b4dda2b84de91f8b112d11305f638ace0627ac1d1efd847006c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726692920661862670,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-543700,i
o.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 670c115f809258a1e39e14f1c9f6e518,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67554b57c9cc456c2dc6453a8b3cc56e3a300ae0d3ee59fc138b73a75e288bdc,PodSandboxId:fb9c9c7eece6d2691561f6df6e245203150d9ba796c6f15e1083d01d543cbbd6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726692920656567187,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-543700,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 82feebc7c19ce7c0a88861b7c89cdb64,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fe7d0eea-4498-4a5d-b700-28a949668117 name=/runtime.v1.RuntimeService/ListContainers
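The CRI-O entries above are the runtime answering the kubelet's periodic Version, ImageFsInfo, and ListContainers CRI calls; the repeated ListContainers responses carry the same container list and differ only in request id and timestamp. As a rough reproduction sketch (assuming crictl is available inside the VM and the CRI-O socket sits at its default path), the same three RPCs can be issued by hand:

    # illustrative only; crictl availability and the CRI-O socket path are assumptions
    minikube ssh -p pause-543700 -- sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
    minikube ssh -p pause-543700 -- sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock imagefsinfo
    minikube ssh -p pause-543700 -- sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a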
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	898cc4ece731f       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   21 seconds ago      Running             kube-proxy                2                   729e5b5637c2e       kube-proxy-h544n
	2d5a6a593e1fc       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   21 seconds ago      Running             coredns                   2                   0430d57f1fea2       coredns-7c65d6cfc9-8qv9r
	bc9feef647787       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   25 seconds ago      Running             kube-controller-manager   2                   c55c5f664a6b6       kube-controller-manager-pause-543700
	8907210c9344d       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   25 seconds ago      Running             kube-scheduler            2                   9160b6511023f       kube-scheduler-pause-543700
	ee3ad2aede523       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   25 seconds ago      Running             kube-apiserver            2                   6408a7f020aa6       kube-apiserver-pause-543700
	7cda771d471c2       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   25 seconds ago      Running             etcd                      2                   fb9c9c7eece6d       etcd-pause-543700
	e35c2076fdbf6       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   40 seconds ago      Exited              coredns                   1                   0430d57f1fea2       coredns-7c65d6cfc9-8qv9r
	5b4875ac82adf       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   40 seconds ago      Exited              kube-proxy                1                   729e5b5637c2e       kube-proxy-h544n
	fdc9fb94be866       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   40 seconds ago      Exited              kube-scheduler            1                   9160b6511023f       kube-scheduler-pause-543700
	1bffe6da49dd3       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   40 seconds ago      Exited              kube-controller-manager   1                   c55c5f664a6b6       kube-controller-manager-pause-543700
	57c211488dfbc       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   40 seconds ago      Exited              kube-apiserver            1                   6408a7f020aa6       kube-apiserver-pause-543700
	67554b57c9cc4       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   40 seconds ago      Exited              etcd                      1                   fb9c9c7eece6d       etcd-pause-543700
	
	
	==> coredns [2d5a6a593e1fc59ac994439221a6778bc74417e3417c59d843c4763ac6e0a960] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:45796 - 27964 "HINFO IN 110522732956144026.4696447689420735325. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.011056281s
	
	
	==> coredns [e35c2076fdbf6ba34585d4dc1c830959915dd9b715305df7b4f1c7cb6e2320b4] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:40595 - 64061 "HINFO IN 5458279004038995788.7331210818426585796. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.01697225s
	
	
	==> describe nodes <==
	Name:               pause-543700
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-543700
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=85073601a832bd4bbda5d11fa91feafff6ec6b91
	                    minikube.k8s.io/name=pause-543700
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_18T20_53_46_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 18 Sep 2024 20:53:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-543700
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 18 Sep 2024 20:55:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 18 Sep 2024 20:55:39 +0000   Wed, 18 Sep 2024 20:53:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 18 Sep 2024 20:55:39 +0000   Wed, 18 Sep 2024 20:53:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 18 Sep 2024 20:55:39 +0000   Wed, 18 Sep 2024 20:53:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 18 Sep 2024 20:55:39 +0000   Wed, 18 Sep 2024 20:53:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.184
	  Hostname:    pause-543700
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 1fd8658137a443899f27658e9977c486
	  System UUID:                1fd86581-37a4-4389-9f27-658e9977c486
	  Boot ID:                    fb68cf0a-e7a2-43f2-aadd-50d9055f0a19
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-8qv9r                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     2m11s
	  kube-system                 etcd-pause-543700                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         2m16s
	  kube-system                 kube-apiserver-pause-543700             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m16s
	  kube-system                 kube-controller-manager-pause-543700    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m16s
	  kube-system                 kube-proxy-h544n                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m11s
	  kube-system                 kube-scheduler-pause-543700             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m10s                  kube-proxy       
	  Normal  Starting                 21s                    kube-proxy       
	  Normal  Starting                 37s                    kube-proxy       
	  Normal  NodeHasNoDiskPressure    2m22s (x8 over 2m22s)  kubelet          Node pause-543700 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m22s (x8 over 2m22s)  kubelet          Node pause-543700 status is now: NodeHasSufficientMemory
	  Normal  Starting                 2m22s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     2m22s (x7 over 2m22s)  kubelet          Node pause-543700 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m22s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     2m16s                  kubelet          Node pause-543700 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m16s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m16s                  kubelet          Node pause-543700 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m16s                  kubelet          Node pause-543700 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 2m16s                  kubelet          Starting kubelet.
	  Normal  NodeReady                2m15s                  kubelet          Node pause-543700 status is now: NodeReady
	  Normal  RegisteredNode           2m12s                  node-controller  Node pause-543700 event: Registered Node pause-543700 in Controller
	  Normal  RegisteredNode           34s                    node-controller  Node pause-543700 event: Registered Node pause-543700 in Controller
	  Normal  NodeHasNoDiskPressure    26s (x8 over 26s)      kubelet          Node pause-543700 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  26s (x8 over 26s)      kubelet          Node pause-543700 status is now: NodeHasSufficientMemory
	  Normal  Starting                 26s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     26s (x7 over 26s)      kubelet          Node pause-543700 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  26s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           19s                    node-controller  Node pause-543700 event: Registered Node pause-543700 in Controller
	
	
	==> dmesg <==
	[  +0.063538] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.067388] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.184044] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.163492] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.286934] systemd-fstab-generator[651]: Ignoring "noauto" option for root device
	[  +4.257421] systemd-fstab-generator[742]: Ignoring "noauto" option for root device
	[  +0.065641] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.081569] systemd-fstab-generator[878]: Ignoring "noauto" option for root device
	[  +1.015647] kauditd_printk_skb: 57 callbacks suppressed
	[  +5.551264] systemd-fstab-generator[1212]: Ignoring "noauto" option for root device
	[  +0.086860] kauditd_printk_skb: 30 callbacks suppressed
	[  +4.805801] systemd-fstab-generator[1327]: Ignoring "noauto" option for root device
	[  +0.545384] kauditd_printk_skb: 49 callbacks suppressed
	[Sep18 20:54] kauditd_printk_skb: 66 callbacks suppressed
	[Sep18 20:55] systemd-fstab-generator[2225]: Ignoring "noauto" option for root device
	[  +0.145124] systemd-fstab-generator[2237]: Ignoring "noauto" option for root device
	[  +0.193353] systemd-fstab-generator[2251]: Ignoring "noauto" option for root device
	[  +0.144261] systemd-fstab-generator[2263]: Ignoring "noauto" option for root device
	[  +0.317012] systemd-fstab-generator[2291]: Ignoring "noauto" option for root device
	[  +3.536186] systemd-fstab-generator[2831]: Ignoring "noauto" option for root device
	[  +3.341904] kauditd_printk_skb: 195 callbacks suppressed
	[ +10.853405] systemd-fstab-generator[3272]: Ignoring "noauto" option for root device
	[  +4.658248] kauditd_printk_skb: 44 callbacks suppressed
	[ +15.788544] systemd-fstab-generator[3725]: Ignoring "noauto" option for root device
	[  +0.099194] kauditd_printk_skb: 6 callbacks suppressed
	
	
	==> etcd [67554b57c9cc456c2dc6453a8b3cc56e3a300ae0d3ee59fc138b73a75e288bdc] <==
	{"level":"info","ts":"2024-09-18T20:55:22.745261Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"989272a6374482ea became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-18T20:55:22.745314Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"989272a6374482ea received MsgPreVoteResp from 989272a6374482ea at term 2"}
	{"level":"info","ts":"2024-09-18T20:55:22.745356Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"989272a6374482ea became candidate at term 3"}
	{"level":"info","ts":"2024-09-18T20:55:22.745381Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"989272a6374482ea received MsgVoteResp from 989272a6374482ea at term 3"}
	{"level":"info","ts":"2024-09-18T20:55:22.745409Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"989272a6374482ea became leader at term 3"}
	{"level":"info","ts":"2024-09-18T20:55:22.745435Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 989272a6374482ea elected leader 989272a6374482ea at term 3"}
	{"level":"info","ts":"2024-09-18T20:55:22.750463Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-18T20:55:22.751574Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-18T20:55:22.754937Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.184:2379"}
	{"level":"info","ts":"2024-09-18T20:55:22.756723Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-18T20:55:22.759789Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-18T20:55:22.762654Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-18T20:55:22.750424Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"989272a6374482ea","local-member-attributes":"{Name:pause-543700 ClientURLs:[https://192.168.39.184:2379]}","request-path":"/0/members/989272a6374482ea/attributes","cluster-id":"e6ef3f762f24aa4a","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-18T20:55:22.767040Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-18T20:55:22.767082Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-18T20:55:33.678768Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-18T20:55:33.678837Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"pause-543700","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.184:2380"],"advertise-client-urls":["https://192.168.39.184:2379"]}
	{"level":"warn","ts":"2024-09-18T20:55:33.678963Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-18T20:55:33.678989Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-18T20:55:33.680569Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.184:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-18T20:55:33.680617Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.184:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-18T20:55:33.682108Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"989272a6374482ea","current-leader-member-id":"989272a6374482ea"}
	{"level":"info","ts":"2024-09-18T20:55:33.686041Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.184:2380"}
	{"level":"info","ts":"2024-09-18T20:55:33.686152Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.184:2380"}
	{"level":"info","ts":"2024-09-18T20:55:33.686176Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"pause-543700","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.184:2380"],"advertise-client-urls":["https://192.168.39.184:2379"]}
	
	
	==> etcd [7cda771d471c29dc3cd640a3adf8c05602a24ffc5b7fe4fd453d69e03194c145] <==
	{"level":"info","ts":"2024-09-18T20:55:36.535352Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"989272a6374482ea switched to configuration voters=(10993975698582176490)"}
	{"level":"info","ts":"2024-09-18T20:55:36.535501Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"e6ef3f762f24aa4a","local-member-id":"989272a6374482ea","added-peer-id":"989272a6374482ea","added-peer-peer-urls":["https://192.168.39.184:2380"]}
	{"level":"info","ts":"2024-09-18T20:55:36.535634Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"e6ef3f762f24aa4a","local-member-id":"989272a6374482ea","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-18T20:55:36.535685Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-18T20:55:36.544742Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-18T20:55:36.545394Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.184:2380"}
	{"level":"info","ts":"2024-09-18T20:55:36.545521Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.184:2380"}
	{"level":"info","ts":"2024-09-18T20:55:36.559632Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"989272a6374482ea","initial-advertise-peer-urls":["https://192.168.39.184:2380"],"listen-peer-urls":["https://192.168.39.184:2380"],"advertise-client-urls":["https://192.168.39.184:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.184:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-18T20:55:36.559712Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-18T20:55:37.687693Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"989272a6374482ea is starting a new election at term 3"}
	{"level":"info","ts":"2024-09-18T20:55:37.687754Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"989272a6374482ea became pre-candidate at term 3"}
	{"level":"info","ts":"2024-09-18T20:55:37.687771Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"989272a6374482ea received MsgPreVoteResp from 989272a6374482ea at term 3"}
	{"level":"info","ts":"2024-09-18T20:55:37.687783Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"989272a6374482ea became candidate at term 4"}
	{"level":"info","ts":"2024-09-18T20:55:37.687789Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"989272a6374482ea received MsgVoteResp from 989272a6374482ea at term 4"}
	{"level":"info","ts":"2024-09-18T20:55:37.687798Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"989272a6374482ea became leader at term 4"}
	{"level":"info","ts":"2024-09-18T20:55:37.687823Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 989272a6374482ea elected leader 989272a6374482ea at term 4"}
	{"level":"info","ts":"2024-09-18T20:55:37.695060Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"989272a6374482ea","local-member-attributes":"{Name:pause-543700 ClientURLs:[https://192.168.39.184:2379]}","request-path":"/0/members/989272a6374482ea/attributes","cluster-id":"e6ef3f762f24aa4a","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-18T20:55:37.695158Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-18T20:55:37.695838Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-18T20:55:37.697132Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-18T20:55:37.698422Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-18T20:55:37.699611Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-18T20:55:37.699940Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-18T20:55:37.699967Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-18T20:55:37.701333Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.184:2379"}
	
	
	==> kernel <==
	 20:56:01 up 2 min,  0 users,  load average: 0.82, 0.29, 0.11
	Linux pause-543700 5.10.207 #1 SMP Mon Sep 16 15:00:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [57c211488dfbc6dfdb474ae7fc32a15deeb7f4d5a1d19bd672b04875a60c5eb1] <==
	I0918 20:55:32.561289       1 storage_flowcontrol.go:186] APF bootstrap ensurer is exiting
	I0918 20:55:32.561299       1 gc_controller.go:91] Shutting down apiserver lease garbage collector
	I0918 20:55:32.561306       1 system_namespaces_controller.go:76] Shutting down system namespaces controller
	I0918 20:55:32.561321       1 crdregistration_controller.go:145] Shutting down crd-autoregister controller
	I0918 20:55:32.561353       1 customresource_discovery_controller.go:328] Shutting down DiscoveryController
	I0918 20:55:32.561648       1 dynamic_cafile_content.go:174] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0918 20:55:32.561757       1 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0918 20:55:32.561933       1 controller.go:157] Shutting down quota evaluator
	I0918 20:55:32.561957       1 controller.go:176] quota evaluator worker shutdown
	I0918 20:55:32.561160       1 establishing_controller.go:92] Shutting down EstablishingController
	I0918 20:55:32.561165       1 naming_controller.go:305] Shutting down NamingConditionController
	I0918 20:55:32.562416       1 dynamic_serving_content.go:149] "Shutting down controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0918 20:55:32.562507       1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting"
	I0918 20:55:32.562582       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0918 20:55:32.562833       1 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0918 20:55:32.563084       1 dynamic_serving_content.go:149] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0918 20:55:32.561264       1 controller.go:170] Shutting down OpenAPI controller
	I0918 20:55:32.562487       1 controller.go:86] Shutting down OpenAPI V3 AggregationController
	I0918 20:55:32.562493       1 controller.go:84] Shutting down OpenAPI AggregationController
	I0918 20:55:32.562568       1 secure_serving.go:258] Stopped listening on [::]:8443
	I0918 20:55:32.562821       1 controller.go:176] quota evaluator worker shutdown
	I0918 20:55:32.562830       1 controller.go:176] quota evaluator worker shutdown
	I0918 20:55:32.562836       1 controller.go:176] quota evaluator worker shutdown
	I0918 20:55:32.562839       1 controller.go:176] quota evaluator worker shutdown
	I0918 20:55:32.561132       1 autoregister_controller.go:168] Shutting down autoregister controller
	
	
	==> kube-apiserver [ee3ad2aede523ba2e7a7101f3f85754cc71496eac6f9b1721d29755ce9aaea45] <==
	I0918 20:55:39.203460       1 policy_source.go:224] refreshing policies
	I0918 20:55:39.205508       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0918 20:55:39.221461       1 shared_informer.go:320] Caches are synced for configmaps
	I0918 20:55:39.221554       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0918 20:55:39.221954       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0918 20:55:39.222458       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0918 20:55:39.223782       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0918 20:55:39.241807       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0918 20:55:39.222344       1 aggregator.go:171] initial CRD sync complete...
	I0918 20:55:39.244454       1 autoregister_controller.go:144] Starting autoregister controller
	I0918 20:55:39.244464       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0918 20:55:39.244472       1 cache.go:39] Caches are synced for autoregister controller
	I0918 20:55:39.245240       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0918 20:55:39.258513       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0918 20:55:39.282989       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0918 20:55:39.285740       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0918 20:55:40.038943       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0918 20:55:40.574761       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.184]
	I0918 20:55:40.576606       1 controller.go:615] quota admission added evaluator for: endpoints
	I0918 20:55:40.583469       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0918 20:55:40.954751       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0918 20:55:40.979420       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0918 20:55:41.047934       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0918 20:55:41.101501       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0918 20:55:41.115767       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-controller-manager [1bffe6da49dd38ba92455bd869a9f7f9031ef436230cf2a9f7a0d79101a735c6] <==
	I0918 20:55:27.565633       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0918 20:55:27.565788       1 shared_informer.go:320] Caches are synced for HPA
	I0918 20:55:27.565746       1 shared_informer.go:320] Caches are synced for job
	I0918 20:55:27.567118       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0918 20:55:27.567179       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0918 20:55:27.567192       1 shared_informer.go:320] Caches are synced for ephemeral
	I0918 20:55:27.567209       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0918 20:55:27.567219       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0918 20:55:27.568418       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0918 20:55:27.574348       1 shared_informer.go:320] Caches are synced for PVC protection
	I0918 20:55:27.576731       1 shared_informer.go:320] Caches are synced for daemon sets
	I0918 20:55:27.617045       1 shared_informer.go:320] Caches are synced for disruption
	I0918 20:55:27.624864       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0918 20:55:27.627335       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0918 20:55:27.635266       1 shared_informer.go:320] Caches are synced for resource quota
	I0918 20:55:27.665915       1 shared_informer.go:320] Caches are synced for deployment
	I0918 20:55:27.675846       1 shared_informer.go:320] Caches are synced for resource quota
	I0918 20:55:27.713316       1 shared_informer.go:320] Caches are synced for endpoint
	I0918 20:55:27.714638       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0918 20:55:27.733750       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="108.562325ms"
	I0918 20:55:27.733839       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="41.198µs"
	I0918 20:55:27.772228       1 shared_informer.go:320] Caches are synced for attach detach
	I0918 20:55:28.206366       1 shared_informer.go:320] Caches are synced for garbage collector
	I0918 20:55:28.206544       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0918 20:55:28.217358       1 shared_informer.go:320] Caches are synced for garbage collector
	
	
	==> kube-controller-manager [bc9feef647787abc2cb1b5ab3c0e0b76b70089edf815386900484342ded37a04] <==
	I0918 20:55:42.566085       1 shared_informer.go:320] Caches are synced for disruption
	I0918 20:55:42.592127       1 shared_informer.go:320] Caches are synced for ephemeral
	I0918 20:55:42.595686       1 shared_informer.go:320] Caches are synced for GC
	I0918 20:55:42.614553       1 shared_informer.go:320] Caches are synced for persistent volume
	I0918 20:55:42.635527       1 shared_informer.go:320] Caches are synced for deployment
	I0918 20:55:42.644356       1 shared_informer.go:320] Caches are synced for HPA
	I0918 20:55:42.644524       1 shared_informer.go:320] Caches are synced for job
	I0918 20:55:42.645618       1 shared_informer.go:320] Caches are synced for attach detach
	I0918 20:55:42.645912       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0918 20:55:42.645984       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0918 20:55:42.646605       1 shared_informer.go:320] Caches are synced for PVC protection
	I0918 20:55:42.649102       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0918 20:55:42.652416       1 shared_informer.go:320] Caches are synced for endpoint
	I0918 20:55:42.670766       1 shared_informer.go:320] Caches are synced for daemon sets
	I0918 20:55:42.697036       1 shared_informer.go:320] Caches are synced for taint
	I0918 20:55:42.697194       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0918 20:55:42.697283       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-543700"
	I0918 20:55:42.697333       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0918 20:55:42.707844       1 shared_informer.go:320] Caches are synced for resource quota
	I0918 20:55:42.722575       1 shared_informer.go:320] Caches are synced for resource quota
	I0918 20:55:42.761822       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="115.678232ms"
	I0918 20:55:42.762505       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="219.951µs"
	I0918 20:55:43.132391       1 shared_informer.go:320] Caches are synced for garbage collector
	I0918 20:55:43.198798       1 shared_informer.go:320] Caches are synced for garbage collector
	I0918 20:55:43.198896       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [5b4875ac82adf72eda0bef747b563eb403ecc70cb56d0275307e4e04b468565c] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0918 20:55:22.697305       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0918 20:55:24.301370       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.184"]
	E0918 20:55:24.301890       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0918 20:55:24.362842       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0918 20:55:24.362887       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0918 20:55:24.362915       1 server_linux.go:169] "Using iptables Proxier"
	I0918 20:55:24.365573       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0918 20:55:24.365874       1 server.go:483] "Version info" version="v1.31.1"
	I0918 20:55:24.365898       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0918 20:55:24.367519       1 config.go:199] "Starting service config controller"
	I0918 20:55:24.367572       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0918 20:55:24.367605       1 config.go:105] "Starting endpoint slice config controller"
	I0918 20:55:24.367625       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0918 20:55:24.368321       1 config.go:328] "Starting node config controller"
	I0918 20:55:24.368351       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0918 20:55:24.469124       1 shared_informer.go:320] Caches are synced for node config
	I0918 20:55:24.469130       1 shared_informer.go:320] Caches are synced for service config
	I0918 20:55:24.469146       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [898cc4ece731f92e0dd2201b2e41f746d0337ec9f659051b5eb8528f194bc91e] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0918 20:55:40.305279       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0918 20:55:40.326530       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.184"]
	E0918 20:55:40.326716       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0918 20:55:40.377904       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0918 20:55:40.378052       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0918 20:55:40.378118       1 server_linux.go:169] "Using iptables Proxier"
	I0918 20:55:40.381891       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0918 20:55:40.382282       1 server.go:483] "Version info" version="v1.31.1"
	I0918 20:55:40.382347       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0918 20:55:40.383640       1 config.go:199] "Starting service config controller"
	I0918 20:55:40.385386       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0918 20:55:40.385519       1 config.go:105] "Starting endpoint slice config controller"
	I0918 20:55:40.385582       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0918 20:55:40.386352       1 config.go:328] "Starting node config controller"
	I0918 20:55:40.388073       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0918 20:55:40.485794       1 shared_informer.go:320] Caches are synced for service config
	I0918 20:55:40.486116       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0918 20:55:40.488301       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [8907210c9344d37955a4bf0618118949966882169d72f1706cc5c425f339b153] <==
	I0918 20:55:37.266672       1 serving.go:386] Generated self-signed cert in-memory
	W0918 20:55:39.131845       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0918 20:55:39.133156       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0918 20:55:39.133966       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0918 20:55:39.134973       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0918 20:55:39.204704       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0918 20:55:39.208409       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0918 20:55:39.219171       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0918 20:55:39.220284       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0918 20:55:39.220329       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0918 20:55:39.227713       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0918 20:55:39.328881       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [fdc9fb94be86655beb10880c7d5faaad8c9c29dd88fa8972743f97c12bc85a24] <==
	I0918 20:55:22.871517       1 serving.go:386] Generated self-signed cert in-memory
	W0918 20:55:24.168732       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0918 20:55:24.170157       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0918 20:55:24.170193       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0918 20:55:24.170249       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0918 20:55:24.294165       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0918 20:55:24.294207       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0918 20:55:24.304463       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0918 20:55:24.304568       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0918 20:55:24.305845       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0918 20:55:24.311282       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0918 20:55:24.405535       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0918 20:55:32.349080       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0918 20:55:32.349350       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	E0918 20:55:32.349535       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 18 20:55:36 pause-543700 kubelet[3279]: E0918 20:55:36.034973    3279 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.184:8443: connect: connection refused" node="pause-543700"
	Sep 18 20:55:36 pause-543700 kubelet[3279]: I0918 20:55:36.094342    3279 scope.go:117] "RemoveContainer" containerID="67554b57c9cc456c2dc6453a8b3cc56e3a300ae0d3ee59fc138b73a75e288bdc"
	Sep 18 20:55:36 pause-543700 kubelet[3279]: I0918 20:55:36.094730    3279 scope.go:117] "RemoveContainer" containerID="57c211488dfbc6dfdb474ae7fc32a15deeb7f4d5a1d19bd672b04875a60c5eb1"
	Sep 18 20:55:36 pause-543700 kubelet[3279]: I0918 20:55:36.095814    3279 scope.go:117] "RemoveContainer" containerID="1bffe6da49dd38ba92455bd869a9f7f9031ef436230cf2a9f7a0d79101a735c6"
	Sep 18 20:55:36 pause-543700 kubelet[3279]: I0918 20:55:36.096670    3279 scope.go:117] "RemoveContainer" containerID="fdc9fb94be86655beb10880c7d5faaad8c9c29dd88fa8972743f97c12bc85a24"
	Sep 18 20:55:36 pause-543700 kubelet[3279]: E0918 20:55:36.264937    3279 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-543700?timeout=10s\": dial tcp 192.168.39.184:8443: connect: connection refused" interval="800ms"
	Sep 18 20:55:36 pause-543700 kubelet[3279]: I0918 20:55:36.436408    3279 kubelet_node_status.go:72] "Attempting to register node" node="pause-543700"
	Sep 18 20:55:36 pause-543700 kubelet[3279]: E0918 20:55:36.437214    3279 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.184:8443: connect: connection refused" node="pause-543700"
	Sep 18 20:55:37 pause-543700 kubelet[3279]: I0918 20:55:37.239537    3279 kubelet_node_status.go:72] "Attempting to register node" node="pause-543700"
	Sep 18 20:55:39 pause-543700 kubelet[3279]: I0918 20:55:39.278582    3279 kubelet_node_status.go:111] "Node was previously registered" node="pause-543700"
	Sep 18 20:55:39 pause-543700 kubelet[3279]: I0918 20:55:39.279297    3279 kubelet_node_status.go:75] "Successfully registered node" node="pause-543700"
	Sep 18 20:55:39 pause-543700 kubelet[3279]: I0918 20:55:39.279495    3279 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 18 20:55:39 pause-543700 kubelet[3279]: I0918 20:55:39.281584    3279 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 18 20:55:39 pause-543700 kubelet[3279]: I0918 20:55:39.639837    3279 apiserver.go:52] "Watching apiserver"
	Sep 18 20:55:39 pause-543700 kubelet[3279]: I0918 20:55:39.657483    3279 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Sep 18 20:55:39 pause-543700 kubelet[3279]: I0918 20:55:39.756345    3279 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/68d913c9-0656-4875-9450-f80bf77bbfd7-xtables-lock\") pod \"kube-proxy-h544n\" (UID: \"68d913c9-0656-4875-9450-f80bf77bbfd7\") " pod="kube-system/kube-proxy-h544n"
	Sep 18 20:55:39 pause-543700 kubelet[3279]: I0918 20:55:39.756387    3279 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/68d913c9-0656-4875-9450-f80bf77bbfd7-lib-modules\") pod \"kube-proxy-h544n\" (UID: \"68d913c9-0656-4875-9450-f80bf77bbfd7\") " pod="kube-system/kube-proxy-h544n"
	Sep 18 20:55:39 pause-543700 kubelet[3279]: E0918 20:55:39.831874    3279 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-pause-543700\" already exists" pod="kube-system/kube-apiserver-pause-543700"
	Sep 18 20:55:39 pause-543700 kubelet[3279]: E0918 20:55:39.839649    3279 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"etcd-pause-543700\" already exists" pod="kube-system/etcd-pause-543700"
	Sep 18 20:55:39 pause-543700 kubelet[3279]: I0918 20:55:39.947123    3279 scope.go:117] "RemoveContainer" containerID="e35c2076fdbf6ba34585d4dc1c830959915dd9b715305df7b4f1c7cb6e2320b4"
	Sep 18 20:55:39 pause-543700 kubelet[3279]: I0918 20:55:39.947442    3279 scope.go:117] "RemoveContainer" containerID="5b4875ac82adf72eda0bef747b563eb403ecc70cb56d0275307e4e04b468565c"
	Sep 18 20:55:45 pause-543700 kubelet[3279]: E0918 20:55:45.734917    3279 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726692945734431853,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 20:55:45 pause-543700 kubelet[3279]: E0918 20:55:45.734937    3279 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726692945734431853,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 20:55:55 pause-543700 kubelet[3279]: E0918 20:55:55.736703    3279 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726692955736381361,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 20:55:55 pause-543700 kubelet[3279]: E0918 20:55:55.736730    3279 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726692955736381361,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-543700 -n pause-543700
helpers_test.go:261: (dbg) Run:  kubectl --context pause-543700 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (91.86s)
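
Aside (not part of the generated report): the kube-proxy logs in the post-mortem above show the nftables cleanup commands ("add table ip kube-proxy", "add table ip6 kube-proxy") failing with "Operation not supported" before the proxy falls back to the iptables Proxier. The snippet below is a minimal, standalone sketch of how one might probe the same kernel capability on the test VM by asking nft to create and delete a throwaway table. It is not minikube or kube-proxy code, it uses plain nft arguments rather than kube-proxy's /dev/stdin invocation, and the table name and output wording are made up for illustration; it assumes root privileges and an nft binary on PATH.

// nftprobe: illustrative capability probe, assuming root and an nft binary.
package main

import (
	"fmt"
	"os/exec"
)

// nftWorks returns nil if the kernel accepts creating an nftables table for
// the given address family ("ip" or "ip6"), mirroring the command that
// failed in the kube-proxy logs above.
func nftWorks(family string) error {
	const table = "kube-proxy-probe" // hypothetical scratch table, deleted right away
	if out, err := exec.Command("nft", "add", "table", family, table).CombinedOutput(); err != nil {
		return fmt.Errorf("add table %s %s: %v: %s", family, table, err, out)
	}
	// Best-effort cleanup; a failed delete should not mask a successful add.
	_ = exec.Command("nft", "delete", "table", family, table).Run()
	return nil
}

func main() {
	for _, family := range []string{"ip", "ip6"} {
		if err := nftWorks(family); err != nil {
			fmt.Printf("nftables %-3s: unavailable (%v)\n", family, err)
			continue
		}
		fmt.Printf("nftables %-3s: available\n", family)
	}
}

On a guest kernel like the Buildroot image used here, the expected outcome of such a probe would match the log: table creation is rejected, so an iptables-based dataplane is the viable choice.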

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (138.96s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-331658 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-331658 --alsologtostderr -v=3: exit status 82 (2m0.516279643s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-331658"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0918 20:57:00.154825   60240 out.go:345] Setting OutFile to fd 1 ...
	I0918 20:57:00.155062   60240 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 20:57:00.155071   60240 out.go:358] Setting ErrFile to fd 2...
	I0918 20:57:00.155075   60240 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 20:57:00.155729   60240 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19667-7671/.minikube/bin
	I0918 20:57:00.156120   60240 out.go:352] Setting JSON to false
	I0918 20:57:00.156212   60240 mustload.go:65] Loading cluster: no-preload-331658
	I0918 20:57:00.157031   60240 config.go:182] Loaded profile config "no-preload-331658": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0918 20:57:00.157112   60240 profile.go:143] Saving config to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/no-preload-331658/config.json ...
	I0918 20:57:00.157293   60240 mustload.go:65] Loading cluster: no-preload-331658
	I0918 20:57:00.157390   60240 config.go:182] Loaded profile config "no-preload-331658": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0918 20:57:00.157421   60240 stop.go:39] StopHost: no-preload-331658
	I0918 20:57:00.157792   60240 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 20:57:00.157831   60240 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 20:57:00.173679   60240 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34231
	I0918 20:57:00.174208   60240 main.go:141] libmachine: () Calling .GetVersion
	I0918 20:57:00.174809   60240 main.go:141] libmachine: Using API Version  1
	I0918 20:57:00.174831   60240 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 20:57:00.175194   60240 main.go:141] libmachine: () Calling .GetMachineName
	I0918 20:57:00.178215   60240 out.go:177] * Stopping node "no-preload-331658"  ...
	I0918 20:57:00.179473   60240 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0918 20:57:00.179507   60240 main.go:141] libmachine: (no-preload-331658) Calling .DriverName
	I0918 20:57:00.179737   60240 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0918 20:57:00.179762   60240 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHHostname
	I0918 20:57:00.182449   60240 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 20:57:00.182812   60240 main.go:141] libmachine: (no-preload-331658) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:47:d0", ip: ""} in network mk-no-preload-331658: {Iface:virbr2 ExpiryTime:2024-09-18 21:55:52 +0000 UTC Type:0 Mac:52:54:00:2a:47:d0 Iaid: IPaddr:192.168.61.31 Prefix:24 Hostname:no-preload-331658 Clientid:01:52:54:00:2a:47:d0}
	I0918 20:57:00.182838   60240 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined IP address 192.168.61.31 and MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 20:57:00.183014   60240 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHPort
	I0918 20:57:00.183215   60240 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHKeyPath
	I0918 20:57:00.183404   60240 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHUsername
	I0918 20:57:00.183577   60240 sshutil.go:53] new ssh client: &{IP:192.168.61.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/no-preload-331658/id_rsa Username:docker}
	I0918 20:57:00.279331   60240 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0918 20:57:00.351556   60240 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0918 20:57:00.411855   60240 main.go:141] libmachine: Stopping "no-preload-331658"...
	I0918 20:57:00.411884   60240 main.go:141] libmachine: (no-preload-331658) Calling .GetState
	I0918 20:57:00.413734   60240 main.go:141] libmachine: (no-preload-331658) Calling .Stop
	I0918 20:57:00.418169   60240 main.go:141] libmachine: (no-preload-331658) Waiting for machine to stop 0/120
	I0918 20:57:01.419794   60240 main.go:141] libmachine: (no-preload-331658) Waiting for machine to stop 1/120
	I0918 20:57:02.422202   60240 main.go:141] libmachine: (no-preload-331658) Waiting for machine to stop 2/120
	I0918 20:57:03.424389   60240 main.go:141] libmachine: (no-preload-331658) Waiting for machine to stop 3/120
	I0918 20:57:04.425797   60240 main.go:141] libmachine: (no-preload-331658) Waiting for machine to stop 4/120
	I0918 20:57:05.428563   60240 main.go:141] libmachine: (no-preload-331658) Waiting for machine to stop 5/120
	I0918 20:57:06.429882   60240 main.go:141] libmachine: (no-preload-331658) Waiting for machine to stop 6/120
	I0918 20:57:07.432110   60240 main.go:141] libmachine: (no-preload-331658) Waiting for machine to stop 7/120
	I0918 20:57:08.433655   60240 main.go:141] libmachine: (no-preload-331658) Waiting for machine to stop 8/120
	I0918 20:57:09.435037   60240 main.go:141] libmachine: (no-preload-331658) Waiting for machine to stop 9/120
	I0918 20:57:10.437487   60240 main.go:141] libmachine: (no-preload-331658) Waiting for machine to stop 10/120
	I0918 20:57:11.438874   60240 main.go:141] libmachine: (no-preload-331658) Waiting for machine to stop 11/120
	I0918 20:57:12.440839   60240 main.go:141] libmachine: (no-preload-331658) Waiting for machine to stop 12/120
	I0918 20:57:13.442466   60240 main.go:141] libmachine: (no-preload-331658) Waiting for machine to stop 13/120
	I0918 20:57:14.443948   60240 main.go:141] libmachine: (no-preload-331658) Waiting for machine to stop 14/120
	I0918 20:57:15.445903   60240 main.go:141] libmachine: (no-preload-331658) Waiting for machine to stop 15/120
	I0918 20:57:16.447385   60240 main.go:141] libmachine: (no-preload-331658) Waiting for machine to stop 16/120
	I0918 20:57:17.448822   60240 main.go:141] libmachine: (no-preload-331658) Waiting for machine to stop 17/120
	I0918 20:57:18.450940   60240 main.go:141] libmachine: (no-preload-331658) Waiting for machine to stop 18/120
	I0918 20:57:19.453425   60240 main.go:141] libmachine: (no-preload-331658) Waiting for machine to stop 19/120
	I0918 20:57:20.455982   60240 main.go:141] libmachine: (no-preload-331658) Waiting for machine to stop 20/120
	I0918 20:57:21.457398   60240 main.go:141] libmachine: (no-preload-331658) Waiting for machine to stop 21/120
	I0918 20:57:22.458717   60240 main.go:141] libmachine: (no-preload-331658) Waiting for machine to stop 22/120
	I0918 20:57:23.460034   60240 main.go:141] libmachine: (no-preload-331658) Waiting for machine to stop 23/120
	I0918 20:57:24.461547   60240 main.go:141] libmachine: (no-preload-331658) Waiting for machine to stop 24/120
	I0918 20:57:25.463354   60240 main.go:141] libmachine: (no-preload-331658) Waiting for machine to stop 25/120
	I0918 20:57:26.464674   60240 main.go:141] libmachine: (no-preload-331658) Waiting for machine to stop 26/120
	I0918 20:57:27.466300   60240 main.go:141] libmachine: (no-preload-331658) Waiting for machine to stop 27/120
	I0918 20:57:28.467749   60240 main.go:141] libmachine: (no-preload-331658) Waiting for machine to stop 28/120
	I0918 20:57:29.469113   60240 main.go:141] libmachine: (no-preload-331658) Waiting for machine to stop 29/120
	I0918 20:57:30.471468   60240 main.go:141] libmachine: (no-preload-331658) Waiting for machine to stop 30/120
	I0918 20:57:31.473613   60240 main.go:141] libmachine: (no-preload-331658) Waiting for machine to stop 31/120
	I0918 20:57:32.475079   60240 main.go:141] libmachine: (no-preload-331658) Waiting for machine to stop 32/120
	I0918 20:57:33.476599   60240 main.go:141] libmachine: (no-preload-331658) Waiting for machine to stop 33/120
	I0918 20:57:34.477960   60240 main.go:141] libmachine: (no-preload-331658) Waiting for machine to stop 34/120
	I0918 20:57:35.479875   60240 main.go:141] libmachine: (no-preload-331658) Waiting for machine to stop 35/120
	I0918 20:57:36.481545   60240 main.go:141] libmachine: (no-preload-331658) Waiting for machine to stop 36/120
	I0918 20:57:37.483136   60240 main.go:141] libmachine: (no-preload-331658) Waiting for machine to stop 37/120
	I0918 20:57:38.484425   60240 main.go:141] libmachine: (no-preload-331658) Waiting for machine to stop 38/120
	I0918 20:57:39.485915   60240 main.go:141] libmachine: (no-preload-331658) Waiting for machine to stop 39/120
	I0918 20:57:40.488170   60240 main.go:141] libmachine: (no-preload-331658) Waiting for machine to stop 40/120
	I0918 20:57:41.489542   60240 main.go:141] libmachine: (no-preload-331658) Waiting for machine to stop 41/120
	I0918 20:57:42.491047   60240 main.go:141] libmachine: (no-preload-331658) Waiting for machine to stop 42/120
	I0918 20:57:43.492599   60240 main.go:141] libmachine: (no-preload-331658) Waiting for machine to stop 43/120
	I0918 20:57:44.493970   60240 main.go:141] libmachine: (no-preload-331658) Waiting for machine to stop 44/120
	I0918 20:57:45.496421   60240 main.go:141] libmachine: (no-preload-331658) Waiting for machine to stop 45/120
	I0918 20:57:46.498170   60240 main.go:141] libmachine: (no-preload-331658) Waiting for machine to stop 46/120
	I0918 20:57:47.499663   60240 main.go:141] libmachine: (no-preload-331658) Waiting for machine to stop 47/120
	I0918 20:57:48.501130   60240 main.go:141] libmachine: (no-preload-331658) Waiting for machine to stop 48/120
	I0918 20:57:49.502564   60240 main.go:141] libmachine: (no-preload-331658) Waiting for machine to stop 49/120
	I0918 20:57:50.504827   60240 main.go:141] libmachine: (no-preload-331658) Waiting for machine to stop 50/120
	I0918 20:57:51.506461   60240 main.go:141] libmachine: (no-preload-331658) Waiting for machine to stop 51/120
	I0918 20:57:52.508099   60240 main.go:141] libmachine: (no-preload-331658) Waiting for machine to stop 52/120
	I0918 20:57:53.509545   60240 main.go:141] libmachine: (no-preload-331658) Waiting for machine to stop 53/120
	I0918 20:57:54.510972   60240 main.go:141] libmachine: (no-preload-331658) Waiting for machine to stop 54/120
	I0918 20:57:55.513323   60240 main.go:141] libmachine: (no-preload-331658) Waiting for machine to stop 55/120
	I0918 20:57:56.514911   60240 main.go:141] libmachine: (no-preload-331658) Waiting for machine to stop 56/120
	I0918 20:57:57.516523   60240 main.go:141] libmachine: (no-preload-331658) Waiting for machine to stop 57/120
	I0918 20:57:58.518042   60240 main.go:141] libmachine: (no-preload-331658) Waiting for machine to stop 58/120
	I0918 20:57:59.519451   60240 main.go:141] libmachine: (no-preload-331658) Waiting for machine to stop 59/120
	I0918 20:58:00.522044   60240 main.go:141] libmachine: (no-preload-331658) Waiting for machine to stop 60/120
	I0918 20:58:01.523543   60240 main.go:141] libmachine: (no-preload-331658) Waiting for machine to stop 61/120
	I0918 20:58:02.525451   60240 main.go:141] libmachine: (no-preload-331658) Waiting for machine to stop 62/120
	I0918 20:58:03.527503   60240 main.go:141] libmachine: (no-preload-331658) Waiting for machine to stop 63/120
	I0918 20:58:04.529051   60240 main.go:141] libmachine: (no-preload-331658) Waiting for machine to stop 64/120
	I0918 20:58:05.531325   60240 main.go:141] libmachine: (no-preload-331658) Waiting for machine to stop 65/120
	I0918 20:58:06.532801   60240 main.go:141] libmachine: (no-preload-331658) Waiting for machine to stop 66/120
	I0918 20:58:07.534427   60240 main.go:141] libmachine: (no-preload-331658) Waiting for machine to stop 67/120
	I0918 20:58:08.535879   60240 main.go:141] libmachine: (no-preload-331658) Waiting for machine to stop 68/120
	I0918 20:58:09.537465   60240 main.go:141] libmachine: (no-preload-331658) Waiting for machine to stop 69/120
	I0918 20:58:10.538934   60240 main.go:141] libmachine: (no-preload-331658) Waiting for machine to stop 70/120
	I0918 20:58:11.540312   60240 main.go:141] libmachine: (no-preload-331658) Waiting for machine to stop 71/120
	I0918 20:58:12.541851   60240 main.go:141] libmachine: (no-preload-331658) Waiting for machine to stop 72/120
	I0918 20:58:13.543420   60240 main.go:141] libmachine: (no-preload-331658) Waiting for machine to stop 73/120
	I0918 20:58:14.544907   60240 main.go:141] libmachine: (no-preload-331658) Waiting for machine to stop 74/120
	I0918 20:58:15.547568   60240 main.go:141] libmachine: (no-preload-331658) Waiting for machine to stop 75/120
	I0918 20:58:16.549135   60240 main.go:141] libmachine: (no-preload-331658) Waiting for machine to stop 76/120
	I0918 20:58:17.550473   60240 main.go:141] libmachine: (no-preload-331658) Waiting for machine to stop 77/120
	I0918 20:58:18.551930   60240 main.go:141] libmachine: (no-preload-331658) Waiting for machine to stop 78/120
	I0918 20:58:19.553404   60240 main.go:141] libmachine: (no-preload-331658) Waiting for machine to stop 79/120
	I0918 20:58:20.555601   60240 main.go:141] libmachine: (no-preload-331658) Waiting for machine to stop 80/120
	I0918 20:58:21.557159   60240 main.go:141] libmachine: (no-preload-331658) Waiting for machine to stop 81/120
	I0918 20:58:22.558729   60240 main.go:141] libmachine: (no-preload-331658) Waiting for machine to stop 82/120
	I0918 20:58:23.560851   60240 main.go:141] libmachine: (no-preload-331658) Waiting for machine to stop 83/120
	I0918 20:58:24.562295   60240 main.go:141] libmachine: (no-preload-331658) Waiting for machine to stop 84/120
	I0918 20:58:25.564739   60240 main.go:141] libmachine: (no-preload-331658) Waiting for machine to stop 85/120
	I0918 20:58:26.566320   60240 main.go:141] libmachine: (no-preload-331658) Waiting for machine to stop 86/120
	I0918 20:58:27.567579   60240 main.go:141] libmachine: (no-preload-331658) Waiting for machine to stop 87/120
	I0918 20:58:28.569136   60240 main.go:141] libmachine: (no-preload-331658) Waiting for machine to stop 88/120
	I0918 20:58:29.570521   60240 main.go:141] libmachine: (no-preload-331658) Waiting for machine to stop 89/120
	I0918 20:58:30.572711   60240 main.go:141] libmachine: (no-preload-331658) Waiting for machine to stop 90/120
	I0918 20:58:31.574314   60240 main.go:141] libmachine: (no-preload-331658) Waiting for machine to stop 91/120
	I0918 20:58:32.575882   60240 main.go:141] libmachine: (no-preload-331658) Waiting for machine to stop 92/120
	I0918 20:58:33.577387   60240 main.go:141] libmachine: (no-preload-331658) Waiting for machine to stop 93/120
	I0918 20:58:34.578902   60240 main.go:141] libmachine: (no-preload-331658) Waiting for machine to stop 94/120
	I0918 20:58:35.581094   60240 main.go:141] libmachine: (no-preload-331658) Waiting for machine to stop 95/120
	I0918 20:58:36.582611   60240 main.go:141] libmachine: (no-preload-331658) Waiting for machine to stop 96/120
	I0918 20:58:37.584617   60240 main.go:141] libmachine: (no-preload-331658) Waiting for machine to stop 97/120
	I0918 20:58:38.586138   60240 main.go:141] libmachine: (no-preload-331658) Waiting for machine to stop 98/120
	I0918 20:58:39.587694   60240 main.go:141] libmachine: (no-preload-331658) Waiting for machine to stop 99/120
	I0918 20:58:40.589784   60240 main.go:141] libmachine: (no-preload-331658) Waiting for machine to stop 100/120
	I0918 20:58:41.591537   60240 main.go:141] libmachine: (no-preload-331658) Waiting for machine to stop 101/120
	I0918 20:58:42.593166   60240 main.go:141] libmachine: (no-preload-331658) Waiting for machine to stop 102/120
	I0918 20:58:43.594622   60240 main.go:141] libmachine: (no-preload-331658) Waiting for machine to stop 103/120
	I0918 20:58:44.596157   60240 main.go:141] libmachine: (no-preload-331658) Waiting for machine to stop 104/120
	I0918 20:58:45.598333   60240 main.go:141] libmachine: (no-preload-331658) Waiting for machine to stop 105/120
	I0918 20:58:46.600084   60240 main.go:141] libmachine: (no-preload-331658) Waiting for machine to stop 106/120
	I0918 20:58:47.601649   60240 main.go:141] libmachine: (no-preload-331658) Waiting for machine to stop 107/120
	I0918 20:58:48.603160   60240 main.go:141] libmachine: (no-preload-331658) Waiting for machine to stop 108/120
	I0918 20:58:49.604483   60240 main.go:141] libmachine: (no-preload-331658) Waiting for machine to stop 109/120
	I0918 20:58:50.606618   60240 main.go:141] libmachine: (no-preload-331658) Waiting for machine to stop 110/120
	I0918 20:58:51.608496   60240 main.go:141] libmachine: (no-preload-331658) Waiting for machine to stop 111/120
	I0918 20:58:52.609722   60240 main.go:141] libmachine: (no-preload-331658) Waiting for machine to stop 112/120
	I0918 20:58:53.611023   60240 main.go:141] libmachine: (no-preload-331658) Waiting for machine to stop 113/120
	I0918 20:58:54.612478   60240 main.go:141] libmachine: (no-preload-331658) Waiting for machine to stop 114/120
	I0918 20:58:55.614321   60240 main.go:141] libmachine: (no-preload-331658) Waiting for machine to stop 115/120
	I0918 20:58:56.615720   60240 main.go:141] libmachine: (no-preload-331658) Waiting for machine to stop 116/120
	I0918 20:58:57.617100   60240 main.go:141] libmachine: (no-preload-331658) Waiting for machine to stop 117/120
	I0918 20:58:58.618412   60240 main.go:141] libmachine: (no-preload-331658) Waiting for machine to stop 118/120
	I0918 20:58:59.619562   60240 main.go:141] libmachine: (no-preload-331658) Waiting for machine to stop 119/120
	I0918 20:59:00.620174   60240 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0918 20:59:00.620264   60240 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0918 20:59:00.622256   60240 out.go:201] 
	W0918 20:59:00.623856   60240 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0918 20:59:00.623877   60240 out.go:270] * 
	* 
	W0918 20:59:00.626619   60240 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0918 20:59:00.628083   60240 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p no-preload-331658 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-331658 -n no-preload-331658
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-331658 -n no-preload-331658: exit status 3 (18.439267498s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0918 20:59:19.068374   60913 status.go:410] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.31:22: connect: no route to host
	E0918 20:59:19.068395   60913 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.61.31:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-331658" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (138.96s)
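Each of the stop failures in this group follows the same shape: the stop path backs up /etc/cni and /etc/kubernetes to /var/lib/minikube/backup, asks the kvm2 driver to stop the domain, then polls the machine state once per second for 120 attempts ("Waiting for machine to stop 0/120" through "119/120") before giving up with GUEST_STOP_TIMEOUT and exit status 82. The Go sketch below only illustrates that polling shape as it appears in the stderr above; waitForStop and isStopped are hypothetical names used for illustration, not minikube's or libmachine's actual API.

// stop_poll_sketch.go - illustrative reconstruction of the one-second,
// 120-attempt stop poll visible in the stderr above. The isStopped callback
// stands in for the driver's state query and is an assumption.
package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForStop polls isStopped once per second, up to maxAttempts times,
// mirroring the "Waiting for machine to stop N/120" lines in the log.
func waitForStop(isStopped func() bool, maxAttempts int) error {
	for i := 0; i < maxAttempts; i++ {
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, maxAttempts)
		if isStopped() {
			return nil
		}
		time.Sleep(1 * time.Second)
	}
	return errors.New(`unable to stop vm, current state "Running"`)
}

func main() {
	// The real run uses 120 attempts; 3 keeps this demo short. In the failed
	// test the VM never reported a stopped state, so all attempts were used.
	err := waitForStop(func() bool { return false }, 3)
	fmt.Println("stop err:", err)
}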

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (139.17s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-828868 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-828868 --alsologtostderr -v=3: exit status 82 (2m0.524974417s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-828868"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0918 20:57:40.644636   60519 out.go:345] Setting OutFile to fd 1 ...
	I0918 20:57:40.644737   60519 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 20:57:40.644745   60519 out.go:358] Setting ErrFile to fd 2...
	I0918 20:57:40.644749   60519 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 20:57:40.644952   60519 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19667-7671/.minikube/bin
	I0918 20:57:40.645198   60519 out.go:352] Setting JSON to false
	I0918 20:57:40.645285   60519 mustload.go:65] Loading cluster: default-k8s-diff-port-828868
	I0918 20:57:40.645648   60519 config.go:182] Loaded profile config "default-k8s-diff-port-828868": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0918 20:57:40.645729   60519 profile.go:143] Saving config to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/default-k8s-diff-port-828868/config.json ...
	I0918 20:57:40.645895   60519 mustload.go:65] Loading cluster: default-k8s-diff-port-828868
	I0918 20:57:40.645997   60519 config.go:182] Loaded profile config "default-k8s-diff-port-828868": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0918 20:57:40.646021   60519 stop.go:39] StopHost: default-k8s-diff-port-828868
	I0918 20:57:40.646402   60519 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 20:57:40.646442   60519 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 20:57:40.661624   60519 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43737
	I0918 20:57:40.662114   60519 main.go:141] libmachine: () Calling .GetVersion
	I0918 20:57:40.662675   60519 main.go:141] libmachine: Using API Version  1
	I0918 20:57:40.662707   60519 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 20:57:40.663010   60519 main.go:141] libmachine: () Calling .GetMachineName
	I0918 20:57:40.665474   60519 out.go:177] * Stopping node "default-k8s-diff-port-828868"  ...
	I0918 20:57:40.667054   60519 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0918 20:57:40.667099   60519 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .DriverName
	I0918 20:57:40.667421   60519 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0918 20:57:40.667448   60519 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHHostname
	I0918 20:57:40.670516   60519 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 20:57:40.671093   60519 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:39:06", ip: ""} in network mk-default-k8s-diff-port-828868: {Iface:virbr4 ExpiryTime:2024-09-18 21:56:46 +0000 UTC Type:0 Mac:52:54:00:c0:39:06 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:default-k8s-diff-port-828868 Clientid:01:52:54:00:c0:39:06}
	I0918 20:57:40.671130   60519 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined IP address 192.168.50.109 and MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 20:57:40.671284   60519 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHPort
	I0918 20:57:40.671526   60519 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHKeyPath
	I0918 20:57:40.671680   60519 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHUsername
	I0918 20:57:40.671849   60519 sshutil.go:53] new ssh client: &{IP:192.168.50.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/default-k8s-diff-port-828868/id_rsa Username:docker}
	I0918 20:57:40.786296   60519 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0918 20:57:40.851804   60519 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0918 20:57:40.906948   60519 main.go:141] libmachine: Stopping "default-k8s-diff-port-828868"...
	I0918 20:57:40.906998   60519 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetState
	I0918 20:57:40.908929   60519 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .Stop
	I0918 20:57:40.912578   60519 main.go:141] libmachine: (default-k8s-diff-port-828868) Waiting for machine to stop 0/120
	I0918 20:57:41.914160   60519 main.go:141] libmachine: (default-k8s-diff-port-828868) Waiting for machine to stop 1/120
	I0918 20:57:42.915449   60519 main.go:141] libmachine: (default-k8s-diff-port-828868) Waiting for machine to stop 2/120
	I0918 20:57:43.916806   60519 main.go:141] libmachine: (default-k8s-diff-port-828868) Waiting for machine to stop 3/120
	I0918 20:57:44.918489   60519 main.go:141] libmachine: (default-k8s-diff-port-828868) Waiting for machine to stop 4/120
	I0918 20:57:45.920853   60519 main.go:141] libmachine: (default-k8s-diff-port-828868) Waiting for machine to stop 5/120
	I0918 20:57:46.922694   60519 main.go:141] libmachine: (default-k8s-diff-port-828868) Waiting for machine to stop 6/120
	I0918 20:57:47.924527   60519 main.go:141] libmachine: (default-k8s-diff-port-828868) Waiting for machine to stop 7/120
	I0918 20:57:48.926077   60519 main.go:141] libmachine: (default-k8s-diff-port-828868) Waiting for machine to stop 8/120
	I0918 20:57:49.927556   60519 main.go:141] libmachine: (default-k8s-diff-port-828868) Waiting for machine to stop 9/120
	I0918 20:57:50.930037   60519 main.go:141] libmachine: (default-k8s-diff-port-828868) Waiting for machine to stop 10/120
	I0918 20:57:51.932123   60519 main.go:141] libmachine: (default-k8s-diff-port-828868) Waiting for machine to stop 11/120
	I0918 20:57:52.933711   60519 main.go:141] libmachine: (default-k8s-diff-port-828868) Waiting for machine to stop 12/120
	I0918 20:57:53.935236   60519 main.go:141] libmachine: (default-k8s-diff-port-828868) Waiting for machine to stop 13/120
	I0918 20:57:54.936743   60519 main.go:141] libmachine: (default-k8s-diff-port-828868) Waiting for machine to stop 14/120
	I0918 20:57:55.938987   60519 main.go:141] libmachine: (default-k8s-diff-port-828868) Waiting for machine to stop 15/120
	I0918 20:57:56.940370   60519 main.go:141] libmachine: (default-k8s-diff-port-828868) Waiting for machine to stop 16/120
	I0918 20:57:57.941940   60519 main.go:141] libmachine: (default-k8s-diff-port-828868) Waiting for machine to stop 17/120
	I0918 20:57:58.943854   60519 main.go:141] libmachine: (default-k8s-diff-port-828868) Waiting for machine to stop 18/120
	I0918 20:57:59.945377   60519 main.go:141] libmachine: (default-k8s-diff-port-828868) Waiting for machine to stop 19/120
	I0918 20:58:00.946919   60519 main.go:141] libmachine: (default-k8s-diff-port-828868) Waiting for machine to stop 20/120
	I0918 20:58:01.948789   60519 main.go:141] libmachine: (default-k8s-diff-port-828868) Waiting for machine to stop 21/120
	I0918 20:58:02.950284   60519 main.go:141] libmachine: (default-k8s-diff-port-828868) Waiting for machine to stop 22/120
	I0918 20:58:03.952102   60519 main.go:141] libmachine: (default-k8s-diff-port-828868) Waiting for machine to stop 23/120
	I0918 20:58:04.953671   60519 main.go:141] libmachine: (default-k8s-diff-port-828868) Waiting for machine to stop 24/120
	I0918 20:58:05.956028   60519 main.go:141] libmachine: (default-k8s-diff-port-828868) Waiting for machine to stop 25/120
	I0918 20:58:06.958043   60519 main.go:141] libmachine: (default-k8s-diff-port-828868) Waiting for machine to stop 26/120
	I0918 20:58:07.959726   60519 main.go:141] libmachine: (default-k8s-diff-port-828868) Waiting for machine to stop 27/120
	I0918 20:58:08.961800   60519 main.go:141] libmachine: (default-k8s-diff-port-828868) Waiting for machine to stop 28/120
	I0918 20:58:09.963687   60519 main.go:141] libmachine: (default-k8s-diff-port-828868) Waiting for machine to stop 29/120
	I0918 20:58:10.965268   60519 main.go:141] libmachine: (default-k8s-diff-port-828868) Waiting for machine to stop 30/120
	I0918 20:58:11.966978   60519 main.go:141] libmachine: (default-k8s-diff-port-828868) Waiting for machine to stop 31/120
	I0918 20:58:12.968527   60519 main.go:141] libmachine: (default-k8s-diff-port-828868) Waiting for machine to stop 32/120
	I0918 20:58:13.970041   60519 main.go:141] libmachine: (default-k8s-diff-port-828868) Waiting for machine to stop 33/120
	I0918 20:58:14.971794   60519 main.go:141] libmachine: (default-k8s-diff-port-828868) Waiting for machine to stop 34/120
	I0918 20:58:15.974212   60519 main.go:141] libmachine: (default-k8s-diff-port-828868) Waiting for machine to stop 35/120
	I0918 20:58:16.976335   60519 main.go:141] libmachine: (default-k8s-diff-port-828868) Waiting for machine to stop 36/120
	I0918 20:58:17.977811   60519 main.go:141] libmachine: (default-k8s-diff-port-828868) Waiting for machine to stop 37/120
	I0918 20:58:18.979163   60519 main.go:141] libmachine: (default-k8s-diff-port-828868) Waiting for machine to stop 38/120
	I0918 20:58:19.980601   60519 main.go:141] libmachine: (default-k8s-diff-port-828868) Waiting for machine to stop 39/120
	I0918 20:58:20.982775   60519 main.go:141] libmachine: (default-k8s-diff-port-828868) Waiting for machine to stop 40/120
	I0918 20:58:21.984406   60519 main.go:141] libmachine: (default-k8s-diff-port-828868) Waiting for machine to stop 41/120
	I0918 20:58:22.986387   60519 main.go:141] libmachine: (default-k8s-diff-port-828868) Waiting for machine to stop 42/120
	I0918 20:58:23.987807   60519 main.go:141] libmachine: (default-k8s-diff-port-828868) Waiting for machine to stop 43/120
	I0918 20:58:24.989336   60519 main.go:141] libmachine: (default-k8s-diff-port-828868) Waiting for machine to stop 44/120
	I0918 20:58:25.991795   60519 main.go:141] libmachine: (default-k8s-diff-port-828868) Waiting for machine to stop 45/120
	I0918 20:58:26.993441   60519 main.go:141] libmachine: (default-k8s-diff-port-828868) Waiting for machine to stop 46/120
	I0918 20:58:27.995149   60519 main.go:141] libmachine: (default-k8s-diff-port-828868) Waiting for machine to stop 47/120
	I0918 20:58:28.996624   60519 main.go:141] libmachine: (default-k8s-diff-port-828868) Waiting for machine to stop 48/120
	I0918 20:58:29.998134   60519 main.go:141] libmachine: (default-k8s-diff-port-828868) Waiting for machine to stop 49/120
	I0918 20:58:31.000732   60519 main.go:141] libmachine: (default-k8s-diff-port-828868) Waiting for machine to stop 50/120
	I0918 20:58:32.002243   60519 main.go:141] libmachine: (default-k8s-diff-port-828868) Waiting for machine to stop 51/120
	I0918 20:58:33.003906   60519 main.go:141] libmachine: (default-k8s-diff-port-828868) Waiting for machine to stop 52/120
	I0918 20:58:34.005448   60519 main.go:141] libmachine: (default-k8s-diff-port-828868) Waiting for machine to stop 53/120
	I0918 20:58:35.007334   60519 main.go:141] libmachine: (default-k8s-diff-port-828868) Waiting for machine to stop 54/120
	I0918 20:58:36.009095   60519 main.go:141] libmachine: (default-k8s-diff-port-828868) Waiting for machine to stop 55/120
	I0918 20:58:37.010857   60519 main.go:141] libmachine: (default-k8s-diff-port-828868) Waiting for machine to stop 56/120
	I0918 20:58:38.012332   60519 main.go:141] libmachine: (default-k8s-diff-port-828868) Waiting for machine to stop 57/120
	I0918 20:58:39.013860   60519 main.go:141] libmachine: (default-k8s-diff-port-828868) Waiting for machine to stop 58/120
	I0918 20:58:40.015231   60519 main.go:141] libmachine: (default-k8s-diff-port-828868) Waiting for machine to stop 59/120
	I0918 20:58:41.017488   60519 main.go:141] libmachine: (default-k8s-diff-port-828868) Waiting for machine to stop 60/120
	I0918 20:58:42.019021   60519 main.go:141] libmachine: (default-k8s-diff-port-828868) Waiting for machine to stop 61/120
	I0918 20:58:43.020453   60519 main.go:141] libmachine: (default-k8s-diff-port-828868) Waiting for machine to stop 62/120
	I0918 20:58:44.021937   60519 main.go:141] libmachine: (default-k8s-diff-port-828868) Waiting for machine to stop 63/120
	I0918 20:58:45.023456   60519 main.go:141] libmachine: (default-k8s-diff-port-828868) Waiting for machine to stop 64/120
	I0918 20:58:46.025476   60519 main.go:141] libmachine: (default-k8s-diff-port-828868) Waiting for machine to stop 65/120
	I0918 20:58:47.026894   60519 main.go:141] libmachine: (default-k8s-diff-port-828868) Waiting for machine to stop 66/120
	I0918 20:58:48.028319   60519 main.go:141] libmachine: (default-k8s-diff-port-828868) Waiting for machine to stop 67/120
	I0918 20:58:49.029884   60519 main.go:141] libmachine: (default-k8s-diff-port-828868) Waiting for machine to stop 68/120
	I0918 20:58:50.031483   60519 main.go:141] libmachine: (default-k8s-diff-port-828868) Waiting for machine to stop 69/120
	I0918 20:58:51.032947   60519 main.go:141] libmachine: (default-k8s-diff-port-828868) Waiting for machine to stop 70/120
	I0918 20:58:52.034355   60519 main.go:141] libmachine: (default-k8s-diff-port-828868) Waiting for machine to stop 71/120
	I0918 20:58:53.035536   60519 main.go:141] libmachine: (default-k8s-diff-port-828868) Waiting for machine to stop 72/120
	I0918 20:58:54.037321   60519 main.go:141] libmachine: (default-k8s-diff-port-828868) Waiting for machine to stop 73/120
	I0918 20:58:55.038609   60519 main.go:141] libmachine: (default-k8s-diff-port-828868) Waiting for machine to stop 74/120
	I0918 20:58:56.040584   60519 main.go:141] libmachine: (default-k8s-diff-port-828868) Waiting for machine to stop 75/120
	I0918 20:58:57.041980   60519 main.go:141] libmachine: (default-k8s-diff-port-828868) Waiting for machine to stop 76/120
	I0918 20:58:58.043284   60519 main.go:141] libmachine: (default-k8s-diff-port-828868) Waiting for machine to stop 77/120
	I0918 20:58:59.044931   60519 main.go:141] libmachine: (default-k8s-diff-port-828868) Waiting for machine to stop 78/120
	I0918 20:59:00.046418   60519 main.go:141] libmachine: (default-k8s-diff-port-828868) Waiting for machine to stop 79/120
	I0918 20:59:01.048676   60519 main.go:141] libmachine: (default-k8s-diff-port-828868) Waiting for machine to stop 80/120
	I0918 20:59:02.049993   60519 main.go:141] libmachine: (default-k8s-diff-port-828868) Waiting for machine to stop 81/120
	I0918 20:59:03.051494   60519 main.go:141] libmachine: (default-k8s-diff-port-828868) Waiting for machine to stop 82/120
	I0918 20:59:04.053163   60519 main.go:141] libmachine: (default-k8s-diff-port-828868) Waiting for machine to stop 83/120
	I0918 20:59:05.054405   60519 main.go:141] libmachine: (default-k8s-diff-port-828868) Waiting for machine to stop 84/120
	I0918 20:59:06.056581   60519 main.go:141] libmachine: (default-k8s-diff-port-828868) Waiting for machine to stop 85/120
	I0918 20:59:07.058025   60519 main.go:141] libmachine: (default-k8s-diff-port-828868) Waiting for machine to stop 86/120
	I0918 20:59:08.059486   60519 main.go:141] libmachine: (default-k8s-diff-port-828868) Waiting for machine to stop 87/120
	I0918 20:59:09.060967   60519 main.go:141] libmachine: (default-k8s-diff-port-828868) Waiting for machine to stop 88/120
	I0918 20:59:10.062768   60519 main.go:141] libmachine: (default-k8s-diff-port-828868) Waiting for machine to stop 89/120
	I0918 20:59:11.065162   60519 main.go:141] libmachine: (default-k8s-diff-port-828868) Waiting for machine to stop 90/120
	I0918 20:59:12.066688   60519 main.go:141] libmachine: (default-k8s-diff-port-828868) Waiting for machine to stop 91/120
	I0918 20:59:13.068626   60519 main.go:141] libmachine: (default-k8s-diff-port-828868) Waiting for machine to stop 92/120
	I0918 20:59:14.070099   60519 main.go:141] libmachine: (default-k8s-diff-port-828868) Waiting for machine to stop 93/120
	I0918 20:59:15.071677   60519 main.go:141] libmachine: (default-k8s-diff-port-828868) Waiting for machine to stop 94/120
	I0918 20:59:16.073789   60519 main.go:141] libmachine: (default-k8s-diff-port-828868) Waiting for machine to stop 95/120
	I0918 20:59:17.075294   60519 main.go:141] libmachine: (default-k8s-diff-port-828868) Waiting for machine to stop 96/120
	I0918 20:59:18.077768   60519 main.go:141] libmachine: (default-k8s-diff-port-828868) Waiting for machine to stop 97/120
	I0918 20:59:19.079029   60519 main.go:141] libmachine: (default-k8s-diff-port-828868) Waiting for machine to stop 98/120
	I0918 20:59:20.080660   60519 main.go:141] libmachine: (default-k8s-diff-port-828868) Waiting for machine to stop 99/120
	I0918 20:59:21.083137   60519 main.go:141] libmachine: (default-k8s-diff-port-828868) Waiting for machine to stop 100/120
	I0918 20:59:22.084734   60519 main.go:141] libmachine: (default-k8s-diff-port-828868) Waiting for machine to stop 101/120
	I0918 20:59:23.086264   60519 main.go:141] libmachine: (default-k8s-diff-port-828868) Waiting for machine to stop 102/120
	I0918 20:59:24.087859   60519 main.go:141] libmachine: (default-k8s-diff-port-828868) Waiting for machine to stop 103/120
	I0918 20:59:25.089838   60519 main.go:141] libmachine: (default-k8s-diff-port-828868) Waiting for machine to stop 104/120
	I0918 20:59:26.092259   60519 main.go:141] libmachine: (default-k8s-diff-port-828868) Waiting for machine to stop 105/120
	I0918 20:59:27.093919   60519 main.go:141] libmachine: (default-k8s-diff-port-828868) Waiting for machine to stop 106/120
	I0918 20:59:28.095647   60519 main.go:141] libmachine: (default-k8s-diff-port-828868) Waiting for machine to stop 107/120
	I0918 20:59:29.096970   60519 main.go:141] libmachine: (default-k8s-diff-port-828868) Waiting for machine to stop 108/120
	I0918 20:59:30.099066   60519 main.go:141] libmachine: (default-k8s-diff-port-828868) Waiting for machine to stop 109/120
	I0918 20:59:31.101515   60519 main.go:141] libmachine: (default-k8s-diff-port-828868) Waiting for machine to stop 110/120
	I0918 20:59:32.103043   60519 main.go:141] libmachine: (default-k8s-diff-port-828868) Waiting for machine to stop 111/120
	I0918 20:59:33.104641   60519 main.go:141] libmachine: (default-k8s-diff-port-828868) Waiting for machine to stop 112/120
	I0918 20:59:34.106753   60519 main.go:141] libmachine: (default-k8s-diff-port-828868) Waiting for machine to stop 113/120
	I0918 20:59:35.108266   60519 main.go:141] libmachine: (default-k8s-diff-port-828868) Waiting for machine to stop 114/120
	I0918 20:59:36.110531   60519 main.go:141] libmachine: (default-k8s-diff-port-828868) Waiting for machine to stop 115/120
	I0918 20:59:37.112040   60519 main.go:141] libmachine: (default-k8s-diff-port-828868) Waiting for machine to stop 116/120
	I0918 20:59:38.113714   60519 main.go:141] libmachine: (default-k8s-diff-port-828868) Waiting for machine to stop 117/120
	I0918 20:59:39.115164   60519 main.go:141] libmachine: (default-k8s-diff-port-828868) Waiting for machine to stop 118/120
	I0918 20:59:40.116782   60519 main.go:141] libmachine: (default-k8s-diff-port-828868) Waiting for machine to stop 119/120
	I0918 20:59:41.118109   60519 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0918 20:59:41.118168   60519 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0918 20:59:41.120000   60519 out.go:201] 
	W0918 20:59:41.121296   60519 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0918 20:59:41.121322   60519 out.go:270] * 
	* 
	W0918 20:59:41.123815   60519 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0918 20:59:41.124999   60519 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-828868 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-828868 -n default-k8s-diff-port-828868
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-828868 -n default-k8s-diff-port-828868: exit status 3 (18.645356939s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0918 20:59:59.772466   61337 status.go:410] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.109:22: connect: no route to host
	E0918 20:59:59.772498   61337 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.50.109:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-828868" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (139.17s)
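After a failed stop, the post-mortem helper re-checks the host with the same minikube binary via status --format={{.Host}} -p <profile> -n <profile>. Exit status 3 is treated as "may be ok", and because the host is then reported as not running, log retrieval is skipped. A rough sketch of that check, assuming nothing about the helper beyond the command line and exit-code handling visible in the log, could look like this:

// postmortem_status_sketch.go - illustrative sketch of the post-mortem host
// check. The binary path, flags, and profile name are taken from the log
// above; the exit-code handling is an assumption about the helper, not its
// actual source.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	profile := "default-k8s-diff-port-828868" // profile from the failed run above
	cmd := exec.Command("out/minikube-linux-amd64",
		"status", "--format={{.Host}}", "-p", profile, "-n", profile)
	out, err := cmd.CombinedOutput()
	fmt.Printf("status output: %s\n", out)
	if exitErr, ok := err.(*exec.ExitError); ok {
		// Exit status 3 means the host is not running or unreachable; the
		// helper logs "status error: exit status 3 (may be ok)".
		fmt.Printf("status error: exit status %d (may be ok)\n", exitErr.ExitCode())
	} else if err != nil {
		fmt.Println("could not run status:", err)
	}
}

In the run above the check could not reach 192.168.50.109:22 ("no route to host"), so the host state came back as "Error" and log retrieval was skipped.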

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (139.03s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-255556 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-255556 --alsologtostderr -v=3: exit status 82 (2m0.483790415s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-255556"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0918 20:57:46.925317   60619 out.go:345] Setting OutFile to fd 1 ...
	I0918 20:57:46.925425   60619 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 20:57:46.925433   60619 out.go:358] Setting ErrFile to fd 2...
	I0918 20:57:46.925437   60619 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 20:57:46.925609   60619 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19667-7671/.minikube/bin
	I0918 20:57:46.925828   60619 out.go:352] Setting JSON to false
	I0918 20:57:46.925902   60619 mustload.go:65] Loading cluster: embed-certs-255556
	I0918 20:57:46.926251   60619 config.go:182] Loaded profile config "embed-certs-255556": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0918 20:57:46.926312   60619 profile.go:143] Saving config to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/embed-certs-255556/config.json ...
	I0918 20:57:46.926482   60619 mustload.go:65] Loading cluster: embed-certs-255556
	I0918 20:57:46.926581   60619 config.go:182] Loaded profile config "embed-certs-255556": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0918 20:57:46.926602   60619 stop.go:39] StopHost: embed-certs-255556
	I0918 20:57:46.926981   60619 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 20:57:46.927021   60619 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 20:57:46.943283   60619 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43373
	I0918 20:57:46.943806   60619 main.go:141] libmachine: () Calling .GetVersion
	I0918 20:57:46.944460   60619 main.go:141] libmachine: Using API Version  1
	I0918 20:57:46.944482   60619 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 20:57:46.944811   60619 main.go:141] libmachine: () Calling .GetMachineName
	I0918 20:57:46.947212   60619 out.go:177] * Stopping node "embed-certs-255556"  ...
	I0918 20:57:46.948766   60619 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0918 20:57:46.948795   60619 main.go:141] libmachine: (embed-certs-255556) Calling .DriverName
	I0918 20:57:46.949065   60619 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0918 20:57:46.949100   60619 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHHostname
	I0918 20:57:46.952010   60619 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 20:57:46.952448   60619 main.go:141] libmachine: (embed-certs-255556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:c2:b7", ip: ""} in network mk-embed-certs-255556: {Iface:virbr1 ExpiryTime:2024-09-18 21:56:24 +0000 UTC Type:0 Mac:52:54:00:e8:c2:b7 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:embed-certs-255556 Clientid:01:52:54:00:e8:c2:b7}
	I0918 20:57:46.952489   60619 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined IP address 192.168.39.21 and MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 20:57:46.952733   60619 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHPort
	I0918 20:57:46.952957   60619 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHKeyPath
	I0918 20:57:46.953142   60619 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHUsername
	I0918 20:57:46.953300   60619 sshutil.go:53] new ssh client: &{IP:192.168.39.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/embed-certs-255556/id_rsa Username:docker}
	I0918 20:57:47.043979   60619 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0918 20:57:47.096955   60619 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0918 20:57:47.152446   60619 main.go:141] libmachine: Stopping "embed-certs-255556"...
	I0918 20:57:47.152496   60619 main.go:141] libmachine: (embed-certs-255556) Calling .GetState
	I0918 20:57:47.154059   60619 main.go:141] libmachine: (embed-certs-255556) Calling .Stop
	I0918 20:57:47.157569   60619 main.go:141] libmachine: (embed-certs-255556) Waiting for machine to stop 0/120
	I0918 20:57:48.159234   60619 main.go:141] libmachine: (embed-certs-255556) Waiting for machine to stop 1/120
	I0918 20:57:49.160719   60619 main.go:141] libmachine: (embed-certs-255556) Waiting for machine to stop 2/120
	I0918 20:57:50.162111   60619 main.go:141] libmachine: (embed-certs-255556) Waiting for machine to stop 3/120
	I0918 20:57:51.163681   60619 main.go:141] libmachine: (embed-certs-255556) Waiting for machine to stop 4/120
	I0918 20:57:52.165091   60619 main.go:141] libmachine: (embed-certs-255556) Waiting for machine to stop 5/120
	I0918 20:57:53.166594   60619 main.go:141] libmachine: (embed-certs-255556) Waiting for machine to stop 6/120
	I0918 20:57:54.168107   60619 main.go:141] libmachine: (embed-certs-255556) Waiting for machine to stop 7/120
	I0918 20:57:55.169464   60619 main.go:141] libmachine: (embed-certs-255556) Waiting for machine to stop 8/120
	I0918 20:57:56.170991   60619 main.go:141] libmachine: (embed-certs-255556) Waiting for machine to stop 9/120
	I0918 20:57:57.173394   60619 main.go:141] libmachine: (embed-certs-255556) Waiting for machine to stop 10/120
	I0918 20:57:58.174898   60619 main.go:141] libmachine: (embed-certs-255556) Waiting for machine to stop 11/120
	I0918 20:57:59.176546   60619 main.go:141] libmachine: (embed-certs-255556) Waiting for machine to stop 12/120
	I0918 20:58:00.178068   60619 main.go:141] libmachine: (embed-certs-255556) Waiting for machine to stop 13/120
	I0918 20:58:01.179620   60619 main.go:141] libmachine: (embed-certs-255556) Waiting for machine to stop 14/120
	I0918 20:58:02.181808   60619 main.go:141] libmachine: (embed-certs-255556) Waiting for machine to stop 15/120
	I0918 20:58:03.183951   60619 main.go:141] libmachine: (embed-certs-255556) Waiting for machine to stop 16/120
	I0918 20:58:04.185819   60619 main.go:141] libmachine: (embed-certs-255556) Waiting for machine to stop 17/120
	I0918 20:58:05.187272   60619 main.go:141] libmachine: (embed-certs-255556) Waiting for machine to stop 18/120
	I0918 20:58:06.189067   60619 main.go:141] libmachine: (embed-certs-255556) Waiting for machine to stop 19/120
	I0918 20:58:07.190599   60619 main.go:141] libmachine: (embed-certs-255556) Waiting for machine to stop 20/120
	I0918 20:58:08.192359   60619 main.go:141] libmachine: (embed-certs-255556) Waiting for machine to stop 21/120
	I0918 20:58:09.193936   60619 main.go:141] libmachine: (embed-certs-255556) Waiting for machine to stop 22/120
	I0918 20:58:10.195314   60619 main.go:141] libmachine: (embed-certs-255556) Waiting for machine to stop 23/120
	I0918 20:58:11.196809   60619 main.go:141] libmachine: (embed-certs-255556) Waiting for machine to stop 24/120
	I0918 20:58:12.198968   60619 main.go:141] libmachine: (embed-certs-255556) Waiting for machine to stop 25/120
	I0918 20:58:13.200633   60619 main.go:141] libmachine: (embed-certs-255556) Waiting for machine to stop 26/120
	I0918 20:58:14.202081   60619 main.go:141] libmachine: (embed-certs-255556) Waiting for machine to stop 27/120
	I0918 20:58:15.203786   60619 main.go:141] libmachine: (embed-certs-255556) Waiting for machine to stop 28/120
	I0918 20:58:16.205577   60619 main.go:141] libmachine: (embed-certs-255556) Waiting for machine to stop 29/120
	I0918 20:58:17.207011   60619 main.go:141] libmachine: (embed-certs-255556) Waiting for machine to stop 30/120
	I0918 20:58:18.208352   60619 main.go:141] libmachine: (embed-certs-255556) Waiting for machine to stop 31/120
	I0918 20:58:19.210539   60619 main.go:141] libmachine: (embed-certs-255556) Waiting for machine to stop 32/120
	I0918 20:58:20.211986   60619 main.go:141] libmachine: (embed-certs-255556) Waiting for machine to stop 33/120
	I0918 20:58:21.213423   60619 main.go:141] libmachine: (embed-certs-255556) Waiting for machine to stop 34/120
	I0918 20:58:22.216390   60619 main.go:141] libmachine: (embed-certs-255556) Waiting for machine to stop 35/120
	I0918 20:58:23.217969   60619 main.go:141] libmachine: (embed-certs-255556) Waiting for machine to stop 36/120
	I0918 20:58:24.220459   60619 main.go:141] libmachine: (embed-certs-255556) Waiting for machine to stop 37/120
	I0918 20:58:25.222466   60619 main.go:141] libmachine: (embed-certs-255556) Waiting for machine to stop 38/120
	I0918 20:58:26.224114   60619 main.go:141] libmachine: (embed-certs-255556) Waiting for machine to stop 39/120
	I0918 20:58:27.225574   60619 main.go:141] libmachine: (embed-certs-255556) Waiting for machine to stop 40/120
	I0918 20:58:28.227600   60619 main.go:141] libmachine: (embed-certs-255556) Waiting for machine to stop 41/120
	I0918 20:58:29.229477   60619 main.go:141] libmachine: (embed-certs-255556) Waiting for machine to stop 42/120
	I0918 20:58:30.230899   60619 main.go:141] libmachine: (embed-certs-255556) Waiting for machine to stop 43/120
	I0918 20:58:31.232397   60619 main.go:141] libmachine: (embed-certs-255556) Waiting for machine to stop 44/120
	I0918 20:58:32.234666   60619 main.go:141] libmachine: (embed-certs-255556) Waiting for machine to stop 45/120
	I0918 20:58:33.236285   60619 main.go:141] libmachine: (embed-certs-255556) Waiting for machine to stop 46/120
	I0918 20:58:34.237846   60619 main.go:141] libmachine: (embed-certs-255556) Waiting for machine to stop 47/120
	I0918 20:58:35.239457   60619 main.go:141] libmachine: (embed-certs-255556) Waiting for machine to stop 48/120
	I0918 20:58:36.241021   60619 main.go:141] libmachine: (embed-certs-255556) Waiting for machine to stop 49/120
	I0918 20:58:37.243393   60619 main.go:141] libmachine: (embed-certs-255556) Waiting for machine to stop 50/120
	I0918 20:58:38.245061   60619 main.go:141] libmachine: (embed-certs-255556) Waiting for machine to stop 51/120
	I0918 20:58:39.246552   60619 main.go:141] libmachine: (embed-certs-255556) Waiting for machine to stop 52/120
	I0918 20:58:40.248147   60619 main.go:141] libmachine: (embed-certs-255556) Waiting for machine to stop 53/120
	I0918 20:58:41.249656   60619 main.go:141] libmachine: (embed-certs-255556) Waiting for machine to stop 54/120
	I0918 20:58:42.251869   60619 main.go:141] libmachine: (embed-certs-255556) Waiting for machine to stop 55/120
	I0918 20:58:43.253455   60619 main.go:141] libmachine: (embed-certs-255556) Waiting for machine to stop 56/120
	I0918 20:58:44.255751   60619 main.go:141] libmachine: (embed-certs-255556) Waiting for machine to stop 57/120
	I0918 20:58:45.257456   60619 main.go:141] libmachine: (embed-certs-255556) Waiting for machine to stop 58/120
	I0918 20:58:46.259042   60619 main.go:141] libmachine: (embed-certs-255556) Waiting for machine to stop 59/120
	I0918 20:58:47.261178   60619 main.go:141] libmachine: (embed-certs-255556) Waiting for machine to stop 60/120
	I0918 20:58:48.262836   60619 main.go:141] libmachine: (embed-certs-255556) Waiting for machine to stop 61/120
	I0918 20:58:49.264495   60619 main.go:141] libmachine: (embed-certs-255556) Waiting for machine to stop 62/120
	I0918 20:58:50.265924   60619 main.go:141] libmachine: (embed-certs-255556) Waiting for machine to stop 63/120
	I0918 20:58:51.267406   60619 main.go:141] libmachine: (embed-certs-255556) Waiting for machine to stop 64/120
	I0918 20:58:52.269772   60619 main.go:141] libmachine: (embed-certs-255556) Waiting for machine to stop 65/120
	I0918 20:58:53.271058   60619 main.go:141] libmachine: (embed-certs-255556) Waiting for machine to stop 66/120
	I0918 20:58:54.273233   60619 main.go:141] libmachine: (embed-certs-255556) Waiting for machine to stop 67/120
	I0918 20:58:55.274717   60619 main.go:141] libmachine: (embed-certs-255556) Waiting for machine to stop 68/120
	I0918 20:58:56.276250   60619 main.go:141] libmachine: (embed-certs-255556) Waiting for machine to stop 69/120
	I0918 20:58:57.277795   60619 main.go:141] libmachine: (embed-certs-255556) Waiting for machine to stop 70/120
	I0918 20:58:58.279217   60619 main.go:141] libmachine: (embed-certs-255556) Waiting for machine to stop 71/120
	I0918 20:58:59.280929   60619 main.go:141] libmachine: (embed-certs-255556) Waiting for machine to stop 72/120
	I0918 20:59:00.282721   60619 main.go:141] libmachine: (embed-certs-255556) Waiting for machine to stop 73/120
	I0918 20:59:01.284261   60619 main.go:141] libmachine: (embed-certs-255556) Waiting for machine to stop 74/120
	I0918 20:59:02.285906   60619 main.go:141] libmachine: (embed-certs-255556) Waiting for machine to stop 75/120
	I0918 20:59:03.287497   60619 main.go:141] libmachine: (embed-certs-255556) Waiting for machine to stop 76/120
	I0918 20:59:04.288891   60619 main.go:141] libmachine: (embed-certs-255556) Waiting for machine to stop 77/120
	I0918 20:59:05.290465   60619 main.go:141] libmachine: (embed-certs-255556) Waiting for machine to stop 78/120
	I0918 20:59:06.291939   60619 main.go:141] libmachine: (embed-certs-255556) Waiting for machine to stop 79/120
	I0918 20:59:07.294354   60619 main.go:141] libmachine: (embed-certs-255556) Waiting for machine to stop 80/120
	I0918 20:59:08.296045   60619 main.go:141] libmachine: (embed-certs-255556) Waiting for machine to stop 81/120
	I0918 20:59:09.297419   60619 main.go:141] libmachine: (embed-certs-255556) Waiting for machine to stop 82/120
	I0918 20:59:10.298807   60619 main.go:141] libmachine: (embed-certs-255556) Waiting for machine to stop 83/120
	I0918 20:59:11.300385   60619 main.go:141] libmachine: (embed-certs-255556) Waiting for machine to stop 84/120
	I0918 20:59:12.302892   60619 main.go:141] libmachine: (embed-certs-255556) Waiting for machine to stop 85/120
	I0918 20:59:13.304311   60619 main.go:141] libmachine: (embed-certs-255556) Waiting for machine to stop 86/120
	I0918 20:59:14.305805   60619 main.go:141] libmachine: (embed-certs-255556) Waiting for machine to stop 87/120
	I0918 20:59:15.307135   60619 main.go:141] libmachine: (embed-certs-255556) Waiting for machine to stop 88/120
	I0918 20:59:16.308354   60619 main.go:141] libmachine: (embed-certs-255556) Waiting for machine to stop 89/120
	I0918 20:59:17.309810   60619 main.go:141] libmachine: (embed-certs-255556) Waiting for machine to stop 90/120
	I0918 20:59:18.311276   60619 main.go:141] libmachine: (embed-certs-255556) Waiting for machine to stop 91/120
	I0918 20:59:19.312791   60619 main.go:141] libmachine: (embed-certs-255556) Waiting for machine to stop 92/120
	I0918 20:59:20.314374   60619 main.go:141] libmachine: (embed-certs-255556) Waiting for machine to stop 93/120
	I0918 20:59:21.315779   60619 main.go:141] libmachine: (embed-certs-255556) Waiting for machine to stop 94/120
	I0918 20:59:22.316957   60619 main.go:141] libmachine: (embed-certs-255556) Waiting for machine to stop 95/120
	I0918 20:59:23.318816   60619 main.go:141] libmachine: (embed-certs-255556) Waiting for machine to stop 96/120
	I0918 20:59:24.320525   60619 main.go:141] libmachine: (embed-certs-255556) Waiting for machine to stop 97/120
	I0918 20:59:25.322545   60619 main.go:141] libmachine: (embed-certs-255556) Waiting for machine to stop 98/120
	I0918 20:59:26.323864   60619 main.go:141] libmachine: (embed-certs-255556) Waiting for machine to stop 99/120
	I0918 20:59:27.325211   60619 main.go:141] libmachine: (embed-certs-255556) Waiting for machine to stop 100/120
	I0918 20:59:28.326636   60619 main.go:141] libmachine: (embed-certs-255556) Waiting for machine to stop 101/120
	I0918 20:59:29.328314   60619 main.go:141] libmachine: (embed-certs-255556) Waiting for machine to stop 102/120
	I0918 20:59:30.329878   60619 main.go:141] libmachine: (embed-certs-255556) Waiting for machine to stop 103/120
	I0918 20:59:31.331208   60619 main.go:141] libmachine: (embed-certs-255556) Waiting for machine to stop 104/120
	I0918 20:59:32.333442   60619 main.go:141] libmachine: (embed-certs-255556) Waiting for machine to stop 105/120
	I0918 20:59:33.335044   60619 main.go:141] libmachine: (embed-certs-255556) Waiting for machine to stop 106/120
	I0918 20:59:34.336770   60619 main.go:141] libmachine: (embed-certs-255556) Waiting for machine to stop 107/120
	I0918 20:59:35.338627   60619 main.go:141] libmachine: (embed-certs-255556) Waiting for machine to stop 108/120
	I0918 20:59:36.340107   60619 main.go:141] libmachine: (embed-certs-255556) Waiting for machine to stop 109/120
	I0918 20:59:37.342812   60619 main.go:141] libmachine: (embed-certs-255556) Waiting for machine to stop 110/120
	I0918 20:59:38.344363   60619 main.go:141] libmachine: (embed-certs-255556) Waiting for machine to stop 111/120
	I0918 20:59:39.346002   60619 main.go:141] libmachine: (embed-certs-255556) Waiting for machine to stop 112/120
	I0918 20:59:40.347531   60619 main.go:141] libmachine: (embed-certs-255556) Waiting for machine to stop 113/120
	I0918 20:59:41.349119   60619 main.go:141] libmachine: (embed-certs-255556) Waiting for machine to stop 114/120
	I0918 20:59:42.351379   60619 main.go:141] libmachine: (embed-certs-255556) Waiting for machine to stop 115/120
	I0918 20:59:43.352900   60619 main.go:141] libmachine: (embed-certs-255556) Waiting for machine to stop 116/120
	I0918 20:59:44.354579   60619 main.go:141] libmachine: (embed-certs-255556) Waiting for machine to stop 117/120
	I0918 20:59:45.355959   60619 main.go:141] libmachine: (embed-certs-255556) Waiting for machine to stop 118/120
	I0918 20:59:46.357435   60619 main.go:141] libmachine: (embed-certs-255556) Waiting for machine to stop 119/120
	I0918 20:59:47.358829   60619 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0918 20:59:47.358882   60619 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0918 20:59:47.360704   60619 out.go:201] 
	W0918 20:59:47.362053   60619 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0918 20:59:47.362071   60619 out.go:270] * 
	* 
	W0918 20:59:47.364546   60619 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0918 20:59:47.365702   60619 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p embed-certs-255556 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-255556 -n embed-certs-255556
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-255556 -n embed-certs-255556: exit status 3 (18.548528231s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0918 21:00:05.916285   61398 status.go:410] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.21:22: connect: no route to host
	E0918 21:00:05.916312   61398 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.39.21:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-255556" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (139.03s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (0.49s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-740194 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-740194 create -f testdata/busybox.yaml: exit status 1 (46.806702ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-740194" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-740194 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-740194 -n old-k8s-version-740194
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-740194 -n old-k8s-version-740194: exit status 6 (223.057942ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0918 20:59:16.170609   61038 status.go:451] kubeconfig endpoint: get endpoint: "old-k8s-version-740194" does not appear in /home/jenkins/minikube-integration/19667-7671/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-740194" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-740194 -n old-k8s-version-740194
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-740194 -n old-k8s-version-740194: exit status 6 (216.614588ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0918 20:59:16.387431   61084 status.go:451] kubeconfig endpoint: get endpoint: "old-k8s-version-740194" does not appear in /home/jenkins/minikube-integration/19667-7671/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-740194" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.49s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (97.53s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-740194 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-740194 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m37.263003653s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-740194 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-740194 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-740194 describe deploy/metrics-server -n kube-system: exit status 1 (45.510583ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-740194" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-740194 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-740194 -n old-k8s-version-740194
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-740194 -n old-k8s-version-740194: exit status 6 (222.335824ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0918 21:00:53.918541   61933 status.go:451] kubeconfig endpoint: get endpoint: "old-k8s-version-740194" does not appear in /home/jenkins/minikube-integration/19667-7671/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-740194" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (97.53s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-331658 -n no-preload-331658
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-331658 -n no-preload-331658: exit status 3 (3.167710917s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0918 20:59:22.236370   61146 status.go:410] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.31:22: connect: no route to host
	E0918 20:59:22.236394   61146 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.61.31:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-331658 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-331658 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.152202462s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.61.31:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-331658 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-331658 -n no-preload-331658
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-331658 -n no-preload-331658: exit status 3 (3.063788448s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0918 20:59:31.452453   61226 status.go:410] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.31:22: connect: no route to host
	E0918 20:59:31.452482   61226 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.61.31:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-331658" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-828868 -n default-k8s-diff-port-828868
E0918 21:00:01.286828   14878 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/functional-790989/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-828868 -n default-k8s-diff-port-828868: exit status 3 (3.167673041s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0918 21:00:02.940446   61463 status.go:410] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.109:22: connect: no route to host
	E0918 21:00:02.940471   61463 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.50.109:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-828868 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-828868 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.154229693s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.50.109:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-828868 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-828868 -n default-k8s-diff-port-828868
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-828868 -n default-k8s-diff-port-828868: exit status 3 (3.061728533s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0918 21:00:12.156482   61600 status.go:410] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.109:22: connect: no route to host
	E0918 21:00:12.156514   61600 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.50.109:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-828868" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-255556 -n embed-certs-255556
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-255556 -n embed-certs-255556: exit status 3 (3.168033593s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0918 21:00:09.084420   61547 status.go:410] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.21:22: connect: no route to host
	E0918 21:00:09.084447   61547 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.39.21:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-255556 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-255556 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.152900155s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.21:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-255556 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-255556 -n embed-certs-255556
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-255556 -n embed-certs-255556: exit status 3 (3.062650043s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0918 21:00:18.300476   61694 status.go:410] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.21:22: connect: no route to host
	E0918 21:00:18.300496   61694 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.39.21:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-255556" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (739.31s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-740194 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E0918 21:01:12.175774   14878 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/addons-815929/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:05:01.286707   14878 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/functional-790989/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:06:12.175248   14878 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/addons-815929/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:07:35.248851   14878 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/addons-815929/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-740194 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (12m15.720467873s)

                                                
                                                
-- stdout --
	* [old-k8s-version-740194] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19667
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19667-7671/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19667-7671/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-740194" primary control-plane node in "old-k8s-version-740194" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-740194" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0918 21:00:59.486726   62061 out.go:345] Setting OutFile to fd 1 ...
	I0918 21:00:59.486839   62061 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 21:00:59.486848   62061 out.go:358] Setting ErrFile to fd 2...
	I0918 21:00:59.486854   62061 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 21:00:59.487063   62061 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19667-7671/.minikube/bin
	I0918 21:00:59.487631   62061 out.go:352] Setting JSON to false
	I0918 21:00:59.488570   62061 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6203,"bootTime":1726687056,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0918 21:00:59.488665   62061 start.go:139] virtualization: kvm guest
	I0918 21:00:59.490927   62061 out.go:177] * [old-k8s-version-740194] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0918 21:00:59.492124   62061 out.go:177]   - MINIKUBE_LOCATION=19667
	I0918 21:00:59.492192   62061 notify.go:220] Checking for updates...
	I0918 21:00:59.494436   62061 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 21:00:59.495887   62061 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19667-7671/kubeconfig
	I0918 21:00:59.496929   62061 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19667-7671/.minikube
	I0918 21:00:59.498172   62061 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0918 21:00:59.499384   62061 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0918 21:00:59.500900   62061 config.go:182] Loaded profile config "old-k8s-version-740194": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0918 21:00:59.501295   62061 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:00:59.501375   62061 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:00:59.516448   62061 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45397
	I0918 21:00:59.516855   62061 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:00:59.517347   62061 main.go:141] libmachine: Using API Version  1
	I0918 21:00:59.517362   62061 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:00:59.517673   62061 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:00:59.517848   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .DriverName
	I0918 21:00:59.519750   62061 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0918 21:00:59.521126   62061 driver.go:394] Setting default libvirt URI to qemu:///system
	I0918 21:00:59.521433   62061 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:00:59.521468   62061 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:00:59.537580   62061 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39433
	I0918 21:00:59.537973   62061 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:00:59.538452   62061 main.go:141] libmachine: Using API Version  1
	I0918 21:00:59.538471   62061 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:00:59.538852   62061 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:00:59.539083   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .DriverName
	I0918 21:00:59.575494   62061 out.go:177] * Using the kvm2 driver based on existing profile
	I0918 21:00:59.576555   62061 start.go:297] selected driver: kvm2
	I0918 21:00:59.576572   62061 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-740194 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-740194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.53 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 21:00:59.576682   62061 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 21:00:59.577339   62061 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 21:00:59.577400   62061 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19667-7671/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0918 21:00:59.593287   62061 install.go:137] /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I0918 21:00:59.593810   62061 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0918 21:00:59.593855   62061 cni.go:84] Creating CNI manager for ""
	I0918 21:00:59.593911   62061 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0918 21:00:59.593972   62061 start.go:340] cluster config:
	{Name:old-k8s-version-740194 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-740194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.53 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 21:00:59.594104   62061 iso.go:125] acquiring lock: {Name:mkbcf4d84f3091567a39802cd5e773cb10b8c545 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 21:00:59.595936   62061 out.go:177] * Starting "old-k8s-version-740194" primary control-plane node in "old-k8s-version-740194" cluster
	I0918 21:00:59.597210   62061 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0918 21:00:59.597243   62061 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0918 21:00:59.597250   62061 cache.go:56] Caching tarball of preloaded images
	I0918 21:00:59.597325   62061 preload.go:172] Found /home/jenkins/minikube-integration/19667-7671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0918 21:00:59.597338   62061 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0918 21:00:59.597439   62061 profile.go:143] Saving config to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/old-k8s-version-740194/config.json ...
	I0918 21:00:59.597658   62061 start.go:360] acquireMachinesLock for old-k8s-version-740194: {Name:mk1b0d68a0b7d9d4bc8204d5d81cb9eb77222526 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 21:04:48.782438   62061 start.go:364] duration metric: took 3m49.184727821s to acquireMachinesLock for "old-k8s-version-740194"
	I0918 21:04:48.782503   62061 start.go:96] Skipping create...Using existing machine configuration
	I0918 21:04:48.782514   62061 fix.go:54] fixHost starting: 
	I0918 21:04:48.782993   62061 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:04:48.783053   62061 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:04:48.802299   62061 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44463
	I0918 21:04:48.802787   62061 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:04:48.803286   62061 main.go:141] libmachine: Using API Version  1
	I0918 21:04:48.803313   62061 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:04:48.803681   62061 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:04:48.803873   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .DriverName
	I0918 21:04:48.804007   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetState
	I0918 21:04:48.805714   62061 fix.go:112] recreateIfNeeded on old-k8s-version-740194: state=Stopped err=<nil>
	I0918 21:04:48.805744   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .DriverName
	W0918 21:04:48.805910   62061 fix.go:138] unexpected machine state, will restart: <nil>
	I0918 21:04:48.835402   62061 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-740194" ...
	I0918 21:04:48.836753   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .Start
	I0918 21:04:48.837090   62061 main.go:141] libmachine: (old-k8s-version-740194) Ensuring networks are active...
	I0918 21:04:48.838014   62061 main.go:141] libmachine: (old-k8s-version-740194) Ensuring network default is active
	I0918 21:04:48.838375   62061 main.go:141] libmachine: (old-k8s-version-740194) Ensuring network mk-old-k8s-version-740194 is active
	I0918 21:04:48.839035   62061 main.go:141] libmachine: (old-k8s-version-740194) Getting domain xml...
	I0918 21:04:48.839832   62061 main.go:141] libmachine: (old-k8s-version-740194) Creating domain...
	I0918 21:04:50.157515   62061 main.go:141] libmachine: (old-k8s-version-740194) Waiting to get IP...
	I0918 21:04:50.158369   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:04:50.158796   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | unable to find current IP address of domain old-k8s-version-740194 in network mk-old-k8s-version-740194
	I0918 21:04:50.158931   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | I0918 21:04:50.158765   63026 retry.go:31] will retry after 214.617426ms: waiting for machine to come up
	I0918 21:04:50.375418   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:04:50.375882   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | unable to find current IP address of domain old-k8s-version-740194 in network mk-old-k8s-version-740194
	I0918 21:04:50.375910   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | I0918 21:04:50.375834   63026 retry.go:31] will retry after 311.080996ms: waiting for machine to come up
	I0918 21:04:50.688569   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:04:50.689151   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | unable to find current IP address of domain old-k8s-version-740194 in network mk-old-k8s-version-740194
	I0918 21:04:50.689182   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | I0918 21:04:50.689112   63026 retry.go:31] will retry after 386.384815ms: waiting for machine to come up
	I0918 21:04:51.076846   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:04:51.077336   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | unable to find current IP address of domain old-k8s-version-740194 in network mk-old-k8s-version-740194
	I0918 21:04:51.077359   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | I0918 21:04:51.077280   63026 retry.go:31] will retry after 556.210887ms: waiting for machine to come up
	I0918 21:04:51.634919   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:04:51.635423   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | unable to find current IP address of domain old-k8s-version-740194 in network mk-old-k8s-version-740194
	I0918 21:04:51.635462   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | I0918 21:04:51.635325   63026 retry.go:31] will retry after 607.960565ms: waiting for machine to come up
	I0918 21:04:52.245179   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:04:52.245741   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | unable to find current IP address of domain old-k8s-version-740194 in network mk-old-k8s-version-740194
	I0918 21:04:52.245774   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | I0918 21:04:52.245692   63026 retry.go:31] will retry after 620.243825ms: waiting for machine to come up
	I0918 21:04:52.867067   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:04:52.867658   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | unable to find current IP address of domain old-k8s-version-740194 in network mk-old-k8s-version-740194
	I0918 21:04:52.867680   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | I0918 21:04:52.867608   63026 retry.go:31] will retry after 1.091814923s: waiting for machine to come up
	I0918 21:04:53.961395   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:04:53.961819   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | unable to find current IP address of domain old-k8s-version-740194 in network mk-old-k8s-version-740194
	I0918 21:04:53.961842   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | I0918 21:04:53.961784   63026 retry.go:31] will retry after 1.130552716s: waiting for machine to come up
	I0918 21:04:55.093491   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:04:55.093956   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | unable to find current IP address of domain old-k8s-version-740194 in network mk-old-k8s-version-740194
	I0918 21:04:55.093982   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | I0918 21:04:55.093923   63026 retry.go:31] will retry after 1.824664154s: waiting for machine to come up
	I0918 21:04:56.920959   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:04:56.921371   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | unable to find current IP address of domain old-k8s-version-740194 in network mk-old-k8s-version-740194
	I0918 21:04:56.921422   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | I0918 21:04:56.921354   63026 retry.go:31] will retry after 1.591260677s: waiting for machine to come up
	I0918 21:04:58.514832   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:04:58.515294   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | unable to find current IP address of domain old-k8s-version-740194 in network mk-old-k8s-version-740194
	I0918 21:04:58.515322   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | I0918 21:04:58.515262   63026 retry.go:31] will retry after 1.868763497s: waiting for machine to come up
	I0918 21:05:00.385891   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:00.386451   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | unable to find current IP address of domain old-k8s-version-740194 in network mk-old-k8s-version-740194
	I0918 21:05:00.386482   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | I0918 21:05:00.386384   63026 retry.go:31] will retry after 3.274467583s: waiting for machine to come up
	I0918 21:05:03.664788   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:03.665255   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | unable to find current IP address of domain old-k8s-version-740194 in network mk-old-k8s-version-740194
	I0918 21:05:03.665286   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | I0918 21:05:03.665210   63026 retry.go:31] will retry after 4.112908346s: waiting for machine to come up
	I0918 21:05:07.782616   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:07.783108   62061 main.go:141] libmachine: (old-k8s-version-740194) Found IP for machine: 192.168.72.53
	I0918 21:05:07.783125   62061 main.go:141] libmachine: (old-k8s-version-740194) Reserving static IP address...
	I0918 21:05:07.783135   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has current primary IP address 192.168.72.53 and MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:07.783509   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | found host DHCP lease matching {name: "old-k8s-version-740194", mac: "52:54:00:3f:a7:11", ip: "192.168.72.53"} in network mk-old-k8s-version-740194: {Iface:virbr3 ExpiryTime:2024-09-18 22:05:00 +0000 UTC Type:0 Mac:52:54:00:3f:a7:11 Iaid: IPaddr:192.168.72.53 Prefix:24 Hostname:old-k8s-version-740194 Clientid:01:52:54:00:3f:a7:11}
	I0918 21:05:07.783542   62061 main.go:141] libmachine: (old-k8s-version-740194) Reserved static IP address: 192.168.72.53
	I0918 21:05:07.783565   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | skip adding static IP to network mk-old-k8s-version-740194 - found existing host DHCP lease matching {name: "old-k8s-version-740194", mac: "52:54:00:3f:a7:11", ip: "192.168.72.53"}
	I0918 21:05:07.783583   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | Getting to WaitForSSH function...
	I0918 21:05:07.783614   62061 main.go:141] libmachine: (old-k8s-version-740194) Waiting for SSH to be available...
	I0918 21:05:07.785492   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:07.785856   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a7:11", ip: ""} in network mk-old-k8s-version-740194: {Iface:virbr3 ExpiryTime:2024-09-18 22:05:00 +0000 UTC Type:0 Mac:52:54:00:3f:a7:11 Iaid: IPaddr:192.168.72.53 Prefix:24 Hostname:old-k8s-version-740194 Clientid:01:52:54:00:3f:a7:11}
	I0918 21:05:07.785885   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined IP address 192.168.72.53 and MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:07.786000   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | Using SSH client type: external
	I0918 21:05:07.786021   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | Using SSH private key: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/old-k8s-version-740194/id_rsa (-rw-------)
	I0918 21:05:07.786044   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.53 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19667-7671/.minikube/machines/old-k8s-version-740194/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0918 21:05:07.786055   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | About to run SSH command:
	I0918 21:05:07.786065   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | exit 0
	I0918 21:05:07.915953   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | SSH cmd err, output: <nil>: 
	I0918 21:05:07.916454   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetConfigRaw
	I0918 21:05:07.917059   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetIP
	I0918 21:05:07.919749   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:07.920244   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a7:11", ip: ""} in network mk-old-k8s-version-740194: {Iface:virbr3 ExpiryTime:2024-09-18 22:05:00 +0000 UTC Type:0 Mac:52:54:00:3f:a7:11 Iaid: IPaddr:192.168.72.53 Prefix:24 Hostname:old-k8s-version-740194 Clientid:01:52:54:00:3f:a7:11}
	I0918 21:05:07.920280   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined IP address 192.168.72.53 and MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:07.920639   62061 profile.go:143] Saving config to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/old-k8s-version-740194/config.json ...
	I0918 21:05:07.920871   62061 machine.go:93] provisionDockerMachine start ...
	I0918 21:05:07.920893   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .DriverName
	I0918 21:05:07.921100   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHHostname
	I0918 21:05:07.923129   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:07.923434   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a7:11", ip: ""} in network mk-old-k8s-version-740194: {Iface:virbr3 ExpiryTime:2024-09-18 22:05:00 +0000 UTC Type:0 Mac:52:54:00:3f:a7:11 Iaid: IPaddr:192.168.72.53 Prefix:24 Hostname:old-k8s-version-740194 Clientid:01:52:54:00:3f:a7:11}
	I0918 21:05:07.923458   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined IP address 192.168.72.53 and MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:07.923573   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHPort
	I0918 21:05:07.923727   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHKeyPath
	I0918 21:05:07.923889   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHKeyPath
	I0918 21:05:07.924036   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHUsername
	I0918 21:05:07.924185   62061 main.go:141] libmachine: Using SSH client type: native
	I0918 21:05:07.924368   62061 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.53 22 <nil> <nil>}
	I0918 21:05:07.924378   62061 main.go:141] libmachine: About to run SSH command:
	hostname
	I0918 21:05:08.035941   62061 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0918 21:05:08.035972   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetMachineName
	I0918 21:05:08.036242   62061 buildroot.go:166] provisioning hostname "old-k8s-version-740194"
	I0918 21:05:08.036273   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetMachineName
	I0918 21:05:08.036512   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHHostname
	I0918 21:05:08.038902   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:08.039239   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a7:11", ip: ""} in network mk-old-k8s-version-740194: {Iface:virbr3 ExpiryTime:2024-09-18 22:05:00 +0000 UTC Type:0 Mac:52:54:00:3f:a7:11 Iaid: IPaddr:192.168.72.53 Prefix:24 Hostname:old-k8s-version-740194 Clientid:01:52:54:00:3f:a7:11}
	I0918 21:05:08.039266   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined IP address 192.168.72.53 and MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:08.039438   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHPort
	I0918 21:05:08.039632   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHKeyPath
	I0918 21:05:08.039899   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHKeyPath
	I0918 21:05:08.040072   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHUsername
	I0918 21:05:08.040248   62061 main.go:141] libmachine: Using SSH client type: native
	I0918 21:05:08.040415   62061 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.53 22 <nil> <nil>}
	I0918 21:05:08.040428   62061 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-740194 && echo "old-k8s-version-740194" | sudo tee /etc/hostname
	I0918 21:05:08.165391   62061 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-740194
	
	I0918 21:05:08.165424   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHHostname
	I0918 21:05:08.168059   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:08.168396   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a7:11", ip: ""} in network mk-old-k8s-version-740194: {Iface:virbr3 ExpiryTime:2024-09-18 22:05:00 +0000 UTC Type:0 Mac:52:54:00:3f:a7:11 Iaid: IPaddr:192.168.72.53 Prefix:24 Hostname:old-k8s-version-740194 Clientid:01:52:54:00:3f:a7:11}
	I0918 21:05:08.168428   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined IP address 192.168.72.53 and MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:08.168621   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHPort
	I0918 21:05:08.168837   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHKeyPath
	I0918 21:05:08.168988   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHKeyPath
	I0918 21:05:08.169231   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHUsername
	I0918 21:05:08.169413   62061 main.go:141] libmachine: Using SSH client type: native
	I0918 21:05:08.169579   62061 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.53 22 <nil> <nil>}
	I0918 21:05:08.169594   62061 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-740194' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-740194/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-740194' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0918 21:05:08.288591   62061 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0918 21:05:08.288644   62061 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19667-7671/.minikube CaCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19667-7671/.minikube}
	I0918 21:05:08.288671   62061 buildroot.go:174] setting up certificates
	I0918 21:05:08.288682   62061 provision.go:84] configureAuth start
	I0918 21:05:08.288694   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetMachineName
	I0918 21:05:08.289017   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetIP
	I0918 21:05:08.291949   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:08.292358   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a7:11", ip: ""} in network mk-old-k8s-version-740194: {Iface:virbr3 ExpiryTime:2024-09-18 22:05:00 +0000 UTC Type:0 Mac:52:54:00:3f:a7:11 Iaid: IPaddr:192.168.72.53 Prefix:24 Hostname:old-k8s-version-740194 Clientid:01:52:54:00:3f:a7:11}
	I0918 21:05:08.292405   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined IP address 192.168.72.53 and MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:08.292526   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHHostname
	I0918 21:05:08.295013   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:08.295378   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a7:11", ip: ""} in network mk-old-k8s-version-740194: {Iface:virbr3 ExpiryTime:2024-09-18 22:05:00 +0000 UTC Type:0 Mac:52:54:00:3f:a7:11 Iaid: IPaddr:192.168.72.53 Prefix:24 Hostname:old-k8s-version-740194 Clientid:01:52:54:00:3f:a7:11}
	I0918 21:05:08.295399   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined IP address 192.168.72.53 and MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:08.295567   62061 provision.go:143] copyHostCerts
	I0918 21:05:08.295630   62061 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem, removing ...
	I0918 21:05:08.295640   62061 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem
	I0918 21:05:08.295692   62061 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem (1082 bytes)
	I0918 21:05:08.295783   62061 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem, removing ...
	I0918 21:05:08.295790   62061 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem
	I0918 21:05:08.295810   62061 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem (1123 bytes)
	I0918 21:05:08.295917   62061 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem, removing ...
	I0918 21:05:08.295926   62061 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem
	I0918 21:05:08.295949   62061 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem (1679 bytes)
	I0918 21:05:08.296009   62061 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-740194 san=[127.0.0.1 192.168.72.53 localhost minikube old-k8s-version-740194]
	I0918 21:05:08.560726   62061 provision.go:177] copyRemoteCerts
	I0918 21:05:08.560786   62061 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0918 21:05:08.560816   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHHostname
	I0918 21:05:08.563798   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:08.564231   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a7:11", ip: ""} in network mk-old-k8s-version-740194: {Iface:virbr3 ExpiryTime:2024-09-18 22:05:00 +0000 UTC Type:0 Mac:52:54:00:3f:a7:11 Iaid: IPaddr:192.168.72.53 Prefix:24 Hostname:old-k8s-version-740194 Clientid:01:52:54:00:3f:a7:11}
	I0918 21:05:08.564266   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined IP address 192.168.72.53 and MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:08.564473   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHPort
	I0918 21:05:08.564704   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHKeyPath
	I0918 21:05:08.564876   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHUsername
	I0918 21:05:08.565016   62061 sshutil.go:53] new ssh client: &{IP:192.168.72.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/old-k8s-version-740194/id_rsa Username:docker}
	I0918 21:05:08.654357   62061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0918 21:05:08.680551   62061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0918 21:05:08.704324   62061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0918 21:05:08.727966   62061 provision.go:87] duration metric: took 439.269312ms to configureAuth
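The configureAuth step above generates a server certificate whose SANs are listed on the provision.go:117 line (127.0.0.1, 192.168.72.53, localhost, minikube, old-k8s-version-740194) and copies it to /etc/docker on the guest. A quick way to confirm which SANs actually ended up in the copied cert, using the path from this log (a sketch; run it on the guest over the same SSH session minikube uses):

	# Print the Subject Alternative Name extension of the provisioned server cert
	sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'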
	I0918 21:05:08.728003   62061 buildroot.go:189] setting minikube options for container-runtime
	I0918 21:05:08.728235   62061 config.go:182] Loaded profile config "old-k8s-version-740194": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0918 21:05:08.728305   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHHostname
	I0918 21:05:08.730895   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:08.731318   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a7:11", ip: ""} in network mk-old-k8s-version-740194: {Iface:virbr3 ExpiryTime:2024-09-18 22:05:00 +0000 UTC Type:0 Mac:52:54:00:3f:a7:11 Iaid: IPaddr:192.168.72.53 Prefix:24 Hostname:old-k8s-version-740194 Clientid:01:52:54:00:3f:a7:11}
	I0918 21:05:08.731348   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined IP address 192.168.72.53 and MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:08.731448   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHPort
	I0918 21:05:08.731668   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHKeyPath
	I0918 21:05:08.731884   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHKeyPath
	I0918 21:05:08.732057   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHUsername
	I0918 21:05:08.732223   62061 main.go:141] libmachine: Using SSH client type: native
	I0918 21:05:08.732396   62061 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.53 22 <nil> <nil>}
	I0918 21:05:08.732411   62061 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0918 21:05:08.978962   62061 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0918 21:05:08.978995   62061 machine.go:96] duration metric: took 1.05811075s to provisionDockerMachine
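The SSH command above writes the insecure-registry flag into /etc/sysconfig/crio.minikube and restarts CRI-O, which is what closes out provisionDockerMachine. Confirming the drop-in contents and that the daemon came back up is a one-liner on the guest (a sketch; the file path comes from the log, and the unit name is assumed to be the stock crio.service on the minikube ISO):

	# Show the options file minikube just wrote, then check the service state
	cat /etc/sysconfig/crio.minikube && systemctl is-active crio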
	I0918 21:05:08.979008   62061 start.go:293] postStartSetup for "old-k8s-version-740194" (driver="kvm2")
	I0918 21:05:08.979020   62061 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0918 21:05:08.979050   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .DriverName
	I0918 21:05:08.979409   62061 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0918 21:05:08.979436   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHHostname
	I0918 21:05:08.982472   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:08.982791   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a7:11", ip: ""} in network mk-old-k8s-version-740194: {Iface:virbr3 ExpiryTime:2024-09-18 22:05:00 +0000 UTC Type:0 Mac:52:54:00:3f:a7:11 Iaid: IPaddr:192.168.72.53 Prefix:24 Hostname:old-k8s-version-740194 Clientid:01:52:54:00:3f:a7:11}
	I0918 21:05:08.982830   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined IP address 192.168.72.53 and MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:08.982996   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHPort
	I0918 21:05:08.983192   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHKeyPath
	I0918 21:05:08.983341   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHUsername
	I0918 21:05:08.983510   62061 sshutil.go:53] new ssh client: &{IP:192.168.72.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/old-k8s-version-740194/id_rsa Username:docker}
	I0918 21:05:09.074995   62061 ssh_runner.go:195] Run: cat /etc/os-release
	I0918 21:05:09.080207   62061 info.go:137] Remote host: Buildroot 2023.02.9
	I0918 21:05:09.080240   62061 filesync.go:126] Scanning /home/jenkins/minikube-integration/19667-7671/.minikube/addons for local assets ...
	I0918 21:05:09.080327   62061 filesync.go:126] Scanning /home/jenkins/minikube-integration/19667-7671/.minikube/files for local assets ...
	I0918 21:05:09.080451   62061 filesync.go:149] local asset: /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem -> 148782.pem in /etc/ssl/certs
	I0918 21:05:09.080583   62061 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0918 21:05:09.091374   62061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem --> /etc/ssl/certs/148782.pem (1708 bytes)
	I0918 21:05:09.119546   62061 start.go:296] duration metric: took 140.521158ms for postStartSetup
	I0918 21:05:09.119593   62061 fix.go:56] duration metric: took 20.337078765s for fixHost
	I0918 21:05:09.119620   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHHostname
	I0918 21:05:09.122534   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:09.123157   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a7:11", ip: ""} in network mk-old-k8s-version-740194: {Iface:virbr3 ExpiryTime:2024-09-18 22:05:00 +0000 UTC Type:0 Mac:52:54:00:3f:a7:11 Iaid: IPaddr:192.168.72.53 Prefix:24 Hostname:old-k8s-version-740194 Clientid:01:52:54:00:3f:a7:11}
	I0918 21:05:09.123187   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined IP address 192.168.72.53 and MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:09.123447   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHPort
	I0918 21:05:09.123695   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHKeyPath
	I0918 21:05:09.123891   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHKeyPath
	I0918 21:05:09.124095   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHUsername
	I0918 21:05:09.124373   62061 main.go:141] libmachine: Using SSH client type: native
	I0918 21:05:09.124582   62061 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.53 22 <nil> <nil>}
	I0918 21:05:09.124595   62061 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0918 21:05:09.245082   62061 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726693509.219779478
	
	I0918 21:05:09.245109   62061 fix.go:216] guest clock: 1726693509.219779478
	I0918 21:05:09.245119   62061 fix.go:229] Guest: 2024-09-18 21:05:09.219779478 +0000 UTC Remote: 2024-09-18 21:05:09.11959842 +0000 UTC m=+249.666759777 (delta=100.181058ms)
	I0918 21:05:09.245139   62061 fix.go:200] guest clock delta is within tolerance: 100.181058ms
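The two fix.go lines above compare the guest's date +%s.%N output against the host-side timestamp and accept the ~100ms delta as within tolerance. The same check can be reproduced by hand (a sketch, reusing the SSH key path and guest IP from this log):

	# Sample host and guest clocks back to back and print the difference in seconds
	host=$(date +%s.%N)
	guest=$(ssh -i /home/jenkins/minikube-integration/19667-7671/.minikube/machines/old-k8s-version-740194/id_rsa docker@192.168.72.53 'date +%s.%N')
	awk -v g="$guest" -v h="$host" 'BEGIN { printf "delta: %+.6fs\n", g - h }'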
	I0918 21:05:09.245146   62061 start.go:83] releasing machines lock for "old-k8s-version-740194", held for 20.462669229s
	I0918 21:05:09.245176   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .DriverName
	I0918 21:05:09.245602   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetIP
	I0918 21:05:09.248653   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:09.249110   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a7:11", ip: ""} in network mk-old-k8s-version-740194: {Iface:virbr3 ExpiryTime:2024-09-18 22:05:00 +0000 UTC Type:0 Mac:52:54:00:3f:a7:11 Iaid: IPaddr:192.168.72.53 Prefix:24 Hostname:old-k8s-version-740194 Clientid:01:52:54:00:3f:a7:11}
	I0918 21:05:09.249156   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined IP address 192.168.72.53 and MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:09.249345   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .DriverName
	I0918 21:05:09.249838   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .DriverName
	I0918 21:05:09.250047   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .DriverName
	I0918 21:05:09.250167   62061 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0918 21:05:09.250230   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHHostname
	I0918 21:05:09.250286   62061 ssh_runner.go:195] Run: cat /version.json
	I0918 21:05:09.250312   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHHostname
	I0918 21:05:09.253242   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:09.253408   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:09.253687   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a7:11", ip: ""} in network mk-old-k8s-version-740194: {Iface:virbr3 ExpiryTime:2024-09-18 22:05:00 +0000 UTC Type:0 Mac:52:54:00:3f:a7:11 Iaid: IPaddr:192.168.72.53 Prefix:24 Hostname:old-k8s-version-740194 Clientid:01:52:54:00:3f:a7:11}
	I0918 21:05:09.253749   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined IP address 192.168.72.53 and MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:09.253882   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a7:11", ip: ""} in network mk-old-k8s-version-740194: {Iface:virbr3 ExpiryTime:2024-09-18 22:05:00 +0000 UTC Type:0 Mac:52:54:00:3f:a7:11 Iaid: IPaddr:192.168.72.53 Prefix:24 Hostname:old-k8s-version-740194 Clientid:01:52:54:00:3f:a7:11}
	I0918 21:05:09.253901   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined IP address 192.168.72.53 and MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:09.253944   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHPort
	I0918 21:05:09.254026   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHPort
	I0918 21:05:09.254221   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHKeyPath
	I0918 21:05:09.254243   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHKeyPath
	I0918 21:05:09.254372   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHUsername
	I0918 21:05:09.254426   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHUsername
	I0918 21:05:09.254533   62061 sshutil.go:53] new ssh client: &{IP:192.168.72.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/old-k8s-version-740194/id_rsa Username:docker}
	I0918 21:05:09.254695   62061 sshutil.go:53] new ssh client: &{IP:192.168.72.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/old-k8s-version-740194/id_rsa Username:docker}
	I0918 21:05:09.374728   62061 ssh_runner.go:195] Run: systemctl --version
	I0918 21:05:09.381117   62061 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0918 21:05:09.532059   62061 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0918 21:05:09.538041   62061 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0918 21:05:09.538124   62061 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0918 21:05:09.553871   62061 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0918 21:05:09.553907   62061 start.go:495] detecting cgroup driver to use...
	I0918 21:05:09.553982   62061 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0918 21:05:09.573554   62061 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0918 21:05:09.591963   62061 docker.go:217] disabling cri-docker service (if available) ...
	I0918 21:05:09.592057   62061 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0918 21:05:09.610813   62061 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0918 21:05:09.627153   62061 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0918 21:05:09.763978   62061 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0918 21:05:09.945185   62061 docker.go:233] disabling docker service ...
	I0918 21:05:09.945257   62061 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0918 21:05:09.961076   62061 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0918 21:05:09.974660   62061 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0918 21:05:10.111191   62061 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0918 21:05:10.235058   62061 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0918 21:05:10.251389   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0918 21:05:10.270949   62061 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0918 21:05:10.271047   62061 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:05:10.284743   62061 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0918 21:05:10.284811   62061 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:05:10.295221   62061 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:05:10.305897   62061 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
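The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf so that CRI-O uses the registry.k8s.io/pause:3.2 pause image, the cgroupfs cgroup manager, and conmon_cgroup = "pod". A quick way to confirm the resulting values on the guest (a sketch, using the file path from this log):

	# Show the three settings the provisioner just rewrote
	sudo grep -E '^(pause_image|cgroup_manager|conmon_cgroup) ' /etc/crio/crio.conf.d/02-crio.conf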
	I0918 21:05:10.317726   62061 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0918 21:05:10.330136   62061 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0918 21:05:10.339480   62061 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0918 21:05:10.339543   62061 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0918 21:05:10.352954   62061 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
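The sysctl probe above fails because br_netfilter is not yet loaded, so minikube loads the module and then enables IPv4 forwarding directly through /proc. The equivalent sequence, runnable on the guest (a sketch; these are the standard settings for a Kubernetes node using a bridge CNI):

	sudo modprobe br_netfilter
	# Once the module is loaded the bridge sysctl exists and reports its value
	sysctl net.bridge.bridge-nf-call-iptables
	echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward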
	I0918 21:05:10.370764   62061 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 21:05:10.524315   62061 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0918 21:05:10.617374   62061 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0918 21:05:10.617466   62061 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0918 21:05:10.624164   62061 start.go:563] Will wait 60s for crictl version
	I0918 21:05:10.624222   62061 ssh_runner.go:195] Run: which crictl
	I0918 21:05:10.629583   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0918 21:05:10.673613   62061 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0918 21:05:10.673702   62061 ssh_runner.go:195] Run: crio --version
	I0918 21:05:10.703948   62061 ssh_runner.go:195] Run: crio --version
	I0918 21:05:10.733924   62061 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0918 21:05:10.735500   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetIP
	I0918 21:05:10.738130   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:10.738488   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a7:11", ip: ""} in network mk-old-k8s-version-740194: {Iface:virbr3 ExpiryTime:2024-09-18 22:05:00 +0000 UTC Type:0 Mac:52:54:00:3f:a7:11 Iaid: IPaddr:192.168.72.53 Prefix:24 Hostname:old-k8s-version-740194 Clientid:01:52:54:00:3f:a7:11}
	I0918 21:05:10.738516   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined IP address 192.168.72.53 and MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:10.738831   62061 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0918 21:05:10.742780   62061 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0918 21:05:10.754785   62061 kubeadm.go:883] updating cluster {Name:old-k8s-version-740194 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-740194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.53 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0918 21:05:10.754939   62061 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0918 21:05:10.755002   62061 ssh_runner.go:195] Run: sudo crictl images --output json
	I0918 21:05:10.800452   62061 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0918 21:05:10.800526   62061 ssh_runner.go:195] Run: which lz4
	I0918 21:05:10.804522   62061 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0918 21:05:10.809179   62061 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0918 21:05:10.809214   62061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0918 21:05:12.306958   62061 crio.go:462] duration metric: took 1.502481615s to copy over tarball
	I0918 21:05:12.307049   62061 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0918 21:05:15.357114   62061 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.050037005s)
	I0918 21:05:15.357141   62061 crio.go:469] duration metric: took 3.050151648s to extract the tarball
	I0918 21:05:15.357148   62061 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0918 21:05:15.399373   62061 ssh_runner.go:195] Run: sudo crictl images --output json
	I0918 21:05:15.434204   62061 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0918 21:05:15.434238   62061 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0918 21:05:15.434332   62061 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 21:05:15.434369   62061 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0918 21:05:15.434385   62061 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0918 21:05:15.434398   62061 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0918 21:05:15.434339   62061 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0918 21:05:15.434438   62061 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0918 21:05:15.434443   62061 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0918 21:05:15.434491   62061 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0918 21:05:15.436820   62061 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0918 21:05:15.436824   62061 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0918 21:05:15.436829   62061 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 21:05:15.436856   62061 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0918 21:05:15.436867   62061 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0918 21:05:15.436904   62061 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0918 21:05:15.436952   62061 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0918 21:05:15.437278   62061 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0918 21:05:15.747423   62061 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0918 21:05:15.747705   62061 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0918 21:05:15.748375   62061 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0918 21:05:15.750816   62061 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0918 21:05:15.752244   62061 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0918 21:05:15.754038   62061 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0918 21:05:15.770881   62061 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0918 21:05:15.911654   62061 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0918 21:05:15.911725   62061 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0918 21:05:15.911795   62061 ssh_runner.go:195] Run: which crictl
	I0918 21:05:15.938412   62061 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0918 21:05:15.938464   62061 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0918 21:05:15.938534   62061 ssh_runner.go:195] Run: which crictl
	I0918 21:05:15.944615   62061 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0918 21:05:15.944706   62061 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0918 21:05:15.944736   62061 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0918 21:05:15.944749   62061 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0918 21:05:15.944786   62061 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0918 21:05:15.944809   62061 ssh_runner.go:195] Run: which crictl
	I0918 21:05:15.944820   62061 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0918 21:05:15.944844   62061 ssh_runner.go:195] Run: which crictl
	I0918 21:05:15.944857   62061 ssh_runner.go:195] Run: which crictl
	I0918 21:05:15.944661   62061 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0918 21:05:15.944914   62061 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0918 21:05:15.944950   62061 ssh_runner.go:195] Run: which crictl
	I0918 21:05:15.965316   62061 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0918 21:05:15.965366   62061 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0918 21:05:15.965418   62061 ssh_runner.go:195] Run: which crictl
	I0918 21:05:15.965461   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0918 21:05:15.965419   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0918 21:05:15.965544   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0918 21:05:15.965558   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0918 21:05:15.965613   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0918 21:05:15.965621   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0918 21:05:16.101533   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0918 21:05:16.101536   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0918 21:05:16.105174   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0918 21:05:16.105198   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0918 21:05:16.109044   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0918 21:05:16.109048   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0918 21:05:16.109152   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0918 21:05:16.220653   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0918 21:05:16.255418   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0918 21:05:16.259931   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0918 21:05:16.259986   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0918 21:05:16.260039   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0918 21:05:16.260121   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0918 21:05:16.260337   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0918 21:05:16.352969   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0918 21:05:16.399180   62061 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0918 21:05:16.445003   62061 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0918 21:05:16.445078   62061 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0918 21:05:16.445163   62061 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0918 21:05:16.445266   62061 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0918 21:05:16.445387   62061 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0918 21:05:16.445398   62061 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0918 21:05:16.633221   62061 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 21:05:16.782331   62061 cache_images.go:92] duration metric: took 1.348045359s to LoadCachedImages
	W0918 21:05:16.782445   62061 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	I0918 21:05:16.782464   62061 kubeadm.go:934] updating node { 192.168.72.53 8443 v1.20.0 crio true true} ...
	I0918 21:05:16.782608   62061 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-740194 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.53
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-740194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0918 21:05:16.782679   62061 ssh_runner.go:195] Run: crio config
	I0918 21:05:16.838946   62061 cni.go:84] Creating CNI manager for ""
	I0918 21:05:16.838978   62061 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0918 21:05:16.838990   62061 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0918 21:05:16.839008   62061 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.53 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-740194 NodeName:old-k8s-version-740194 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.53"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.53 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0918 21:05:16.839163   62061 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.53
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-740194"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.53
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.53"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0918 21:05:16.839232   62061 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0918 21:05:16.849868   62061 binaries.go:44] Found k8s binaries, skipping transfer
	I0918 21:05:16.849955   62061 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0918 21:05:16.859716   62061 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0918 21:05:16.877857   62061 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0918 21:05:16.895573   62061 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
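The kubeadm.go:187 config block above is what gets rendered to /var/tmp/minikube/kubeadm.yaml.new by this scp step. It can be exercised without mutating the node via kubeadm's dry-run mode (a sketch, assuming the binary path and file name shown in this log; the real kubeadm run happens later in the start sequence):

	# Render and validate the generated config without applying it
	sudo /var/lib/minikube/binaries/v1.20.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run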
	I0918 21:05:16.913545   62061 ssh_runner.go:195] Run: grep 192.168.72.53	control-plane.minikube.internal$ /etc/hosts
	I0918 21:05:16.917476   62061 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.53	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0918 21:05:16.931318   62061 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 21:05:17.057636   62061 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0918 21:05:17.076534   62061 certs.go:68] Setting up /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/old-k8s-version-740194 for IP: 192.168.72.53
	I0918 21:05:17.076571   62061 certs.go:194] generating shared ca certs ...
	I0918 21:05:17.076594   62061 certs.go:226] acquiring lock for ca certs: {Name:mkf54e0e71ed884c4372f9d3d4cb308ea4600185 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 21:05:17.076796   62061 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.key
	I0918 21:05:17.076855   62061 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.key
	I0918 21:05:17.076871   62061 certs.go:256] generating profile certs ...
	I0918 21:05:17.076999   62061 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/old-k8s-version-740194/client.key
	I0918 21:05:17.077083   62061 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/old-k8s-version-740194/apiserver.key.424b07d9
	I0918 21:05:17.077149   62061 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/old-k8s-version-740194/proxy-client.key
	I0918 21:05:17.077321   62061 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878.pem (1338 bytes)
	W0918 21:05:17.077371   62061 certs.go:480] ignoring /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878_empty.pem, impossibly tiny 0 bytes
	I0918 21:05:17.077386   62061 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem (1675 bytes)
	I0918 21:05:17.077421   62061 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem (1082 bytes)
	I0918 21:05:17.077465   62061 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem (1123 bytes)
	I0918 21:05:17.077499   62061 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem (1679 bytes)
	I0918 21:05:17.077585   62061 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem (1708 bytes)
	I0918 21:05:17.078553   62061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0918 21:05:17.116000   62061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0918 21:05:17.150848   62061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0918 21:05:17.190616   62061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0918 21:05:17.222411   62061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/old-k8s-version-740194/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0918 21:05:17.266923   62061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/old-k8s-version-740194/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0918 21:05:17.303973   62061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/old-k8s-version-740194/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0918 21:05:17.334390   62061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/old-k8s-version-740194/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0918 21:05:17.358732   62061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0918 21:05:17.385693   62061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878.pem --> /usr/share/ca-certificates/14878.pem (1338 bytes)
	I0918 21:05:17.411396   62061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem --> /usr/share/ca-certificates/148782.pem (1708 bytes)
	I0918 21:05:17.440205   62061 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0918 21:05:17.459639   62061 ssh_runner.go:195] Run: openssl version
	I0918 21:05:17.465846   62061 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0918 21:05:17.477482   62061 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0918 21:05:17.482579   62061 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 18 19:39 /usr/share/ca-certificates/minikubeCA.pem
	I0918 21:05:17.482650   62061 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0918 21:05:17.488822   62061 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0918 21:05:17.499729   62061 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14878.pem && ln -fs /usr/share/ca-certificates/14878.pem /etc/ssl/certs/14878.pem"
	I0918 21:05:17.511718   62061 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14878.pem
	I0918 21:05:17.516361   62061 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 18 19:57 /usr/share/ca-certificates/14878.pem
	I0918 21:05:17.516438   62061 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14878.pem
	I0918 21:05:17.522542   62061 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14878.pem /etc/ssl/certs/51391683.0"
	I0918 21:05:17.534450   62061 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148782.pem && ln -fs /usr/share/ca-certificates/148782.pem /etc/ssl/certs/148782.pem"
	I0918 21:05:17.545916   62061 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148782.pem
	I0918 21:05:17.550672   62061 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 18 19:57 /usr/share/ca-certificates/148782.pem
	I0918 21:05:17.550737   62061 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148782.pem
	I0918 21:05:17.556802   62061 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148782.pem /etc/ssl/certs/3ec20f2e.0"
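	The three openssl/ln steps above populate the node's CA trust store: each PEM under /usr/share/ca-certificates gets an OpenSSL subject-hash symlink (<hash>.0, e.g. b5213941.0 for minikubeCA.pem) in /etc/ssl/certs so TLS clients on the node trust it. The following is a minimal Go sketch of that pattern only; the paths and error handling are assumptions for illustration, not minikube's actual certs.go code.

	// Sketch: compute the OpenSSL subject hash of a PEM certificate and link it
	// into the system trust store as /etc/ssl/certs/<hash>.0 (mirrors `ln -fs`).
	package main

	import (
		"fmt"
		"log"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	func installTrustedCert(pemPath, certsDir string) error {
		// `openssl x509 -hash -noout -in <pem>` prints the subject hash, e.g. "b5213941".
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", pemPath, err)
		}
		hash := strings.TrimSpace(string(out))

		// The trust store expects a symlink named <hash>.0 pointing at the PEM file.
		link := filepath.Join(certsDir, hash+".0")
		_ = os.Remove(link) // replace any stale link
		return os.Symlink(pemPath, link)
	}

	func main() {
		// Hypothetical local invocation; on the test VM minikube runs this over SSH with sudo.
		if err := installTrustedCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			log.Fatal(err)
		}
	}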
	I0918 21:05:17.568991   62061 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0918 21:05:17.574098   62061 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0918 21:05:17.581458   62061 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0918 21:05:17.588242   62061 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0918 21:05:17.594889   62061 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0918 21:05:17.601499   62061 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0918 21:05:17.608193   62061 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
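	The run of `openssl x509 ... -checkend 86400` calls above is the validity check on the restart path: each control-plane certificate must remain valid for at least another 24 hours (86400 seconds), otherwise it would have to be regenerated. Below is a hedged Go sketch of that check using the same certificate paths from the log; the function itself is illustrative, not minikube's code.

	// Sketch: `openssl x509 -checkend 86400` exits non-zero when the certificate
	// expires within the next 24 hours (or cannot be read), which we treat
	// conservatively as "needs regeneration".
	package main

	import (
		"fmt"
		"os/exec"
	)

	func expiresWithinADay(path string) bool {
		err := exec.Command("openssl", "x509", "-noout", "-in", path, "-checkend", "86400").Run()
		return err != nil
	}

	func main() {
		certs := []string{
			"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
			"/var/lib/minikube/certs/apiserver-etcd-client.crt",
			"/var/lib/minikube/certs/etcd/server.crt",
			"/var/lib/minikube/certs/etcd/healthcheck-client.crt",
			"/var/lib/minikube/certs/etcd/peer.crt",
			"/var/lib/minikube/certs/front-proxy-client.crt",
		}
		for _, c := range certs {
			fmt.Printf("%s expires within 24h: %v\n", c, expiresWithinADay(c))
		}
	}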
	I0918 21:05:17.614196   62061 kubeadm.go:392] StartCluster: {Name:old-k8s-version-740194 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-740194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.53 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 21:05:17.614298   62061 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0918 21:05:17.614363   62061 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0918 21:05:17.661551   62061 cri.go:89] found id: ""
	I0918 21:05:17.661627   62061 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0918 21:05:17.673087   62061 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0918 21:05:17.673110   62061 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0918 21:05:17.673153   62061 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0918 21:05:17.683213   62061 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0918 21:05:17.684537   62061 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-740194" does not appear in /home/jenkins/minikube-integration/19667-7671/kubeconfig
	I0918 21:05:17.685136   62061 kubeconfig.go:62] /home/jenkins/minikube-integration/19667-7671/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-740194" cluster setting kubeconfig missing "old-k8s-version-740194" context setting]
	I0918 21:05:17.686023   62061 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/kubeconfig: {Name:mk3a3eecadb0164dfdd5d3c4392b5b473a2a9bb0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 21:05:17.711337   62061 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0918 21:05:17.723894   62061 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.53
	I0918 21:05:17.723936   62061 kubeadm.go:1160] stopping kube-system containers ...
	I0918 21:05:17.723949   62061 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0918 21:05:17.724005   62061 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0918 21:05:17.761087   62061 cri.go:89] found id: ""
	I0918 21:05:17.761166   62061 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0918 21:05:17.779689   62061 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0918 21:05:17.791699   62061 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0918 21:05:17.791723   62061 kubeadm.go:157] found existing configuration files:
	
	I0918 21:05:17.791773   62061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0918 21:05:17.801804   62061 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0918 21:05:17.801879   62061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0918 21:05:17.812806   62061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0918 21:05:17.822813   62061 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0918 21:05:17.822902   62061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0918 21:05:17.833267   62061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0918 21:05:17.845148   62061 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0918 21:05:17.845230   62061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0918 21:05:17.855974   62061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0918 21:05:17.866490   62061 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0918 21:05:17.866593   62061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0918 21:05:17.877339   62061 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0918 21:05:17.887825   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:05:18.021691   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:05:18.726750   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:05:18.970635   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:05:19.081869   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
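	The five commands above are the cluster-restart sequence: instead of a full `kubeadm init`, the individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) are replayed against the generated /var/tmp/minikube/kubeadm.yaml. A rough Go sketch of that sequence follows, assuming a local bash runner in place of minikube's SSH runner; the phase list and binary path are copied from the log.

	// Sketch: replay the kubeadm init phases used on the restart path, in order.
	package main

	import (
		"log"
		"os/exec"
	)

	func main() {
		const kubeadmBinDir = "/var/lib/minikube/binaries/v1.20.0" // prepended to PATH in the log
		phases := []string{
			"certs all",
			"kubeconfig all",
			"kubelet-start",
			"control-plane all",
			"etcd local",
		}
		for _, phase := range phases {
			// Each phase runs against the generated kubeadm config.
			cmd := "sudo env PATH=" + kubeadmBinDir + ":$PATH kubeadm init phase " + phase +
				" --config /var/tmp/minikube/kubeadm.yaml"
			if out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput(); err != nil {
				log.Fatalf("phase %q failed: %v\n%s", phase, err, out)
			}
		}
	}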
	I0918 21:05:19.203071   62061 api_server.go:52] waiting for apiserver process to appear ...
	I0918 21:05:19.203166   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:19.704128   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:20.203595   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:20.703244   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:21.204288   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:21.703469   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:22.203224   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:22.703968   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:23.204097   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:23.703933   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:24.204272   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:24.704240   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:25.203880   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:25.703983   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:26.204273   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:26.703861   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:27.204064   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:27.703276   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:28.204289   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:28.703701   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:29.203604   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:29.704274   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:30.203372   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:30.703751   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:31.203670   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:31.704097   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:32.203611   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:32.703968   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:33.203356   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:33.704260   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:34.204082   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:34.703445   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:35.203660   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:35.703374   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:36.203304   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:36.704191   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:37.204129   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:37.703912   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:38.203310   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:38.703616   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:39.203258   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:39.704105   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:40.204102   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:40.704073   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:41.203654   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:41.703947   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:42.203722   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:42.703303   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:43.203847   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:43.704163   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:44.203216   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:44.703267   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:45.203924   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:45.703945   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:46.203386   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:46.703674   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:47.203387   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:47.704034   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:48.203348   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:48.703715   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:49.203984   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:49.703565   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:50.204192   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:50.704248   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:51.203335   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:51.703761   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:52.203474   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:52.703901   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:53.203856   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:53.704192   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:54.204243   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:54.703227   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:55.204057   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:55.704178   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:56.203443   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:56.703517   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:57.203499   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:57.703598   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:58.203660   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:58.703897   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:59.203256   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:59.704149   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:00.203356   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:00.703750   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:01.203765   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:01.704295   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:02.203759   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:02.703342   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:03.204083   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:03.703777   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:04.203340   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:04.703762   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:05.203639   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:05.703335   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:06.204156   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:06.703735   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:07.203278   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:07.704072   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:08.203299   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:08.703528   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:09.203725   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:09.703712   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:10.203930   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:10.704216   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:11.203706   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:11.703494   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:12.204098   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:12.703927   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:13.203379   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:13.703248   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:14.204000   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:14.704159   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:15.203401   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:15.703942   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:16.204043   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:16.703691   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:17.203508   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:17.703445   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:18.203689   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:18.704249   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:19.203290   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:06:19.203401   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:06:19.239033   62061 cri.go:89] found id: ""
	I0918 21:06:19.239065   62061 logs.go:276] 0 containers: []
	W0918 21:06:19.239073   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:06:19.239079   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:06:19.239141   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:06:19.274781   62061 cri.go:89] found id: ""
	I0918 21:06:19.274809   62061 logs.go:276] 0 containers: []
	W0918 21:06:19.274819   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:06:19.274833   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:06:19.274895   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:06:19.307894   62061 cri.go:89] found id: ""
	I0918 21:06:19.307928   62061 logs.go:276] 0 containers: []
	W0918 21:06:19.307940   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:06:19.307948   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:06:19.308002   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:06:19.340572   62061 cri.go:89] found id: ""
	I0918 21:06:19.340602   62061 logs.go:276] 0 containers: []
	W0918 21:06:19.340610   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:06:19.340615   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:06:19.340672   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:06:19.375448   62061 cri.go:89] found id: ""
	I0918 21:06:19.375475   62061 logs.go:276] 0 containers: []
	W0918 21:06:19.375483   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:06:19.375489   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:06:19.375536   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:06:19.413102   62061 cri.go:89] found id: ""
	I0918 21:06:19.413133   62061 logs.go:276] 0 containers: []
	W0918 21:06:19.413158   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:06:19.413166   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:06:19.413238   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:06:19.447497   62061 cri.go:89] found id: ""
	I0918 21:06:19.447526   62061 logs.go:276] 0 containers: []
	W0918 21:06:19.447536   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:06:19.447544   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:06:19.447605   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:06:19.480848   62061 cri.go:89] found id: ""
	I0918 21:06:19.480880   62061 logs.go:276] 0 containers: []
	W0918 21:06:19.480892   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:06:19.480903   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:06:19.480916   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:06:19.533573   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:06:19.533625   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:06:19.546578   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:06:19.546604   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:06:19.667439   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:06:19.667459   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:06:19.667472   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:06:19.739525   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:06:19.739563   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
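	From 21:05:19 onward the log polls for a kube-apiserver process roughly every 500ms; after about a minute with no match it falls back to gathering diagnostics (kubelet and CRI-O journals, dmesg, `kubectl describe nodes`, container status) and then resumes polling. A simplified Go sketch of that wait-then-diagnose loop is shown below; the timing constants and overall structure are assumptions inferred from the log, not minikube's implementation.

	// Sketch: poll for the apiserver process, then collect diagnostics on timeout.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func apiserverRunning() bool {
		// Same probe as the log: pgrep exits 0 only if a matching process exists.
		return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
	}

	func gatherDiagnostics() {
		cmds := []string{
			"sudo journalctl -u kubelet -n 400",
			"sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
			"sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig",
			"sudo journalctl -u crio -n 400",
			"sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
		}
		for _, c := range cmds {
			out, _ := exec.Command("/bin/bash", "-c", c).CombinedOutput()
			fmt.Printf("== %s ==\n%s\n", c, out)
		}
	}

	func main() {
		deadline := time.Now().Add(60 * time.Second) // ~1 minute per attempt in the log
		for time.Now().Before(deadline) {
			if apiserverRunning() {
				fmt.Println("apiserver process found")
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
		gatherDiagnostics()
	}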
	I0918 21:06:22.280913   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:22.296045   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:06:22.296132   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:06:22.332361   62061 cri.go:89] found id: ""
	I0918 21:06:22.332401   62061 logs.go:276] 0 containers: []
	W0918 21:06:22.332413   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:06:22.332421   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:06:22.332485   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:06:22.392138   62061 cri.go:89] found id: ""
	I0918 21:06:22.392170   62061 logs.go:276] 0 containers: []
	W0918 21:06:22.392178   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:06:22.392184   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:06:22.392237   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:06:22.431265   62061 cri.go:89] found id: ""
	I0918 21:06:22.431296   62061 logs.go:276] 0 containers: []
	W0918 21:06:22.431306   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:06:22.431313   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:06:22.431376   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:06:22.463685   62061 cri.go:89] found id: ""
	I0918 21:06:22.463718   62061 logs.go:276] 0 containers: []
	W0918 21:06:22.463730   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:06:22.463738   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:06:22.463808   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:06:22.498031   62061 cri.go:89] found id: ""
	I0918 21:06:22.498058   62061 logs.go:276] 0 containers: []
	W0918 21:06:22.498069   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:06:22.498076   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:06:22.498140   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:06:22.537701   62061 cri.go:89] found id: ""
	I0918 21:06:22.537732   62061 logs.go:276] 0 containers: []
	W0918 21:06:22.537740   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:06:22.537746   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:06:22.537803   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:06:22.574085   62061 cri.go:89] found id: ""
	I0918 21:06:22.574118   62061 logs.go:276] 0 containers: []
	W0918 21:06:22.574130   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:06:22.574138   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:06:22.574202   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:06:22.606313   62061 cri.go:89] found id: ""
	I0918 21:06:22.606341   62061 logs.go:276] 0 containers: []
	W0918 21:06:22.606349   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:06:22.606359   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:06:22.606372   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:06:22.678484   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:06:22.678511   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:06:22.678524   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:06:22.756730   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:06:22.756767   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:06:22.793537   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:06:22.793573   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:06:22.845987   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:06:22.846021   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:06:25.360980   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:25.374930   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:06:25.375010   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:06:25.413100   62061 cri.go:89] found id: ""
	I0918 21:06:25.413135   62061 logs.go:276] 0 containers: []
	W0918 21:06:25.413147   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:06:25.413155   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:06:25.413221   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:06:25.450999   62061 cri.go:89] found id: ""
	I0918 21:06:25.451024   62061 logs.go:276] 0 containers: []
	W0918 21:06:25.451032   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:06:25.451038   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:06:25.451087   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:06:25.496114   62061 cri.go:89] found id: ""
	I0918 21:06:25.496147   62061 logs.go:276] 0 containers: []
	W0918 21:06:25.496155   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:06:25.496161   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:06:25.496218   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:06:25.529186   62061 cri.go:89] found id: ""
	I0918 21:06:25.529211   62061 logs.go:276] 0 containers: []
	W0918 21:06:25.529218   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:06:25.529223   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:06:25.529294   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:06:25.569710   62061 cri.go:89] found id: ""
	I0918 21:06:25.569735   62061 logs.go:276] 0 containers: []
	W0918 21:06:25.569743   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:06:25.569749   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:06:25.569796   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:06:25.606915   62061 cri.go:89] found id: ""
	I0918 21:06:25.606951   62061 logs.go:276] 0 containers: []
	W0918 21:06:25.606964   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:06:25.606972   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:06:25.607034   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:06:25.642705   62061 cri.go:89] found id: ""
	I0918 21:06:25.642736   62061 logs.go:276] 0 containers: []
	W0918 21:06:25.642744   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:06:25.642750   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:06:25.642801   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:06:25.676418   62061 cri.go:89] found id: ""
	I0918 21:06:25.676448   62061 logs.go:276] 0 containers: []
	W0918 21:06:25.676457   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:06:25.676466   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:06:25.676477   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:06:25.689913   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:06:25.689945   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:06:25.771579   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:06:25.771607   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:06:25.771622   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:06:25.850904   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:06:25.850945   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:06:25.890820   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:06:25.890849   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:06:28.438958   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:28.451961   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:06:28.452043   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:06:28.485000   62061 cri.go:89] found id: ""
	I0918 21:06:28.485059   62061 logs.go:276] 0 containers: []
	W0918 21:06:28.485070   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:06:28.485077   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:06:28.485138   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:06:28.517852   62061 cri.go:89] found id: ""
	I0918 21:06:28.517885   62061 logs.go:276] 0 containers: []
	W0918 21:06:28.517895   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:06:28.517903   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:06:28.517957   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:06:28.549914   62061 cri.go:89] found id: ""
	I0918 21:06:28.549940   62061 logs.go:276] 0 containers: []
	W0918 21:06:28.549949   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:06:28.549955   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:06:28.550000   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:06:28.584133   62061 cri.go:89] found id: ""
	I0918 21:06:28.584156   62061 logs.go:276] 0 containers: []
	W0918 21:06:28.584163   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:06:28.584169   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:06:28.584213   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:06:28.620031   62061 cri.go:89] found id: ""
	I0918 21:06:28.620061   62061 logs.go:276] 0 containers: []
	W0918 21:06:28.620071   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:06:28.620078   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:06:28.620143   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:06:28.653393   62061 cri.go:89] found id: ""
	I0918 21:06:28.653424   62061 logs.go:276] 0 containers: []
	W0918 21:06:28.653435   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:06:28.653443   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:06:28.653505   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:06:28.687696   62061 cri.go:89] found id: ""
	I0918 21:06:28.687729   62061 logs.go:276] 0 containers: []
	W0918 21:06:28.687741   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:06:28.687752   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:06:28.687809   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:06:28.724824   62061 cri.go:89] found id: ""
	I0918 21:06:28.724854   62061 logs.go:276] 0 containers: []
	W0918 21:06:28.724864   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:06:28.724875   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:06:28.724890   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:06:28.785489   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:06:28.785529   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:06:28.804170   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:06:28.804211   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:06:28.895508   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:06:28.895538   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:06:28.895554   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:06:28.974643   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:06:28.974685   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:06:31.517305   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:31.530374   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:06:31.530435   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:06:31.568335   62061 cri.go:89] found id: ""
	I0918 21:06:31.568374   62061 logs.go:276] 0 containers: []
	W0918 21:06:31.568385   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:06:31.568393   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:06:31.568462   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:06:31.603597   62061 cri.go:89] found id: ""
	I0918 21:06:31.603624   62061 logs.go:276] 0 containers: []
	W0918 21:06:31.603633   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:06:31.603639   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:06:31.603694   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:06:31.639355   62061 cri.go:89] found id: ""
	I0918 21:06:31.639380   62061 logs.go:276] 0 containers: []
	W0918 21:06:31.639388   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:06:31.639393   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:06:31.639445   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:06:31.674197   62061 cri.go:89] found id: ""
	I0918 21:06:31.674223   62061 logs.go:276] 0 containers: []
	W0918 21:06:31.674231   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:06:31.674237   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:06:31.674286   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:06:31.710563   62061 cri.go:89] found id: ""
	I0918 21:06:31.710592   62061 logs.go:276] 0 containers: []
	W0918 21:06:31.710606   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:06:31.710612   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:06:31.710663   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:06:31.746923   62061 cri.go:89] found id: ""
	I0918 21:06:31.746958   62061 logs.go:276] 0 containers: []
	W0918 21:06:31.746969   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:06:31.746976   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:06:31.747039   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:06:31.781913   62061 cri.go:89] found id: ""
	I0918 21:06:31.781946   62061 logs.go:276] 0 containers: []
	W0918 21:06:31.781961   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:06:31.781969   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:06:31.782023   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:06:31.817293   62061 cri.go:89] found id: ""
	I0918 21:06:31.817318   62061 logs.go:276] 0 containers: []
	W0918 21:06:31.817327   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:06:31.817338   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:06:31.817375   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:06:31.869479   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:06:31.869518   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:06:31.884187   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:06:31.884219   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:06:31.967159   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:06:31.967189   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:06:31.967202   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:06:32.050404   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:06:32.050458   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:06:34.593135   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:34.613917   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:06:34.614001   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:06:34.649649   62061 cri.go:89] found id: ""
	I0918 21:06:34.649676   62061 logs.go:276] 0 containers: []
	W0918 21:06:34.649684   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:06:34.649690   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:06:34.649738   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:06:34.684898   62061 cri.go:89] found id: ""
	I0918 21:06:34.684932   62061 logs.go:276] 0 containers: []
	W0918 21:06:34.684940   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:06:34.684948   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:06:34.685018   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:06:34.717519   62061 cri.go:89] found id: ""
	I0918 21:06:34.717549   62061 logs.go:276] 0 containers: []
	W0918 21:06:34.717559   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:06:34.717573   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:06:34.717626   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:06:34.751242   62061 cri.go:89] found id: ""
	I0918 21:06:34.751275   62061 logs.go:276] 0 containers: []
	W0918 21:06:34.751289   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:06:34.751298   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:06:34.751368   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:06:34.786700   62061 cri.go:89] found id: ""
	I0918 21:06:34.786729   62061 logs.go:276] 0 containers: []
	W0918 21:06:34.786737   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:06:34.786742   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:06:34.786792   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:06:34.822400   62061 cri.go:89] found id: ""
	I0918 21:06:34.822446   62061 logs.go:276] 0 containers: []
	W0918 21:06:34.822459   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:06:34.822468   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:06:34.822537   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:06:34.860509   62061 cri.go:89] found id: ""
	I0918 21:06:34.860540   62061 logs.go:276] 0 containers: []
	W0918 21:06:34.860549   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:06:34.860554   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:06:34.860626   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:06:34.900682   62061 cri.go:89] found id: ""
	I0918 21:06:34.900712   62061 logs.go:276] 0 containers: []
	W0918 21:06:34.900719   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:06:34.900727   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:06:34.900739   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:06:34.975975   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:06:34.976009   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:06:34.976043   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:06:35.058972   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:06:35.059012   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:06:35.101690   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:06:35.101716   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:06:35.154486   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:06:35.154525   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:06:37.669814   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:37.683235   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:06:37.683294   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:06:37.720666   62061 cri.go:89] found id: ""
	I0918 21:06:37.720697   62061 logs.go:276] 0 containers: []
	W0918 21:06:37.720708   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:06:37.720715   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:06:37.720777   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:06:37.758288   62061 cri.go:89] found id: ""
	I0918 21:06:37.758327   62061 logs.go:276] 0 containers: []
	W0918 21:06:37.758335   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:06:37.758341   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:06:37.758436   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:06:37.793394   62061 cri.go:89] found id: ""
	I0918 21:06:37.793421   62061 logs.go:276] 0 containers: []
	W0918 21:06:37.793430   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:06:37.793437   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:06:37.793501   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:06:37.828258   62061 cri.go:89] found id: ""
	I0918 21:06:37.828291   62061 logs.go:276] 0 containers: []
	W0918 21:06:37.828300   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:06:37.828307   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:06:37.828361   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:06:37.863164   62061 cri.go:89] found id: ""
	I0918 21:06:37.863189   62061 logs.go:276] 0 containers: []
	W0918 21:06:37.863197   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:06:37.863203   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:06:37.863251   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:06:37.899585   62061 cri.go:89] found id: ""
	I0918 21:06:37.899614   62061 logs.go:276] 0 containers: []
	W0918 21:06:37.899622   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:06:37.899628   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:06:37.899675   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:06:37.936246   62061 cri.go:89] found id: ""
	I0918 21:06:37.936282   62061 logs.go:276] 0 containers: []
	W0918 21:06:37.936292   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:06:37.936299   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:06:37.936362   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:06:37.969913   62061 cri.go:89] found id: ""
	I0918 21:06:37.969942   62061 logs.go:276] 0 containers: []
	W0918 21:06:37.969950   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:06:37.969958   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:06:37.969968   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:06:38.023055   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:06:38.023094   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:06:38.036006   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:06:38.036047   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:06:38.116926   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:06:38.116957   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:06:38.116972   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:06:38.192632   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:06:38.192668   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:06:40.729602   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:40.743291   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:06:40.743368   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:06:40.778138   62061 cri.go:89] found id: ""
	I0918 21:06:40.778166   62061 logs.go:276] 0 containers: []
	W0918 21:06:40.778176   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:06:40.778184   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:06:40.778248   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:06:40.813026   62061 cri.go:89] found id: ""
	I0918 21:06:40.813062   62061 logs.go:276] 0 containers: []
	W0918 21:06:40.813073   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:06:40.813081   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:06:40.813146   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:06:40.847609   62061 cri.go:89] found id: ""
	I0918 21:06:40.847641   62061 logs.go:276] 0 containers: []
	W0918 21:06:40.847651   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:06:40.847658   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:06:40.847727   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:06:40.885403   62061 cri.go:89] found id: ""
	I0918 21:06:40.885432   62061 logs.go:276] 0 containers: []
	W0918 21:06:40.885441   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:06:40.885448   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:06:40.885515   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:06:40.922679   62061 cri.go:89] found id: ""
	I0918 21:06:40.922705   62061 logs.go:276] 0 containers: []
	W0918 21:06:40.922714   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:06:40.922719   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:06:40.922776   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:06:40.957187   62061 cri.go:89] found id: ""
	I0918 21:06:40.957215   62061 logs.go:276] 0 containers: []
	W0918 21:06:40.957222   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:06:40.957228   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:06:40.957291   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:06:40.991722   62061 cri.go:89] found id: ""
	I0918 21:06:40.991751   62061 logs.go:276] 0 containers: []
	W0918 21:06:40.991762   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:06:40.991769   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:06:40.991835   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:06:41.027210   62061 cri.go:89] found id: ""
	I0918 21:06:41.027234   62061 logs.go:276] 0 containers: []
	W0918 21:06:41.027242   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:06:41.027250   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:06:41.027275   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:06:41.077183   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:06:41.077221   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:06:41.090707   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:06:41.090734   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:06:41.166206   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:06:41.166228   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:06:41.166241   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:06:41.240308   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:06:41.240346   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:06:43.777517   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:43.790901   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:06:43.790970   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:06:43.827127   62061 cri.go:89] found id: ""
	I0918 21:06:43.827156   62061 logs.go:276] 0 containers: []
	W0918 21:06:43.827164   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:06:43.827170   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:06:43.827218   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:06:43.861218   62061 cri.go:89] found id: ""
	I0918 21:06:43.861244   62061 logs.go:276] 0 containers: []
	W0918 21:06:43.861252   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:06:43.861257   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:06:43.861308   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:06:43.899669   62061 cri.go:89] found id: ""
	I0918 21:06:43.899694   62061 logs.go:276] 0 containers: []
	W0918 21:06:43.899701   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:06:43.899707   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:06:43.899755   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:06:43.934698   62061 cri.go:89] found id: ""
	I0918 21:06:43.934731   62061 logs.go:276] 0 containers: []
	W0918 21:06:43.934741   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:06:43.934748   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:06:43.934819   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:06:43.971715   62061 cri.go:89] found id: ""
	I0918 21:06:43.971742   62061 logs.go:276] 0 containers: []
	W0918 21:06:43.971755   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:06:43.971760   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:06:43.971817   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:06:44.005880   62061 cri.go:89] found id: ""
	I0918 21:06:44.005915   62061 logs.go:276] 0 containers: []
	W0918 21:06:44.005927   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:06:44.005935   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:06:44.006003   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:06:44.038144   62061 cri.go:89] found id: ""
	I0918 21:06:44.038171   62061 logs.go:276] 0 containers: []
	W0918 21:06:44.038180   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:06:44.038186   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:06:44.038239   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:06:44.073920   62061 cri.go:89] found id: ""
	I0918 21:06:44.073953   62061 logs.go:276] 0 containers: []
	W0918 21:06:44.073966   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:06:44.073978   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:06:44.073992   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:06:44.142881   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:06:44.142910   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:06:44.142926   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:06:44.222302   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:06:44.222341   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:06:44.265914   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:06:44.265952   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:06:44.316229   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:06:44.316266   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:06:46.830793   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:46.843829   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:06:46.843886   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:06:46.878566   62061 cri.go:89] found id: ""
	I0918 21:06:46.878607   62061 logs.go:276] 0 containers: []
	W0918 21:06:46.878621   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:06:46.878630   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:06:46.878710   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:06:46.915576   62061 cri.go:89] found id: ""
	I0918 21:06:46.915603   62061 logs.go:276] 0 containers: []
	W0918 21:06:46.915611   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:06:46.915616   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:06:46.915663   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:06:46.952982   62061 cri.go:89] found id: ""
	I0918 21:06:46.953014   62061 logs.go:276] 0 containers: []
	W0918 21:06:46.953032   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:06:46.953039   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:06:46.953104   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:06:46.987718   62061 cri.go:89] found id: ""
	I0918 21:06:46.987749   62061 logs.go:276] 0 containers: []
	W0918 21:06:46.987757   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:06:46.987763   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:06:46.987815   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:06:47.022695   62061 cri.go:89] found id: ""
	I0918 21:06:47.022726   62061 logs.go:276] 0 containers: []
	W0918 21:06:47.022735   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:06:47.022741   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:06:47.022799   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:06:47.058058   62061 cri.go:89] found id: ""
	I0918 21:06:47.058086   62061 logs.go:276] 0 containers: []
	W0918 21:06:47.058094   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:06:47.058101   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:06:47.058159   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:06:47.093314   62061 cri.go:89] found id: ""
	I0918 21:06:47.093352   62061 logs.go:276] 0 containers: []
	W0918 21:06:47.093361   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:06:47.093367   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:06:47.093424   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:06:47.126997   62061 cri.go:89] found id: ""
	I0918 21:06:47.127023   62061 logs.go:276] 0 containers: []
	W0918 21:06:47.127033   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:06:47.127041   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:06:47.127053   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:06:47.203239   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:06:47.203282   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:06:47.240982   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:06:47.241013   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:06:47.293553   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:06:47.293591   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:06:47.307462   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:06:47.307493   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:06:47.379500   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:06:49.880441   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:49.895816   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:06:49.895903   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:06:49.931916   62061 cri.go:89] found id: ""
	I0918 21:06:49.931941   62061 logs.go:276] 0 containers: []
	W0918 21:06:49.931950   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:06:49.931955   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:06:49.932007   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:06:49.967053   62061 cri.go:89] found id: ""
	I0918 21:06:49.967083   62061 logs.go:276] 0 containers: []
	W0918 21:06:49.967093   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:06:49.967101   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:06:49.967194   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:06:50.002943   62061 cri.go:89] found id: ""
	I0918 21:06:50.003004   62061 logs.go:276] 0 containers: []
	W0918 21:06:50.003015   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:06:50.003023   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:06:50.003085   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:06:50.039445   62061 cri.go:89] found id: ""
	I0918 21:06:50.039478   62061 logs.go:276] 0 containers: []
	W0918 21:06:50.039489   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:06:50.039497   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:06:50.039561   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:06:50.073529   62061 cri.go:89] found id: ""
	I0918 21:06:50.073562   62061 logs.go:276] 0 containers: []
	W0918 21:06:50.073572   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:06:50.073578   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:06:50.073645   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:06:50.109363   62061 cri.go:89] found id: ""
	I0918 21:06:50.109394   62061 logs.go:276] 0 containers: []
	W0918 21:06:50.109406   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:06:50.109413   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:06:50.109485   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:06:50.144387   62061 cri.go:89] found id: ""
	I0918 21:06:50.144418   62061 logs.go:276] 0 containers: []
	W0918 21:06:50.144429   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:06:50.144450   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:06:50.144524   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:06:50.178079   62061 cri.go:89] found id: ""
	I0918 21:06:50.178110   62061 logs.go:276] 0 containers: []
	W0918 21:06:50.178119   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:06:50.178128   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:06:50.178140   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:06:50.228857   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:06:50.228893   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:06:50.242077   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:06:50.242108   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:06:50.312868   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:06:50.312903   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:06:50.312920   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:06:50.392880   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:06:50.392924   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:06:52.931623   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:52.945298   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:06:52.945394   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:06:52.980129   62061 cri.go:89] found id: ""
	I0918 21:06:52.980162   62061 logs.go:276] 0 containers: []
	W0918 21:06:52.980171   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:06:52.980176   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:06:52.980224   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:06:53.017899   62061 cri.go:89] found id: ""
	I0918 21:06:53.017932   62061 logs.go:276] 0 containers: []
	W0918 21:06:53.017941   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:06:53.017947   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:06:53.018015   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:06:53.056594   62061 cri.go:89] found id: ""
	I0918 21:06:53.056619   62061 logs.go:276] 0 containers: []
	W0918 21:06:53.056627   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:06:53.056635   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:06:53.056684   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:06:53.089876   62061 cri.go:89] found id: ""
	I0918 21:06:53.089907   62061 logs.go:276] 0 containers: []
	W0918 21:06:53.089915   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:06:53.089920   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:06:53.089967   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:06:53.122845   62061 cri.go:89] found id: ""
	I0918 21:06:53.122881   62061 logs.go:276] 0 containers: []
	W0918 21:06:53.122890   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:06:53.122904   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:06:53.122956   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:06:53.162103   62061 cri.go:89] found id: ""
	I0918 21:06:53.162137   62061 logs.go:276] 0 containers: []
	W0918 21:06:53.162148   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:06:53.162156   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:06:53.162218   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:06:53.195034   62061 cri.go:89] found id: ""
	I0918 21:06:53.195067   62061 logs.go:276] 0 containers: []
	W0918 21:06:53.195078   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:06:53.195085   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:06:53.195144   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:06:53.229473   62061 cri.go:89] found id: ""
	I0918 21:06:53.229518   62061 logs.go:276] 0 containers: []
	W0918 21:06:53.229542   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:06:53.229556   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:06:53.229577   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:06:53.283929   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:06:53.283973   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:06:53.296932   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:06:53.296965   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:06:53.379339   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:06:53.379369   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:06:53.379410   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:06:53.469608   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:06:53.469652   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:06:56.009227   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:56.023451   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:06:56.023510   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:06:56.056839   62061 cri.go:89] found id: ""
	I0918 21:06:56.056866   62061 logs.go:276] 0 containers: []
	W0918 21:06:56.056875   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:06:56.056881   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:06:56.056931   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:06:56.091893   62061 cri.go:89] found id: ""
	I0918 21:06:56.091919   62061 logs.go:276] 0 containers: []
	W0918 21:06:56.091928   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:06:56.091933   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:06:56.091980   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:06:56.127432   62061 cri.go:89] found id: ""
	I0918 21:06:56.127456   62061 logs.go:276] 0 containers: []
	W0918 21:06:56.127464   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:06:56.127470   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:06:56.127518   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:06:56.162085   62061 cri.go:89] found id: ""
	I0918 21:06:56.162109   62061 logs.go:276] 0 containers: []
	W0918 21:06:56.162118   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:06:56.162123   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:06:56.162176   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:06:56.195711   62061 cri.go:89] found id: ""
	I0918 21:06:56.195743   62061 logs.go:276] 0 containers: []
	W0918 21:06:56.195753   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:06:56.195759   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:06:56.195809   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:06:56.234481   62061 cri.go:89] found id: ""
	I0918 21:06:56.234507   62061 logs.go:276] 0 containers: []
	W0918 21:06:56.234515   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:06:56.234522   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:06:56.234567   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:06:56.268566   62061 cri.go:89] found id: ""
	I0918 21:06:56.268596   62061 logs.go:276] 0 containers: []
	W0918 21:06:56.268608   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:06:56.268617   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:06:56.268681   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:06:56.309785   62061 cri.go:89] found id: ""
	I0918 21:06:56.309813   62061 logs.go:276] 0 containers: []
	W0918 21:06:56.309824   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:06:56.309835   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:06:56.309874   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:06:56.364834   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:06:56.364880   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:06:56.378424   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:06:56.378451   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:06:56.450482   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:06:56.450511   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:06:56.450522   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:06:56.536261   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:06:56.536305   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:06:59.074494   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:59.087553   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:06:59.087622   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:06:59.128323   62061 cri.go:89] found id: ""
	I0918 21:06:59.128357   62061 logs.go:276] 0 containers: []
	W0918 21:06:59.128368   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:06:59.128375   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:06:59.128436   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:06:59.161135   62061 cri.go:89] found id: ""
	I0918 21:06:59.161161   62061 logs.go:276] 0 containers: []
	W0918 21:06:59.161170   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:06:59.161178   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:06:59.161240   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:06:59.201558   62061 cri.go:89] found id: ""
	I0918 21:06:59.201595   62061 logs.go:276] 0 containers: []
	W0918 21:06:59.201607   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:06:59.201614   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:06:59.201678   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:06:59.235330   62061 cri.go:89] found id: ""
	I0918 21:06:59.235356   62061 logs.go:276] 0 containers: []
	W0918 21:06:59.235378   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:06:59.235385   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:06:59.235450   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:06:59.270957   62061 cri.go:89] found id: ""
	I0918 21:06:59.270994   62061 logs.go:276] 0 containers: []
	W0918 21:06:59.271007   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:06:59.271016   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:06:59.271088   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:06:59.305068   62061 cri.go:89] found id: ""
	I0918 21:06:59.305103   62061 logs.go:276] 0 containers: []
	W0918 21:06:59.305115   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:06:59.305123   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:06:59.305177   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:06:59.338763   62061 cri.go:89] found id: ""
	I0918 21:06:59.338796   62061 logs.go:276] 0 containers: []
	W0918 21:06:59.338809   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:06:59.338818   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:06:59.338887   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:06:59.376557   62061 cri.go:89] found id: ""
	I0918 21:06:59.376585   62061 logs.go:276] 0 containers: []
	W0918 21:06:59.376593   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:06:59.376602   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:06:59.376615   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:06:59.455228   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:06:59.455264   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:06:59.494461   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:06:59.494488   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:06:59.543143   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:06:59.543177   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:06:59.556031   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:06:59.556062   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:06:59.623888   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:02.124743   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:07:02.139392   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:07:02.139451   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:07:02.172176   62061 cri.go:89] found id: ""
	I0918 21:07:02.172201   62061 logs.go:276] 0 containers: []
	W0918 21:07:02.172209   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:07:02.172215   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:07:02.172263   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:07:02.206480   62061 cri.go:89] found id: ""
	I0918 21:07:02.206507   62061 logs.go:276] 0 containers: []
	W0918 21:07:02.206518   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:07:02.206525   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:07:02.206586   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:07:02.240256   62061 cri.go:89] found id: ""
	I0918 21:07:02.240281   62061 logs.go:276] 0 containers: []
	W0918 21:07:02.240289   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:07:02.240295   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:07:02.240353   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:07:02.274001   62061 cri.go:89] found id: ""
	I0918 21:07:02.274034   62061 logs.go:276] 0 containers: []
	W0918 21:07:02.274046   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:07:02.274056   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:07:02.274115   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:07:02.307491   62061 cri.go:89] found id: ""
	I0918 21:07:02.307520   62061 logs.go:276] 0 containers: []
	W0918 21:07:02.307528   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:07:02.307534   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:07:02.307597   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:07:02.344688   62061 cri.go:89] found id: ""
	I0918 21:07:02.344720   62061 logs.go:276] 0 containers: []
	W0918 21:07:02.344731   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:07:02.344739   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:07:02.344805   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:07:02.384046   62061 cri.go:89] found id: ""
	I0918 21:07:02.384077   62061 logs.go:276] 0 containers: []
	W0918 21:07:02.384088   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:07:02.384095   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:07:02.384154   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:07:02.422530   62061 cri.go:89] found id: ""
	I0918 21:07:02.422563   62061 logs.go:276] 0 containers: []
	W0918 21:07:02.422575   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:07:02.422586   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:07:02.422604   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:07:02.461236   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:07:02.461266   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:07:02.512280   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:07:02.512320   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:07:02.525404   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:07:02.525435   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:07:02.599746   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:02.599773   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:07:02.599785   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:07:05.183459   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:07:05.197615   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:07:05.197681   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:07:05.234313   62061 cri.go:89] found id: ""
	I0918 21:07:05.234354   62061 logs.go:276] 0 containers: []
	W0918 21:07:05.234365   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:07:05.234371   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:07:05.234429   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:07:05.271436   62061 cri.go:89] found id: ""
	I0918 21:07:05.271466   62061 logs.go:276] 0 containers: []
	W0918 21:07:05.271477   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:07:05.271484   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:07:05.271554   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:07:05.307007   62061 cri.go:89] found id: ""
	I0918 21:07:05.307038   62061 logs.go:276] 0 containers: []
	W0918 21:07:05.307050   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:07:05.307056   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:07:05.307120   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:07:05.341764   62061 cri.go:89] found id: ""
	I0918 21:07:05.341810   62061 logs.go:276] 0 containers: []
	W0918 21:07:05.341831   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:07:05.341840   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:07:05.341908   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:07:05.381611   62061 cri.go:89] found id: ""
	I0918 21:07:05.381646   62061 logs.go:276] 0 containers: []
	W0918 21:07:05.381658   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:07:05.381666   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:07:05.381747   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:07:05.421468   62061 cri.go:89] found id: ""
	I0918 21:07:05.421499   62061 logs.go:276] 0 containers: []
	W0918 21:07:05.421520   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:07:05.421528   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:07:05.421590   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:07:05.457320   62061 cri.go:89] found id: ""
	I0918 21:07:05.457348   62061 logs.go:276] 0 containers: []
	W0918 21:07:05.457359   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:07:05.457367   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:07:05.457425   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:07:05.493879   62061 cri.go:89] found id: ""
	I0918 21:07:05.493915   62061 logs.go:276] 0 containers: []
	W0918 21:07:05.493928   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:07:05.493943   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:07:05.493955   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:07:05.543825   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:07:05.543867   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:07:05.558254   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:07:05.558285   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:07:05.627065   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:05.627091   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:07:05.627103   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:07:05.703838   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:07:05.703876   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:07:08.244087   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:07:08.257807   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:07:08.257897   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:07:08.292034   62061 cri.go:89] found id: ""
	I0918 21:07:08.292064   62061 logs.go:276] 0 containers: []
	W0918 21:07:08.292076   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:07:08.292084   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:07:08.292154   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:07:08.325858   62061 cri.go:89] found id: ""
	I0918 21:07:08.325897   62061 logs.go:276] 0 containers: []
	W0918 21:07:08.325910   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:07:08.325918   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:07:08.325987   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:07:08.369427   62061 cri.go:89] found id: ""
	I0918 21:07:08.369457   62061 logs.go:276] 0 containers: []
	W0918 21:07:08.369468   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:07:08.369475   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:07:08.369536   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:07:08.409402   62061 cri.go:89] found id: ""
	I0918 21:07:08.409434   62061 logs.go:276] 0 containers: []
	W0918 21:07:08.409444   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:07:08.409451   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:07:08.409515   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:07:08.445530   62061 cri.go:89] found id: ""
	I0918 21:07:08.445565   62061 logs.go:276] 0 containers: []
	W0918 21:07:08.445575   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:07:08.445584   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:07:08.445646   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:07:08.481905   62061 cri.go:89] found id: ""
	I0918 21:07:08.481935   62061 logs.go:276] 0 containers: []
	W0918 21:07:08.481945   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:07:08.481952   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:07:08.482020   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:07:08.516710   62061 cri.go:89] found id: ""
	I0918 21:07:08.516738   62061 logs.go:276] 0 containers: []
	W0918 21:07:08.516746   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:07:08.516752   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:07:08.516802   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:07:08.550707   62061 cri.go:89] found id: ""
	I0918 21:07:08.550740   62061 logs.go:276] 0 containers: []
	W0918 21:07:08.550760   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:07:08.550768   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:07:08.550782   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:07:08.624821   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:08.624843   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:07:08.624854   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:07:08.705347   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:07:08.705383   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:07:08.744394   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:07:08.744425   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:07:08.799336   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:07:08.799375   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:07:11.312920   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:07:11.327524   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:07:11.327598   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:07:11.366177   62061 cri.go:89] found id: ""
	I0918 21:07:11.366209   62061 logs.go:276] 0 containers: []
	W0918 21:07:11.366221   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:07:11.366227   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:07:11.366274   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:07:11.405296   62061 cri.go:89] found id: ""
	I0918 21:07:11.405326   62061 logs.go:276] 0 containers: []
	W0918 21:07:11.405344   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:07:11.405351   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:07:11.405413   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:07:11.441786   62061 cri.go:89] found id: ""
	I0918 21:07:11.441818   62061 logs.go:276] 0 containers: []
	W0918 21:07:11.441829   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:07:11.441837   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:07:11.441891   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:07:11.476144   62061 cri.go:89] found id: ""
	I0918 21:07:11.476167   62061 logs.go:276] 0 containers: []
	W0918 21:07:11.476175   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:07:11.476181   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:07:11.476224   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:07:11.512115   62061 cri.go:89] found id: ""
	I0918 21:07:11.512143   62061 logs.go:276] 0 containers: []
	W0918 21:07:11.512154   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:07:11.512160   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:07:11.512221   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:07:11.545466   62061 cri.go:89] found id: ""
	I0918 21:07:11.545494   62061 logs.go:276] 0 containers: []
	W0918 21:07:11.545504   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:07:11.545512   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:07:11.545575   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:07:11.579940   62061 cri.go:89] found id: ""
	I0918 21:07:11.579965   62061 logs.go:276] 0 containers: []
	W0918 21:07:11.579975   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:07:11.579994   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:07:11.580080   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:07:11.613454   62061 cri.go:89] found id: ""
	I0918 21:07:11.613480   62061 logs.go:276] 0 containers: []
	W0918 21:07:11.613491   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:07:11.613512   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:07:11.613535   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:07:11.669079   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:07:11.669140   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:07:11.682614   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:07:11.682639   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:07:11.756876   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:11.756901   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:07:11.756914   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:07:11.835046   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:07:11.835086   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:07:14.377541   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:07:14.392083   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:07:14.392161   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:07:14.431752   62061 cri.go:89] found id: ""
	I0918 21:07:14.431786   62061 logs.go:276] 0 containers: []
	W0918 21:07:14.431795   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:07:14.431800   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:07:14.431856   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:07:14.468530   62061 cri.go:89] found id: ""
	I0918 21:07:14.468562   62061 logs.go:276] 0 containers: []
	W0918 21:07:14.468573   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:07:14.468580   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:07:14.468643   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:07:14.506510   62061 cri.go:89] found id: ""
	I0918 21:07:14.506540   62061 logs.go:276] 0 containers: []
	W0918 21:07:14.506550   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:07:14.506557   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:07:14.506625   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:07:14.538986   62061 cri.go:89] found id: ""
	I0918 21:07:14.539021   62061 logs.go:276] 0 containers: []
	W0918 21:07:14.539032   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:07:14.539039   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:07:14.539103   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:07:14.572390   62061 cri.go:89] found id: ""
	I0918 21:07:14.572421   62061 logs.go:276] 0 containers: []
	W0918 21:07:14.572432   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:07:14.572440   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:07:14.572499   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:07:14.607875   62061 cri.go:89] found id: ""
	I0918 21:07:14.607905   62061 logs.go:276] 0 containers: []
	W0918 21:07:14.607917   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:07:14.607924   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:07:14.607988   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:07:14.642584   62061 cri.go:89] found id: ""
	I0918 21:07:14.642616   62061 logs.go:276] 0 containers: []
	W0918 21:07:14.642625   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:07:14.642630   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:07:14.642685   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:07:14.682117   62061 cri.go:89] found id: ""
	I0918 21:07:14.682144   62061 logs.go:276] 0 containers: []
	W0918 21:07:14.682152   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:07:14.682163   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:07:14.682177   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:07:14.694780   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:07:14.694808   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:07:14.767988   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:14.768036   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:07:14.768055   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:07:14.860620   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:07:14.860655   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:07:14.905071   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:07:14.905102   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:07:17.455582   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:07:17.468713   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:07:17.468774   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:07:17.503682   62061 cri.go:89] found id: ""
	I0918 21:07:17.503709   62061 logs.go:276] 0 containers: []
	W0918 21:07:17.503721   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:07:17.503729   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:07:17.503794   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:07:17.536581   62061 cri.go:89] found id: ""
	I0918 21:07:17.536611   62061 logs.go:276] 0 containers: []
	W0918 21:07:17.536619   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:07:17.536625   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:07:17.536673   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:07:17.570483   62061 cri.go:89] found id: ""
	I0918 21:07:17.570510   62061 logs.go:276] 0 containers: []
	W0918 21:07:17.570518   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:07:17.570523   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:07:17.570591   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:07:17.604102   62061 cri.go:89] found id: ""
	I0918 21:07:17.604140   62061 logs.go:276] 0 containers: []
	W0918 21:07:17.604152   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:07:17.604160   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:07:17.604229   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:07:17.638633   62061 cri.go:89] found id: ""
	I0918 21:07:17.638661   62061 logs.go:276] 0 containers: []
	W0918 21:07:17.638672   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:07:17.638678   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:07:17.638725   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:07:17.673016   62061 cri.go:89] found id: ""
	I0918 21:07:17.673048   62061 logs.go:276] 0 containers: []
	W0918 21:07:17.673057   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:07:17.673063   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:07:17.673116   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:07:17.708576   62061 cri.go:89] found id: ""
	I0918 21:07:17.708601   62061 logs.go:276] 0 containers: []
	W0918 21:07:17.708609   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:07:17.708620   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:07:17.708672   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:07:17.742820   62061 cri.go:89] found id: ""
	I0918 21:07:17.742843   62061 logs.go:276] 0 containers: []
	W0918 21:07:17.742851   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:07:17.742859   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:07:17.742870   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:07:17.757091   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:07:17.757120   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:07:17.824116   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:17.824137   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:07:17.824151   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:07:17.911013   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:07:17.911065   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:07:17.950517   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:07:17.950553   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:07:20.502423   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:07:20.529214   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:07:20.529297   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:07:20.583618   62061 cri.go:89] found id: ""
	I0918 21:07:20.583660   62061 logs.go:276] 0 containers: []
	W0918 21:07:20.583672   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:07:20.583682   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:07:20.583751   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:07:20.633051   62061 cri.go:89] found id: ""
	I0918 21:07:20.633079   62061 logs.go:276] 0 containers: []
	W0918 21:07:20.633091   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:07:20.633099   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:07:20.633158   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:07:20.668513   62061 cri.go:89] found id: ""
	I0918 21:07:20.668543   62061 logs.go:276] 0 containers: []
	W0918 21:07:20.668552   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:07:20.668558   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:07:20.668611   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:07:20.702648   62061 cri.go:89] found id: ""
	I0918 21:07:20.702683   62061 logs.go:276] 0 containers: []
	W0918 21:07:20.702698   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:07:20.702706   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:07:20.702767   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:07:20.740639   62061 cri.go:89] found id: ""
	I0918 21:07:20.740666   62061 logs.go:276] 0 containers: []
	W0918 21:07:20.740674   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:07:20.740680   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:07:20.740731   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:07:20.771819   62061 cri.go:89] found id: ""
	I0918 21:07:20.771847   62061 logs.go:276] 0 containers: []
	W0918 21:07:20.771855   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:07:20.771863   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:07:20.771913   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:07:20.803613   62061 cri.go:89] found id: ""
	I0918 21:07:20.803639   62061 logs.go:276] 0 containers: []
	W0918 21:07:20.803647   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:07:20.803653   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:07:20.803708   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:07:20.835671   62061 cri.go:89] found id: ""
	I0918 21:07:20.835695   62061 logs.go:276] 0 containers: []
	W0918 21:07:20.835702   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:07:20.835710   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:07:20.835720   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:07:20.886159   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:07:20.886191   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:07:20.901412   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:07:20.901443   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:07:20.979009   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:20.979034   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:07:20.979045   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:07:21.059895   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:07:21.059930   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:07:23.598133   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:07:23.611038   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:07:23.611102   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:07:23.648037   62061 cri.go:89] found id: ""
	I0918 21:07:23.648067   62061 logs.go:276] 0 containers: []
	W0918 21:07:23.648075   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:07:23.648081   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:07:23.648135   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:07:23.683956   62061 cri.go:89] found id: ""
	I0918 21:07:23.683985   62061 logs.go:276] 0 containers: []
	W0918 21:07:23.683995   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:07:23.684003   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:07:23.684083   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:07:23.719274   62061 cri.go:89] found id: ""
	I0918 21:07:23.719301   62061 logs.go:276] 0 containers: []
	W0918 21:07:23.719309   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:07:23.719315   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:07:23.719378   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:07:23.757101   62061 cri.go:89] found id: ""
	I0918 21:07:23.757133   62061 logs.go:276] 0 containers: []
	W0918 21:07:23.757145   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:07:23.757152   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:07:23.757202   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:07:23.790115   62061 cri.go:89] found id: ""
	I0918 21:07:23.790149   62061 logs.go:276] 0 containers: []
	W0918 21:07:23.790160   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:07:23.790168   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:07:23.790231   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:07:23.823089   62061 cri.go:89] found id: ""
	I0918 21:07:23.823120   62061 logs.go:276] 0 containers: []
	W0918 21:07:23.823131   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:07:23.823140   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:07:23.823192   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:07:23.858566   62061 cri.go:89] found id: ""
	I0918 21:07:23.858591   62061 logs.go:276] 0 containers: []
	W0918 21:07:23.858601   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:07:23.858613   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:07:23.858674   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:07:23.892650   62061 cri.go:89] found id: ""
	I0918 21:07:23.892679   62061 logs.go:276] 0 containers: []
	W0918 21:07:23.892690   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:07:23.892699   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:07:23.892713   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:07:23.941087   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:07:23.941123   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:07:23.955674   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:07:23.955712   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:07:24.026087   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:24.026115   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:07:24.026132   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:07:24.105237   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:07:24.105272   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:07:26.646822   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:07:26.659978   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:07:26.660087   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:07:26.695776   62061 cri.go:89] found id: ""
	I0918 21:07:26.695804   62061 logs.go:276] 0 containers: []
	W0918 21:07:26.695817   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:07:26.695824   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:07:26.695875   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:07:26.729717   62061 cri.go:89] found id: ""
	I0918 21:07:26.729747   62061 logs.go:276] 0 containers: []
	W0918 21:07:26.729756   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:07:26.729762   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:07:26.729830   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:07:26.763848   62061 cri.go:89] found id: ""
	I0918 21:07:26.763886   62061 logs.go:276] 0 containers: []
	W0918 21:07:26.763907   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:07:26.763915   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:07:26.763974   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:07:26.798670   62061 cri.go:89] found id: ""
	I0918 21:07:26.798703   62061 logs.go:276] 0 containers: []
	W0918 21:07:26.798713   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:07:26.798720   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:07:26.798789   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:07:26.833778   62061 cri.go:89] found id: ""
	I0918 21:07:26.833804   62061 logs.go:276] 0 containers: []
	W0918 21:07:26.833815   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:07:26.833822   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:07:26.833905   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:07:26.869657   62061 cri.go:89] found id: ""
	I0918 21:07:26.869688   62061 logs.go:276] 0 containers: []
	W0918 21:07:26.869699   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:07:26.869707   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:07:26.869772   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:07:26.908163   62061 cri.go:89] found id: ""
	I0918 21:07:26.908194   62061 logs.go:276] 0 containers: []
	W0918 21:07:26.908205   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:07:26.908212   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:07:26.908269   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:07:26.943416   62061 cri.go:89] found id: ""
	I0918 21:07:26.943442   62061 logs.go:276] 0 containers: []
	W0918 21:07:26.943451   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:07:26.943459   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:07:26.943471   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:07:26.993796   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:07:26.993833   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:07:27.007619   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:07:27.007661   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:07:27.072880   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:27.072904   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:07:27.072919   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:07:27.148984   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:07:27.149031   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:07:29.690106   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:07:29.702853   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:07:29.702932   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:07:29.736427   62061 cri.go:89] found id: ""
	I0918 21:07:29.736461   62061 logs.go:276] 0 containers: []
	W0918 21:07:29.736473   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:07:29.736480   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:07:29.736537   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:07:29.771287   62061 cri.go:89] found id: ""
	I0918 21:07:29.771317   62061 logs.go:276] 0 containers: []
	W0918 21:07:29.771328   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:07:29.771334   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:07:29.771398   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:07:29.804826   62061 cri.go:89] found id: ""
	I0918 21:07:29.804861   62061 logs.go:276] 0 containers: []
	W0918 21:07:29.804875   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:07:29.804882   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:07:29.804934   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:07:29.838570   62061 cri.go:89] found id: ""
	I0918 21:07:29.838598   62061 logs.go:276] 0 containers: []
	W0918 21:07:29.838608   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:07:29.838614   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:07:29.838659   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:07:29.871591   62061 cri.go:89] found id: ""
	I0918 21:07:29.871621   62061 logs.go:276] 0 containers: []
	W0918 21:07:29.871631   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:07:29.871638   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:07:29.871699   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:07:29.905789   62061 cri.go:89] found id: ""
	I0918 21:07:29.905824   62061 logs.go:276] 0 containers: []
	W0918 21:07:29.905835   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:07:29.905846   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:07:29.905910   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:07:29.945914   62061 cri.go:89] found id: ""
	I0918 21:07:29.945941   62061 logs.go:276] 0 containers: []
	W0918 21:07:29.945950   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:07:29.945955   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:07:29.946004   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:07:29.979365   62061 cri.go:89] found id: ""
	I0918 21:07:29.979396   62061 logs.go:276] 0 containers: []
	W0918 21:07:29.979405   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:07:29.979413   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:07:29.979425   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:07:30.026925   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:07:30.026956   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:07:30.040589   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:07:30.040623   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:07:30.112900   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:30.112928   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:07:30.112948   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:07:30.195208   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:07:30.195256   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:07:32.735787   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:07:32.748969   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:07:32.749049   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:07:32.782148   62061 cri.go:89] found id: ""
	I0918 21:07:32.782179   62061 logs.go:276] 0 containers: []
	W0918 21:07:32.782189   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:07:32.782196   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:07:32.782262   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:07:32.816119   62061 cri.go:89] found id: ""
	I0918 21:07:32.816144   62061 logs.go:276] 0 containers: []
	W0918 21:07:32.816152   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:07:32.816158   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:07:32.816203   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:07:32.849817   62061 cri.go:89] found id: ""
	I0918 21:07:32.849844   62061 logs.go:276] 0 containers: []
	W0918 21:07:32.849853   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:07:32.849859   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:07:32.849911   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:07:32.884464   62061 cri.go:89] found id: ""
	I0918 21:07:32.884494   62061 logs.go:276] 0 containers: []
	W0918 21:07:32.884506   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:07:32.884513   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:07:32.884576   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:07:32.917942   62061 cri.go:89] found id: ""
	I0918 21:07:32.917973   62061 logs.go:276] 0 containers: []
	W0918 21:07:32.917983   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:07:32.917990   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:07:32.918051   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:07:32.950748   62061 cri.go:89] found id: ""
	I0918 21:07:32.950780   62061 logs.go:276] 0 containers: []
	W0918 21:07:32.950791   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:07:32.950800   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:07:32.950864   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:07:32.985059   62061 cri.go:89] found id: ""
	I0918 21:07:32.985092   62061 logs.go:276] 0 containers: []
	W0918 21:07:32.985100   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:07:32.985106   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:07:32.985167   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:07:33.021496   62061 cri.go:89] found id: ""
	I0918 21:07:33.021526   62061 logs.go:276] 0 containers: []
	W0918 21:07:33.021536   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:07:33.021546   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:07:33.021560   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:07:33.071744   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:07:33.071793   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:07:33.086533   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:07:33.086565   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:07:33.155274   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:33.155307   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:07:33.155326   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:07:33.238301   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:07:33.238342   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:07:35.777905   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:07:35.792442   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:07:35.792510   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:07:35.827310   62061 cri.go:89] found id: ""
	I0918 21:07:35.827339   62061 logs.go:276] 0 containers: []
	W0918 21:07:35.827350   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:07:35.827357   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:07:35.827422   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:07:35.861873   62061 cri.go:89] found id: ""
	I0918 21:07:35.861897   62061 logs.go:276] 0 containers: []
	W0918 21:07:35.861905   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:07:35.861916   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:07:35.861966   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:07:35.895295   62061 cri.go:89] found id: ""
	I0918 21:07:35.895324   62061 logs.go:276] 0 containers: []
	W0918 21:07:35.895345   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:07:35.895353   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:07:35.895410   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:07:35.927690   62061 cri.go:89] found id: ""
	I0918 21:07:35.927717   62061 logs.go:276] 0 containers: []
	W0918 21:07:35.927728   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:07:35.927736   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:07:35.927788   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:07:35.963954   62061 cri.go:89] found id: ""
	I0918 21:07:35.963979   62061 logs.go:276] 0 containers: []
	W0918 21:07:35.963986   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:07:35.963992   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:07:35.964059   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:07:36.002287   62061 cri.go:89] found id: ""
	I0918 21:07:36.002313   62061 logs.go:276] 0 containers: []
	W0918 21:07:36.002322   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:07:36.002328   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:07:36.002380   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:07:36.036748   62061 cri.go:89] found id: ""
	I0918 21:07:36.036778   62061 logs.go:276] 0 containers: []
	W0918 21:07:36.036790   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:07:36.036797   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:07:36.036861   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:07:36.072616   62061 cri.go:89] found id: ""
	I0918 21:07:36.072647   62061 logs.go:276] 0 containers: []
	W0918 21:07:36.072658   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:07:36.072668   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:07:36.072683   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:07:36.121723   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:07:36.121762   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:07:36.136528   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:07:36.136560   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:07:36.202878   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:36.202911   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:07:36.202926   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:07:36.287882   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:07:36.287924   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:07:38.825888   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:07:38.838437   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:07:38.838510   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:07:38.872312   62061 cri.go:89] found id: ""
	I0918 21:07:38.872348   62061 logs.go:276] 0 containers: []
	W0918 21:07:38.872358   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:07:38.872366   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:07:38.872417   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:07:38.905927   62061 cri.go:89] found id: ""
	I0918 21:07:38.905962   62061 logs.go:276] 0 containers: []
	W0918 21:07:38.905975   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:07:38.905982   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:07:38.906042   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:07:38.938245   62061 cri.go:89] found id: ""
	I0918 21:07:38.938274   62061 logs.go:276] 0 containers: []
	W0918 21:07:38.938286   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:07:38.938293   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:07:38.938358   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:07:38.971497   62061 cri.go:89] found id: ""
	I0918 21:07:38.971536   62061 logs.go:276] 0 containers: []
	W0918 21:07:38.971548   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:07:38.971555   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:07:38.971625   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:07:39.004755   62061 cri.go:89] found id: ""
	I0918 21:07:39.004784   62061 logs.go:276] 0 containers: []
	W0918 21:07:39.004793   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:07:39.004799   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:07:39.004854   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:07:39.036237   62061 cri.go:89] found id: ""
	I0918 21:07:39.036265   62061 logs.go:276] 0 containers: []
	W0918 21:07:39.036273   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:07:39.036279   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:07:39.036328   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:07:39.071504   62061 cri.go:89] found id: ""
	I0918 21:07:39.071537   62061 logs.go:276] 0 containers: []
	W0918 21:07:39.071549   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:07:39.071557   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:07:39.071623   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:07:39.107035   62061 cri.go:89] found id: ""
	I0918 21:07:39.107063   62061 logs.go:276] 0 containers: []
	W0918 21:07:39.107071   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:07:39.107080   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:07:39.107090   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:07:39.158078   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:07:39.158113   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:07:39.172846   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:07:39.172875   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:07:39.240577   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:39.240602   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:07:39.240618   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:07:39.319762   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:07:39.319797   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:07:41.856586   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:07:41.870308   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:07:41.870388   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:07:41.905657   62061 cri.go:89] found id: ""
	I0918 21:07:41.905688   62061 logs.go:276] 0 containers: []
	W0918 21:07:41.905699   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:07:41.905706   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:07:41.905766   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:07:41.944507   62061 cri.go:89] found id: ""
	I0918 21:07:41.944544   62061 logs.go:276] 0 containers: []
	W0918 21:07:41.944555   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:07:41.944566   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:07:41.944634   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:07:41.979217   62061 cri.go:89] found id: ""
	I0918 21:07:41.979252   62061 logs.go:276] 0 containers: []
	W0918 21:07:41.979271   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:07:41.979279   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:07:41.979346   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:07:42.013613   62061 cri.go:89] found id: ""
	I0918 21:07:42.013641   62061 logs.go:276] 0 containers: []
	W0918 21:07:42.013652   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:07:42.013659   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:07:42.013725   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:07:42.049225   62061 cri.go:89] found id: ""
	I0918 21:07:42.049259   62061 logs.go:276] 0 containers: []
	W0918 21:07:42.049271   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:07:42.049279   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:07:42.049364   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:07:42.085737   62061 cri.go:89] found id: ""
	I0918 21:07:42.085763   62061 logs.go:276] 0 containers: []
	W0918 21:07:42.085775   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:07:42.085782   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:07:42.085843   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:07:42.121326   62061 cri.go:89] found id: ""
	I0918 21:07:42.121356   62061 logs.go:276] 0 containers: []
	W0918 21:07:42.121365   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:07:42.121371   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:07:42.121428   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:07:42.157070   62061 cri.go:89] found id: ""
	I0918 21:07:42.157097   62061 logs.go:276] 0 containers: []
	W0918 21:07:42.157107   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:07:42.157118   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:07:42.157133   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:07:42.232110   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:07:42.232162   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:07:42.270478   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:07:42.270517   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:07:42.324545   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:07:42.324586   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:07:42.339928   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:07:42.339962   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:07:42.415124   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:44.915316   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:07:44.928261   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:07:44.928331   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:07:44.959641   62061 cri.go:89] found id: ""
	I0918 21:07:44.959673   62061 logs.go:276] 0 containers: []
	W0918 21:07:44.959682   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:07:44.959689   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:07:44.959791   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:07:44.991744   62061 cri.go:89] found id: ""
	I0918 21:07:44.991778   62061 logs.go:276] 0 containers: []
	W0918 21:07:44.991787   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:07:44.991803   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:07:44.991877   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:07:45.024228   62061 cri.go:89] found id: ""
	I0918 21:07:45.024261   62061 logs.go:276] 0 containers: []
	W0918 21:07:45.024272   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:07:45.024280   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:07:45.024355   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:07:45.061905   62061 cri.go:89] found id: ""
	I0918 21:07:45.061931   62061 logs.go:276] 0 containers: []
	W0918 21:07:45.061940   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:07:45.061946   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:07:45.061994   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:07:45.099224   62061 cri.go:89] found id: ""
	I0918 21:07:45.099251   62061 logs.go:276] 0 containers: []
	W0918 21:07:45.099259   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:07:45.099266   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:07:45.099327   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:07:45.137235   62061 cri.go:89] found id: ""
	I0918 21:07:45.137262   62061 logs.go:276] 0 containers: []
	W0918 21:07:45.137270   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:07:45.137278   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:07:45.137333   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:07:45.174343   62061 cri.go:89] found id: ""
	I0918 21:07:45.174370   62061 logs.go:276] 0 containers: []
	W0918 21:07:45.174379   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:07:45.174385   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:07:45.174432   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:07:45.209278   62061 cri.go:89] found id: ""
	I0918 21:07:45.209306   62061 logs.go:276] 0 containers: []
	W0918 21:07:45.209316   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:07:45.209326   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:07:45.209341   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:07:45.222376   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:07:45.222409   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:07:45.294562   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:45.294595   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:07:45.294607   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:07:45.382582   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:07:45.382631   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:07:45.424743   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:07:45.424780   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:07:47.978171   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:07:47.990485   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:07:47.990554   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:07:48.024465   62061 cri.go:89] found id: ""
	I0918 21:07:48.024494   62061 logs.go:276] 0 containers: []
	W0918 21:07:48.024505   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:07:48.024512   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:07:48.024575   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:07:48.058838   62061 cri.go:89] found id: ""
	I0918 21:07:48.058871   62061 logs.go:276] 0 containers: []
	W0918 21:07:48.058899   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:07:48.058905   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:07:48.058959   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:07:48.094307   62061 cri.go:89] found id: ""
	I0918 21:07:48.094335   62061 logs.go:276] 0 containers: []
	W0918 21:07:48.094343   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:07:48.094349   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:07:48.094398   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:07:48.127497   62061 cri.go:89] found id: ""
	I0918 21:07:48.127529   62061 logs.go:276] 0 containers: []
	W0918 21:07:48.127538   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:07:48.127544   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:07:48.127590   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:07:48.160440   62061 cri.go:89] found id: ""
	I0918 21:07:48.160465   62061 logs.go:276] 0 containers: []
	W0918 21:07:48.160473   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:07:48.160478   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:07:48.160527   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:07:48.195581   62061 cri.go:89] found id: ""
	I0918 21:07:48.195615   62061 logs.go:276] 0 containers: []
	W0918 21:07:48.195624   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:07:48.195631   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:07:48.195675   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:07:48.226478   62061 cri.go:89] found id: ""
	I0918 21:07:48.226506   62061 logs.go:276] 0 containers: []
	W0918 21:07:48.226514   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:07:48.226519   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:07:48.226566   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:07:48.259775   62061 cri.go:89] found id: ""
	I0918 21:07:48.259804   62061 logs.go:276] 0 containers: []
	W0918 21:07:48.259815   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:07:48.259825   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:07:48.259839   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:07:48.274364   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:07:48.274391   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:07:48.347131   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:48.347151   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:07:48.347163   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:07:48.430569   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:07:48.430607   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:07:48.472972   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:07:48.473007   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:07:51.024429   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:07:51.037249   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:07:51.037328   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:07:51.070137   62061 cri.go:89] found id: ""
	I0918 21:07:51.070173   62061 logs.go:276] 0 containers: []
	W0918 21:07:51.070195   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:07:51.070203   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:07:51.070276   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:07:51.105014   62061 cri.go:89] found id: ""
	I0918 21:07:51.105039   62061 logs.go:276] 0 containers: []
	W0918 21:07:51.105048   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:07:51.105054   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:07:51.105101   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:07:51.143287   62061 cri.go:89] found id: ""
	I0918 21:07:51.143310   62061 logs.go:276] 0 containers: []
	W0918 21:07:51.143318   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:07:51.143325   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:07:51.143372   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:07:51.176811   62061 cri.go:89] found id: ""
	I0918 21:07:51.176838   62061 logs.go:276] 0 containers: []
	W0918 21:07:51.176846   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:07:51.176852   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:07:51.176898   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:07:51.210817   62061 cri.go:89] found id: ""
	I0918 21:07:51.210842   62061 logs.go:276] 0 containers: []
	W0918 21:07:51.210850   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:07:51.210856   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:07:51.210916   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:07:51.243968   62061 cri.go:89] found id: ""
	I0918 21:07:51.244002   62061 logs.go:276] 0 containers: []
	W0918 21:07:51.244035   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:07:51.244043   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:07:51.244104   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:07:51.279078   62061 cri.go:89] found id: ""
	I0918 21:07:51.279108   62061 logs.go:276] 0 containers: []
	W0918 21:07:51.279117   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:07:51.279122   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:07:51.279173   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:07:51.315033   62061 cri.go:89] found id: ""
	I0918 21:07:51.315082   62061 logs.go:276] 0 containers: []
	W0918 21:07:51.315091   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:07:51.315100   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:07:51.315111   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:07:51.391927   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:07:51.391976   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:07:51.435515   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:07:51.435542   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:07:51.488404   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:07:51.488442   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:07:51.502019   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:07:51.502065   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:07:51.567893   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:54.068142   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:07:54.082544   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:07:54.082619   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:07:54.118600   62061 cri.go:89] found id: ""
	I0918 21:07:54.118629   62061 logs.go:276] 0 containers: []
	W0918 21:07:54.118638   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:07:54.118644   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:07:54.118695   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:07:54.154086   62061 cri.go:89] found id: ""
	I0918 21:07:54.154115   62061 logs.go:276] 0 containers: []
	W0918 21:07:54.154127   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:07:54.154135   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:07:54.154187   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:07:54.188213   62061 cri.go:89] found id: ""
	I0918 21:07:54.188246   62061 logs.go:276] 0 containers: []
	W0918 21:07:54.188257   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:07:54.188266   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:07:54.188329   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:07:54.221682   62061 cri.go:89] found id: ""
	I0918 21:07:54.221712   62061 logs.go:276] 0 containers: []
	W0918 21:07:54.221721   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:07:54.221728   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:07:54.221791   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:07:54.254746   62061 cri.go:89] found id: ""
	I0918 21:07:54.254770   62061 logs.go:276] 0 containers: []
	W0918 21:07:54.254778   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:07:54.254783   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:07:54.254844   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:07:54.288249   62061 cri.go:89] found id: ""
	I0918 21:07:54.288273   62061 logs.go:276] 0 containers: []
	W0918 21:07:54.288281   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:07:54.288288   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:07:54.288349   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:07:54.323390   62061 cri.go:89] found id: ""
	I0918 21:07:54.323422   62061 logs.go:276] 0 containers: []
	W0918 21:07:54.323433   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:07:54.323440   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:07:54.323507   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:07:54.357680   62061 cri.go:89] found id: ""
	I0918 21:07:54.357709   62061 logs.go:276] 0 containers: []
	W0918 21:07:54.357719   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:07:54.357734   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:07:54.357748   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:07:54.396644   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:07:54.396683   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:07:54.469136   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:54.469175   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:07:54.469191   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:07:54.547848   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:07:54.547884   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:07:54.585758   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:07:54.585788   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:07:57.139198   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:07:57.152342   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:07:57.152429   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:07:57.186747   62061 cri.go:89] found id: ""
	I0918 21:07:57.186778   62061 logs.go:276] 0 containers: []
	W0918 21:07:57.186789   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:07:57.186796   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:07:57.186855   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:07:57.219211   62061 cri.go:89] found id: ""
	I0918 21:07:57.219239   62061 logs.go:276] 0 containers: []
	W0918 21:07:57.219248   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:07:57.219256   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:07:57.219315   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:07:57.251361   62061 cri.go:89] found id: ""
	I0918 21:07:57.251388   62061 logs.go:276] 0 containers: []
	W0918 21:07:57.251396   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:07:57.251401   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:07:57.251450   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:07:57.285467   62061 cri.go:89] found id: ""
	I0918 21:07:57.285501   62061 logs.go:276] 0 containers: []
	W0918 21:07:57.285512   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:07:57.285519   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:07:57.285580   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:07:57.317973   62061 cri.go:89] found id: ""
	I0918 21:07:57.317999   62061 logs.go:276] 0 containers: []
	W0918 21:07:57.318006   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:07:57.318012   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:07:57.318071   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:07:57.352172   62061 cri.go:89] found id: ""
	I0918 21:07:57.352202   62061 logs.go:276] 0 containers: []
	W0918 21:07:57.352215   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:07:57.352223   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:07:57.352277   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:07:57.388117   62061 cri.go:89] found id: ""
	I0918 21:07:57.388137   62061 logs.go:276] 0 containers: []
	W0918 21:07:57.388144   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:07:57.388150   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:07:57.388205   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:07:57.424814   62061 cri.go:89] found id: ""
	I0918 21:07:57.424846   62061 logs.go:276] 0 containers: []
	W0918 21:07:57.424857   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:07:57.424868   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:07:57.424882   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:07:57.437317   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:07:57.437351   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:07:57.503393   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:57.503417   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:07:57.503429   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:07:57.584167   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:07:57.584203   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:07:57.620761   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:07:57.620792   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:00.174933   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:00.188098   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:00.188177   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:00.221852   62061 cri.go:89] found id: ""
	I0918 21:08:00.221877   62061 logs.go:276] 0 containers: []
	W0918 21:08:00.221890   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:00.221895   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:00.221947   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:00.255947   62061 cri.go:89] found id: ""
	I0918 21:08:00.255974   62061 logs.go:276] 0 containers: []
	W0918 21:08:00.255982   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:00.255987   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:00.256056   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:00.290919   62061 cri.go:89] found id: ""
	I0918 21:08:00.290953   62061 logs.go:276] 0 containers: []
	W0918 21:08:00.290961   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:00.290966   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:00.291017   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:00.328175   62061 cri.go:89] found id: ""
	I0918 21:08:00.328200   62061 logs.go:276] 0 containers: []
	W0918 21:08:00.328208   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:00.328214   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:00.328261   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:00.364863   62061 cri.go:89] found id: ""
	I0918 21:08:00.364892   62061 logs.go:276] 0 containers: []
	W0918 21:08:00.364903   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:00.364912   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:00.364967   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:00.400368   62061 cri.go:89] found id: ""
	I0918 21:08:00.400397   62061 logs.go:276] 0 containers: []
	W0918 21:08:00.400408   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:00.400415   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:00.400480   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:00.435341   62061 cri.go:89] found id: ""
	I0918 21:08:00.435375   62061 logs.go:276] 0 containers: []
	W0918 21:08:00.435386   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:00.435394   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:00.435459   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:00.469981   62061 cri.go:89] found id: ""
	I0918 21:08:00.470010   62061 logs.go:276] 0 containers: []
	W0918 21:08:00.470019   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:00.470028   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:00.470041   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:00.538006   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:00.538037   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:00.538053   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:00.618497   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:00.618543   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:00.656884   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:00.656912   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:00.708836   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:00.708870   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:03.222489   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:03.236904   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:03.236972   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:03.270559   62061 cri.go:89] found id: ""
	I0918 21:08:03.270588   62061 logs.go:276] 0 containers: []
	W0918 21:08:03.270596   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:03.270602   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:03.270649   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:03.305908   62061 cri.go:89] found id: ""
	I0918 21:08:03.305933   62061 logs.go:276] 0 containers: []
	W0918 21:08:03.305940   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:03.305946   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:03.306004   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:03.339442   62061 cri.go:89] found id: ""
	I0918 21:08:03.339468   62061 logs.go:276] 0 containers: []
	W0918 21:08:03.339476   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:03.339482   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:03.339550   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:03.377460   62061 cri.go:89] found id: ""
	I0918 21:08:03.377486   62061 logs.go:276] 0 containers: []
	W0918 21:08:03.377495   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:03.377501   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:03.377552   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:03.414815   62061 cri.go:89] found id: ""
	I0918 21:08:03.414850   62061 logs.go:276] 0 containers: []
	W0918 21:08:03.414861   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:03.414869   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:03.414930   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:03.448654   62061 cri.go:89] found id: ""
	I0918 21:08:03.448680   62061 logs.go:276] 0 containers: []
	W0918 21:08:03.448690   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:03.448698   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:03.448759   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:03.483598   62061 cri.go:89] found id: ""
	I0918 21:08:03.483628   62061 logs.go:276] 0 containers: []
	W0918 21:08:03.483639   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:03.483646   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:03.483717   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:03.518557   62061 cri.go:89] found id: ""
	I0918 21:08:03.518585   62061 logs.go:276] 0 containers: []
	W0918 21:08:03.518601   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:03.518612   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:03.518627   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:03.555922   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:03.555958   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:03.608173   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:03.608208   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:03.621251   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:03.621278   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:03.688773   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:03.688796   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:03.688812   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:06.272727   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:06.286033   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:06.286115   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:06.319058   62061 cri.go:89] found id: ""
	I0918 21:08:06.319084   62061 logs.go:276] 0 containers: []
	W0918 21:08:06.319092   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:06.319099   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:06.319167   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:06.352572   62061 cri.go:89] found id: ""
	I0918 21:08:06.352606   62061 logs.go:276] 0 containers: []
	W0918 21:08:06.352627   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:06.352638   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:06.352709   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:06.389897   62061 cri.go:89] found id: ""
	I0918 21:08:06.389922   62061 logs.go:276] 0 containers: []
	W0918 21:08:06.389929   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:06.389935   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:06.389993   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:06.427265   62061 cri.go:89] found id: ""
	I0918 21:08:06.427294   62061 logs.go:276] 0 containers: []
	W0918 21:08:06.427306   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:06.427314   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:06.427375   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:06.462706   62061 cri.go:89] found id: ""
	I0918 21:08:06.462738   62061 logs.go:276] 0 containers: []
	W0918 21:08:06.462746   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:06.462753   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:06.462816   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:06.499672   62061 cri.go:89] found id: ""
	I0918 21:08:06.499707   62061 logs.go:276] 0 containers: []
	W0918 21:08:06.499719   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:06.499727   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:06.499781   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:06.540386   62061 cri.go:89] found id: ""
	I0918 21:08:06.540415   62061 logs.go:276] 0 containers: []
	W0918 21:08:06.540426   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:06.540433   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:06.540492   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:06.582919   62061 cri.go:89] found id: ""
	I0918 21:08:06.582946   62061 logs.go:276] 0 containers: []
	W0918 21:08:06.582957   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:06.582966   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:06.582982   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:06.637315   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:06.637355   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:06.650626   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:06.650662   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:06.725605   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:06.725641   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:06.725656   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:06.805431   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:06.805471   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:09.343498   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:09.357396   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:09.357472   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:09.391561   62061 cri.go:89] found id: ""
	I0918 21:08:09.391590   62061 logs.go:276] 0 containers: []
	W0918 21:08:09.391600   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:09.391605   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:09.391655   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:09.429154   62061 cri.go:89] found id: ""
	I0918 21:08:09.429181   62061 logs.go:276] 0 containers: []
	W0918 21:08:09.429190   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:09.429196   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:09.429259   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:09.464807   62061 cri.go:89] found id: ""
	I0918 21:08:09.464848   62061 logs.go:276] 0 containers: []
	W0918 21:08:09.464859   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:09.464866   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:09.464927   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:09.499512   62061 cri.go:89] found id: ""
	I0918 21:08:09.499540   62061 logs.go:276] 0 containers: []
	W0918 21:08:09.499549   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:09.499556   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:09.499619   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:09.534569   62061 cri.go:89] found id: ""
	I0918 21:08:09.534593   62061 logs.go:276] 0 containers: []
	W0918 21:08:09.534601   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:09.534607   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:09.534660   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:09.570387   62061 cri.go:89] found id: ""
	I0918 21:08:09.570414   62061 logs.go:276] 0 containers: []
	W0918 21:08:09.570422   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:09.570428   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:09.570489   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:09.603832   62061 cri.go:89] found id: ""
	I0918 21:08:09.603863   62061 logs.go:276] 0 containers: []
	W0918 21:08:09.603871   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:09.603877   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:09.603923   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:09.638887   62061 cri.go:89] found id: ""
	I0918 21:08:09.638918   62061 logs.go:276] 0 containers: []
	W0918 21:08:09.638930   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:09.638940   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:09.638953   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:09.691197   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:09.691237   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:09.705470   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:09.705500   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:09.776826   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:09.776858   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:09.776871   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:09.851287   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:09.851321   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:12.390895   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:12.406734   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:12.406809   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:12.459302   62061 cri.go:89] found id: ""
	I0918 21:08:12.459331   62061 logs.go:276] 0 containers: []
	W0918 21:08:12.459349   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:12.459360   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:12.459420   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:12.517527   62061 cri.go:89] found id: ""
	I0918 21:08:12.517557   62061 logs.go:276] 0 containers: []
	W0918 21:08:12.517567   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:12.517573   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:12.517628   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:12.549661   62061 cri.go:89] found id: ""
	I0918 21:08:12.549689   62061 logs.go:276] 0 containers: []
	W0918 21:08:12.549696   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:12.549702   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:12.549747   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:12.582970   62061 cri.go:89] found id: ""
	I0918 21:08:12.582996   62061 logs.go:276] 0 containers: []
	W0918 21:08:12.583004   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:12.583009   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:12.583054   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:12.617059   62061 cri.go:89] found id: ""
	I0918 21:08:12.617089   62061 logs.go:276] 0 containers: []
	W0918 21:08:12.617098   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:12.617103   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:12.617153   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:12.651115   62061 cri.go:89] found id: ""
	I0918 21:08:12.651143   62061 logs.go:276] 0 containers: []
	W0918 21:08:12.651156   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:12.651164   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:12.651217   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:12.684727   62061 cri.go:89] found id: ""
	I0918 21:08:12.684758   62061 logs.go:276] 0 containers: []
	W0918 21:08:12.684765   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:12.684771   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:12.684829   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:12.718181   62061 cri.go:89] found id: ""
	I0918 21:08:12.718215   62061 logs.go:276] 0 containers: []
	W0918 21:08:12.718226   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:12.718237   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:12.718252   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:12.760350   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:12.760379   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:12.810028   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:12.810067   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:12.823785   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:12.823815   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:12.898511   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:12.898532   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:12.898545   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:15.476840   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:15.489386   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:15.489470   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:15.524549   62061 cri.go:89] found id: ""
	I0918 21:08:15.524578   62061 logs.go:276] 0 containers: []
	W0918 21:08:15.524585   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:15.524591   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:15.524642   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:15.557834   62061 cri.go:89] found id: ""
	I0918 21:08:15.557867   62061 logs.go:276] 0 containers: []
	W0918 21:08:15.557876   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:15.557883   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:15.557933   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:15.598774   62061 cri.go:89] found id: ""
	I0918 21:08:15.598805   62061 logs.go:276] 0 containers: []
	W0918 21:08:15.598818   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:15.598828   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:15.598882   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:15.633123   62061 cri.go:89] found id: ""
	I0918 21:08:15.633147   62061 logs.go:276] 0 containers: []
	W0918 21:08:15.633155   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:15.633161   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:15.633208   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:15.667281   62061 cri.go:89] found id: ""
	I0918 21:08:15.667307   62061 logs.go:276] 0 containers: []
	W0918 21:08:15.667317   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:15.667323   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:15.667381   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:15.705945   62061 cri.go:89] found id: ""
	I0918 21:08:15.705977   62061 logs.go:276] 0 containers: []
	W0918 21:08:15.705990   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:15.706015   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:15.706093   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:15.739795   62061 cri.go:89] found id: ""
	I0918 21:08:15.739826   62061 logs.go:276] 0 containers: []
	W0918 21:08:15.739836   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:15.739843   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:15.739904   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:15.778520   62061 cri.go:89] found id: ""
	I0918 21:08:15.778556   62061 logs.go:276] 0 containers: []
	W0918 21:08:15.778567   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:15.778579   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:15.778592   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:15.829357   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:15.829394   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:15.842852   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:15.842882   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:15.922438   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:15.922471   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:15.922483   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:16.001687   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:16.001726   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:18.542067   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:18.554783   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:18.554870   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:18.589555   62061 cri.go:89] found id: ""
	I0918 21:08:18.589581   62061 logs.go:276] 0 containers: []
	W0918 21:08:18.589592   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:18.589604   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:18.589667   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:18.623035   62061 cri.go:89] found id: ""
	I0918 21:08:18.623059   62061 logs.go:276] 0 containers: []
	W0918 21:08:18.623067   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:18.623073   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:18.623127   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:18.655875   62061 cri.go:89] found id: ""
	I0918 21:08:18.655901   62061 logs.go:276] 0 containers: []
	W0918 21:08:18.655909   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:18.655915   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:18.655973   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:18.688964   62061 cri.go:89] found id: ""
	I0918 21:08:18.688997   62061 logs.go:276] 0 containers: []
	W0918 21:08:18.689008   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:18.689016   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:18.689080   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:18.723164   62061 cri.go:89] found id: ""
	I0918 21:08:18.723186   62061 logs.go:276] 0 containers: []
	W0918 21:08:18.723196   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:18.723201   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:18.723246   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:18.755022   62061 cri.go:89] found id: ""
	I0918 21:08:18.755048   62061 logs.go:276] 0 containers: []
	W0918 21:08:18.755057   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:18.755063   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:18.755113   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:18.791614   62061 cri.go:89] found id: ""
	I0918 21:08:18.791645   62061 logs.go:276] 0 containers: []
	W0918 21:08:18.791655   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:18.791663   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:18.791731   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:18.823553   62061 cri.go:89] found id: ""
	I0918 21:08:18.823583   62061 logs.go:276] 0 containers: []
	W0918 21:08:18.823590   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:18.823597   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:18.823609   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:18.864528   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:18.864567   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:18.916590   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:18.916628   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:18.930077   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:18.930107   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:18.999958   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:18.999982   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:18.999997   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:21.573718   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:21.588164   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:21.588252   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:21.630044   62061 cri.go:89] found id: ""
	I0918 21:08:21.630075   62061 logs.go:276] 0 containers: []
	W0918 21:08:21.630086   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:21.630094   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:21.630154   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:21.666992   62061 cri.go:89] found id: ""
	I0918 21:08:21.667021   62061 logs.go:276] 0 containers: []
	W0918 21:08:21.667029   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:21.667035   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:21.667083   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:21.702379   62061 cri.go:89] found id: ""
	I0918 21:08:21.702403   62061 logs.go:276] 0 containers: []
	W0918 21:08:21.702411   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:21.702416   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:21.702463   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:21.739877   62061 cri.go:89] found id: ""
	I0918 21:08:21.739908   62061 logs.go:276] 0 containers: []
	W0918 21:08:21.739918   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:21.739923   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:21.739974   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:21.777536   62061 cri.go:89] found id: ""
	I0918 21:08:21.777573   62061 logs.go:276] 0 containers: []
	W0918 21:08:21.777584   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:21.777592   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:21.777652   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:21.812284   62061 cri.go:89] found id: ""
	I0918 21:08:21.812316   62061 logs.go:276] 0 containers: []
	W0918 21:08:21.812325   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:21.812332   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:21.812401   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:21.848143   62061 cri.go:89] found id: ""
	I0918 21:08:21.848176   62061 logs.go:276] 0 containers: []
	W0918 21:08:21.848185   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:21.848191   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:21.848250   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:21.887151   62061 cri.go:89] found id: ""
	I0918 21:08:21.887177   62061 logs.go:276] 0 containers: []
	W0918 21:08:21.887188   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:21.887199   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:21.887213   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:21.939969   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:21.940008   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:21.954128   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:21.954164   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:22.022827   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:22.022853   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:22.022865   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:22.103131   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:22.103172   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
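	The cycle above (probe for a kube-apiserver process, check CRI-O for each control-plane container, then gather kubelet, dmesg, describe-nodes, CRI-O and container-status logs) repeats every few seconds while the apiserver stays down. The same checks can be reproduced by hand on the node; this is a minimal sketch built only from commands already shown in this log, and it assumes shell access to the minikube VM:

	    # Is a kube-apiserver process running at all?
	    sudo pgrep -xnf 'kube-apiserver.*minikube.*'

	    # Does CRI-O know about any control-plane containers (running or exited)?
	    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager; do
	      echo "== $c =="
	      sudo crictl ps -a --quiet --name="$c"
	    done

	    # The log sources minikube falls back to when nothing is found
	    sudo journalctl -u kubelet -n 400
	    sudo journalctl -u crio -n 400
	    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig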
	I0918 21:08:24.642045   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:24.655273   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:24.655343   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:24.687816   62061 cri.go:89] found id: ""
	I0918 21:08:24.687847   62061 logs.go:276] 0 containers: []
	W0918 21:08:24.687858   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:24.687865   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:24.687927   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:24.721276   62061 cri.go:89] found id: ""
	I0918 21:08:24.721303   62061 logs.go:276] 0 containers: []
	W0918 21:08:24.721311   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:24.721316   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:24.721366   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:24.753874   62061 cri.go:89] found id: ""
	I0918 21:08:24.753904   62061 logs.go:276] 0 containers: []
	W0918 21:08:24.753911   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:24.753917   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:24.753967   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:24.789107   62061 cri.go:89] found id: ""
	I0918 21:08:24.789148   62061 logs.go:276] 0 containers: []
	W0918 21:08:24.789163   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:24.789170   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:24.789219   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:24.826283   62061 cri.go:89] found id: ""
	I0918 21:08:24.826316   62061 logs.go:276] 0 containers: []
	W0918 21:08:24.826329   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:24.826337   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:24.826401   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:24.859878   62061 cri.go:89] found id: ""
	I0918 21:08:24.859907   62061 logs.go:276] 0 containers: []
	W0918 21:08:24.859917   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:24.859924   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:24.859982   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:24.895717   62061 cri.go:89] found id: ""
	I0918 21:08:24.895747   62061 logs.go:276] 0 containers: []
	W0918 21:08:24.895758   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:24.895766   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:24.895830   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:24.930207   62061 cri.go:89] found id: ""
	I0918 21:08:24.930239   62061 logs.go:276] 0 containers: []
	W0918 21:08:24.930250   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:24.930262   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:24.930279   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:24.967939   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:24.967981   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:25.022526   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:25.022569   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:25.036175   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:25.036214   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:25.104251   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:25.104277   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:25.104294   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:27.686943   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:27.700071   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:27.700147   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:27.735245   62061 cri.go:89] found id: ""
	I0918 21:08:27.735277   62061 logs.go:276] 0 containers: []
	W0918 21:08:27.735286   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:27.735291   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:27.735349   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:27.768945   62061 cri.go:89] found id: ""
	I0918 21:08:27.768974   62061 logs.go:276] 0 containers: []
	W0918 21:08:27.768985   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:27.768993   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:27.769055   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:27.805425   62061 cri.go:89] found id: ""
	I0918 21:08:27.805457   62061 logs.go:276] 0 containers: []
	W0918 21:08:27.805468   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:27.805474   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:27.805523   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:27.838049   62061 cri.go:89] found id: ""
	I0918 21:08:27.838081   62061 logs.go:276] 0 containers: []
	W0918 21:08:27.838091   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:27.838098   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:27.838163   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:27.873961   62061 cri.go:89] found id: ""
	I0918 21:08:27.873986   62061 logs.go:276] 0 containers: []
	W0918 21:08:27.873994   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:27.874001   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:27.874064   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:27.908822   62061 cri.go:89] found id: ""
	I0918 21:08:27.908846   62061 logs.go:276] 0 containers: []
	W0918 21:08:27.908854   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:27.908860   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:27.908915   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:27.943758   62061 cri.go:89] found id: ""
	I0918 21:08:27.943794   62061 logs.go:276] 0 containers: []
	W0918 21:08:27.943802   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:27.943808   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:27.943875   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:27.978136   62061 cri.go:89] found id: ""
	I0918 21:08:27.978168   62061 logs.go:276] 0 containers: []
	W0918 21:08:27.978179   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:27.978189   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:27.978202   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:27.992744   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:27.992773   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:28.070339   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:28.070362   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:28.070374   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:28.152405   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:28.152452   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:28.191190   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:28.191220   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:30.742507   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:30.756451   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:30.756553   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:30.790746   62061 cri.go:89] found id: ""
	I0918 21:08:30.790771   62061 logs.go:276] 0 containers: []
	W0918 21:08:30.790781   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:30.790787   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:30.790851   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:30.823633   62061 cri.go:89] found id: ""
	I0918 21:08:30.823670   62061 logs.go:276] 0 containers: []
	W0918 21:08:30.823682   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:30.823689   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:30.823754   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:30.861912   62061 cri.go:89] found id: ""
	I0918 21:08:30.861936   62061 logs.go:276] 0 containers: []
	W0918 21:08:30.861943   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:30.861949   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:30.862000   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:30.899452   62061 cri.go:89] found id: ""
	I0918 21:08:30.899481   62061 logs.go:276] 0 containers: []
	W0918 21:08:30.899489   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:30.899495   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:30.899562   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:30.935871   62061 cri.go:89] found id: ""
	I0918 21:08:30.935898   62061 logs.go:276] 0 containers: []
	W0918 21:08:30.935906   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:30.935912   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:30.935969   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:30.974608   62061 cri.go:89] found id: ""
	I0918 21:08:30.974643   62061 logs.go:276] 0 containers: []
	W0918 21:08:30.974655   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:30.974663   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:30.974722   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:31.008249   62061 cri.go:89] found id: ""
	I0918 21:08:31.008279   62061 logs.go:276] 0 containers: []
	W0918 21:08:31.008290   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:31.008297   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:31.008366   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:31.047042   62061 cri.go:89] found id: ""
	I0918 21:08:31.047075   62061 logs.go:276] 0 containers: []
	W0918 21:08:31.047083   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:31.047093   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:31.047107   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:31.098961   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:31.099001   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:31.113116   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:31.113147   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:31.179609   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:31.179650   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:31.179664   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:31.258299   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:31.258335   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:33.798360   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:33.812105   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:33.812172   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:33.848120   62061 cri.go:89] found id: ""
	I0918 21:08:33.848149   62061 logs.go:276] 0 containers: []
	W0918 21:08:33.848160   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:33.848169   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:33.848231   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:33.884974   62061 cri.go:89] found id: ""
	I0918 21:08:33.885007   62061 logs.go:276] 0 containers: []
	W0918 21:08:33.885019   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:33.885028   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:33.885104   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:33.919613   62061 cri.go:89] found id: ""
	I0918 21:08:33.919658   62061 logs.go:276] 0 containers: []
	W0918 21:08:33.919667   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:33.919673   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:33.919721   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:33.953074   62061 cri.go:89] found id: ""
	I0918 21:08:33.953112   62061 logs.go:276] 0 containers: []
	W0918 21:08:33.953120   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:33.953125   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:33.953185   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:33.985493   62061 cri.go:89] found id: ""
	I0918 21:08:33.985521   62061 logs.go:276] 0 containers: []
	W0918 21:08:33.985532   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:33.985539   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:33.985630   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:34.020938   62061 cri.go:89] found id: ""
	I0918 21:08:34.020962   62061 logs.go:276] 0 containers: []
	W0918 21:08:34.020972   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:34.020981   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:34.021047   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:34.053947   62061 cri.go:89] found id: ""
	I0918 21:08:34.053977   62061 logs.go:276] 0 containers: []
	W0918 21:08:34.053988   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:34.053996   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:34.054060   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:34.090072   62061 cri.go:89] found id: ""
	I0918 21:08:34.090110   62061 logs.go:276] 0 containers: []
	W0918 21:08:34.090123   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:34.090133   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:34.090145   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:34.142069   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:34.142107   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:34.156709   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:34.156740   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:34.232644   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:34.232672   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:34.232685   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:34.311833   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:34.311878   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:36.850811   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:36.864479   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:36.864558   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:36.902595   62061 cri.go:89] found id: ""
	I0918 21:08:36.902628   62061 logs.go:276] 0 containers: []
	W0918 21:08:36.902640   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:36.902649   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:36.902706   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:36.940336   62061 cri.go:89] found id: ""
	I0918 21:08:36.940385   62061 logs.go:276] 0 containers: []
	W0918 21:08:36.940394   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:36.940400   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:36.940465   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:36.973909   62061 cri.go:89] found id: ""
	I0918 21:08:36.973942   62061 logs.go:276] 0 containers: []
	W0918 21:08:36.973952   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:36.973958   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:36.974013   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:37.008766   62061 cri.go:89] found id: ""
	I0918 21:08:37.008791   62061 logs.go:276] 0 containers: []
	W0918 21:08:37.008799   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:37.008805   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:37.008852   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:37.041633   62061 cri.go:89] found id: ""
	I0918 21:08:37.041669   62061 logs.go:276] 0 containers: []
	W0918 21:08:37.041681   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:37.041688   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:37.041750   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:37.075154   62061 cri.go:89] found id: ""
	I0918 21:08:37.075188   62061 logs.go:276] 0 containers: []
	W0918 21:08:37.075197   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:37.075204   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:37.075268   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:37.111083   62061 cri.go:89] found id: ""
	I0918 21:08:37.111119   62061 logs.go:276] 0 containers: []
	W0918 21:08:37.111130   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:37.111138   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:37.111192   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:37.147894   62061 cri.go:89] found id: ""
	I0918 21:08:37.147925   62061 logs.go:276] 0 containers: []
	W0918 21:08:37.147936   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:37.147948   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:37.147962   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:37.200102   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:37.200141   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:37.213511   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:37.213537   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:37.286978   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:37.287012   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:37.287027   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:37.368153   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:37.368191   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:39.907600   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:39.922325   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:39.922395   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:39.958150   62061 cri.go:89] found id: ""
	I0918 21:08:39.958175   62061 logs.go:276] 0 containers: []
	W0918 21:08:39.958183   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:39.958189   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:39.958254   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:39.993916   62061 cri.go:89] found id: ""
	I0918 21:08:39.993945   62061 logs.go:276] 0 containers: []
	W0918 21:08:39.993956   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:39.993963   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:39.994026   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:40.031078   62061 cri.go:89] found id: ""
	I0918 21:08:40.031114   62061 logs.go:276] 0 containers: []
	W0918 21:08:40.031126   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:40.031133   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:40.031194   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:40.065028   62061 cri.go:89] found id: ""
	I0918 21:08:40.065054   62061 logs.go:276] 0 containers: []
	W0918 21:08:40.065065   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:40.065072   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:40.065129   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:40.099437   62061 cri.go:89] found id: ""
	I0918 21:08:40.099466   62061 logs.go:276] 0 containers: []
	W0918 21:08:40.099474   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:40.099480   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:40.099544   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:40.139841   62061 cri.go:89] found id: ""
	I0918 21:08:40.139866   62061 logs.go:276] 0 containers: []
	W0918 21:08:40.139874   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:40.139880   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:40.139936   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:40.175287   62061 cri.go:89] found id: ""
	I0918 21:08:40.175316   62061 logs.go:276] 0 containers: []
	W0918 21:08:40.175327   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:40.175334   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:40.175397   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:40.208646   62061 cri.go:89] found id: ""
	I0918 21:08:40.208677   62061 logs.go:276] 0 containers: []
	W0918 21:08:40.208690   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:40.208701   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:40.208712   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:40.262944   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:40.262982   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:40.276192   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:40.276222   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:40.346393   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:40.346414   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:40.346426   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:40.433797   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:40.433848   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:42.976574   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:42.990198   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:42.990263   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:43.031744   62061 cri.go:89] found id: ""
	I0918 21:08:43.031774   62061 logs.go:276] 0 containers: []
	W0918 21:08:43.031784   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:43.031791   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:43.031854   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:43.065198   62061 cri.go:89] found id: ""
	I0918 21:08:43.065240   62061 logs.go:276] 0 containers: []
	W0918 21:08:43.065248   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:43.065261   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:43.065319   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:43.099292   62061 cri.go:89] found id: ""
	I0918 21:08:43.099320   62061 logs.go:276] 0 containers: []
	W0918 21:08:43.099328   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:43.099333   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:43.099388   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:43.135085   62061 cri.go:89] found id: ""
	I0918 21:08:43.135110   62061 logs.go:276] 0 containers: []
	W0918 21:08:43.135119   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:43.135131   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:43.135190   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:43.173255   62061 cri.go:89] found id: ""
	I0918 21:08:43.173312   62061 logs.go:276] 0 containers: []
	W0918 21:08:43.173326   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:43.173335   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:43.173433   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:43.209249   62061 cri.go:89] found id: ""
	I0918 21:08:43.209282   62061 logs.go:276] 0 containers: []
	W0918 21:08:43.209294   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:43.209303   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:43.209377   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:43.242333   62061 cri.go:89] found id: ""
	I0918 21:08:43.242366   62061 logs.go:276] 0 containers: []
	W0918 21:08:43.242376   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:43.242383   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:43.242449   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:43.278099   62061 cri.go:89] found id: ""
	I0918 21:08:43.278128   62061 logs.go:276] 0 containers: []
	W0918 21:08:43.278136   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:43.278146   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:43.278164   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:43.329621   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:43.329661   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:43.343357   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:43.343402   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:43.415392   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:43.415419   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:43.415435   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:43.501634   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:43.501670   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:46.043925   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:46.057457   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:46.057540   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:46.091987   62061 cri.go:89] found id: ""
	I0918 21:08:46.092036   62061 logs.go:276] 0 containers: []
	W0918 21:08:46.092047   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:46.092055   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:46.092118   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:46.131171   62061 cri.go:89] found id: ""
	I0918 21:08:46.131195   62061 logs.go:276] 0 containers: []
	W0918 21:08:46.131203   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:46.131209   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:46.131266   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:46.167324   62061 cri.go:89] found id: ""
	I0918 21:08:46.167360   62061 logs.go:276] 0 containers: []
	W0918 21:08:46.167369   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:46.167375   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:46.167433   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:46.200940   62061 cri.go:89] found id: ""
	I0918 21:08:46.200969   62061 logs.go:276] 0 containers: []
	W0918 21:08:46.200978   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:46.200983   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:46.201042   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:46.233736   62061 cri.go:89] found id: ""
	I0918 21:08:46.233765   62061 logs.go:276] 0 containers: []
	W0918 21:08:46.233773   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:46.233779   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:46.233841   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:46.267223   62061 cri.go:89] found id: ""
	I0918 21:08:46.267250   62061 logs.go:276] 0 containers: []
	W0918 21:08:46.267261   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:46.267268   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:46.267331   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:46.302218   62061 cri.go:89] found id: ""
	I0918 21:08:46.302245   62061 logs.go:276] 0 containers: []
	W0918 21:08:46.302255   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:46.302262   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:46.302326   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:46.334979   62061 cri.go:89] found id: ""
	I0918 21:08:46.335005   62061 logs.go:276] 0 containers: []
	W0918 21:08:46.335015   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:46.335024   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:46.335038   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:46.422905   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:46.422929   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:46.422948   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:46.504831   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:46.504884   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:46.543954   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:46.543981   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:46.595817   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:46.595854   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
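	Every "describe nodes" attempt in this stretch fails the same way: kubectl cannot reach the apiserver at localhost:8443. A quick way to confirm from inside the VM whether anything is listening on that port is sketched below; this is a generic diagnostic suggestion, not a command the test itself runs, and it assumes ss and curl are available in the guest:

	    # Is anything bound to port 8443?
	    sudo ss -tlnp | grep 8443

	    # Does the apiserver answer its health endpoint?
	    curl -ks https://localhost:8443/healthz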
	I0918 21:08:49.109984   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:49.124966   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:49.125052   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:49.163272   62061 cri.go:89] found id: ""
	I0918 21:08:49.163302   62061 logs.go:276] 0 containers: []
	W0918 21:08:49.163334   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:49.163342   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:49.163411   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:49.199973   62061 cri.go:89] found id: ""
	I0918 21:08:49.200003   62061 logs.go:276] 0 containers: []
	W0918 21:08:49.200037   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:49.200046   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:49.200111   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:49.235980   62061 cri.go:89] found id: ""
	I0918 21:08:49.236036   62061 logs.go:276] 0 containers: []
	W0918 21:08:49.236050   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:49.236061   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:49.236123   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:49.271162   62061 cri.go:89] found id: ""
	I0918 21:08:49.271386   62061 logs.go:276] 0 containers: []
	W0918 21:08:49.271404   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:49.271413   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:49.271483   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:49.305799   62061 cri.go:89] found id: ""
	I0918 21:08:49.305831   62061 logs.go:276] 0 containers: []
	W0918 21:08:49.305842   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:49.305848   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:49.305899   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:49.342170   62061 cri.go:89] found id: ""
	I0918 21:08:49.342195   62061 logs.go:276] 0 containers: []
	W0918 21:08:49.342202   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:49.342208   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:49.342265   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:49.374071   62061 cri.go:89] found id: ""
	I0918 21:08:49.374098   62061 logs.go:276] 0 containers: []
	W0918 21:08:49.374108   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:49.374116   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:49.374186   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:49.407614   62061 cri.go:89] found id: ""
	I0918 21:08:49.407645   62061 logs.go:276] 0 containers: []
	W0918 21:08:49.407657   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:49.407669   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:49.407685   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:49.458433   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:49.458473   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:49.471490   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:49.471519   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:49.533874   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:49.533901   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:49.533916   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:49.611711   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:49.611744   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:52.157839   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:52.170690   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:52.170770   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:52.208737   62061 cri.go:89] found id: ""
	I0918 21:08:52.208767   62061 logs.go:276] 0 containers: []
	W0918 21:08:52.208779   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:52.208787   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:52.208846   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:52.242624   62061 cri.go:89] found id: ""
	I0918 21:08:52.242658   62061 logs.go:276] 0 containers: []
	W0918 21:08:52.242669   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:52.242677   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:52.242742   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:52.280686   62061 cri.go:89] found id: ""
	I0918 21:08:52.280717   62061 logs.go:276] 0 containers: []
	W0918 21:08:52.280728   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:52.280736   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:52.280798   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:52.313748   62061 cri.go:89] found id: ""
	I0918 21:08:52.313777   62061 logs.go:276] 0 containers: []
	W0918 21:08:52.313785   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:52.313791   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:52.313840   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:52.353072   62061 cri.go:89] found id: ""
	I0918 21:08:52.353102   62061 logs.go:276] 0 containers: []
	W0918 21:08:52.353124   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:52.353132   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:52.353195   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:52.390358   62061 cri.go:89] found id: ""
	I0918 21:08:52.390384   62061 logs.go:276] 0 containers: []
	W0918 21:08:52.390392   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:52.390398   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:52.390448   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:52.430039   62061 cri.go:89] found id: ""
	I0918 21:08:52.430068   62061 logs.go:276] 0 containers: []
	W0918 21:08:52.430081   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:52.430088   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:52.430146   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:52.466096   62061 cri.go:89] found id: ""
	I0918 21:08:52.466125   62061 logs.go:276] 0 containers: []
	W0918 21:08:52.466137   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:52.466149   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:52.466162   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:52.518643   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:52.518678   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:52.531768   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:52.531797   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:52.605130   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:52.605163   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:52.605181   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:52.686510   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:52.686553   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:55.225867   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:55.240537   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:55.240618   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:55.276461   62061 cri.go:89] found id: ""
	I0918 21:08:55.276490   62061 logs.go:276] 0 containers: []
	W0918 21:08:55.276498   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:55.276504   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:55.276564   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:55.313449   62061 cri.go:89] found id: ""
	I0918 21:08:55.313482   62061 logs.go:276] 0 containers: []
	W0918 21:08:55.313493   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:55.313499   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:55.313551   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:55.352436   62061 cri.go:89] found id: ""
	I0918 21:08:55.352475   62061 logs.go:276] 0 containers: []
	W0918 21:08:55.352485   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:55.352492   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:55.352560   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:55.390433   62061 cri.go:89] found id: ""
	I0918 21:08:55.390458   62061 logs.go:276] 0 containers: []
	W0918 21:08:55.390466   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:55.390472   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:55.390529   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:55.428426   62061 cri.go:89] found id: ""
	I0918 21:08:55.428455   62061 logs.go:276] 0 containers: []
	W0918 21:08:55.428465   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:55.428473   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:55.428540   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:55.465587   62061 cri.go:89] found id: ""
	I0918 21:08:55.465622   62061 logs.go:276] 0 containers: []
	W0918 21:08:55.465633   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:55.465641   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:55.465710   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:55.502137   62061 cri.go:89] found id: ""
	I0918 21:08:55.502185   62061 logs.go:276] 0 containers: []
	W0918 21:08:55.502196   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:55.502203   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:55.502266   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:55.535992   62061 cri.go:89] found id: ""
	I0918 21:08:55.536037   62061 logs.go:276] 0 containers: []
	W0918 21:08:55.536050   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:55.536060   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:55.536078   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:55.549267   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:55.549296   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:55.616522   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:55.616556   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:55.616572   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:55.698822   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:55.698874   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:55.740234   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:55.740264   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:58.294876   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:58.307543   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:58.307630   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:58.340435   62061 cri.go:89] found id: ""
	I0918 21:08:58.340467   62061 logs.go:276] 0 containers: []
	W0918 21:08:58.340479   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:58.340486   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:58.340545   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:58.374759   62061 cri.go:89] found id: ""
	I0918 21:08:58.374792   62061 logs.go:276] 0 containers: []
	W0918 21:08:58.374804   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:58.374810   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:58.374863   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:58.411756   62061 cri.go:89] found id: ""
	I0918 21:08:58.411787   62061 logs.go:276] 0 containers: []
	W0918 21:08:58.411797   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:58.411804   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:58.411867   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:58.449787   62061 cri.go:89] found id: ""
	I0918 21:08:58.449820   62061 logs.go:276] 0 containers: []
	W0918 21:08:58.449832   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:58.449839   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:58.449903   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:58.485212   62061 cri.go:89] found id: ""
	I0918 21:08:58.485243   62061 logs.go:276] 0 containers: []
	W0918 21:08:58.485254   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:58.485261   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:58.485331   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:58.528667   62061 cri.go:89] found id: ""
	I0918 21:08:58.528696   62061 logs.go:276] 0 containers: []
	W0918 21:08:58.528706   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:58.528714   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:58.528775   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:58.568201   62061 cri.go:89] found id: ""
	I0918 21:08:58.568231   62061 logs.go:276] 0 containers: []
	W0918 21:08:58.568241   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:58.568247   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:58.568305   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:58.623952   62061 cri.go:89] found id: ""
	I0918 21:08:58.623982   62061 logs.go:276] 0 containers: []
	W0918 21:08:58.623991   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:58.624000   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:58.624011   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:58.665418   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:58.665457   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:58.713464   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:58.713504   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:58.727511   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:58.727552   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:58.799004   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:58.799035   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:58.799050   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:09:01.389457   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:09:01.404658   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:09:01.404734   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:09:01.439139   62061 cri.go:89] found id: ""
	I0918 21:09:01.439170   62061 logs.go:276] 0 containers: []
	W0918 21:09:01.439180   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:09:01.439187   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:09:01.439251   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:09:01.473866   62061 cri.go:89] found id: ""
	I0918 21:09:01.473896   62061 logs.go:276] 0 containers: []
	W0918 21:09:01.473907   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:09:01.473915   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:09:01.473978   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:09:01.506734   62061 cri.go:89] found id: ""
	I0918 21:09:01.506767   62061 logs.go:276] 0 containers: []
	W0918 21:09:01.506777   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:09:01.506783   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:09:01.506836   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:09:01.540123   62061 cri.go:89] found id: ""
	I0918 21:09:01.540152   62061 logs.go:276] 0 containers: []
	W0918 21:09:01.540162   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:09:01.540169   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:09:01.540236   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:09:01.575002   62061 cri.go:89] found id: ""
	I0918 21:09:01.575037   62061 logs.go:276] 0 containers: []
	W0918 21:09:01.575048   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:09:01.575071   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:09:01.575159   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:09:01.612285   62061 cri.go:89] found id: ""
	I0918 21:09:01.612316   62061 logs.go:276] 0 containers: []
	W0918 21:09:01.612327   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:09:01.612335   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:09:01.612399   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:09:01.648276   62061 cri.go:89] found id: ""
	I0918 21:09:01.648304   62061 logs.go:276] 0 containers: []
	W0918 21:09:01.648318   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:09:01.648325   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:09:01.648397   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:09:01.686192   62061 cri.go:89] found id: ""
	I0918 21:09:01.686220   62061 logs.go:276] 0 containers: []
	W0918 21:09:01.686228   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:09:01.686236   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:09:01.686247   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:09:01.738366   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:09:01.738408   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:09:01.752650   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:09:01.752683   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:09:01.825010   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:09:01.825114   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:09:01.825156   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:09:01.907401   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:09:01.907448   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:09:04.448316   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:09:04.461242   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:09:04.461304   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:09:04.495951   62061 cri.go:89] found id: ""
	I0918 21:09:04.495984   62061 logs.go:276] 0 containers: []
	W0918 21:09:04.495997   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:09:04.496006   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:09:04.496090   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:09:04.536874   62061 cri.go:89] found id: ""
	I0918 21:09:04.536906   62061 logs.go:276] 0 containers: []
	W0918 21:09:04.536917   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:09:04.536935   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:09:04.537010   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:09:04.572601   62061 cri.go:89] found id: ""
	I0918 21:09:04.572634   62061 logs.go:276] 0 containers: []
	W0918 21:09:04.572646   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:09:04.572653   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:09:04.572716   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:09:04.609783   62061 cri.go:89] found id: ""
	I0918 21:09:04.609817   62061 logs.go:276] 0 containers: []
	W0918 21:09:04.609826   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:09:04.609832   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:09:04.609891   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:09:04.645124   62061 cri.go:89] found id: ""
	I0918 21:09:04.645156   62061 logs.go:276] 0 containers: []
	W0918 21:09:04.645167   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:09:04.645175   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:09:04.645241   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:09:04.680927   62061 cri.go:89] found id: ""
	I0918 21:09:04.680959   62061 logs.go:276] 0 containers: []
	W0918 21:09:04.680971   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:09:04.680978   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:09:04.681038   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:09:04.718920   62061 cri.go:89] found id: ""
	I0918 21:09:04.718954   62061 logs.go:276] 0 containers: []
	W0918 21:09:04.718972   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:09:04.718979   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:09:04.719039   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:09:04.751450   62061 cri.go:89] found id: ""
	I0918 21:09:04.751488   62061 logs.go:276] 0 containers: []
	W0918 21:09:04.751500   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:09:04.751511   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:09:04.751529   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:09:04.788969   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:09:04.789001   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:09:04.837638   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:09:04.837673   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:09:04.853673   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:09:04.853706   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:09:04.923670   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:09:04.923703   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:09:04.923717   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:09:07.507979   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:09:07.521581   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:09:07.521656   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:09:07.556280   62061 cri.go:89] found id: ""
	I0918 21:09:07.556310   62061 logs.go:276] 0 containers: []
	W0918 21:09:07.556321   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:09:07.556329   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:09:07.556397   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:09:07.590775   62061 cri.go:89] found id: ""
	I0918 21:09:07.590802   62061 logs.go:276] 0 containers: []
	W0918 21:09:07.590810   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:09:07.590815   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:09:07.590862   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:09:07.625971   62061 cri.go:89] found id: ""
	I0918 21:09:07.626000   62061 logs.go:276] 0 containers: []
	W0918 21:09:07.626010   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:09:07.626018   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:09:07.626080   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:09:07.660083   62061 cri.go:89] found id: ""
	I0918 21:09:07.660116   62061 logs.go:276] 0 containers: []
	W0918 21:09:07.660128   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:09:07.660136   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:09:07.660201   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:09:07.694165   62061 cri.go:89] found id: ""
	I0918 21:09:07.694195   62061 logs.go:276] 0 containers: []
	W0918 21:09:07.694204   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:09:07.694211   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:09:07.694269   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:09:07.728298   62061 cri.go:89] found id: ""
	I0918 21:09:07.728328   62061 logs.go:276] 0 containers: []
	W0918 21:09:07.728338   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:09:07.728349   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:09:07.728409   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:09:07.762507   62061 cri.go:89] found id: ""
	I0918 21:09:07.762546   62061 logs.go:276] 0 containers: []
	W0918 21:09:07.762555   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:09:07.762565   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:09:07.762745   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:09:07.798006   62061 cri.go:89] found id: ""
	I0918 21:09:07.798038   62061 logs.go:276] 0 containers: []
	W0918 21:09:07.798049   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:09:07.798059   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:09:07.798074   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:09:07.849222   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:09:07.849267   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:09:07.865023   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:09:07.865055   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:09:07.938810   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:09:07.938830   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:09:07.938842   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:09:08.018885   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:09:08.018924   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:09:10.562565   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:09:10.575854   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:09:10.575941   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:09:10.612853   62061 cri.go:89] found id: ""
	I0918 21:09:10.612884   62061 logs.go:276] 0 containers: []
	W0918 21:09:10.612896   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:09:10.612906   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:09:10.612966   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:09:10.648672   62061 cri.go:89] found id: ""
	I0918 21:09:10.648703   62061 logs.go:276] 0 containers: []
	W0918 21:09:10.648713   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:09:10.648720   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:09:10.648780   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:09:10.682337   62061 cri.go:89] found id: ""
	I0918 21:09:10.682370   62061 logs.go:276] 0 containers: []
	W0918 21:09:10.682388   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:09:10.682394   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:09:10.682445   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:09:10.718221   62061 cri.go:89] found id: ""
	I0918 21:09:10.718257   62061 logs.go:276] 0 containers: []
	W0918 21:09:10.718269   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:09:10.718277   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:09:10.718345   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:09:10.751570   62061 cri.go:89] found id: ""
	I0918 21:09:10.751600   62061 logs.go:276] 0 containers: []
	W0918 21:09:10.751609   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:09:10.751615   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:09:10.751706   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:09:10.792125   62061 cri.go:89] found id: ""
	I0918 21:09:10.792158   62061 logs.go:276] 0 containers: []
	W0918 21:09:10.792170   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:09:10.792178   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:09:10.792245   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:09:10.830699   62061 cri.go:89] found id: ""
	I0918 21:09:10.830733   62061 logs.go:276] 0 containers: []
	W0918 21:09:10.830742   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:09:10.830748   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:09:10.830820   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:09:10.869625   62061 cri.go:89] found id: ""
	I0918 21:09:10.869655   62061 logs.go:276] 0 containers: []
	W0918 21:09:10.869663   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:09:10.869672   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:09:10.869684   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:09:10.921340   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:09:10.921378   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:09:10.937032   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:09:10.937071   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:09:11.006248   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:09:11.006276   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:09:11.006291   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:09:11.086458   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:09:11.086496   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:09:13.628824   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:09:13.642499   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:09:13.642578   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:09:13.677337   62061 cri.go:89] found id: ""
	I0918 21:09:13.677368   62061 logs.go:276] 0 containers: []
	W0918 21:09:13.677378   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:09:13.677385   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:09:13.677449   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:09:13.717317   62061 cri.go:89] found id: ""
	I0918 21:09:13.717341   62061 logs.go:276] 0 containers: []
	W0918 21:09:13.717353   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:09:13.717358   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:09:13.717419   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:09:13.752151   62061 cri.go:89] found id: ""
	I0918 21:09:13.752181   62061 logs.go:276] 0 containers: []
	W0918 21:09:13.752189   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:09:13.752195   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:09:13.752253   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:09:13.791946   62061 cri.go:89] found id: ""
	I0918 21:09:13.791975   62061 logs.go:276] 0 containers: []
	W0918 21:09:13.791983   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:09:13.791989   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:09:13.792069   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:09:13.825173   62061 cri.go:89] found id: ""
	I0918 21:09:13.825198   62061 logs.go:276] 0 containers: []
	W0918 21:09:13.825209   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:09:13.825216   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:09:13.825276   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:09:13.859801   62061 cri.go:89] found id: ""
	I0918 21:09:13.859834   62061 logs.go:276] 0 containers: []
	W0918 21:09:13.859846   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:09:13.859853   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:09:13.859907   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:09:13.895413   62061 cri.go:89] found id: ""
	I0918 21:09:13.895445   62061 logs.go:276] 0 containers: []
	W0918 21:09:13.895456   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:09:13.895463   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:09:13.895515   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:09:13.929048   62061 cri.go:89] found id: ""
	I0918 21:09:13.929075   62061 logs.go:276] 0 containers: []
	W0918 21:09:13.929083   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:09:13.929092   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:09:13.929104   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:09:13.981579   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:09:13.981613   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:09:13.995642   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:09:13.995679   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:09:14.061762   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:09:14.061782   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:09:14.061793   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:09:14.139623   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:09:14.139659   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:09:16.686071   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:09:16.699769   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:09:16.699844   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:09:16.735242   62061 cri.go:89] found id: ""
	I0918 21:09:16.735277   62061 logs.go:276] 0 containers: []
	W0918 21:09:16.735288   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:09:16.735298   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:09:16.735371   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:09:16.770027   62061 cri.go:89] found id: ""
	I0918 21:09:16.770052   62061 logs.go:276] 0 containers: []
	W0918 21:09:16.770060   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:09:16.770066   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:09:16.770114   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:09:16.806525   62061 cri.go:89] found id: ""
	I0918 21:09:16.806555   62061 logs.go:276] 0 containers: []
	W0918 21:09:16.806563   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:09:16.806569   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:09:16.806636   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:09:16.851146   62061 cri.go:89] found id: ""
	I0918 21:09:16.851183   62061 logs.go:276] 0 containers: []
	W0918 21:09:16.851194   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:09:16.851204   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:09:16.851271   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:09:16.890718   62061 cri.go:89] found id: ""
	I0918 21:09:16.890748   62061 logs.go:276] 0 containers: []
	W0918 21:09:16.890760   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:09:16.890767   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:09:16.890824   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:09:16.928971   62061 cri.go:89] found id: ""
	I0918 21:09:16.929002   62061 logs.go:276] 0 containers: []
	W0918 21:09:16.929012   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:09:16.929020   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:09:16.929079   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:09:16.970965   62061 cri.go:89] found id: ""
	I0918 21:09:16.970999   62061 logs.go:276] 0 containers: []
	W0918 21:09:16.971011   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:09:16.971019   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:09:16.971089   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:09:17.006427   62061 cri.go:89] found id: ""
	I0918 21:09:17.006453   62061 logs.go:276] 0 containers: []
	W0918 21:09:17.006461   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:09:17.006468   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:09:17.006480   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:09:17.058690   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:09:17.058733   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:09:17.072593   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:09:17.072623   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:09:17.143046   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:09:17.143071   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:09:17.143082   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:09:17.236943   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:09:17.236989   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:09:19.782266   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:09:19.799433   62061 kubeadm.go:597] duration metric: took 4m2.126311085s to restartPrimaryControlPlane
	W0918 21:09:19.799513   62061 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0918 21:09:19.799543   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0918 21:09:20.910192   62061 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.110625463s)
	I0918 21:09:20.910273   62061 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0918 21:09:20.925992   62061 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0918 21:09:20.936876   62061 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0918 21:09:20.947170   62061 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0918 21:09:20.947199   62061 kubeadm.go:157] found existing configuration files:
	
	I0918 21:09:20.947255   62061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0918 21:09:20.958140   62061 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0918 21:09:20.958240   62061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0918 21:09:20.968351   62061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0918 21:09:20.978669   62061 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0918 21:09:20.978735   62061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0918 21:09:20.989765   62061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0918 21:09:20.999842   62061 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0918 21:09:20.999903   62061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0918 21:09:21.009945   62061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0918 21:09:21.020229   62061 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0918 21:09:21.020289   62061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0918 21:09:21.030583   62061 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0918 21:09:21.271399   62061 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0918 21:11:17.441368   62061 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0918 21:11:17.441607   62061 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0918 21:11:17.442921   62061 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0918 21:11:17.443036   62061 kubeadm.go:310] [preflight] Running pre-flight checks
	I0918 21:11:17.443221   62061 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0918 21:11:17.443500   62061 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0918 21:11:17.443818   62061 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0918 21:11:17.444099   62061 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0918 21:11:17.446858   62061 out.go:235]   - Generating certificates and keys ...
	I0918 21:11:17.446965   62061 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0918 21:11:17.447048   62061 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0918 21:11:17.447160   62061 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0918 21:11:17.447248   62061 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0918 21:11:17.447349   62061 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0918 21:11:17.447425   62061 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0918 21:11:17.447507   62061 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0918 21:11:17.447587   62061 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0918 21:11:17.447742   62061 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0918 21:11:17.447847   62061 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0918 21:11:17.447911   62061 kubeadm.go:310] [certs] Using the existing "sa" key
	I0918 21:11:17.447984   62061 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0918 21:11:17.448085   62061 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0918 21:11:17.448163   62061 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0918 21:11:17.448255   62061 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0918 21:11:17.448339   62061 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0918 21:11:17.448486   62061 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0918 21:11:17.448590   62061 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0918 21:11:17.448645   62061 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0918 21:11:17.448733   62061 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0918 21:11:17.450203   62061 out.go:235]   - Booting up control plane ...
	I0918 21:11:17.450299   62061 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0918 21:11:17.450434   62061 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0918 21:11:17.450506   62061 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0918 21:11:17.450578   62061 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0918 21:11:17.450752   62061 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0918 21:11:17.450805   62061 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0918 21:11:17.450863   62061 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0918 21:11:17.451034   62061 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0918 21:11:17.451090   62061 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0918 21:11:17.451259   62061 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0918 21:11:17.451325   62061 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0918 21:11:17.451498   62061 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0918 21:11:17.451562   62061 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0918 21:11:17.451765   62061 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0918 21:11:17.451845   62061 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0918 21:11:17.452027   62061 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0918 21:11:17.452041   62061 kubeadm.go:310] 
	I0918 21:11:17.452083   62061 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0918 21:11:17.452118   62061 kubeadm.go:310] 		timed out waiting for the condition
	I0918 21:11:17.452127   62061 kubeadm.go:310] 
	I0918 21:11:17.452160   62061 kubeadm.go:310] 	This error is likely caused by:
	I0918 21:11:17.452189   62061 kubeadm.go:310] 		- The kubelet is not running
	I0918 21:11:17.452282   62061 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0918 21:11:17.452289   62061 kubeadm.go:310] 
	I0918 21:11:17.452385   62061 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0918 21:11:17.452425   62061 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0918 21:11:17.452473   62061 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0918 21:11:17.452482   62061 kubeadm.go:310] 
	I0918 21:11:17.452578   62061 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0918 21:11:17.452649   62061 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0918 21:11:17.452656   62061 kubeadm.go:310] 
	I0918 21:11:17.452770   62061 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0918 21:11:17.452849   62061 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0918 21:11:17.452920   62061 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0918 21:11:17.452983   62061 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0918 21:11:17.453029   62061 kubeadm.go:310] 
	W0918 21:11:17.453100   62061 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0918 21:11:17.453139   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0918 21:11:17.917211   62061 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0918 21:11:17.933265   62061 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0918 21:11:17.943909   62061 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0918 21:11:17.943937   62061 kubeadm.go:157] found existing configuration files:
	
	I0918 21:11:17.943980   62061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0918 21:11:17.953745   62061 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0918 21:11:17.953817   62061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0918 21:11:17.964411   62061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0918 21:11:17.974601   62061 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0918 21:11:17.974661   62061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0918 21:11:17.984631   62061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0918 21:11:17.994280   62061 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0918 21:11:17.994341   62061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0918 21:11:18.004341   62061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0918 21:11:18.014066   62061 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0918 21:11:18.014127   62061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0918 21:11:18.023592   62061 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0918 21:11:18.098831   62061 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0918 21:11:18.098908   62061 kubeadm.go:310] [preflight] Running pre-flight checks
	I0918 21:11:18.254904   62061 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0918 21:11:18.255012   62061 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0918 21:11:18.255102   62061 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0918 21:11:18.444152   62061 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0918 21:11:18.445967   62061 out.go:235]   - Generating certificates and keys ...
	I0918 21:11:18.446082   62061 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0918 21:11:18.446185   62061 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0918 21:11:18.446316   62061 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0918 21:11:18.446403   62061 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0918 21:11:18.446508   62061 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0918 21:11:18.446600   62061 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0918 21:11:18.446706   62061 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0918 21:11:18.446767   62061 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0918 21:11:18.446852   62061 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0918 21:11:18.446936   62061 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0918 21:11:18.446971   62061 kubeadm.go:310] [certs] Using the existing "sa" key
	I0918 21:11:18.447025   62061 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0918 21:11:18.621414   62061 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0918 21:11:18.973691   62061 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0918 21:11:19.177522   62061 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0918 21:11:19.351312   62061 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0918 21:11:19.375169   62061 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0918 21:11:19.376274   62061 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0918 21:11:19.376351   62061 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0918 21:11:19.505030   62061 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0918 21:11:19.507065   62061 out.go:235]   - Booting up control plane ...
	I0918 21:11:19.507197   62061 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0918 21:11:19.519523   62061 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0918 21:11:19.520747   62061 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0918 21:11:19.522723   62061 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0918 21:11:19.524851   62061 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0918 21:11:59.527084   62061 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0918 21:11:59.527421   62061 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0918 21:11:59.527674   62061 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0918 21:12:04.528288   62061 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0918 21:12:04.528530   62061 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0918 21:12:14.529180   62061 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0918 21:12:14.529367   62061 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0918 21:12:34.530066   62061 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0918 21:12:34.530335   62061 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0918 21:13:14.529030   62061 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0918 21:13:14.529305   62061 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0918 21:13:14.529326   62061 kubeadm.go:310] 
	I0918 21:13:14.529374   62061 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0918 21:13:14.529447   62061 kubeadm.go:310] 		timed out waiting for the condition
	I0918 21:13:14.529466   62061 kubeadm.go:310] 
	I0918 21:13:14.529521   62061 kubeadm.go:310] 	This error is likely caused by:
	I0918 21:13:14.529569   62061 kubeadm.go:310] 		- The kubelet is not running
	I0918 21:13:14.529716   62061 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0918 21:13:14.529726   62061 kubeadm.go:310] 
	I0918 21:13:14.529866   62061 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0918 21:13:14.529917   62061 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0918 21:13:14.529967   62061 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0918 21:13:14.529976   62061 kubeadm.go:310] 
	I0918 21:13:14.530120   62061 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0918 21:13:14.530220   62061 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0918 21:13:14.530230   62061 kubeadm.go:310] 
	I0918 21:13:14.530352   62061 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0918 21:13:14.530480   62061 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0918 21:13:14.530579   62061 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0918 21:13:14.530678   62061 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0918 21:13:14.530688   62061 kubeadm.go:310] 
	I0918 21:13:14.531356   62061 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0918 21:13:14.531463   62061 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0918 21:13:14.531520   62061 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0918 21:13:14.531592   62061 kubeadm.go:394] duration metric: took 7m56.917405378s to StartCluster
	I0918 21:13:14.531633   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:13:14.531689   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:13:14.578851   62061 cri.go:89] found id: ""
	I0918 21:13:14.578883   62061 logs.go:276] 0 containers: []
	W0918 21:13:14.578893   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:13:14.578901   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:13:14.578960   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:13:14.627137   62061 cri.go:89] found id: ""
	I0918 21:13:14.627168   62061 logs.go:276] 0 containers: []
	W0918 21:13:14.627179   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:13:14.627187   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:13:14.627245   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:13:14.670678   62061 cri.go:89] found id: ""
	I0918 21:13:14.670707   62061 logs.go:276] 0 containers: []
	W0918 21:13:14.670717   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:13:14.670724   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:13:14.670788   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:13:14.709610   62061 cri.go:89] found id: ""
	I0918 21:13:14.709641   62061 logs.go:276] 0 containers: []
	W0918 21:13:14.709651   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:13:14.709658   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:13:14.709722   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:13:14.743492   62061 cri.go:89] found id: ""
	I0918 21:13:14.743522   62061 logs.go:276] 0 containers: []
	W0918 21:13:14.743534   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:13:14.743540   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:13:14.743601   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:13:14.777577   62061 cri.go:89] found id: ""
	I0918 21:13:14.777602   62061 logs.go:276] 0 containers: []
	W0918 21:13:14.777610   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:13:14.777616   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:13:14.777679   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:13:14.815884   62061 cri.go:89] found id: ""
	I0918 21:13:14.815913   62061 logs.go:276] 0 containers: []
	W0918 21:13:14.815922   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:13:14.815937   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:13:14.815997   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:13:14.855426   62061 cri.go:89] found id: ""
	I0918 21:13:14.855456   62061 logs.go:276] 0 containers: []
	W0918 21:13:14.855464   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:13:14.855473   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:13:14.855484   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:13:14.868812   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:13:14.868843   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:13:14.955093   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:13:14.955120   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:13:14.955151   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:13:15.064722   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:13:15.064760   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:13:15.105430   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:13:15.105466   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0918 21:13:15.157878   62061 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0918 21:13:15.157956   62061 out.go:270] * 
	* 
	W0918 21:13:15.158036   62061 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0918 21:13:15.158052   62061 out.go:270] * 
	* 
	W0918 21:13:15.158934   62061 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0918 21:13:15.161604   62061 out.go:201] 
	W0918 21:13:15.162606   62061 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtime's CLI.
	
		Here is one example of how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0918 21:13:15.162664   62061 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0918 21:13:15.162685   62061 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0918 21:13:15.163831   62061 out.go:201] 

                                                
                                                
** /stderr **
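For reference, the kubeadm output above prescribes the usual kubelet/CRI-O triage: check the kubelet service, read its journal, probe the kubelet health endpoint, and list the control-plane containers with crictl. A minimal sketch of that sequence, run inside the affected minikube VM (entered here via `minikube ssh` with the profile name taken from this log; the `sudo` prefixes and the ssh entry point are assumptions, and CONTAINERID stands for whatever ID the `ps` step reports), might look like:

	# enter the VM for the failing profile (profile name taken from the log above)
	out/minikube-linux-amd64 ssh -p old-k8s-version-740194
	# inside the VM: is the kubelet running, and what does its journal say?
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet
	# probe the healthz endpoint the kubelet-check was polling
	curl -sSL http://localhost:10248/healthz
	# list all Kubernetes containers known to CRI-O, then inspect the failing one
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID

These are the commands the log itself suggests; nothing here is a verified diagnosis of this particular run.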
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-740194 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
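The failed start above (exit status 109) ends with minikube's own suggestion to retry with an explicit kubelet cgroup driver. A hypothetical retry of the same invocation with that one flag appended, everything else copied verbatim from the failing command, would be:

	out/minikube-linux-amd64 start -p old-k8s-version-740194 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd

Whether this clears the wait-control-plane timeout depends on the actual kubelet failure recorded in the journal; it is the suggestion printed in the log, not a verified fix for this run.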
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-740194 -n old-k8s-version-740194
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-740194 -n old-k8s-version-740194: exit status 2 (233.619371ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-740194 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-740194 logs -n 25: (1.662410771s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p cert-options-347585                                 | cert-options-347585          | jenkins | v1.34.0 | 18 Sep 24 20:55 UTC | 18 Sep 24 20:55 UTC |
	| start   | -p no-preload-331658                                   | no-preload-331658            | jenkins | v1.34.0 | 18 Sep 24 20:55 UTC | 18 Sep 24 20:56 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-878094                           | kubernetes-upgrade-878094    | jenkins | v1.34.0 | 18 Sep 24 20:55 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-878094                           | kubernetes-upgrade-878094    | jenkins | v1.34.0 | 18 Sep 24 20:55 UTC | 18 Sep 24 20:56 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p pause-543700                                        | pause-543700                 | jenkins | v1.34.0 | 18 Sep 24 20:56 UTC | 18 Sep 24 20:56 UTC |
	| start   | -p embed-certs-255556                                  | embed-certs-255556           | jenkins | v1.34.0 | 18 Sep 24 20:56 UTC | 18 Sep 24 20:57 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-878094                           | kubernetes-upgrade-878094    | jenkins | v1.34.0 | 18 Sep 24 20:56 UTC | 18 Sep 24 20:56 UTC |
	| delete  | -p                                                     | disable-driver-mounts-335923 | jenkins | v1.34.0 | 18 Sep 24 20:56 UTC | 18 Sep 24 20:56 UTC |
	|         | disable-driver-mounts-335923                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-828868 | jenkins | v1.34.0 | 18 Sep 24 20:56 UTC | 18 Sep 24 20:57 UTC |
	|         | default-k8s-diff-port-828868                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-331658             | no-preload-331658            | jenkins | v1.34.0 | 18 Sep 24 20:56 UTC | 18 Sep 24 20:57 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-331658                                   | no-preload-331658            | jenkins | v1.34.0 | 18 Sep 24 20:57 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-828868  | default-k8s-diff-port-828868 | jenkins | v1.34.0 | 18 Sep 24 20:57 UTC | 18 Sep 24 20:57 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-828868 | jenkins | v1.34.0 | 18 Sep 24 20:57 UTC |                     |
	|         | default-k8s-diff-port-828868                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-255556            | embed-certs-255556           | jenkins | v1.34.0 | 18 Sep 24 20:57 UTC | 18 Sep 24 20:57 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-255556                                  | embed-certs-255556           | jenkins | v1.34.0 | 18 Sep 24 20:57 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-740194        | old-k8s-version-740194       | jenkins | v1.34.0 | 18 Sep 24 20:59 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-331658                  | no-preload-331658            | jenkins | v1.34.0 | 18 Sep 24 20:59 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-331658                                   | no-preload-331658            | jenkins | v1.34.0 | 18 Sep 24 20:59 UTC | 18 Sep 24 21:10 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-828868       | default-k8s-diff-port-828868 | jenkins | v1.34.0 | 18 Sep 24 21:00 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-255556                 | embed-certs-255556           | jenkins | v1.34.0 | 18 Sep 24 21:00 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-828868 | jenkins | v1.34.0 | 18 Sep 24 21:00 UTC | 18 Sep 24 21:09 UTC |
	|         | default-k8s-diff-port-828868                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-255556                                  | embed-certs-255556           | jenkins | v1.34.0 | 18 Sep 24 21:00 UTC | 18 Sep 24 21:10 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-740194                              | old-k8s-version-740194       | jenkins | v1.34.0 | 18 Sep 24 21:00 UTC | 18 Sep 24 21:00 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-740194             | old-k8s-version-740194       | jenkins | v1.34.0 | 18 Sep 24 21:00 UTC | 18 Sep 24 21:00 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-740194                              | old-k8s-version-740194       | jenkins | v1.34.0 | 18 Sep 24 21:00 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/18 21:00:59
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0918 21:00:59.486726   62061 out.go:345] Setting OutFile to fd 1 ...
	I0918 21:00:59.486839   62061 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 21:00:59.486848   62061 out.go:358] Setting ErrFile to fd 2...
	I0918 21:00:59.486854   62061 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 21:00:59.487063   62061 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19667-7671/.minikube/bin
	I0918 21:00:59.487631   62061 out.go:352] Setting JSON to false
	I0918 21:00:59.488570   62061 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6203,"bootTime":1726687056,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0918 21:00:59.488665   62061 start.go:139] virtualization: kvm guest
	I0918 21:00:59.490927   62061 out.go:177] * [old-k8s-version-740194] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0918 21:00:59.492124   62061 out.go:177]   - MINIKUBE_LOCATION=19667
	I0918 21:00:59.492192   62061 notify.go:220] Checking for updates...
	I0918 21:00:59.494436   62061 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 21:00:59.495887   62061 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19667-7671/kubeconfig
	I0918 21:00:59.496929   62061 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19667-7671/.minikube
	I0918 21:00:59.498172   62061 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0918 21:00:59.499384   62061 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0918 21:00:59.500900   62061 config.go:182] Loaded profile config "old-k8s-version-740194": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0918 21:00:59.501295   62061 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:00:59.501375   62061 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:00:59.516448   62061 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45397
	I0918 21:00:59.516855   62061 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:00:59.517347   62061 main.go:141] libmachine: Using API Version  1
	I0918 21:00:59.517362   62061 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:00:59.517673   62061 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:00:59.517848   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .DriverName
	I0918 21:00:59.519750   62061 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0918 21:00:59.521126   62061 driver.go:394] Setting default libvirt URI to qemu:///system
	I0918 21:00:59.521433   62061 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:00:59.521468   62061 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:00:59.537580   62061 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39433
	I0918 21:00:59.537973   62061 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:00:59.538452   62061 main.go:141] libmachine: Using API Version  1
	I0918 21:00:59.538471   62061 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:00:59.538852   62061 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:00:59.539083   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .DriverName
	I0918 21:00:59.575494   62061 out.go:177] * Using the kvm2 driver based on existing profile
	I0918 21:00:59.576555   62061 start.go:297] selected driver: kvm2
	I0918 21:00:59.576572   62061 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-740194 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-740194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.53 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 21:00:59.576682   62061 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 21:00:59.577339   62061 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 21:00:59.577400   62061 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19667-7671/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0918 21:00:59.593287   62061 install.go:137] /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I0918 21:00:59.593810   62061 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0918 21:00:59.593855   62061 cni.go:84] Creating CNI manager for ""
	I0918 21:00:59.593911   62061 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0918 21:00:59.593972   62061 start.go:340] cluster config:
	{Name:old-k8s-version-740194 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-740194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.53 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 21:00:59.594104   62061 iso.go:125] acquiring lock: {Name:mkbcf4d84f3091567a39802cd5e773cb10b8c545 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 21:00:59.595936   62061 out.go:177] * Starting "old-k8s-version-740194" primary control-plane node in "old-k8s-version-740194" cluster
	I0918 21:00:59.932315   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:00:59.597210   62061 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0918 21:00:59.597243   62061 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0918 21:00:59.597250   62061 cache.go:56] Caching tarball of preloaded images
	I0918 21:00:59.597325   62061 preload.go:172] Found /home/jenkins/minikube-integration/19667-7671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0918 21:00:59.597338   62061 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0918 21:00:59.597439   62061 profile.go:143] Saving config to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/old-k8s-version-740194/config.json ...
	I0918 21:00:59.597658   62061 start.go:360] acquireMachinesLock for old-k8s-version-740194: {Name:mk1b0d68a0b7d9d4bc8204d5d81cb9eb77222526 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 21:01:03.004316   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:01:09.084327   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:01:12.156358   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:01:18.236353   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:01:21.308245   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:01:27.388302   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:01:30.460341   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:01:36.540285   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:01:39.612345   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:01:45.692338   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:01:48.764308   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:01:54.844344   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:01:57.916346   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:02:03.996351   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:02:07.068377   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:02:13.148269   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:02:16.220321   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:02:22.300282   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:02:25.372352   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:02:31.452275   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:02:34.524362   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:02:40.604332   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:02:43.676372   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:02:49.756305   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:02:52.828321   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:02:58.908358   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:03:01.980309   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:03:08.060301   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:03:11.132322   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:03:17.212232   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:03:20.284342   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:03:26.364312   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:03:29.436328   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:03:35.516323   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:03:38.588372   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:03:44.668300   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:03:47.740379   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:03:53.820363   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:03:56.892355   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:04:02.972312   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:04:06.044373   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:04:09.048392   61659 start.go:364] duration metric: took 3m56.738592157s to acquireMachinesLock for "default-k8s-diff-port-828868"
	I0918 21:04:09.048461   61659 start.go:96] Skipping create...Using existing machine configuration
	I0918 21:04:09.048469   61659 fix.go:54] fixHost starting: 
	I0918 21:04:09.048788   61659 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:04:09.048827   61659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:04:09.064428   61659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41181
	I0918 21:04:09.064856   61659 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:04:09.065395   61659 main.go:141] libmachine: Using API Version  1
	I0918 21:04:09.065421   61659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:04:09.065751   61659 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:04:09.065961   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .DriverName
	I0918 21:04:09.066108   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetState
	I0918 21:04:09.067874   61659 fix.go:112] recreateIfNeeded on default-k8s-diff-port-828868: state=Stopped err=<nil>
	I0918 21:04:09.067915   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .DriverName
	W0918 21:04:09.068096   61659 fix.go:138] unexpected machine state, will restart: <nil>
	I0918 21:04:09.069985   61659 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-828868" ...
	I0918 21:04:09.045944   61273 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0918 21:04:09.045978   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetMachineName
	I0918 21:04:09.046314   61273 buildroot.go:166] provisioning hostname "no-preload-331658"
	I0918 21:04:09.046350   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetMachineName
	I0918 21:04:09.046602   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHHostname
	I0918 21:04:09.048253   61273 machine.go:96] duration metric: took 4m37.423609251s to provisionDockerMachine
	I0918 21:04:09.048293   61273 fix.go:56] duration metric: took 4m37.446130108s for fixHost
	I0918 21:04:09.048301   61273 start.go:83] releasing machines lock for "no-preload-331658", held for 4m37.44629145s
	W0918 21:04:09.048329   61273 start.go:714] error starting host: provision: host is not running
	W0918 21:04:09.048451   61273 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0918 21:04:09.048465   61273 start.go:729] Will try again in 5 seconds ...
	I0918 21:04:09.071488   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .Start
	I0918 21:04:09.071699   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Ensuring networks are active...
	I0918 21:04:09.072473   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Ensuring network default is active
	I0918 21:04:09.072816   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Ensuring network mk-default-k8s-diff-port-828868 is active
	I0918 21:04:09.073204   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Getting domain xml...
	I0918 21:04:09.073977   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Creating domain...
	I0918 21:04:10.321507   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Waiting to get IP...
	I0918 21:04:10.322390   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:10.322863   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | unable to find current IP address of domain default-k8s-diff-port-828868 in network mk-default-k8s-diff-port-828868
	I0918 21:04:10.322907   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | I0918 21:04:10.322821   62722 retry.go:31] will retry after 272.805092ms: waiting for machine to come up
	I0918 21:04:10.597434   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:10.597861   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | unable to find current IP address of domain default-k8s-diff-port-828868 in network mk-default-k8s-diff-port-828868
	I0918 21:04:10.597888   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | I0918 21:04:10.597825   62722 retry.go:31] will retry after 302.631333ms: waiting for machine to come up
	I0918 21:04:10.902544   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:10.903002   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | unable to find current IP address of domain default-k8s-diff-port-828868 in network mk-default-k8s-diff-port-828868
	I0918 21:04:10.903030   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | I0918 21:04:10.902943   62722 retry.go:31] will retry after 325.769954ms: waiting for machine to come up
	I0918 21:04:11.230182   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:11.230602   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | unable to find current IP address of domain default-k8s-diff-port-828868 in network mk-default-k8s-diff-port-828868
	I0918 21:04:11.230652   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | I0918 21:04:11.230557   62722 retry.go:31] will retry after 396.395153ms: waiting for machine to come up
	I0918 21:04:11.628135   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:11.628520   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | unable to find current IP address of domain default-k8s-diff-port-828868 in network mk-default-k8s-diff-port-828868
	I0918 21:04:11.628544   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | I0918 21:04:11.628495   62722 retry.go:31] will retry after 578.74167ms: waiting for machine to come up
	I0918 21:04:14.050009   61273 start.go:360] acquireMachinesLock for no-preload-331658: {Name:mk1b0d68a0b7d9d4bc8204d5d81cb9eb77222526 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 21:04:12.209844   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:12.209911   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | unable to find current IP address of domain default-k8s-diff-port-828868 in network mk-default-k8s-diff-port-828868
	I0918 21:04:12.209937   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | I0918 21:04:12.209841   62722 retry.go:31] will retry after 779.0434ms: waiting for machine to come up
	I0918 21:04:12.990688   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:12.991106   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | unable to find current IP address of domain default-k8s-diff-port-828868 in network mk-default-k8s-diff-port-828868
	I0918 21:04:12.991141   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | I0918 21:04:12.991045   62722 retry.go:31] will retry after 772.165771ms: waiting for machine to come up
	I0918 21:04:13.764946   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:13.765460   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | unable to find current IP address of domain default-k8s-diff-port-828868 in network mk-default-k8s-diff-port-828868
	I0918 21:04:13.765493   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | I0918 21:04:13.765404   62722 retry.go:31] will retry after 1.017078101s: waiting for machine to come up
	I0918 21:04:14.783920   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:14.784320   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | unable to find current IP address of domain default-k8s-diff-port-828868 in network mk-default-k8s-diff-port-828868
	I0918 21:04:14.784348   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | I0918 21:04:14.784276   62722 retry.go:31] will retry after 1.775982574s: waiting for machine to come up
	I0918 21:04:16.562037   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:16.562413   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | unable to find current IP address of domain default-k8s-diff-port-828868 in network mk-default-k8s-diff-port-828868
	I0918 21:04:16.562451   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | I0918 21:04:16.562369   62722 retry.go:31] will retry after 1.609664062s: waiting for machine to come up
	I0918 21:04:18.174149   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:18.174759   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | unable to find current IP address of domain default-k8s-diff-port-828868 in network mk-default-k8s-diff-port-828868
	I0918 21:04:18.174788   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | I0918 21:04:18.174710   62722 retry.go:31] will retry after 2.26359536s: waiting for machine to come up
	I0918 21:04:20.440599   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:20.441000   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | unable to find current IP address of domain default-k8s-diff-port-828868 in network mk-default-k8s-diff-port-828868
	I0918 21:04:20.441027   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | I0918 21:04:20.440955   62722 retry.go:31] will retry after 3.387446315s: waiting for machine to come up
	I0918 21:04:23.832623   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:23.833134   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | unable to find current IP address of domain default-k8s-diff-port-828868 in network mk-default-k8s-diff-port-828868
	I0918 21:04:23.833162   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | I0918 21:04:23.833097   62722 retry.go:31] will retry after 3.312983418s: waiting for machine to come up
	I0918 21:04:27.150091   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:27.150658   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Found IP for machine: 192.168.50.109
	I0918 21:04:27.150682   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has current primary IP address 192.168.50.109 and MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:27.150703   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Reserving static IP address...
	I0918 21:04:27.151248   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-828868", mac: "52:54:00:c0:39:06", ip: "192.168.50.109"} in network mk-default-k8s-diff-port-828868: {Iface:virbr4 ExpiryTime:2024-09-18 22:04:19 +0000 UTC Type:0 Mac:52:54:00:c0:39:06 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:default-k8s-diff-port-828868 Clientid:01:52:54:00:c0:39:06}
	I0918 21:04:27.151276   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Reserved static IP address: 192.168.50.109
	I0918 21:04:27.151297   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | skip adding static IP to network mk-default-k8s-diff-port-828868 - found existing host DHCP lease matching {name: "default-k8s-diff-port-828868", mac: "52:54:00:c0:39:06", ip: "192.168.50.109"}
	I0918 21:04:27.151317   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | Getting to WaitForSSH function...
	I0918 21:04:27.151330   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Waiting for SSH to be available...
	I0918 21:04:27.153633   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:27.154006   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:39:06", ip: ""} in network mk-default-k8s-diff-port-828868: {Iface:virbr4 ExpiryTime:2024-09-18 22:04:19 +0000 UTC Type:0 Mac:52:54:00:c0:39:06 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:default-k8s-diff-port-828868 Clientid:01:52:54:00:c0:39:06}
	I0918 21:04:27.154036   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined IP address 192.168.50.109 and MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:27.154127   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | Using SSH client type: external
	I0918 21:04:27.154153   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | Using SSH private key: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/default-k8s-diff-port-828868/id_rsa (-rw-------)
	I0918 21:04:27.154196   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.109 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19667-7671/.minikube/machines/default-k8s-diff-port-828868/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0918 21:04:27.154211   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | About to run SSH command:
	I0918 21:04:27.154225   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | exit 0
	I0918 21:04:28.308967   61740 start.go:364] duration metric: took 4m9.856658805s to acquireMachinesLock for "embed-certs-255556"
	I0918 21:04:28.309052   61740 start.go:96] Skipping create...Using existing machine configuration
	I0918 21:04:28.309066   61740 fix.go:54] fixHost starting: 
	I0918 21:04:28.309548   61740 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:04:28.309609   61740 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:04:28.326972   61740 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41115
	I0918 21:04:28.327375   61740 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:04:28.327941   61740 main.go:141] libmachine: Using API Version  1
	I0918 21:04:28.327974   61740 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:04:28.328300   61740 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:04:28.328538   61740 main.go:141] libmachine: (embed-certs-255556) Calling .DriverName
	I0918 21:04:28.328676   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetState
	I0918 21:04:28.330265   61740 fix.go:112] recreateIfNeeded on embed-certs-255556: state=Stopped err=<nil>
	I0918 21:04:28.330312   61740 main.go:141] libmachine: (embed-certs-255556) Calling .DriverName
	W0918 21:04:28.330482   61740 fix.go:138] unexpected machine state, will restart: <nil>
	I0918 21:04:28.332680   61740 out.go:177] * Restarting existing kvm2 VM for "embed-certs-255556" ...
	I0918 21:04:28.333692   61740 main.go:141] libmachine: (embed-certs-255556) Calling .Start
	I0918 21:04:28.333865   61740 main.go:141] libmachine: (embed-certs-255556) Ensuring networks are active...
	I0918 21:04:28.334536   61740 main.go:141] libmachine: (embed-certs-255556) Ensuring network default is active
	I0918 21:04:28.334987   61740 main.go:141] libmachine: (embed-certs-255556) Ensuring network mk-embed-certs-255556 is active
	I0918 21:04:28.335491   61740 main.go:141] libmachine: (embed-certs-255556) Getting domain xml...
	I0918 21:04:28.336206   61740 main.go:141] libmachine: (embed-certs-255556) Creating domain...
	I0918 21:04:27.280056   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | SSH cmd err, output: <nil>: 
	I0918 21:04:27.280448   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetConfigRaw
	I0918 21:04:27.281097   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetIP
	I0918 21:04:27.283491   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:27.283933   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:39:06", ip: ""} in network mk-default-k8s-diff-port-828868: {Iface:virbr4 ExpiryTime:2024-09-18 22:04:19 +0000 UTC Type:0 Mac:52:54:00:c0:39:06 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:default-k8s-diff-port-828868 Clientid:01:52:54:00:c0:39:06}
	I0918 21:04:27.283968   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined IP address 192.168.50.109 and MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:27.284242   61659 profile.go:143] Saving config to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/default-k8s-diff-port-828868/config.json ...
	I0918 21:04:27.284483   61659 machine.go:93] provisionDockerMachine start ...
	I0918 21:04:27.284527   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .DriverName
	I0918 21:04:27.284740   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHHostname
	I0918 21:04:27.287263   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:27.287640   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:39:06", ip: ""} in network mk-default-k8s-diff-port-828868: {Iface:virbr4 ExpiryTime:2024-09-18 22:04:19 +0000 UTC Type:0 Mac:52:54:00:c0:39:06 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:default-k8s-diff-port-828868 Clientid:01:52:54:00:c0:39:06}
	I0918 21:04:27.287671   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined IP address 192.168.50.109 and MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:27.287831   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHPort
	I0918 21:04:27.288053   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHKeyPath
	I0918 21:04:27.288230   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHKeyPath
	I0918 21:04:27.288348   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHUsername
	I0918 21:04:27.288497   61659 main.go:141] libmachine: Using SSH client type: native
	I0918 21:04:27.288759   61659 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.109 22 <nil> <nil>}
	I0918 21:04:27.288774   61659 main.go:141] libmachine: About to run SSH command:
	hostname
	I0918 21:04:27.396110   61659 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0918 21:04:27.396140   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetMachineName
	I0918 21:04:27.396439   61659 buildroot.go:166] provisioning hostname "default-k8s-diff-port-828868"
	I0918 21:04:27.396472   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetMachineName
	I0918 21:04:27.396655   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHHostname
	I0918 21:04:27.399285   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:27.399629   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:39:06", ip: ""} in network mk-default-k8s-diff-port-828868: {Iface:virbr4 ExpiryTime:2024-09-18 22:04:19 +0000 UTC Type:0 Mac:52:54:00:c0:39:06 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:default-k8s-diff-port-828868 Clientid:01:52:54:00:c0:39:06}
	I0918 21:04:27.399670   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined IP address 192.168.50.109 and MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:27.399746   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHPort
	I0918 21:04:27.399947   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHKeyPath
	I0918 21:04:27.400158   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHKeyPath
	I0918 21:04:27.400295   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHUsername
	I0918 21:04:27.400476   61659 main.go:141] libmachine: Using SSH client type: native
	I0918 21:04:27.400701   61659 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.109 22 <nil> <nil>}
	I0918 21:04:27.400714   61659 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-828868 && echo "default-k8s-diff-port-828868" | sudo tee /etc/hostname
	I0918 21:04:27.518553   61659 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-828868
	
	I0918 21:04:27.518579   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHHostname
	I0918 21:04:27.521274   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:27.521714   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:39:06", ip: ""} in network mk-default-k8s-diff-port-828868: {Iface:virbr4 ExpiryTime:2024-09-18 22:04:19 +0000 UTC Type:0 Mac:52:54:00:c0:39:06 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:default-k8s-diff-port-828868 Clientid:01:52:54:00:c0:39:06}
	I0918 21:04:27.521746   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined IP address 192.168.50.109 and MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:27.521918   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHPort
	I0918 21:04:27.522085   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHKeyPath
	I0918 21:04:27.522298   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHKeyPath
	I0918 21:04:27.522469   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHUsername
	I0918 21:04:27.522689   61659 main.go:141] libmachine: Using SSH client type: native
	I0918 21:04:27.522867   61659 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.109 22 <nil> <nil>}
	I0918 21:04:27.522885   61659 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-828868' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-828868/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-828868' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0918 21:04:27.636264   61659 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0918 21:04:27.636296   61659 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19667-7671/.minikube CaCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19667-7671/.minikube}
	I0918 21:04:27.636325   61659 buildroot.go:174] setting up certificates
	I0918 21:04:27.636335   61659 provision.go:84] configureAuth start
	I0918 21:04:27.636343   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetMachineName
	I0918 21:04:27.636629   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetIP
	I0918 21:04:27.639186   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:27.639622   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:39:06", ip: ""} in network mk-default-k8s-diff-port-828868: {Iface:virbr4 ExpiryTime:2024-09-18 22:04:19 +0000 UTC Type:0 Mac:52:54:00:c0:39:06 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:default-k8s-diff-port-828868 Clientid:01:52:54:00:c0:39:06}
	I0918 21:04:27.639646   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined IP address 192.168.50.109 and MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:27.639858   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHHostname
	I0918 21:04:27.642158   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:27.642421   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:39:06", ip: ""} in network mk-default-k8s-diff-port-828868: {Iface:virbr4 ExpiryTime:2024-09-18 22:04:19 +0000 UTC Type:0 Mac:52:54:00:c0:39:06 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:default-k8s-diff-port-828868 Clientid:01:52:54:00:c0:39:06}
	I0918 21:04:27.642448   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined IP address 192.168.50.109 and MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:27.642626   61659 provision.go:143] copyHostCerts
	I0918 21:04:27.642706   61659 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem, removing ...
	I0918 21:04:27.642869   61659 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem
	I0918 21:04:27.642966   61659 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem (1082 bytes)
	I0918 21:04:27.643099   61659 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem, removing ...
	I0918 21:04:27.643111   61659 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem
	I0918 21:04:27.643150   61659 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem (1123 bytes)
	I0918 21:04:27.643270   61659 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem, removing ...
	I0918 21:04:27.643280   61659 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem
	I0918 21:04:27.643320   61659 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem (1679 bytes)
	I0918 21:04:27.643387   61659 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-828868 san=[127.0.0.1 192.168.50.109 default-k8s-diff-port-828868 localhost minikube]
	I0918 21:04:27.693367   61659 provision.go:177] copyRemoteCerts
	I0918 21:04:27.693426   61659 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0918 21:04:27.693463   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHHostname
	I0918 21:04:27.696331   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:27.696661   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:39:06", ip: ""} in network mk-default-k8s-diff-port-828868: {Iface:virbr4 ExpiryTime:2024-09-18 22:04:19 +0000 UTC Type:0 Mac:52:54:00:c0:39:06 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:default-k8s-diff-port-828868 Clientid:01:52:54:00:c0:39:06}
	I0918 21:04:27.696693   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined IP address 192.168.50.109 and MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:27.696835   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHPort
	I0918 21:04:27.697028   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHKeyPath
	I0918 21:04:27.697212   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHUsername
	I0918 21:04:27.697317   61659 sshutil.go:53] new ssh client: &{IP:192.168.50.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/default-k8s-diff-port-828868/id_rsa Username:docker}
	I0918 21:04:27.777944   61659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0918 21:04:27.801476   61659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0918 21:04:27.825025   61659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0918 21:04:27.848244   61659 provision.go:87] duration metric: took 211.897185ms to configureAuth
	I0918 21:04:27.848274   61659 buildroot.go:189] setting minikube options for container-runtime
	I0918 21:04:27.848434   61659 config.go:182] Loaded profile config "default-k8s-diff-port-828868": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0918 21:04:27.848513   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHHostname
	I0918 21:04:27.851119   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:27.851472   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:39:06", ip: ""} in network mk-default-k8s-diff-port-828868: {Iface:virbr4 ExpiryTime:2024-09-18 22:04:19 +0000 UTC Type:0 Mac:52:54:00:c0:39:06 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:default-k8s-diff-port-828868 Clientid:01:52:54:00:c0:39:06}
	I0918 21:04:27.851509   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined IP address 192.168.50.109 and MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:27.851788   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHPort
	I0918 21:04:27.852007   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHKeyPath
	I0918 21:04:27.852216   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHKeyPath
	I0918 21:04:27.852420   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHUsername
	I0918 21:04:27.852670   61659 main.go:141] libmachine: Using SSH client type: native
	I0918 21:04:27.852852   61659 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.109 22 <nil> <nil>}
	I0918 21:04:27.852870   61659 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0918 21:04:28.072808   61659 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0918 21:04:28.072843   61659 machine.go:96] duration metric: took 788.346091ms to provisionDockerMachine
	I0918 21:04:28.072858   61659 start.go:293] postStartSetup for "default-k8s-diff-port-828868" (driver="kvm2")
	I0918 21:04:28.072874   61659 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0918 21:04:28.072898   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .DriverName
	I0918 21:04:28.073246   61659 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0918 21:04:28.073287   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHHostname
	I0918 21:04:28.075998   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:28.076389   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:39:06", ip: ""} in network mk-default-k8s-diff-port-828868: {Iface:virbr4 ExpiryTime:2024-09-18 22:04:19 +0000 UTC Type:0 Mac:52:54:00:c0:39:06 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:default-k8s-diff-port-828868 Clientid:01:52:54:00:c0:39:06}
	I0918 21:04:28.076416   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined IP address 192.168.50.109 and MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:28.076561   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHPort
	I0918 21:04:28.076780   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHKeyPath
	I0918 21:04:28.076939   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHUsername
	I0918 21:04:28.077063   61659 sshutil.go:53] new ssh client: &{IP:192.168.50.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/default-k8s-diff-port-828868/id_rsa Username:docker}
	I0918 21:04:28.158946   61659 ssh_runner.go:195] Run: cat /etc/os-release
	I0918 21:04:28.163200   61659 info.go:137] Remote host: Buildroot 2023.02.9
	I0918 21:04:28.163231   61659 filesync.go:126] Scanning /home/jenkins/minikube-integration/19667-7671/.minikube/addons for local assets ...
	I0918 21:04:28.163290   61659 filesync.go:126] Scanning /home/jenkins/minikube-integration/19667-7671/.minikube/files for local assets ...
	I0918 21:04:28.163368   61659 filesync.go:149] local asset: /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem -> 148782.pem in /etc/ssl/certs
	I0918 21:04:28.163464   61659 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0918 21:04:28.172987   61659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem --> /etc/ssl/certs/148782.pem (1708 bytes)
	I0918 21:04:28.198647   61659 start.go:296] duration metric: took 125.77566ms for postStartSetup
	I0918 21:04:28.198685   61659 fix.go:56] duration metric: took 19.150217303s for fixHost
	I0918 21:04:28.198704   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHHostname
	I0918 21:04:28.201549   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:28.201904   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:39:06", ip: ""} in network mk-default-k8s-diff-port-828868: {Iface:virbr4 ExpiryTime:2024-09-18 22:04:19 +0000 UTC Type:0 Mac:52:54:00:c0:39:06 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:default-k8s-diff-port-828868 Clientid:01:52:54:00:c0:39:06}
	I0918 21:04:28.201934   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined IP address 192.168.50.109 and MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:28.202093   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHPort
	I0918 21:04:28.202278   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHKeyPath
	I0918 21:04:28.202435   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHKeyPath
	I0918 21:04:28.202588   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHUsername
	I0918 21:04:28.202714   61659 main.go:141] libmachine: Using SSH client type: native
	I0918 21:04:28.202871   61659 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.109 22 <nil> <nil>}
	I0918 21:04:28.202879   61659 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0918 21:04:28.308752   61659 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726693468.285343658
	
	I0918 21:04:28.308778   61659 fix.go:216] guest clock: 1726693468.285343658
	I0918 21:04:28.308786   61659 fix.go:229] Guest: 2024-09-18 21:04:28.285343658 +0000 UTC Remote: 2024-09-18 21:04:28.198688962 +0000 UTC m=+256.035220061 (delta=86.654696ms)
	I0918 21:04:28.308821   61659 fix.go:200] guest clock delta is within tolerance: 86.654696ms
	I0918 21:04:28.308829   61659 start.go:83] releasing machines lock for "default-k8s-diff-port-828868", held for 19.260404228s
	I0918 21:04:28.308857   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .DriverName
	I0918 21:04:28.309175   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetIP
	I0918 21:04:28.312346   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:28.312725   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:39:06", ip: ""} in network mk-default-k8s-diff-port-828868: {Iface:virbr4 ExpiryTime:2024-09-18 22:04:19 +0000 UTC Type:0 Mac:52:54:00:c0:39:06 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:default-k8s-diff-port-828868 Clientid:01:52:54:00:c0:39:06}
	I0918 21:04:28.312753   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined IP address 192.168.50.109 and MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:28.312951   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .DriverName
	I0918 21:04:28.313506   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .DriverName
	I0918 21:04:28.313702   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .DriverName
	I0918 21:04:28.313792   61659 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0918 21:04:28.313849   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHHostname
	I0918 21:04:28.313966   61659 ssh_runner.go:195] Run: cat /version.json
	I0918 21:04:28.314001   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHHostname
	I0918 21:04:28.316698   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:28.316882   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:28.317016   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:39:06", ip: ""} in network mk-default-k8s-diff-port-828868: {Iface:virbr4 ExpiryTime:2024-09-18 22:04:19 +0000 UTC Type:0 Mac:52:54:00:c0:39:06 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:default-k8s-diff-port-828868 Clientid:01:52:54:00:c0:39:06}
	I0918 21:04:28.317038   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined IP address 192.168.50.109 and MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:28.317239   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHPort
	I0918 21:04:28.317357   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:39:06", ip: ""} in network mk-default-k8s-diff-port-828868: {Iface:virbr4 ExpiryTime:2024-09-18 22:04:19 +0000 UTC Type:0 Mac:52:54:00:c0:39:06 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:default-k8s-diff-port-828868 Clientid:01:52:54:00:c0:39:06}
	I0918 21:04:28.317408   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined IP address 192.168.50.109 and MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:28.317410   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHKeyPath
	I0918 21:04:28.317596   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHPort
	I0918 21:04:28.317598   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHUsername
	I0918 21:04:28.317743   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHKeyPath
	I0918 21:04:28.317783   61659 sshutil.go:53] new ssh client: &{IP:192.168.50.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/default-k8s-diff-port-828868/id_rsa Username:docker}
	I0918 21:04:28.317905   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHUsername
	I0918 21:04:28.318060   61659 sshutil.go:53] new ssh client: &{IP:192.168.50.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/default-k8s-diff-port-828868/id_rsa Username:docker}
	I0918 21:04:28.439960   61659 ssh_runner.go:195] Run: systemctl --version
	I0918 21:04:28.446111   61659 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0918 21:04:28.593574   61659 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0918 21:04:28.599542   61659 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0918 21:04:28.599623   61659 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0918 21:04:28.615775   61659 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0918 21:04:28.615802   61659 start.go:495] detecting cgroup driver to use...
	I0918 21:04:28.615965   61659 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0918 21:04:28.636924   61659 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0918 21:04:28.655681   61659 docker.go:217] disabling cri-docker service (if available) ...
	I0918 21:04:28.655775   61659 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0918 21:04:28.670090   61659 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0918 21:04:28.684780   61659 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0918 21:04:28.807355   61659 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0918 21:04:28.941753   61659 docker.go:233] disabling docker service ...
	I0918 21:04:28.941836   61659 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0918 21:04:28.956786   61659 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0918 21:04:28.970301   61659 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0918 21:04:29.119605   61659 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0918 21:04:29.245330   61659 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0918 21:04:29.259626   61659 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0918 21:04:29.278104   61659 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0918 21:04:29.278162   61659 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:04:29.288761   61659 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0918 21:04:29.288837   61659 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:04:29.299631   61659 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:04:29.310244   61659 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:04:29.321220   61659 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0918 21:04:29.332722   61659 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:04:29.343590   61659 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:04:29.366099   61659 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:04:29.381180   61659 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0918 21:04:29.394427   61659 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0918 21:04:29.394494   61659 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0918 21:04:29.410069   61659 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0918 21:04:29.421207   61659 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 21:04:29.543870   61659 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0918 21:04:29.642149   61659 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0918 21:04:29.642205   61659 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0918 21:04:29.647336   61659 start.go:563] Will wait 60s for crictl version
	I0918 21:04:29.647400   61659 ssh_runner.go:195] Run: which crictl
	I0918 21:04:29.651148   61659 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0918 21:04:29.690903   61659 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0918 21:04:29.690992   61659 ssh_runner.go:195] Run: crio --version
	I0918 21:04:29.717176   61659 ssh_runner.go:195] Run: crio --version
	I0918 21:04:29.747416   61659 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0918 21:04:29.748825   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetIP
	I0918 21:04:29.751828   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:29.752238   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:39:06", ip: ""} in network mk-default-k8s-diff-port-828868: {Iface:virbr4 ExpiryTime:2024-09-18 22:04:19 +0000 UTC Type:0 Mac:52:54:00:c0:39:06 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:default-k8s-diff-port-828868 Clientid:01:52:54:00:c0:39:06}
	I0918 21:04:29.752288   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined IP address 192.168.50.109 and MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:29.752533   61659 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0918 21:04:29.756672   61659 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0918 21:04:29.768691   61659 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-828868 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-828868 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.109 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0918 21:04:29.768822   61659 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0918 21:04:29.768867   61659 ssh_runner.go:195] Run: sudo crictl images --output json
	I0918 21:04:29.803885   61659 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0918 21:04:29.803964   61659 ssh_runner.go:195] Run: which lz4
	I0918 21:04:29.808051   61659 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0918 21:04:29.812324   61659 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0918 21:04:29.812363   61659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0918 21:04:31.172721   61659 crio.go:462] duration metric: took 1.364736071s to copy over tarball
	I0918 21:04:31.172837   61659 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0918 21:04:29.637411   61740 main.go:141] libmachine: (embed-certs-255556) Waiting to get IP...
	I0918 21:04:29.638427   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:29.638877   61740 main.go:141] libmachine: (embed-certs-255556) DBG | unable to find current IP address of domain embed-certs-255556 in network mk-embed-certs-255556
	I0918 21:04:29.638973   61740 main.go:141] libmachine: (embed-certs-255556) DBG | I0918 21:04:29.638868   62857 retry.go:31] will retry after 298.087525ms: waiting for machine to come up
	I0918 21:04:29.938543   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:29.938923   61740 main.go:141] libmachine: (embed-certs-255556) DBG | unable to find current IP address of domain embed-certs-255556 in network mk-embed-certs-255556
	I0918 21:04:29.938946   61740 main.go:141] libmachine: (embed-certs-255556) DBG | I0918 21:04:29.938889   62857 retry.go:31] will retry after 362.887862ms: waiting for machine to come up
	I0918 21:04:30.303379   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:30.303867   61740 main.go:141] libmachine: (embed-certs-255556) DBG | unable to find current IP address of domain embed-certs-255556 in network mk-embed-certs-255556
	I0918 21:04:30.303898   61740 main.go:141] libmachine: (embed-certs-255556) DBG | I0918 21:04:30.303820   62857 retry.go:31] will retry after 452.771021ms: waiting for machine to come up
	I0918 21:04:30.758353   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:30.758897   61740 main.go:141] libmachine: (embed-certs-255556) DBG | unable to find current IP address of domain embed-certs-255556 in network mk-embed-certs-255556
	I0918 21:04:30.758928   61740 main.go:141] libmachine: (embed-certs-255556) DBG | I0918 21:04:30.758856   62857 retry.go:31] will retry after 506.010985ms: waiting for machine to come up
	I0918 21:04:31.266443   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:31.266934   61740 main.go:141] libmachine: (embed-certs-255556) DBG | unable to find current IP address of domain embed-certs-255556 in network mk-embed-certs-255556
	I0918 21:04:31.266964   61740 main.go:141] libmachine: (embed-certs-255556) DBG | I0918 21:04:31.266893   62857 retry.go:31] will retry after 584.679329ms: waiting for machine to come up
	I0918 21:04:31.853811   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:31.854371   61740 main.go:141] libmachine: (embed-certs-255556) DBG | unable to find current IP address of domain embed-certs-255556 in network mk-embed-certs-255556
	I0918 21:04:31.854402   61740 main.go:141] libmachine: (embed-certs-255556) DBG | I0918 21:04:31.854309   62857 retry.go:31] will retry after 786.010743ms: waiting for machine to come up
	I0918 21:04:32.642494   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:32.643068   61740 main.go:141] libmachine: (embed-certs-255556) DBG | unable to find current IP address of domain embed-certs-255556 in network mk-embed-certs-255556
	I0918 21:04:32.643100   61740 main.go:141] libmachine: (embed-certs-255556) DBG | I0918 21:04:32.643013   62857 retry.go:31] will retry after 1.010762944s: waiting for machine to come up
	I0918 21:04:33.299563   61659 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.126697598s)
	I0918 21:04:33.299596   61659 crio.go:469] duration metric: took 2.126840983s to extract the tarball
	I0918 21:04:33.299602   61659 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0918 21:04:33.336428   61659 ssh_runner.go:195] Run: sudo crictl images --output json
	I0918 21:04:33.377303   61659 crio.go:514] all images are preloaded for cri-o runtime.
	I0918 21:04:33.377342   61659 cache_images.go:84] Images are preloaded, skipping loading
	I0918 21:04:33.377352   61659 kubeadm.go:934] updating node { 192.168.50.109 8444 v1.31.1 crio true true} ...
	I0918 21:04:33.377490   61659 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-828868 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.109
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-828868 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0918 21:04:33.377574   61659 ssh_runner.go:195] Run: crio config
	I0918 21:04:33.423773   61659 cni.go:84] Creating CNI manager for ""
	I0918 21:04:33.423800   61659 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0918 21:04:33.423816   61659 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0918 21:04:33.423835   61659 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.109 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-828868 NodeName:default-k8s-diff-port-828868 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.109"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.109 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0918 21:04:33.423976   61659 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.109
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-828868"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.109
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.109"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0918 21:04:33.424058   61659 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0918 21:04:33.434047   61659 binaries.go:44] Found k8s binaries, skipping transfer
	I0918 21:04:33.434119   61659 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0918 21:04:33.443535   61659 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0918 21:04:33.460116   61659 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0918 21:04:33.475883   61659 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0918 21:04:33.492311   61659 ssh_runner.go:195] Run: grep 192.168.50.109	control-plane.minikube.internal$ /etc/hosts
	I0918 21:04:33.495940   61659 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.109	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0918 21:04:33.507411   61659 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 21:04:33.625104   61659 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0918 21:04:33.641530   61659 certs.go:68] Setting up /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/default-k8s-diff-port-828868 for IP: 192.168.50.109
	I0918 21:04:33.641556   61659 certs.go:194] generating shared ca certs ...
	I0918 21:04:33.641572   61659 certs.go:226] acquiring lock for ca certs: {Name:mkf54e0e71ed884c4372f9d3d4cb308ea4600185 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 21:04:33.641757   61659 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.key
	I0918 21:04:33.641804   61659 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.key
	I0918 21:04:33.641822   61659 certs.go:256] generating profile certs ...
	I0918 21:04:33.641944   61659 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/default-k8s-diff-port-828868/client.key
	I0918 21:04:33.642036   61659 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/default-k8s-diff-port-828868/apiserver.key.df92be3a
	I0918 21:04:33.642087   61659 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/default-k8s-diff-port-828868/proxy-client.key
	I0918 21:04:33.642255   61659 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878.pem (1338 bytes)
	W0918 21:04:33.642297   61659 certs.go:480] ignoring /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878_empty.pem, impossibly tiny 0 bytes
	I0918 21:04:33.642306   61659 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem (1675 bytes)
	I0918 21:04:33.642337   61659 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem (1082 bytes)
	I0918 21:04:33.642370   61659 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem (1123 bytes)
	I0918 21:04:33.642404   61659 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem (1679 bytes)
	I0918 21:04:33.642454   61659 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem (1708 bytes)
	I0918 21:04:33.643116   61659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0918 21:04:33.682428   61659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0918 21:04:33.710444   61659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0918 21:04:33.759078   61659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0918 21:04:33.797727   61659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/default-k8s-diff-port-828868/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0918 21:04:33.821989   61659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/default-k8s-diff-port-828868/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0918 21:04:33.844210   61659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/default-k8s-diff-port-828868/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0918 21:04:33.866843   61659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/default-k8s-diff-port-828868/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0918 21:04:33.896125   61659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem --> /usr/share/ca-certificates/148782.pem (1708 bytes)
	I0918 21:04:33.918667   61659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0918 21:04:33.940790   61659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878.pem --> /usr/share/ca-certificates/14878.pem (1338 bytes)
	I0918 21:04:33.963660   61659 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0918 21:04:33.980348   61659 ssh_runner.go:195] Run: openssl version
	I0918 21:04:33.985856   61659 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148782.pem && ln -fs /usr/share/ca-certificates/148782.pem /etc/ssl/certs/148782.pem"
	I0918 21:04:33.996472   61659 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148782.pem
	I0918 21:04:34.000732   61659 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 18 19:57 /usr/share/ca-certificates/148782.pem
	I0918 21:04:34.000788   61659 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148782.pem
	I0918 21:04:34.006282   61659 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148782.pem /etc/ssl/certs/3ec20f2e.0"
	I0918 21:04:34.016612   61659 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0918 21:04:34.026689   61659 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0918 21:04:34.030650   61659 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 18 19:39 /usr/share/ca-certificates/minikubeCA.pem
	I0918 21:04:34.030705   61659 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0918 21:04:34.035940   61659 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0918 21:04:34.046516   61659 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14878.pem && ln -fs /usr/share/ca-certificates/14878.pem /etc/ssl/certs/14878.pem"
	I0918 21:04:34.056755   61659 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14878.pem
	I0918 21:04:34.061189   61659 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 18 19:57 /usr/share/ca-certificates/14878.pem
	I0918 21:04:34.061264   61659 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14878.pem
	I0918 21:04:34.066973   61659 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14878.pem /etc/ssl/certs/51391683.0"
	I0918 21:04:34.078781   61659 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0918 21:04:34.083129   61659 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0918 21:04:34.089249   61659 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0918 21:04:34.095211   61659 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0918 21:04:34.101350   61659 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0918 21:04:34.107269   61659 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0918 21:04:34.113177   61659 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0918 21:04:34.119005   61659 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-828868 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-828868 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.109 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 21:04:34.119093   61659 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0918 21:04:34.119147   61659 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0918 21:04:34.162792   61659 cri.go:89] found id: ""
	I0918 21:04:34.162895   61659 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0918 21:04:34.174325   61659 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0918 21:04:34.174358   61659 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0918 21:04:34.174420   61659 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0918 21:04:34.183708   61659 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0918 21:04:34.184680   61659 kubeconfig.go:125] found "default-k8s-diff-port-828868" server: "https://192.168.50.109:8444"
	I0918 21:04:34.186781   61659 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0918 21:04:34.195823   61659 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.109
	I0918 21:04:34.195856   61659 kubeadm.go:1160] stopping kube-system containers ...
	I0918 21:04:34.195866   61659 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0918 21:04:34.195907   61659 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0918 21:04:34.235799   61659 cri.go:89] found id: ""
	I0918 21:04:34.235882   61659 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0918 21:04:34.251412   61659 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0918 21:04:34.261361   61659 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0918 21:04:34.261390   61659 kubeadm.go:157] found existing configuration files:
	
	I0918 21:04:34.261435   61659 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0918 21:04:34.272201   61659 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0918 21:04:34.272272   61659 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0918 21:04:34.283030   61659 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0918 21:04:34.293227   61659 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0918 21:04:34.293321   61659 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0918 21:04:34.303749   61659 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0918 21:04:34.314027   61659 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0918 21:04:34.314116   61659 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0918 21:04:34.324585   61659 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0918 21:04:34.334524   61659 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0918 21:04:34.334594   61659 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0918 21:04:34.344923   61659 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0918 21:04:34.355422   61659 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:04:34.480395   61659 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:04:35.320827   61659 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:04:35.542013   61659 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:04:35.610886   61659 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:04:35.694501   61659 api_server.go:52] waiting for apiserver process to appear ...
	I0918 21:04:35.694610   61659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:04:36.195441   61659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:04:36.694978   61659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:04:37.195220   61659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:04:33.655864   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:33.656375   61740 main.go:141] libmachine: (embed-certs-255556) DBG | unable to find current IP address of domain embed-certs-255556 in network mk-embed-certs-255556
	I0918 21:04:33.656407   61740 main.go:141] libmachine: (embed-certs-255556) DBG | I0918 21:04:33.656347   62857 retry.go:31] will retry after 1.375317123s: waiting for machine to come up
	I0918 21:04:35.033882   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:35.034266   61740 main.go:141] libmachine: (embed-certs-255556) DBG | unable to find current IP address of domain embed-certs-255556 in network mk-embed-certs-255556
	I0918 21:04:35.034293   61740 main.go:141] libmachine: (embed-certs-255556) DBG | I0918 21:04:35.034232   62857 retry.go:31] will retry after 1.142237895s: waiting for machine to come up
	I0918 21:04:36.178371   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:36.178837   61740 main.go:141] libmachine: (embed-certs-255556) DBG | unable to find current IP address of domain embed-certs-255556 in network mk-embed-certs-255556
	I0918 21:04:36.178865   61740 main.go:141] libmachine: (embed-certs-255556) DBG | I0918 21:04:36.178804   62857 retry.go:31] will retry after 1.983853904s: waiting for machine to come up
	I0918 21:04:38.165113   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:38.165662   61740 main.go:141] libmachine: (embed-certs-255556) DBG | unable to find current IP address of domain embed-certs-255556 in network mk-embed-certs-255556
	I0918 21:04:38.165697   61740 main.go:141] libmachine: (embed-certs-255556) DBG | I0918 21:04:38.165601   62857 retry.go:31] will retry after 2.407286782s: waiting for machine to come up
	I0918 21:04:37.694916   61659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:04:37.713724   61659 api_server.go:72] duration metric: took 2.019221095s to wait for apiserver process to appear ...
	I0918 21:04:37.713756   61659 api_server.go:88] waiting for apiserver healthz status ...
	I0918 21:04:37.713782   61659 api_server.go:253] Checking apiserver healthz at https://192.168.50.109:8444/healthz ...
	I0918 21:04:37.714297   61659 api_server.go:269] stopped: https://192.168.50.109:8444/healthz: Get "https://192.168.50.109:8444/healthz": dial tcp 192.168.50.109:8444: connect: connection refused
	I0918 21:04:38.213883   61659 api_server.go:253] Checking apiserver healthz at https://192.168.50.109:8444/healthz ...
	I0918 21:04:40.396513   61659 api_server.go:279] https://192.168.50.109:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0918 21:04:40.396564   61659 api_server.go:103] status: https://192.168.50.109:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0918 21:04:40.396584   61659 api_server.go:253] Checking apiserver healthz at https://192.168.50.109:8444/healthz ...
	I0918 21:04:40.409718   61659 api_server.go:279] https://192.168.50.109:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0918 21:04:40.409750   61659 api_server.go:103] status: https://192.168.50.109:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0918 21:04:40.714176   61659 api_server.go:253] Checking apiserver healthz at https://192.168.50.109:8444/healthz ...
	I0918 21:04:40.719353   61659 api_server.go:279] https://192.168.50.109:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0918 21:04:40.719391   61659 api_server.go:103] status: https://192.168.50.109:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0918 21:04:41.214596   61659 api_server.go:253] Checking apiserver healthz at https://192.168.50.109:8444/healthz ...
	I0918 21:04:41.219579   61659 api_server.go:279] https://192.168.50.109:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0918 21:04:41.219608   61659 api_server.go:103] status: https://192.168.50.109:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0918 21:04:41.713951   61659 api_server.go:253] Checking apiserver healthz at https://192.168.50.109:8444/healthz ...
	I0918 21:04:41.719212   61659 api_server.go:279] https://192.168.50.109:8444/healthz returned 200:
	ok
	I0918 21:04:41.726647   61659 api_server.go:141] control plane version: v1.31.1
	I0918 21:04:41.726679   61659 api_server.go:131] duration metric: took 4.012914861s to wait for apiserver health ...
	I0918 21:04:41.726689   61659 cni.go:84] Creating CNI manager for ""
	I0918 21:04:41.726707   61659 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0918 21:04:41.728312   61659 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0918 21:04:41.729613   61659 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0918 21:04:41.741932   61659 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0918 21:04:41.763195   61659 system_pods.go:43] waiting for kube-system pods to appear ...
	I0918 21:04:41.775167   61659 system_pods.go:59] 8 kube-system pods found
	I0918 21:04:41.775210   61659 system_pods.go:61] "coredns-7c65d6cfc9-xzjd7" [bd8252df-707c-41e6-84b7-cc74480177a8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0918 21:04:41.775219   61659 system_pods.go:61] "etcd-default-k8s-diff-port-828868" [aa8e221d-abba-48a5-8814-246df0776408] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0918 21:04:41.775227   61659 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-828868" [b44966ac-3478-40c4-b67f-1824bff2bec7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0918 21:04:41.775233   61659 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-828868" [7af8fbad-3aa2-497e-90df-33facaee6b17] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0918 21:04:41.775239   61659 system_pods.go:61] "kube-proxy-jz7ls" [f931ae9a-0b9c-4754-8b7b-d52c267b018c] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0918 21:04:41.775247   61659 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-828868" [ee89c713-c689-4de3-b1a5-4e08470ff6fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0918 21:04:41.775252   61659 system_pods.go:61] "metrics-server-6867b74b74-cqp47" [1ccf8c85-183a-4bea-abbc-eb7bcedca7f0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0918 21:04:41.775257   61659 system_pods.go:61] "storage-provisioner" [9744cbfa-6b9a-42f0-aa80-0821b87a33d4] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0918 21:04:41.775270   61659 system_pods.go:74] duration metric: took 12.058758ms to wait for pod list to return data ...
	I0918 21:04:41.775280   61659 node_conditions.go:102] verifying NodePressure condition ...
	I0918 21:04:41.779525   61659 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0918 21:04:41.779559   61659 node_conditions.go:123] node cpu capacity is 2
	I0918 21:04:41.779580   61659 node_conditions.go:105] duration metric: took 4.292138ms to run NodePressure ...
	I0918 21:04:41.779615   61659 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:04:42.079279   61659 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0918 21:04:42.084311   61659 kubeadm.go:739] kubelet initialised
	I0918 21:04:42.084338   61659 kubeadm.go:740] duration metric: took 5.024999ms waiting for restarted kubelet to initialise ...
	I0918 21:04:42.084351   61659 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0918 21:04:42.089113   61659 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-xzjd7" in "kube-system" namespace to be "Ready" ...
	I0918 21:04:42.095539   61659 pod_ready.go:98] node "default-k8s-diff-port-828868" hosting pod "coredns-7c65d6cfc9-xzjd7" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-828868" has status "Ready":"False"
	I0918 21:04:42.095565   61659 pod_ready.go:82] duration metric: took 6.405251ms for pod "coredns-7c65d6cfc9-xzjd7" in "kube-system" namespace to be "Ready" ...
	E0918 21:04:42.095575   61659 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-828868" hosting pod "coredns-7c65d6cfc9-xzjd7" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-828868" has status "Ready":"False"
	I0918 21:04:42.095581   61659 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-828868" in "kube-system" namespace to be "Ready" ...
	I0918 21:04:42.100447   61659 pod_ready.go:98] node "default-k8s-diff-port-828868" hosting pod "etcd-default-k8s-diff-port-828868" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-828868" has status "Ready":"False"
	I0918 21:04:42.100469   61659 pod_ready.go:82] duration metric: took 4.879955ms for pod "etcd-default-k8s-diff-port-828868" in "kube-system" namespace to be "Ready" ...
	E0918 21:04:42.100480   61659 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-828868" hosting pod "etcd-default-k8s-diff-port-828868" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-828868" has status "Ready":"False"
	I0918 21:04:42.100485   61659 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-828868" in "kube-system" namespace to be "Ready" ...
	I0918 21:04:42.104889   61659 pod_ready.go:98] node "default-k8s-diff-port-828868" hosting pod "kube-apiserver-default-k8s-diff-port-828868" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-828868" has status "Ready":"False"
	I0918 21:04:42.104914   61659 pod_ready.go:82] duration metric: took 4.421708ms for pod "kube-apiserver-default-k8s-diff-port-828868" in "kube-system" namespace to be "Ready" ...
	E0918 21:04:42.104926   61659 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-828868" hosting pod "kube-apiserver-default-k8s-diff-port-828868" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-828868" has status "Ready":"False"
	I0918 21:04:42.104934   61659 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-828868" in "kube-system" namespace to be "Ready" ...
	I0918 21:04:40.574813   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:40.575265   61740 main.go:141] libmachine: (embed-certs-255556) DBG | unable to find current IP address of domain embed-certs-255556 in network mk-embed-certs-255556
	I0918 21:04:40.575295   61740 main.go:141] libmachine: (embed-certs-255556) DBG | I0918 21:04:40.575215   62857 retry.go:31] will retry after 2.249084169s: waiting for machine to come up
	I0918 21:04:42.827547   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:42.827966   61740 main.go:141] libmachine: (embed-certs-255556) DBG | unable to find current IP address of domain embed-certs-255556 in network mk-embed-certs-255556
	I0918 21:04:42.828028   61740 main.go:141] libmachine: (embed-certs-255556) DBG | I0918 21:04:42.827923   62857 retry.go:31] will retry after 4.512161859s: waiting for machine to come up
	I0918 21:04:44.113739   61659 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-828868" in "kube-system" namespace has status "Ready":"False"
	I0918 21:04:46.611013   61659 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-828868" in "kube-system" namespace has status "Ready":"False"
	I0918 21:04:47.345046   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:47.345426   61740 main.go:141] libmachine: (embed-certs-255556) Found IP for machine: 192.168.39.21
	I0918 21:04:47.345444   61740 main.go:141] libmachine: (embed-certs-255556) Reserving static IP address...
	I0918 21:04:47.345457   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has current primary IP address 192.168.39.21 and MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:47.345824   61740 main.go:141] libmachine: (embed-certs-255556) DBG | found host DHCP lease matching {name: "embed-certs-255556", mac: "52:54:00:e8:c2:b7", ip: "192.168.39.21"} in network mk-embed-certs-255556: {Iface:virbr1 ExpiryTime:2024-09-18 22:04:39 +0000 UTC Type:0 Mac:52:54:00:e8:c2:b7 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:embed-certs-255556 Clientid:01:52:54:00:e8:c2:b7}
	I0918 21:04:47.345846   61740 main.go:141] libmachine: (embed-certs-255556) DBG | skip adding static IP to network mk-embed-certs-255556 - found existing host DHCP lease matching {name: "embed-certs-255556", mac: "52:54:00:e8:c2:b7", ip: "192.168.39.21"}
	I0918 21:04:47.345856   61740 main.go:141] libmachine: (embed-certs-255556) Reserved static IP address: 192.168.39.21
	I0918 21:04:47.345866   61740 main.go:141] libmachine: (embed-certs-255556) Waiting for SSH to be available...
	I0918 21:04:47.345874   61740 main.go:141] libmachine: (embed-certs-255556) DBG | Getting to WaitForSSH function...
	I0918 21:04:47.347972   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:47.348327   61740 main.go:141] libmachine: (embed-certs-255556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:c2:b7", ip: ""} in network mk-embed-certs-255556: {Iface:virbr1 ExpiryTime:2024-09-18 22:04:39 +0000 UTC Type:0 Mac:52:54:00:e8:c2:b7 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:embed-certs-255556 Clientid:01:52:54:00:e8:c2:b7}
	I0918 21:04:47.348367   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined IP address 192.168.39.21 and MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:47.348437   61740 main.go:141] libmachine: (embed-certs-255556) DBG | Using SSH client type: external
	I0918 21:04:47.348469   61740 main.go:141] libmachine: (embed-certs-255556) DBG | Using SSH private key: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/embed-certs-255556/id_rsa (-rw-------)
	I0918 21:04:47.348511   61740 main.go:141] libmachine: (embed-certs-255556) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.21 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19667-7671/.minikube/machines/embed-certs-255556/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0918 21:04:47.348526   61740 main.go:141] libmachine: (embed-certs-255556) DBG | About to run SSH command:
	I0918 21:04:47.348554   61740 main.go:141] libmachine: (embed-certs-255556) DBG | exit 0
	I0918 21:04:47.476457   61740 main.go:141] libmachine: (embed-certs-255556) DBG | SSH cmd err, output: <nil>: 
	I0918 21:04:47.476858   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetConfigRaw
	I0918 21:04:47.477533   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetIP
	I0918 21:04:47.480221   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:47.480601   61740 main.go:141] libmachine: (embed-certs-255556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:c2:b7", ip: ""} in network mk-embed-certs-255556: {Iface:virbr1 ExpiryTime:2024-09-18 22:04:39 +0000 UTC Type:0 Mac:52:54:00:e8:c2:b7 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:embed-certs-255556 Clientid:01:52:54:00:e8:c2:b7}
	I0918 21:04:47.480644   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined IP address 192.168.39.21 and MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:47.480966   61740 profile.go:143] Saving config to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/embed-certs-255556/config.json ...
	I0918 21:04:47.481172   61740 machine.go:93] provisionDockerMachine start ...
	I0918 21:04:47.481189   61740 main.go:141] libmachine: (embed-certs-255556) Calling .DriverName
	I0918 21:04:47.481440   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHHostname
	I0918 21:04:47.483916   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:47.484299   61740 main.go:141] libmachine: (embed-certs-255556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:c2:b7", ip: ""} in network mk-embed-certs-255556: {Iface:virbr1 ExpiryTime:2024-09-18 22:04:39 +0000 UTC Type:0 Mac:52:54:00:e8:c2:b7 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:embed-certs-255556 Clientid:01:52:54:00:e8:c2:b7}
	I0918 21:04:47.484328   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined IP address 192.168.39.21 and MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:47.484467   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHPort
	I0918 21:04:47.484703   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHKeyPath
	I0918 21:04:47.484898   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHKeyPath
	I0918 21:04:47.485043   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHUsername
	I0918 21:04:47.485185   61740 main.go:141] libmachine: Using SSH client type: native
	I0918 21:04:47.485386   61740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.21 22 <nil> <nil>}
	I0918 21:04:47.485399   61740 main.go:141] libmachine: About to run SSH command:
	hostname
	I0918 21:04:47.596243   61740 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0918 21:04:47.596272   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetMachineName
	I0918 21:04:47.596531   61740 buildroot.go:166] provisioning hostname "embed-certs-255556"
	I0918 21:04:47.596560   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetMachineName
	I0918 21:04:47.596775   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHHostname
	I0918 21:04:47.599159   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:47.599508   61740 main.go:141] libmachine: (embed-certs-255556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:c2:b7", ip: ""} in network mk-embed-certs-255556: {Iface:virbr1 ExpiryTime:2024-09-18 22:04:39 +0000 UTC Type:0 Mac:52:54:00:e8:c2:b7 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:embed-certs-255556 Clientid:01:52:54:00:e8:c2:b7}
	I0918 21:04:47.599532   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined IP address 192.168.39.21 and MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:47.599706   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHPort
	I0918 21:04:47.599888   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHKeyPath
	I0918 21:04:47.600090   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHKeyPath
	I0918 21:04:47.600229   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHUsername
	I0918 21:04:47.600406   61740 main.go:141] libmachine: Using SSH client type: native
	I0918 21:04:47.600589   61740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.21 22 <nil> <nil>}
	I0918 21:04:47.600602   61740 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-255556 && echo "embed-certs-255556" | sudo tee /etc/hostname
	I0918 21:04:47.726173   61740 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-255556
	
	I0918 21:04:47.726213   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHHostname
	I0918 21:04:47.729209   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:47.729575   61740 main.go:141] libmachine: (embed-certs-255556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:c2:b7", ip: ""} in network mk-embed-certs-255556: {Iface:virbr1 ExpiryTime:2024-09-18 22:04:39 +0000 UTC Type:0 Mac:52:54:00:e8:c2:b7 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:embed-certs-255556 Clientid:01:52:54:00:e8:c2:b7}
	I0918 21:04:47.729609   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined IP address 192.168.39.21 and MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:47.729775   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHPort
	I0918 21:04:47.729952   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHKeyPath
	I0918 21:04:47.730212   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHKeyPath
	I0918 21:04:47.730386   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHUsername
	I0918 21:04:47.730583   61740 main.go:141] libmachine: Using SSH client type: native
	I0918 21:04:47.730755   61740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.21 22 <nil> <nil>}
	I0918 21:04:47.730771   61740 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-255556' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-255556/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-255556' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0918 21:04:47.849894   61740 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0918 21:04:47.849928   61740 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19667-7671/.minikube CaCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19667-7671/.minikube}
	I0918 21:04:47.849954   61740 buildroot.go:174] setting up certificates
	I0918 21:04:47.849961   61740 provision.go:84] configureAuth start
	I0918 21:04:47.849971   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetMachineName
	I0918 21:04:47.850307   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetIP
	I0918 21:04:47.852989   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:47.853389   61740 main.go:141] libmachine: (embed-certs-255556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:c2:b7", ip: ""} in network mk-embed-certs-255556: {Iface:virbr1 ExpiryTime:2024-09-18 22:04:39 +0000 UTC Type:0 Mac:52:54:00:e8:c2:b7 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:embed-certs-255556 Clientid:01:52:54:00:e8:c2:b7}
	I0918 21:04:47.853423   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined IP address 192.168.39.21 and MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:47.853555   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHHostname
	I0918 21:04:47.856032   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:47.856389   61740 main.go:141] libmachine: (embed-certs-255556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:c2:b7", ip: ""} in network mk-embed-certs-255556: {Iface:virbr1 ExpiryTime:2024-09-18 22:04:39 +0000 UTC Type:0 Mac:52:54:00:e8:c2:b7 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:embed-certs-255556 Clientid:01:52:54:00:e8:c2:b7}
	I0918 21:04:47.856410   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined IP address 192.168.39.21 and MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:47.856556   61740 provision.go:143] copyHostCerts
	I0918 21:04:47.856617   61740 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem, removing ...
	I0918 21:04:47.856627   61740 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem
	I0918 21:04:47.856686   61740 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem (1082 bytes)
	I0918 21:04:47.856778   61740 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem, removing ...
	I0918 21:04:47.856786   61740 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem
	I0918 21:04:47.856805   61740 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem (1123 bytes)
	I0918 21:04:47.856855   61740 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem, removing ...
	I0918 21:04:47.856862   61740 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem
	I0918 21:04:47.856881   61740 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem (1679 bytes)
	I0918 21:04:47.856929   61740 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem org=jenkins.embed-certs-255556 san=[127.0.0.1 192.168.39.21 embed-certs-255556 localhost minikube]
	I0918 21:04:48.145689   61740 provision.go:177] copyRemoteCerts
	I0918 21:04:48.145750   61740 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0918 21:04:48.145779   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHHostname
	I0918 21:04:48.148420   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:48.148785   61740 main.go:141] libmachine: (embed-certs-255556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:c2:b7", ip: ""} in network mk-embed-certs-255556: {Iface:virbr1 ExpiryTime:2024-09-18 22:04:39 +0000 UTC Type:0 Mac:52:54:00:e8:c2:b7 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:embed-certs-255556 Clientid:01:52:54:00:e8:c2:b7}
	I0918 21:04:48.148812   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined IP address 192.168.39.21 and MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:48.148983   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHPort
	I0918 21:04:48.149194   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHKeyPath
	I0918 21:04:48.149371   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHUsername
	I0918 21:04:48.149486   61740 sshutil.go:53] new ssh client: &{IP:192.168.39.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/embed-certs-255556/id_rsa Username:docker}
	I0918 21:04:48.234451   61740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0918 21:04:48.260660   61740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0918 21:04:48.283305   61740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0918 21:04:48.305919   61740 provision.go:87] duration metric: took 455.946794ms to configureAuth
	I0918 21:04:48.305954   61740 buildroot.go:189] setting minikube options for container-runtime
	I0918 21:04:48.306183   61740 config.go:182] Loaded profile config "embed-certs-255556": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0918 21:04:48.306284   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHHostname
	I0918 21:04:48.308853   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:48.309319   61740 main.go:141] libmachine: (embed-certs-255556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:c2:b7", ip: ""} in network mk-embed-certs-255556: {Iface:virbr1 ExpiryTime:2024-09-18 22:04:39 +0000 UTC Type:0 Mac:52:54:00:e8:c2:b7 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:embed-certs-255556 Clientid:01:52:54:00:e8:c2:b7}
	I0918 21:04:48.309359   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined IP address 192.168.39.21 and MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:48.309488   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHPort
	I0918 21:04:48.309706   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHKeyPath
	I0918 21:04:48.309860   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHKeyPath
	I0918 21:04:48.309976   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHUsername
	I0918 21:04:48.310134   61740 main.go:141] libmachine: Using SSH client type: native
	I0918 21:04:48.310349   61740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.21 22 <nil> <nil>}
	I0918 21:04:48.310372   61740 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0918 21:04:48.782438   62061 start.go:364] duration metric: took 3m49.184727821s to acquireMachinesLock for "old-k8s-version-740194"
	I0918 21:04:48.782503   62061 start.go:96] Skipping create...Using existing machine configuration
	I0918 21:04:48.782514   62061 fix.go:54] fixHost starting: 
	I0918 21:04:48.782993   62061 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:04:48.783053   62061 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:04:48.802299   62061 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44463
	I0918 21:04:48.802787   62061 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:04:48.803286   62061 main.go:141] libmachine: Using API Version  1
	I0918 21:04:48.803313   62061 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:04:48.803681   62061 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:04:48.803873   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .DriverName
	I0918 21:04:48.804007   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetState
	I0918 21:04:48.805714   62061 fix.go:112] recreateIfNeeded on old-k8s-version-740194: state=Stopped err=<nil>
	I0918 21:04:48.805744   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .DriverName
	W0918 21:04:48.805910   62061 fix.go:138] unexpected machine state, will restart: <nil>
	I0918 21:04:48.835402   62061 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-740194" ...
	I0918 21:04:48.836753   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .Start
	I0918 21:04:48.837090   62061 main.go:141] libmachine: (old-k8s-version-740194) Ensuring networks are active...
	I0918 21:04:48.838014   62061 main.go:141] libmachine: (old-k8s-version-740194) Ensuring network default is active
	I0918 21:04:48.838375   62061 main.go:141] libmachine: (old-k8s-version-740194) Ensuring network mk-old-k8s-version-740194 is active
	I0918 21:04:48.839035   62061 main.go:141] libmachine: (old-k8s-version-740194) Getting domain xml...
	I0918 21:04:48.839832   62061 main.go:141] libmachine: (old-k8s-version-740194) Creating domain...
	I0918 21:04:48.532928   61740 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0918 21:04:48.532952   61740 machine.go:96] duration metric: took 1.051769616s to provisionDockerMachine
	I0918 21:04:48.532962   61740 start.go:293] postStartSetup for "embed-certs-255556" (driver="kvm2")
	I0918 21:04:48.532973   61740 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0918 21:04:48.532991   61740 main.go:141] libmachine: (embed-certs-255556) Calling .DriverName
	I0918 21:04:48.533310   61740 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0918 21:04:48.533342   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHHostname
	I0918 21:04:48.536039   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:48.536529   61740 main.go:141] libmachine: (embed-certs-255556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:c2:b7", ip: ""} in network mk-embed-certs-255556: {Iface:virbr1 ExpiryTime:2024-09-18 22:04:39 +0000 UTC Type:0 Mac:52:54:00:e8:c2:b7 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:embed-certs-255556 Clientid:01:52:54:00:e8:c2:b7}
	I0918 21:04:48.536558   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined IP address 192.168.39.21 and MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:48.536631   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHPort
	I0918 21:04:48.536806   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHKeyPath
	I0918 21:04:48.536971   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHUsername
	I0918 21:04:48.537148   61740 sshutil.go:53] new ssh client: &{IP:192.168.39.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/embed-certs-255556/id_rsa Username:docker}
	I0918 21:04:48.623154   61740 ssh_runner.go:195] Run: cat /etc/os-release
	I0918 21:04:48.627520   61740 info.go:137] Remote host: Buildroot 2023.02.9
	I0918 21:04:48.627544   61740 filesync.go:126] Scanning /home/jenkins/minikube-integration/19667-7671/.minikube/addons for local assets ...
	I0918 21:04:48.627617   61740 filesync.go:126] Scanning /home/jenkins/minikube-integration/19667-7671/.minikube/files for local assets ...
	I0918 21:04:48.627711   61740 filesync.go:149] local asset: /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem -> 148782.pem in /etc/ssl/certs
	I0918 21:04:48.627827   61740 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0918 21:04:48.637145   61740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem --> /etc/ssl/certs/148782.pem (1708 bytes)
	I0918 21:04:48.661971   61740 start.go:296] duration metric: took 128.997987ms for postStartSetup
	I0918 21:04:48.662012   61740 fix.go:56] duration metric: took 20.352947161s for fixHost
	I0918 21:04:48.662034   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHHostname
	I0918 21:04:48.665153   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:48.665637   61740 main.go:141] libmachine: (embed-certs-255556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:c2:b7", ip: ""} in network mk-embed-certs-255556: {Iface:virbr1 ExpiryTime:2024-09-18 22:04:39 +0000 UTC Type:0 Mac:52:54:00:e8:c2:b7 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:embed-certs-255556 Clientid:01:52:54:00:e8:c2:b7}
	I0918 21:04:48.665668   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined IP address 192.168.39.21 and MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:48.665853   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHPort
	I0918 21:04:48.666090   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHKeyPath
	I0918 21:04:48.666289   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHKeyPath
	I0918 21:04:48.666607   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHUsername
	I0918 21:04:48.666784   61740 main.go:141] libmachine: Using SSH client type: native
	I0918 21:04:48.667024   61740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.21 22 <nil> <nil>}
	I0918 21:04:48.667040   61740 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0918 21:04:48.782245   61740 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726693488.758182538
	
	I0918 21:04:48.782286   61740 fix.go:216] guest clock: 1726693488.758182538
	I0918 21:04:48.782297   61740 fix.go:229] Guest: 2024-09-18 21:04:48.758182538 +0000 UTC Remote: 2024-09-18 21:04:48.662016609 +0000 UTC m=+270.354724953 (delta=96.165929ms)
	I0918 21:04:48.782322   61740 fix.go:200] guest clock delta is within tolerance: 96.165929ms
	I0918 21:04:48.782329   61740 start.go:83] releasing machines lock for "embed-certs-255556", held for 20.47331123s
	I0918 21:04:48.782358   61740 main.go:141] libmachine: (embed-certs-255556) Calling .DriverName
	I0918 21:04:48.782655   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetIP
	I0918 21:04:48.785572   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:48.785959   61740 main.go:141] libmachine: (embed-certs-255556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:c2:b7", ip: ""} in network mk-embed-certs-255556: {Iface:virbr1 ExpiryTime:2024-09-18 22:04:39 +0000 UTC Type:0 Mac:52:54:00:e8:c2:b7 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:embed-certs-255556 Clientid:01:52:54:00:e8:c2:b7}
	I0918 21:04:48.785988   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined IP address 192.168.39.21 and MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:48.786181   61740 main.go:141] libmachine: (embed-certs-255556) Calling .DriverName
	I0918 21:04:48.786653   61740 main.go:141] libmachine: (embed-certs-255556) Calling .DriverName
	I0918 21:04:48.786859   61740 main.go:141] libmachine: (embed-certs-255556) Calling .DriverName
	I0918 21:04:48.787019   61740 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0918 21:04:48.787083   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHHostname
	I0918 21:04:48.787118   61740 ssh_runner.go:195] Run: cat /version.json
	I0918 21:04:48.787142   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHHostname
	I0918 21:04:48.789834   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:48.790239   61740 main.go:141] libmachine: (embed-certs-255556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:c2:b7", ip: ""} in network mk-embed-certs-255556: {Iface:virbr1 ExpiryTime:2024-09-18 22:04:39 +0000 UTC Type:0 Mac:52:54:00:e8:c2:b7 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:embed-certs-255556 Clientid:01:52:54:00:e8:c2:b7}
	I0918 21:04:48.790290   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined IP address 192.168.39.21 and MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:48.790326   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:48.790625   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHPort
	I0918 21:04:48.790805   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHKeyPath
	I0918 21:04:48.790828   61740 main.go:141] libmachine: (embed-certs-255556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:c2:b7", ip: ""} in network mk-embed-certs-255556: {Iface:virbr1 ExpiryTime:2024-09-18 22:04:39 +0000 UTC Type:0 Mac:52:54:00:e8:c2:b7 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:embed-certs-255556 Clientid:01:52:54:00:e8:c2:b7}
	I0918 21:04:48.790860   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined IP address 192.168.39.21 and MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:48.791012   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHUsername
	I0918 21:04:48.791035   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHPort
	I0918 21:04:48.791172   61740 sshutil.go:53] new ssh client: &{IP:192.168.39.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/embed-certs-255556/id_rsa Username:docker}
	I0918 21:04:48.791251   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHKeyPath
	I0918 21:04:48.791406   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHUsername
	I0918 21:04:48.791537   61740 sshutil.go:53] new ssh client: &{IP:192.168.39.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/embed-certs-255556/id_rsa Username:docker}
	I0918 21:04:48.911282   61740 ssh_runner.go:195] Run: systemctl --version
	I0918 21:04:48.917459   61740 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0918 21:04:49.062272   61740 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0918 21:04:49.068629   61740 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0918 21:04:49.068709   61740 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0918 21:04:49.085575   61740 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0918 21:04:49.085607   61740 start.go:495] detecting cgroup driver to use...
	I0918 21:04:49.085677   61740 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0918 21:04:49.102455   61740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0918 21:04:49.117869   61740 docker.go:217] disabling cri-docker service (if available) ...
	I0918 21:04:49.117958   61740 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0918 21:04:49.135361   61740 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0918 21:04:49.150861   61740 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0918 21:04:49.285901   61740 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0918 21:04:49.438312   61740 docker.go:233] disabling docker service ...
	I0918 21:04:49.438390   61740 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0918 21:04:49.454560   61740 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0918 21:04:49.471109   61740 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0918 21:04:49.631711   61740 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0918 21:04:49.760860   61740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0918 21:04:49.778574   61740 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0918 21:04:49.797293   61740 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0918 21:04:49.797365   61740 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:04:49.808796   61740 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0918 21:04:49.808872   61740 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:04:49.821451   61740 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:04:49.834678   61740 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:04:49.847521   61740 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0918 21:04:49.860918   61740 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:04:49.873942   61740 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:04:49.892983   61740 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:04:49.904925   61740 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0918 21:04:49.916195   61740 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0918 21:04:49.916310   61740 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0918 21:04:49.931084   61740 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0918 21:04:49.942692   61740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 21:04:50.065013   61740 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0918 21:04:50.168347   61740 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0918 21:04:50.168440   61740 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0918 21:04:50.174948   61740 start.go:563] Will wait 60s for crictl version
	I0918 21:04:50.175017   61740 ssh_runner.go:195] Run: which crictl
	I0918 21:04:50.180139   61740 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0918 21:04:50.221578   61740 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0918 21:04:50.221687   61740 ssh_runner.go:195] Run: crio --version
	I0918 21:04:50.251587   61740 ssh_runner.go:195] Run: crio --version
	I0918 21:04:50.282931   61740 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
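	(Editor's sketch, not part of the captured log.) The crio.go lines above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: they set the pause image, switch the cgroup driver to cgroupfs, add conmon_cgroup = "pod", and open net.ipv4.ip_unprivileged_port_start. A rough local Go equivalent of the two main sed edits might look like the following; the file path and values are taken from the log, everything else is illustrative:

    // rewrite_crio_conf.go - illustrative only; mirrors the sed edits seen in the log.
    package main

    import (
        "log"
        "os"
        "regexp"
    )

    func main() {
        const conf = "/etc/crio/crio.conf.d/02-crio.conf" // path from the log above
        data, err := os.ReadFile(conf)
        if err != nil {
            log.Fatal(err)
        }
        // Point cri-o at the pause image and the cgroupfs driver, as the sed commands do.
        data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
        data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
        // (The log additionally inserts conmon_cgroup = "pod" and the
        // net.ipv4.ip_unprivileged_port_start=0 default_sysctls entry.)
        if err := os.WriteFile(conf, data, 0o644); err != nil {
            log.Fatal(err)
        }
    }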
	I0918 21:04:48.112865   61659 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-828868" in "kube-system" namespace has status "Ready":"True"
	I0918 21:04:48.112895   61659 pod_ready.go:82] duration metric: took 6.007950768s for pod "kube-controller-manager-default-k8s-diff-port-828868" in "kube-system" namespace to be "Ready" ...
	I0918 21:04:48.112909   61659 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-jz7ls" in "kube-system" namespace to be "Ready" ...
	I0918 21:04:48.118606   61659 pod_ready.go:93] pod "kube-proxy-jz7ls" in "kube-system" namespace has status "Ready":"True"
	I0918 21:04:48.118628   61659 pod_ready.go:82] duration metric: took 5.710918ms for pod "kube-proxy-jz7ls" in "kube-system" namespace to be "Ready" ...
	I0918 21:04:48.118647   61659 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-828868" in "kube-system" namespace to be "Ready" ...
	I0918 21:04:49.626081   61659 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-828868" in "kube-system" namespace has status "Ready":"True"
	I0918 21:04:49.626116   61659 pod_ready.go:82] duration metric: took 1.507459822s for pod "kube-scheduler-default-k8s-diff-port-828868" in "kube-system" namespace to be "Ready" ...
	I0918 21:04:49.626130   61659 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace to be "Ready" ...
	I0918 21:04:51.635306   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:04:50.284258   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetIP
	I0918 21:04:50.287321   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:50.287754   61740 main.go:141] libmachine: (embed-certs-255556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:c2:b7", ip: ""} in network mk-embed-certs-255556: {Iface:virbr1 ExpiryTime:2024-09-18 22:04:39 +0000 UTC Type:0 Mac:52:54:00:e8:c2:b7 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:embed-certs-255556 Clientid:01:52:54:00:e8:c2:b7}
	I0918 21:04:50.287782   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined IP address 192.168.39.21 and MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:50.288116   61740 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0918 21:04:50.292221   61740 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0918 21:04:50.304472   61740 kubeadm.go:883] updating cluster {Name:embed-certs-255556 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.1 ClusterName:embed-certs-255556 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.21 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0918 21:04:50.304604   61740 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0918 21:04:50.304675   61740 ssh_runner.go:195] Run: sudo crictl images --output json
	I0918 21:04:50.343445   61740 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0918 21:04:50.343527   61740 ssh_runner.go:195] Run: which lz4
	I0918 21:04:50.347600   61740 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0918 21:04:50.351647   61740 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0918 21:04:50.351679   61740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0918 21:04:51.665892   61740 crio.go:462] duration metric: took 1.318339658s to copy over tarball
	I0918 21:04:51.665970   61740 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0918 21:04:50.157515   62061 main.go:141] libmachine: (old-k8s-version-740194) Waiting to get IP...
	I0918 21:04:50.158369   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:04:50.158796   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | unable to find current IP address of domain old-k8s-version-740194 in network mk-old-k8s-version-740194
	I0918 21:04:50.158931   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | I0918 21:04:50.158765   63026 retry.go:31] will retry after 214.617426ms: waiting for machine to come up
	I0918 21:04:50.375418   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:04:50.375882   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | unable to find current IP address of domain old-k8s-version-740194 in network mk-old-k8s-version-740194
	I0918 21:04:50.375910   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | I0918 21:04:50.375834   63026 retry.go:31] will retry after 311.080996ms: waiting for machine to come up
	I0918 21:04:50.688569   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:04:50.689151   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | unable to find current IP address of domain old-k8s-version-740194 in network mk-old-k8s-version-740194
	I0918 21:04:50.689182   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | I0918 21:04:50.689112   63026 retry.go:31] will retry after 386.384815ms: waiting for machine to come up
	I0918 21:04:51.076846   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:04:51.077336   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | unable to find current IP address of domain old-k8s-version-740194 in network mk-old-k8s-version-740194
	I0918 21:04:51.077359   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | I0918 21:04:51.077280   63026 retry.go:31] will retry after 556.210887ms: waiting for machine to come up
	I0918 21:04:51.634919   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:04:51.635423   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | unable to find current IP address of domain old-k8s-version-740194 in network mk-old-k8s-version-740194
	I0918 21:04:51.635462   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | I0918 21:04:51.635325   63026 retry.go:31] will retry after 607.960565ms: waiting for machine to come up
	I0918 21:04:52.245179   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:04:52.245741   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | unable to find current IP address of domain old-k8s-version-740194 in network mk-old-k8s-version-740194
	I0918 21:04:52.245774   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | I0918 21:04:52.245692   63026 retry.go:31] will retry after 620.243825ms: waiting for machine to come up
	I0918 21:04:52.867067   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:04:52.867658   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | unable to find current IP address of domain old-k8s-version-740194 in network mk-old-k8s-version-740194
	I0918 21:04:52.867680   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | I0918 21:04:52.867608   63026 retry.go:31] will retry after 1.091814923s: waiting for machine to come up
	I0918 21:04:53.961395   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:04:53.961819   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | unable to find current IP address of domain old-k8s-version-740194 in network mk-old-k8s-version-740194
	I0918 21:04:53.961842   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | I0918 21:04:53.961784   63026 retry.go:31] will retry after 1.130552716s: waiting for machine to come up
	I0918 21:04:54.133598   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:04:56.134938   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:04:53.837558   61740 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.171557505s)
	I0918 21:04:53.837589   61740 crio.go:469] duration metric: took 2.171667234s to extract the tarball
	I0918 21:04:53.837610   61740 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0918 21:04:53.876381   61740 ssh_runner.go:195] Run: sudo crictl images --output json
	I0918 21:04:53.924938   61740 crio.go:514] all images are preloaded for cri-o runtime.
	I0918 21:04:53.924968   61740 cache_images.go:84] Images are preloaded, skipping loading
	I0918 21:04:53.924979   61740 kubeadm.go:934] updating node { 192.168.39.21 8443 v1.31.1 crio true true} ...
	I0918 21:04:53.925115   61740 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-255556 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.21
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-255556 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0918 21:04:53.925203   61740 ssh_runner.go:195] Run: crio config
	I0918 21:04:53.969048   61740 cni.go:84] Creating CNI manager for ""
	I0918 21:04:53.969076   61740 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0918 21:04:53.969086   61740 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0918 21:04:53.969105   61740 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.21 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-255556 NodeName:embed-certs-255556 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.21"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.21 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0918 21:04:53.969240   61740 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.21
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-255556"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.21
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.21"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0918 21:04:53.969298   61740 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0918 21:04:53.978636   61740 binaries.go:44] Found k8s binaries, skipping transfer
	I0918 21:04:53.978702   61740 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0918 21:04:53.988580   61740 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0918 21:04:54.005819   61740 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0918 21:04:54.021564   61740 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0918 21:04:54.038702   61740 ssh_runner.go:195] Run: grep 192.168.39.21	control-plane.minikube.internal$ /etc/hosts
	I0918 21:04:54.042536   61740 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.21	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0918 21:04:54.053896   61740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 21:04:54.180842   61740 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0918 21:04:54.197701   61740 certs.go:68] Setting up /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/embed-certs-255556 for IP: 192.168.39.21
	I0918 21:04:54.197731   61740 certs.go:194] generating shared ca certs ...
	I0918 21:04:54.197754   61740 certs.go:226] acquiring lock for ca certs: {Name:mkf54e0e71ed884c4372f9d3d4cb308ea4600185 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 21:04:54.197953   61740 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.key
	I0918 21:04:54.198020   61740 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.key
	I0918 21:04:54.198034   61740 certs.go:256] generating profile certs ...
	I0918 21:04:54.198129   61740 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/embed-certs-255556/client.key
	I0918 21:04:54.198191   61740 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/embed-certs-255556/apiserver.key.4704fd19
	I0918 21:04:54.198225   61740 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/embed-certs-255556/proxy-client.key
	I0918 21:04:54.198326   61740 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878.pem (1338 bytes)
	W0918 21:04:54.198358   61740 certs.go:480] ignoring /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878_empty.pem, impossibly tiny 0 bytes
	I0918 21:04:54.198370   61740 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem (1675 bytes)
	I0918 21:04:54.198420   61740 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem (1082 bytes)
	I0918 21:04:54.198463   61740 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem (1123 bytes)
	I0918 21:04:54.198498   61740 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem (1679 bytes)
	I0918 21:04:54.198566   61740 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem (1708 bytes)
	I0918 21:04:54.199258   61740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0918 21:04:54.231688   61740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0918 21:04:54.276366   61740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0918 21:04:54.320929   61740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0918 21:04:54.348698   61740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/embed-certs-255556/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0918 21:04:54.375168   61740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/embed-certs-255556/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0918 21:04:54.399159   61740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/embed-certs-255556/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0918 21:04:54.427975   61740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/embed-certs-255556/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0918 21:04:54.454648   61740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878.pem --> /usr/share/ca-certificates/14878.pem (1338 bytes)
	I0918 21:04:54.477518   61740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem --> /usr/share/ca-certificates/148782.pem (1708 bytes)
	I0918 21:04:54.500703   61740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0918 21:04:54.523380   61740 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0918 21:04:54.540053   61740 ssh_runner.go:195] Run: openssl version
	I0918 21:04:54.545818   61740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14878.pem && ln -fs /usr/share/ca-certificates/14878.pem /etc/ssl/certs/14878.pem"
	I0918 21:04:54.557138   61740 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14878.pem
	I0918 21:04:54.561973   61740 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 18 19:57 /usr/share/ca-certificates/14878.pem
	I0918 21:04:54.562030   61740 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14878.pem
	I0918 21:04:54.568133   61740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14878.pem /etc/ssl/certs/51391683.0"
	I0918 21:04:54.578964   61740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148782.pem && ln -fs /usr/share/ca-certificates/148782.pem /etc/ssl/certs/148782.pem"
	I0918 21:04:54.590254   61740 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148782.pem
	I0918 21:04:54.594944   61740 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 18 19:57 /usr/share/ca-certificates/148782.pem
	I0918 21:04:54.595022   61740 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148782.pem
	I0918 21:04:54.600797   61740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148782.pem /etc/ssl/certs/3ec20f2e.0"
	I0918 21:04:54.612078   61740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0918 21:04:54.623280   61740 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0918 21:04:54.628636   61740 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 18 19:39 /usr/share/ca-certificates/minikubeCA.pem
	I0918 21:04:54.628711   61740 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0918 21:04:54.634847   61740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0918 21:04:54.645647   61740 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0918 21:04:54.650004   61740 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0918 21:04:54.656906   61740 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0918 21:04:54.662778   61740 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0918 21:04:54.668744   61740 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0918 21:04:54.674676   61740 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0918 21:04:54.680431   61740 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0918 21:04:54.686242   61740 kubeadm.go:392] StartCluster: {Name:embed-certs-255556 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.1 ClusterName:embed-certs-255556 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.21 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 21:04:54.686364   61740 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0918 21:04:54.686439   61740 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0918 21:04:54.724228   61740 cri.go:89] found id: ""
	I0918 21:04:54.724319   61740 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0918 21:04:54.734427   61740 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0918 21:04:54.734458   61740 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0918 21:04:54.734511   61740 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0918 21:04:54.747453   61740 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0918 21:04:54.748449   61740 kubeconfig.go:125] found "embed-certs-255556" server: "https://192.168.39.21:8443"
	I0918 21:04:54.750481   61740 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0918 21:04:54.760549   61740 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.21
	I0918 21:04:54.760585   61740 kubeadm.go:1160] stopping kube-system containers ...
	I0918 21:04:54.760599   61740 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0918 21:04:54.760659   61740 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0918 21:04:54.796334   61740 cri.go:89] found id: ""
	I0918 21:04:54.796426   61740 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0918 21:04:54.820854   61740 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0918 21:04:54.831959   61740 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0918 21:04:54.831982   61740 kubeadm.go:157] found existing configuration files:
	
	I0918 21:04:54.832075   61740 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0918 21:04:54.841872   61740 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0918 21:04:54.841952   61740 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0918 21:04:54.852032   61740 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0918 21:04:54.862101   61740 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0918 21:04:54.862176   61740 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0918 21:04:54.872575   61740 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0918 21:04:54.882283   61740 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0918 21:04:54.882386   61740 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0918 21:04:54.895907   61740 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0918 21:04:54.905410   61740 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0918 21:04:54.905484   61740 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0918 21:04:54.914938   61740 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0918 21:04:54.924536   61740 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:04:55.035830   61740 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:04:55.975305   61740 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:04:56.227988   61740 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:04:56.304760   61740 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:04:56.375088   61740 api_server.go:52] waiting for apiserver process to appear ...
	I0918 21:04:56.375185   61740 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:04:56.875319   61740 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:04:57.375240   61740 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:04:57.875532   61740 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:04:55.093491   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:04:55.093956   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | unable to find current IP address of domain old-k8s-version-740194 in network mk-old-k8s-version-740194
	I0918 21:04:55.093982   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | I0918 21:04:55.093923   63026 retry.go:31] will retry after 1.824664154s: waiting for machine to come up
	I0918 21:04:56.920959   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:04:56.921371   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | unable to find current IP address of domain old-k8s-version-740194 in network mk-old-k8s-version-740194
	I0918 21:04:56.921422   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | I0918 21:04:56.921354   63026 retry.go:31] will retry after 1.591260677s: waiting for machine to come up
	I0918 21:04:58.514832   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:04:58.515294   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | unable to find current IP address of domain old-k8s-version-740194 in network mk-old-k8s-version-740194
	I0918 21:04:58.515322   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | I0918 21:04:58.515262   63026 retry.go:31] will retry after 1.868763497s: waiting for machine to come up
	I0918 21:04:58.135056   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:00.633540   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:04:58.375400   61740 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:04:58.392935   61740 api_server.go:72] duration metric: took 2.017847705s to wait for apiserver process to appear ...
	I0918 21:04:58.393110   61740 api_server.go:88] waiting for apiserver healthz status ...
	I0918 21:04:58.393152   61740 api_server.go:253] Checking apiserver healthz at https://192.168.39.21:8443/healthz ...
	I0918 21:04:58.393699   61740 api_server.go:269] stopped: https://192.168.39.21:8443/healthz: Get "https://192.168.39.21:8443/healthz": dial tcp 192.168.39.21:8443: connect: connection refused
	I0918 21:04:58.893291   61740 api_server.go:253] Checking apiserver healthz at https://192.168.39.21:8443/healthz ...
	I0918 21:05:01.124915   61740 api_server.go:279] https://192.168.39.21:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0918 21:05:01.124954   61740 api_server.go:103] status: https://192.168.39.21:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0918 21:05:01.124991   61740 api_server.go:253] Checking apiserver healthz at https://192.168.39.21:8443/healthz ...
	I0918 21:05:01.179199   61740 api_server.go:279] https://192.168.39.21:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0918 21:05:01.179225   61740 api_server.go:103] status: https://192.168.39.21:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0918 21:05:01.393537   61740 api_server.go:253] Checking apiserver healthz at https://192.168.39.21:8443/healthz ...
	I0918 21:05:01.399577   61740 api_server.go:279] https://192.168.39.21:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0918 21:05:01.399610   61740 api_server.go:103] status: https://192.168.39.21:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0918 21:05:01.894174   61740 api_server.go:253] Checking apiserver healthz at https://192.168.39.21:8443/healthz ...
	I0918 21:05:01.899086   61740 api_server.go:279] https://192.168.39.21:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0918 21:05:01.899110   61740 api_server.go:103] status: https://192.168.39.21:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0918 21:05:02.393672   61740 api_server.go:253] Checking apiserver healthz at https://192.168.39.21:8443/healthz ...
	I0918 21:05:02.401942   61740 api_server.go:279] https://192.168.39.21:8443/healthz returned 200:
	ok
	I0918 21:05:02.408523   61740 api_server.go:141] control plane version: v1.31.1
	I0918 21:05:02.408553   61740 api_server.go:131] duration metric: took 4.015427901s to wait for apiserver health ...
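	(Editor's sketch, not part of the captured log.) The api_server.go lines above poll https://192.168.39.21:8443/healthz until it stops returning 403/500 and answers 200 "ok". A minimal stand-alone Go loop with the same shape is sketched below; the address comes from the log, and skipping TLS verification is only for brevity here, not a claim about what minikube does:

    // healthz_poll.go - illustrative only; polls an apiserver /healthz endpoint until it returns 200.
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout:   2 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        url := "https://192.168.39.21:8443/healthz" // apiserver address from the log
        deadline := time.Now().Add(60 * time.Second)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err != nil {
                fmt.Println("apiserver not reachable yet:", err)
            } else {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                fmt.Printf("healthz %d: %s\n", resp.StatusCode, body)
                if resp.StatusCode == http.StatusOK {
                    return // 200 "ok", as at 21:05:02 in the log
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for healthz")
    }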
	I0918 21:05:02.408562   61740 cni.go:84] Creating CNI manager for ""
	I0918 21:05:02.408568   61740 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0918 21:05:02.410199   61740 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0918 21:05:02.411470   61740 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0918 21:05:02.424617   61740 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0918 21:05:02.443819   61740 system_pods.go:43] waiting for kube-system pods to appear ...
	I0918 21:05:02.458892   61740 system_pods.go:59] 8 kube-system pods found
	I0918 21:05:02.458939   61740 system_pods.go:61] "coredns-7c65d6cfc9-xwn8w" [773b9a83-bb43-40d3-b3a3-40603c3b22b2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0918 21:05:02.458949   61740 system_pods.go:61] "etcd-embed-certs-255556" [ee3e7dc9-fb5a-4faa-a0b5-b84b7cd506b6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0918 21:05:02.458961   61740 system_pods.go:61] "kube-apiserver-embed-certs-255556" [c60ce069-c7a0-42d7-a7de-ce3cf91a3d43] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0918 21:05:02.458970   61740 system_pods.go:61] "kube-controller-manager-embed-certs-255556" [ac8f6b42-caa3-4815-9a90-3f7bb1f0060b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0918 21:05:02.458980   61740 system_pods.go:61] "kube-proxy-v8szm" [367f743a-399b-4d04-8604-dcd441999581] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0918 21:05:02.458993   61740 system_pods.go:61] "kube-scheduler-embed-certs-255556" [b5dd211b-7963-41ac-8b43-0a5451e3e848] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0918 21:05:02.459001   61740 system_pods.go:61] "metrics-server-6867b74b74-z8rm7" [d1b6823e-4ac5-4ac6-88ae-7f8eac622fdf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0918 21:05:02.459009   61740 system_pods.go:61] "storage-provisioner" [1575f899-35a7-4eb2-ad5f-660183f75aa6] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0918 21:05:02.459015   61740 system_pods.go:74] duration metric: took 15.172393ms to wait for pod list to return data ...
	I0918 21:05:02.459025   61740 node_conditions.go:102] verifying NodePressure condition ...
	I0918 21:05:02.463140   61740 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0918 21:05:02.463177   61740 node_conditions.go:123] node cpu capacity is 2
	I0918 21:05:02.463192   61740 node_conditions.go:105] duration metric: took 4.162401ms to run NodePressure ...
	I0918 21:05:02.463214   61740 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:05:02.757153   61740 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0918 21:05:02.761949   61740 kubeadm.go:739] kubelet initialised
	I0918 21:05:02.761977   61740 kubeadm.go:740] duration metric: took 4.79396ms waiting for restarted kubelet to initialise ...
	I0918 21:05:02.761985   61740 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0918 21:05:02.767197   61740 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-xwn8w" in "kube-system" namespace to be "Ready" ...
	I0918 21:05:00.385891   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:00.386451   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | unable to find current IP address of domain old-k8s-version-740194 in network mk-old-k8s-version-740194
	I0918 21:05:00.386482   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | I0918 21:05:00.386384   63026 retry.go:31] will retry after 3.274467583s: waiting for machine to come up
	I0918 21:05:03.664788   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:03.665255   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | unable to find current IP address of domain old-k8s-version-740194 in network mk-old-k8s-version-740194
	I0918 21:05:03.665286   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | I0918 21:05:03.665210   63026 retry.go:31] will retry after 4.112908346s: waiting for machine to come up
	I0918 21:05:02.634177   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:05.133431   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:07.133941   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:04.774196   61740 pod_ready.go:103] pod "coredns-7c65d6cfc9-xwn8w" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:07.273045   61740 pod_ready.go:103] pod "coredns-7c65d6cfc9-xwn8w" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:09.245246   61273 start.go:364] duration metric: took 55.195169549s to acquireMachinesLock for "no-preload-331658"
	I0918 21:05:09.245300   61273 start.go:96] Skipping create...Using existing machine configuration
	I0918 21:05:09.245311   61273 fix.go:54] fixHost starting: 
	I0918 21:05:09.245741   61273 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:05:09.245778   61273 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:05:09.263998   61273 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37335
	I0918 21:05:09.264565   61273 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:05:09.265118   61273 main.go:141] libmachine: Using API Version  1
	I0918 21:05:09.265142   61273 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:05:09.265505   61273 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:05:09.265732   61273 main.go:141] libmachine: (no-preload-331658) Calling .DriverName
	I0918 21:05:09.265901   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetState
	I0918 21:05:09.269500   61273 fix.go:112] recreateIfNeeded on no-preload-331658: state=Stopped err=<nil>
	I0918 21:05:09.269525   61273 main.go:141] libmachine: (no-preload-331658) Calling .DriverName
	W0918 21:05:09.269730   61273 fix.go:138] unexpected machine state, will restart: <nil>
	I0918 21:05:09.271448   61273 out.go:177] * Restarting existing kvm2 VM for "no-preload-331658" ...
	I0918 21:05:07.782616   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:07.783108   62061 main.go:141] libmachine: (old-k8s-version-740194) Found IP for machine: 192.168.72.53
	I0918 21:05:07.783125   62061 main.go:141] libmachine: (old-k8s-version-740194) Reserving static IP address...
	I0918 21:05:07.783135   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has current primary IP address 192.168.72.53 and MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:07.783509   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | found host DHCP lease matching {name: "old-k8s-version-740194", mac: "52:54:00:3f:a7:11", ip: "192.168.72.53"} in network mk-old-k8s-version-740194: {Iface:virbr3 ExpiryTime:2024-09-18 22:05:00 +0000 UTC Type:0 Mac:52:54:00:3f:a7:11 Iaid: IPaddr:192.168.72.53 Prefix:24 Hostname:old-k8s-version-740194 Clientid:01:52:54:00:3f:a7:11}
	I0918 21:05:07.783542   62061 main.go:141] libmachine: (old-k8s-version-740194) Reserved static IP address: 192.168.72.53
	I0918 21:05:07.783565   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | skip adding static IP to network mk-old-k8s-version-740194 - found existing host DHCP lease matching {name: "old-k8s-version-740194", mac: "52:54:00:3f:a7:11", ip: "192.168.72.53"}
	I0918 21:05:07.783583   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | Getting to WaitForSSH function...
	I0918 21:05:07.783614   62061 main.go:141] libmachine: (old-k8s-version-740194) Waiting for SSH to be available...
	I0918 21:05:07.785492   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:07.785856   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a7:11", ip: ""} in network mk-old-k8s-version-740194: {Iface:virbr3 ExpiryTime:2024-09-18 22:05:00 +0000 UTC Type:0 Mac:52:54:00:3f:a7:11 Iaid: IPaddr:192.168.72.53 Prefix:24 Hostname:old-k8s-version-740194 Clientid:01:52:54:00:3f:a7:11}
	I0918 21:05:07.785885   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined IP address 192.168.72.53 and MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:07.786000   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | Using SSH client type: external
	I0918 21:05:07.786021   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | Using SSH private key: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/old-k8s-version-740194/id_rsa (-rw-------)
	I0918 21:05:07.786044   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.53 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19667-7671/.minikube/machines/old-k8s-version-740194/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0918 21:05:07.786055   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | About to run SSH command:
	I0918 21:05:07.786065   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | exit 0
	I0918 21:05:07.915953   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | SSH cmd err, output: <nil>: 
	I0918 21:05:07.916454   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetConfigRaw
	I0918 21:05:07.917059   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetIP
	I0918 21:05:07.919749   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:07.920244   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a7:11", ip: ""} in network mk-old-k8s-version-740194: {Iface:virbr3 ExpiryTime:2024-09-18 22:05:00 +0000 UTC Type:0 Mac:52:54:00:3f:a7:11 Iaid: IPaddr:192.168.72.53 Prefix:24 Hostname:old-k8s-version-740194 Clientid:01:52:54:00:3f:a7:11}
	I0918 21:05:07.920280   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined IP address 192.168.72.53 and MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:07.920639   62061 profile.go:143] Saving config to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/old-k8s-version-740194/config.json ...
	I0918 21:05:07.920871   62061 machine.go:93] provisionDockerMachine start ...
	I0918 21:05:07.920893   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .DriverName
	I0918 21:05:07.921100   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHHostname
	I0918 21:05:07.923129   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:07.923434   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a7:11", ip: ""} in network mk-old-k8s-version-740194: {Iface:virbr3 ExpiryTime:2024-09-18 22:05:00 +0000 UTC Type:0 Mac:52:54:00:3f:a7:11 Iaid: IPaddr:192.168.72.53 Prefix:24 Hostname:old-k8s-version-740194 Clientid:01:52:54:00:3f:a7:11}
	I0918 21:05:07.923458   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined IP address 192.168.72.53 and MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:07.923573   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHPort
	I0918 21:05:07.923727   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHKeyPath
	I0918 21:05:07.923889   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHKeyPath
	I0918 21:05:07.924036   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHUsername
	I0918 21:05:07.924185   62061 main.go:141] libmachine: Using SSH client type: native
	I0918 21:05:07.924368   62061 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.53 22 <nil> <nil>}
	I0918 21:05:07.924378   62061 main.go:141] libmachine: About to run SSH command:
	hostname
	I0918 21:05:08.035941   62061 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0918 21:05:08.035972   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetMachineName
	I0918 21:05:08.036242   62061 buildroot.go:166] provisioning hostname "old-k8s-version-740194"
	I0918 21:05:08.036273   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetMachineName
	I0918 21:05:08.036512   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHHostname
	I0918 21:05:08.038902   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:08.039239   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a7:11", ip: ""} in network mk-old-k8s-version-740194: {Iface:virbr3 ExpiryTime:2024-09-18 22:05:00 +0000 UTC Type:0 Mac:52:54:00:3f:a7:11 Iaid: IPaddr:192.168.72.53 Prefix:24 Hostname:old-k8s-version-740194 Clientid:01:52:54:00:3f:a7:11}
	I0918 21:05:08.039266   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined IP address 192.168.72.53 and MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:08.039438   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHPort
	I0918 21:05:08.039632   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHKeyPath
	I0918 21:05:08.039899   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHKeyPath
	I0918 21:05:08.040072   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHUsername
	I0918 21:05:08.040248   62061 main.go:141] libmachine: Using SSH client type: native
	I0918 21:05:08.040415   62061 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.53 22 <nil> <nil>}
	I0918 21:05:08.040428   62061 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-740194 && echo "old-k8s-version-740194" | sudo tee /etc/hostname
	I0918 21:05:08.165391   62061 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-740194
	
	I0918 21:05:08.165424   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHHostname
	I0918 21:05:08.168059   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:08.168396   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a7:11", ip: ""} in network mk-old-k8s-version-740194: {Iface:virbr3 ExpiryTime:2024-09-18 22:05:00 +0000 UTC Type:0 Mac:52:54:00:3f:a7:11 Iaid: IPaddr:192.168.72.53 Prefix:24 Hostname:old-k8s-version-740194 Clientid:01:52:54:00:3f:a7:11}
	I0918 21:05:08.168428   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined IP address 192.168.72.53 and MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:08.168621   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHPort
	I0918 21:05:08.168837   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHKeyPath
	I0918 21:05:08.168988   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHKeyPath
	I0918 21:05:08.169231   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHUsername
	I0918 21:05:08.169413   62061 main.go:141] libmachine: Using SSH client type: native
	I0918 21:05:08.169579   62061 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.53 22 <nil> <nil>}
	I0918 21:05:08.169594   62061 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-740194' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-740194/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-740194' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0918 21:05:08.288591   62061 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0918 21:05:08.288644   62061 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19667-7671/.minikube CaCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19667-7671/.minikube}
	I0918 21:05:08.288671   62061 buildroot.go:174] setting up certificates
	I0918 21:05:08.288682   62061 provision.go:84] configureAuth start
	I0918 21:05:08.288694   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetMachineName
	I0918 21:05:08.289017   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetIP
	I0918 21:05:08.291949   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:08.292358   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a7:11", ip: ""} in network mk-old-k8s-version-740194: {Iface:virbr3 ExpiryTime:2024-09-18 22:05:00 +0000 UTC Type:0 Mac:52:54:00:3f:a7:11 Iaid: IPaddr:192.168.72.53 Prefix:24 Hostname:old-k8s-version-740194 Clientid:01:52:54:00:3f:a7:11}
	I0918 21:05:08.292405   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined IP address 192.168.72.53 and MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:08.292526   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHHostname
	I0918 21:05:08.295013   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:08.295378   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a7:11", ip: ""} in network mk-old-k8s-version-740194: {Iface:virbr3 ExpiryTime:2024-09-18 22:05:00 +0000 UTC Type:0 Mac:52:54:00:3f:a7:11 Iaid: IPaddr:192.168.72.53 Prefix:24 Hostname:old-k8s-version-740194 Clientid:01:52:54:00:3f:a7:11}
	I0918 21:05:08.295399   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined IP address 192.168.72.53 and MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:08.295567   62061 provision.go:143] copyHostCerts
	I0918 21:05:08.295630   62061 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem, removing ...
	I0918 21:05:08.295640   62061 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem
	I0918 21:05:08.295692   62061 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem (1082 bytes)
	I0918 21:05:08.295783   62061 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem, removing ...
	I0918 21:05:08.295790   62061 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem
	I0918 21:05:08.295810   62061 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem (1123 bytes)
	I0918 21:05:08.295917   62061 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem, removing ...
	I0918 21:05:08.295926   62061 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem
	I0918 21:05:08.295949   62061 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem (1679 bytes)
	I0918 21:05:08.296009   62061 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-740194 san=[127.0.0.1 192.168.72.53 localhost minikube old-k8s-version-740194]
	I0918 21:05:08.560726   62061 provision.go:177] copyRemoteCerts
	I0918 21:05:08.560786   62061 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0918 21:05:08.560816   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHHostname
	I0918 21:05:08.563798   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:08.564231   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a7:11", ip: ""} in network mk-old-k8s-version-740194: {Iface:virbr3 ExpiryTime:2024-09-18 22:05:00 +0000 UTC Type:0 Mac:52:54:00:3f:a7:11 Iaid: IPaddr:192.168.72.53 Prefix:24 Hostname:old-k8s-version-740194 Clientid:01:52:54:00:3f:a7:11}
	I0918 21:05:08.564266   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined IP address 192.168.72.53 and MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:08.564473   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHPort
	I0918 21:05:08.564704   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHKeyPath
	I0918 21:05:08.564876   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHUsername
	I0918 21:05:08.565016   62061 sshutil.go:53] new ssh client: &{IP:192.168.72.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/old-k8s-version-740194/id_rsa Username:docker}
	I0918 21:05:08.654357   62061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0918 21:05:08.680551   62061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0918 21:05:08.704324   62061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0918 21:05:08.727966   62061 provision.go:87] duration metric: took 439.269312ms to configureAuth
	I0918 21:05:08.728003   62061 buildroot.go:189] setting minikube options for container-runtime
	I0918 21:05:08.728235   62061 config.go:182] Loaded profile config "old-k8s-version-740194": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0918 21:05:08.728305   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHHostname
	I0918 21:05:08.730895   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:08.731318   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a7:11", ip: ""} in network mk-old-k8s-version-740194: {Iface:virbr3 ExpiryTime:2024-09-18 22:05:00 +0000 UTC Type:0 Mac:52:54:00:3f:a7:11 Iaid: IPaddr:192.168.72.53 Prefix:24 Hostname:old-k8s-version-740194 Clientid:01:52:54:00:3f:a7:11}
	I0918 21:05:08.731348   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined IP address 192.168.72.53 and MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:08.731448   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHPort
	I0918 21:05:08.731668   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHKeyPath
	I0918 21:05:08.731884   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHKeyPath
	I0918 21:05:08.732057   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHUsername
	I0918 21:05:08.732223   62061 main.go:141] libmachine: Using SSH client type: native
	I0918 21:05:08.732396   62061 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.53 22 <nil> <nil>}
	I0918 21:05:08.732411   62061 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0918 21:05:08.978962   62061 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0918 21:05:08.978995   62061 machine.go:96] duration metric: took 1.05811075s to provisionDockerMachine
	I0918 21:05:08.979008   62061 start.go:293] postStartSetup for "old-k8s-version-740194" (driver="kvm2")
	I0918 21:05:08.979020   62061 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0918 21:05:08.979050   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .DriverName
	I0918 21:05:08.979409   62061 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0918 21:05:08.979436   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHHostname
	I0918 21:05:08.982472   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:08.982791   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a7:11", ip: ""} in network mk-old-k8s-version-740194: {Iface:virbr3 ExpiryTime:2024-09-18 22:05:00 +0000 UTC Type:0 Mac:52:54:00:3f:a7:11 Iaid: IPaddr:192.168.72.53 Prefix:24 Hostname:old-k8s-version-740194 Clientid:01:52:54:00:3f:a7:11}
	I0918 21:05:08.982830   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined IP address 192.168.72.53 and MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:08.982996   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHPort
	I0918 21:05:08.983192   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHKeyPath
	I0918 21:05:08.983341   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHUsername
	I0918 21:05:08.983510   62061 sshutil.go:53] new ssh client: &{IP:192.168.72.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/old-k8s-version-740194/id_rsa Username:docker}
	I0918 21:05:09.074995   62061 ssh_runner.go:195] Run: cat /etc/os-release
	I0918 21:05:09.080207   62061 info.go:137] Remote host: Buildroot 2023.02.9
	I0918 21:05:09.080240   62061 filesync.go:126] Scanning /home/jenkins/minikube-integration/19667-7671/.minikube/addons for local assets ...
	I0918 21:05:09.080327   62061 filesync.go:126] Scanning /home/jenkins/minikube-integration/19667-7671/.minikube/files for local assets ...
	I0918 21:05:09.080451   62061 filesync.go:149] local asset: /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem -> 148782.pem in /etc/ssl/certs
	I0918 21:05:09.080583   62061 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0918 21:05:09.091374   62061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem --> /etc/ssl/certs/148782.pem (1708 bytes)
	I0918 21:05:09.119546   62061 start.go:296] duration metric: took 140.521158ms for postStartSetup
	I0918 21:05:09.119593   62061 fix.go:56] duration metric: took 20.337078765s for fixHost
	I0918 21:05:09.119620   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHHostname
	I0918 21:05:09.122534   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:09.123157   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a7:11", ip: ""} in network mk-old-k8s-version-740194: {Iface:virbr3 ExpiryTime:2024-09-18 22:05:00 +0000 UTC Type:0 Mac:52:54:00:3f:a7:11 Iaid: IPaddr:192.168.72.53 Prefix:24 Hostname:old-k8s-version-740194 Clientid:01:52:54:00:3f:a7:11}
	I0918 21:05:09.123187   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined IP address 192.168.72.53 and MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:09.123447   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHPort
	I0918 21:05:09.123695   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHKeyPath
	I0918 21:05:09.123891   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHKeyPath
	I0918 21:05:09.124095   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHUsername
	I0918 21:05:09.124373   62061 main.go:141] libmachine: Using SSH client type: native
	I0918 21:05:09.124582   62061 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.53 22 <nil> <nil>}
	I0918 21:05:09.124595   62061 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0918 21:05:09.245082   62061 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726693509.219779478
	
	I0918 21:05:09.245109   62061 fix.go:216] guest clock: 1726693509.219779478
	I0918 21:05:09.245119   62061 fix.go:229] Guest: 2024-09-18 21:05:09.219779478 +0000 UTC Remote: 2024-09-18 21:05:09.11959842 +0000 UTC m=+249.666759777 (delta=100.181058ms)
	I0918 21:05:09.245139   62061 fix.go:200] guest clock delta is within tolerance: 100.181058ms
	I0918 21:05:09.245146   62061 start.go:83] releasing machines lock for "old-k8s-version-740194", held for 20.462669229s
	I0918 21:05:09.245176   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .DriverName
	I0918 21:05:09.245602   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetIP
	I0918 21:05:09.248653   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:09.249110   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a7:11", ip: ""} in network mk-old-k8s-version-740194: {Iface:virbr3 ExpiryTime:2024-09-18 22:05:00 +0000 UTC Type:0 Mac:52:54:00:3f:a7:11 Iaid: IPaddr:192.168.72.53 Prefix:24 Hostname:old-k8s-version-740194 Clientid:01:52:54:00:3f:a7:11}
	I0918 21:05:09.249156   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined IP address 192.168.72.53 and MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:09.249345   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .DriverName
	I0918 21:05:09.249838   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .DriverName
	I0918 21:05:09.250047   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .DriverName
	I0918 21:05:09.250167   62061 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0918 21:05:09.250230   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHHostname
	I0918 21:05:09.250286   62061 ssh_runner.go:195] Run: cat /version.json
	I0918 21:05:09.250312   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHHostname
	I0918 21:05:09.253242   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:09.253408   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:09.253687   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a7:11", ip: ""} in network mk-old-k8s-version-740194: {Iface:virbr3 ExpiryTime:2024-09-18 22:05:00 +0000 UTC Type:0 Mac:52:54:00:3f:a7:11 Iaid: IPaddr:192.168.72.53 Prefix:24 Hostname:old-k8s-version-740194 Clientid:01:52:54:00:3f:a7:11}
	I0918 21:05:09.253749   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined IP address 192.168.72.53 and MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:09.253882   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a7:11", ip: ""} in network mk-old-k8s-version-740194: {Iface:virbr3 ExpiryTime:2024-09-18 22:05:00 +0000 UTC Type:0 Mac:52:54:00:3f:a7:11 Iaid: IPaddr:192.168.72.53 Prefix:24 Hostname:old-k8s-version-740194 Clientid:01:52:54:00:3f:a7:11}
	I0918 21:05:09.253901   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined IP address 192.168.72.53 and MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:09.253944   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHPort
	I0918 21:05:09.254026   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHPort
	I0918 21:05:09.254221   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHKeyPath
	I0918 21:05:09.254243   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHKeyPath
	I0918 21:05:09.254372   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHUsername
	I0918 21:05:09.254426   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHUsername
	I0918 21:05:09.254533   62061 sshutil.go:53] new ssh client: &{IP:192.168.72.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/old-k8s-version-740194/id_rsa Username:docker}
	I0918 21:05:09.254695   62061 sshutil.go:53] new ssh client: &{IP:192.168.72.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/old-k8s-version-740194/id_rsa Username:docker}
	I0918 21:05:09.374728   62061 ssh_runner.go:195] Run: systemctl --version
	I0918 21:05:09.381117   62061 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0918 21:05:09.532059   62061 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0918 21:05:09.538041   62061 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0918 21:05:09.538124   62061 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0918 21:05:09.553871   62061 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0918 21:05:09.553907   62061 start.go:495] detecting cgroup driver to use...
	I0918 21:05:09.553982   62061 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0918 21:05:09.573554   62061 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0918 21:05:09.591963   62061 docker.go:217] disabling cri-docker service (if available) ...
	I0918 21:05:09.592057   62061 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0918 21:05:09.610813   62061 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0918 21:05:09.627153   62061 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0918 21:05:09.763978   62061 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0918 21:05:09.945185   62061 docker.go:233] disabling docker service ...
	I0918 21:05:09.945257   62061 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0918 21:05:09.961076   62061 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0918 21:05:09.974660   62061 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0918 21:05:10.111191   62061 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0918 21:05:10.235058   62061 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0918 21:05:10.251389   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0918 21:05:10.270949   62061 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0918 21:05:10.271047   62061 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:05:10.284743   62061 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0918 21:05:10.284811   62061 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:05:10.295221   62061 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:05:10.305897   62061 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:05:10.317726   62061 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0918 21:05:10.330136   62061 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0918 21:05:10.339480   62061 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0918 21:05:10.339543   62061 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0918 21:05:10.352954   62061 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0918 21:05:10.370764   62061 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 21:05:10.524315   62061 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0918 21:05:10.617374   62061 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0918 21:05:10.617466   62061 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0918 21:05:10.624164   62061 start.go:563] Will wait 60s for crictl version
	I0918 21:05:10.624222   62061 ssh_runner.go:195] Run: which crictl
	I0918 21:05:10.629583   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0918 21:05:10.673613   62061 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0918 21:05:10.673702   62061 ssh_runner.go:195] Run: crio --version
	I0918 21:05:10.703948   62061 ssh_runner.go:195] Run: crio --version
	I0918 21:05:10.733924   62061 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0918 21:05:09.272840   61273 main.go:141] libmachine: (no-preload-331658) Calling .Start
	I0918 21:05:09.273067   61273 main.go:141] libmachine: (no-preload-331658) Ensuring networks are active...
	I0918 21:05:09.274115   61273 main.go:141] libmachine: (no-preload-331658) Ensuring network default is active
	I0918 21:05:09.274576   61273 main.go:141] libmachine: (no-preload-331658) Ensuring network mk-no-preload-331658 is active
	I0918 21:05:09.275108   61273 main.go:141] libmachine: (no-preload-331658) Getting domain xml...
	I0918 21:05:09.276003   61273 main.go:141] libmachine: (no-preload-331658) Creating domain...
	I0918 21:05:10.665647   61273 main.go:141] libmachine: (no-preload-331658) Waiting to get IP...
	I0918 21:05:10.666710   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:10.667187   61273 main.go:141] libmachine: (no-preload-331658) DBG | unable to find current IP address of domain no-preload-331658 in network mk-no-preload-331658
	I0918 21:05:10.667261   61273 main.go:141] libmachine: (no-preload-331658) DBG | I0918 21:05:10.667162   63200 retry.go:31] will retry after 215.232953ms: waiting for machine to come up
	I0918 21:05:10.883691   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:10.884249   61273 main.go:141] libmachine: (no-preload-331658) DBG | unable to find current IP address of domain no-preload-331658 in network mk-no-preload-331658
	I0918 21:05:10.884283   61273 main.go:141] libmachine: (no-preload-331658) DBG | I0918 21:05:10.884185   63200 retry.go:31] will retry after 289.698979ms: waiting for machine to come up
	I0918 21:05:11.175936   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:11.176656   61273 main.go:141] libmachine: (no-preload-331658) DBG | unable to find current IP address of domain no-preload-331658 in network mk-no-preload-331658
	I0918 21:05:11.176680   61273 main.go:141] libmachine: (no-preload-331658) DBG | I0918 21:05:11.176553   63200 retry.go:31] will retry after 424.473311ms: waiting for machine to come up
	I0918 21:05:09.633671   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:11.634755   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:09.274214   61740 pod_ready.go:103] pod "coredns-7c65d6cfc9-xwn8w" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:11.275099   61740 pod_ready.go:103] pod "coredns-7c65d6cfc9-xwn8w" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:10.735500   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetIP
	I0918 21:05:10.738130   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:10.738488   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a7:11", ip: ""} in network mk-old-k8s-version-740194: {Iface:virbr3 ExpiryTime:2024-09-18 22:05:00 +0000 UTC Type:0 Mac:52:54:00:3f:a7:11 Iaid: IPaddr:192.168.72.53 Prefix:24 Hostname:old-k8s-version-740194 Clientid:01:52:54:00:3f:a7:11}
	I0918 21:05:10.738516   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined IP address 192.168.72.53 and MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:10.738831   62061 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0918 21:05:10.742780   62061 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0918 21:05:10.754785   62061 kubeadm.go:883] updating cluster {Name:old-k8s-version-740194 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-740194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.53 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0918 21:05:10.754939   62061 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0918 21:05:10.755002   62061 ssh_runner.go:195] Run: sudo crictl images --output json
	I0918 21:05:10.800452   62061 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0918 21:05:10.800526   62061 ssh_runner.go:195] Run: which lz4
	I0918 21:05:10.804522   62061 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0918 21:05:10.809179   62061 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0918 21:05:10.809214   62061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0918 21:05:12.306958   62061 crio.go:462] duration metric: took 1.502481615s to copy over tarball
	I0918 21:05:12.307049   62061 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0918 21:05:11.603153   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:11.603791   61273 main.go:141] libmachine: (no-preload-331658) DBG | unable to find current IP address of domain no-preload-331658 in network mk-no-preload-331658
	I0918 21:05:11.603817   61273 main.go:141] libmachine: (no-preload-331658) DBG | I0918 21:05:11.603742   63200 retry.go:31] will retry after 425.818515ms: waiting for machine to come up
	I0918 21:05:12.031622   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:12.032425   61273 main.go:141] libmachine: (no-preload-331658) DBG | unable to find current IP address of domain no-preload-331658 in network mk-no-preload-331658
	I0918 21:05:12.032458   61273 main.go:141] libmachine: (no-preload-331658) DBG | I0918 21:05:12.032357   63200 retry.go:31] will retry after 701.564015ms: waiting for machine to come up
	I0918 21:05:12.735295   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:12.735852   61273 main.go:141] libmachine: (no-preload-331658) DBG | unable to find current IP address of domain no-preload-331658 in network mk-no-preload-331658
	I0918 21:05:12.735882   61273 main.go:141] libmachine: (no-preload-331658) DBG | I0918 21:05:12.735814   63200 retry.go:31] will retry after 904.737419ms: waiting for machine to come up
	I0918 21:05:13.642383   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:13.642913   61273 main.go:141] libmachine: (no-preload-331658) DBG | unable to find current IP address of domain no-preload-331658 in network mk-no-preload-331658
	I0918 21:05:13.642935   61273 main.go:141] libmachine: (no-preload-331658) DBG | I0918 21:05:13.642872   63200 retry.go:31] will retry after 891.091353ms: waiting for machine to come up
	I0918 21:05:14.536200   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:14.536797   61273 main.go:141] libmachine: (no-preload-331658) DBG | unable to find current IP address of domain no-preload-331658 in network mk-no-preload-331658
	I0918 21:05:14.536849   61273 main.go:141] libmachine: (no-preload-331658) DBG | I0918 21:05:14.536761   63200 retry.go:31] will retry after 1.01795417s: waiting for machine to come up
	I0918 21:05:15.555787   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:15.556287   61273 main.go:141] libmachine: (no-preload-331658) DBG | unable to find current IP address of domain no-preload-331658 in network mk-no-preload-331658
	I0918 21:05:15.556315   61273 main.go:141] libmachine: (no-preload-331658) DBG | I0918 21:05:15.556243   63200 retry.go:31] will retry after 1.598926126s: waiting for machine to come up
	I0918 21:05:14.132957   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:16.133323   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:13.778274   61740 pod_ready.go:93] pod "coredns-7c65d6cfc9-xwn8w" in "kube-system" namespace has status "Ready":"True"
	I0918 21:05:13.778310   61740 pod_ready.go:82] duration metric: took 11.011085965s for pod "coredns-7c65d6cfc9-xwn8w" in "kube-system" namespace to be "Ready" ...
	I0918 21:05:13.778325   61740 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-255556" in "kube-system" namespace to be "Ready" ...
	I0918 21:05:13.785089   61740 pod_ready.go:93] pod "etcd-embed-certs-255556" in "kube-system" namespace has status "Ready":"True"
	I0918 21:05:13.785121   61740 pod_ready.go:82] duration metric: took 6.787649ms for pod "etcd-embed-certs-255556" in "kube-system" namespace to be "Ready" ...
	I0918 21:05:13.785135   61740 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-255556" in "kube-system" namespace to be "Ready" ...
	I0918 21:05:15.793479   61740 pod_ready.go:103] pod "kube-apiserver-embed-certs-255556" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:15.357114   62061 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.050037005s)
	I0918 21:05:15.357141   62061 crio.go:469] duration metric: took 3.050151648s to extract the tarball
	I0918 21:05:15.357148   62061 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0918 21:05:15.399373   62061 ssh_runner.go:195] Run: sudo crictl images --output json
	I0918 21:05:15.434204   62061 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0918 21:05:15.434238   62061 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0918 21:05:15.434332   62061 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 21:05:15.434369   62061 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0918 21:05:15.434385   62061 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0918 21:05:15.434398   62061 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0918 21:05:15.434339   62061 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0918 21:05:15.434438   62061 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0918 21:05:15.434443   62061 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0918 21:05:15.434491   62061 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0918 21:05:15.436820   62061 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0918 21:05:15.436824   62061 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0918 21:05:15.436829   62061 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 21:05:15.436856   62061 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0918 21:05:15.436867   62061 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0918 21:05:15.436904   62061 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0918 21:05:15.436952   62061 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0918 21:05:15.437278   62061 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0918 21:05:15.747423   62061 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0918 21:05:15.747705   62061 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0918 21:05:15.748375   62061 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0918 21:05:15.750816   62061 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0918 21:05:15.752244   62061 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0918 21:05:15.754038   62061 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0918 21:05:15.770881   62061 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0918 21:05:15.911654   62061 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0918 21:05:15.911725   62061 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0918 21:05:15.911795   62061 ssh_runner.go:195] Run: which crictl
	I0918 21:05:15.938412   62061 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0918 21:05:15.938464   62061 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0918 21:05:15.938534   62061 ssh_runner.go:195] Run: which crictl
	I0918 21:05:15.944615   62061 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0918 21:05:15.944706   62061 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0918 21:05:15.944736   62061 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0918 21:05:15.944749   62061 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0918 21:05:15.944786   62061 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0918 21:05:15.944809   62061 ssh_runner.go:195] Run: which crictl
	I0918 21:05:15.944820   62061 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0918 21:05:15.944844   62061 ssh_runner.go:195] Run: which crictl
	I0918 21:05:15.944857   62061 ssh_runner.go:195] Run: which crictl
	I0918 21:05:15.944661   62061 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0918 21:05:15.944914   62061 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0918 21:05:15.944950   62061 ssh_runner.go:195] Run: which crictl
	I0918 21:05:15.965316   62061 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0918 21:05:15.965366   62061 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0918 21:05:15.965418   62061 ssh_runner.go:195] Run: which crictl
	I0918 21:05:15.965461   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0918 21:05:15.965419   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0918 21:05:15.965544   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0918 21:05:15.965558   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0918 21:05:15.965613   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0918 21:05:15.965621   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0918 21:05:16.101533   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0918 21:05:16.101536   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0918 21:05:16.105174   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0918 21:05:16.105198   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0918 21:05:16.109044   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0918 21:05:16.109048   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0918 21:05:16.109152   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0918 21:05:16.220653   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0918 21:05:16.255418   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0918 21:05:16.259931   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0918 21:05:16.259986   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0918 21:05:16.260039   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0918 21:05:16.260121   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0918 21:05:16.260337   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0918 21:05:16.352969   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0918 21:05:16.399180   62061 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0918 21:05:16.445003   62061 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0918 21:05:16.445078   62061 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0918 21:05:16.445163   62061 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0918 21:05:16.445266   62061 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0918 21:05:16.445387   62061 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0918 21:05:16.445398   62061 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0918 21:05:16.633221   62061 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 21:05:16.782331   62061 cache_images.go:92] duration metric: took 1.348045359s to LoadCachedImages
	W0918 21:05:16.782445   62061 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	I0918 21:05:16.782464   62061 kubeadm.go:934] updating node { 192.168.72.53 8443 v1.20.0 crio true true} ...
	I0918 21:05:16.782608   62061 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-740194 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.53
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-740194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0918 21:05:16.782679   62061 ssh_runner.go:195] Run: crio config
	I0918 21:05:16.838946   62061 cni.go:84] Creating CNI manager for ""
	I0918 21:05:16.838978   62061 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0918 21:05:16.838990   62061 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0918 21:05:16.839008   62061 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.53 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-740194 NodeName:old-k8s-version-740194 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.53"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.53 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0918 21:05:16.839163   62061 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.53
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-740194"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.53
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.53"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0918 21:05:16.839232   62061 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0918 21:05:16.849868   62061 binaries.go:44] Found k8s binaries, skipping transfer
	I0918 21:05:16.849955   62061 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0918 21:05:16.859716   62061 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0918 21:05:16.877857   62061 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0918 21:05:16.895573   62061 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0918 21:05:16.913545   62061 ssh_runner.go:195] Run: grep 192.168.72.53	control-plane.minikube.internal$ /etc/hosts
	I0918 21:05:16.917476   62061 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.53	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0918 21:05:16.931318   62061 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 21:05:17.057636   62061 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0918 21:05:17.076534   62061 certs.go:68] Setting up /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/old-k8s-version-740194 for IP: 192.168.72.53
	I0918 21:05:17.076571   62061 certs.go:194] generating shared ca certs ...
	I0918 21:05:17.076594   62061 certs.go:226] acquiring lock for ca certs: {Name:mkf54e0e71ed884c4372f9d3d4cb308ea4600185 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 21:05:17.076796   62061 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.key
	I0918 21:05:17.076855   62061 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.key
	I0918 21:05:17.076871   62061 certs.go:256] generating profile certs ...
	I0918 21:05:17.076999   62061 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/old-k8s-version-740194/client.key
	I0918 21:05:17.077083   62061 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/old-k8s-version-740194/apiserver.key.424b07d9
	I0918 21:05:17.077149   62061 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/old-k8s-version-740194/proxy-client.key
	I0918 21:05:17.077321   62061 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878.pem (1338 bytes)
	W0918 21:05:17.077371   62061 certs.go:480] ignoring /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878_empty.pem, impossibly tiny 0 bytes
	I0918 21:05:17.077386   62061 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem (1675 bytes)
	I0918 21:05:17.077421   62061 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem (1082 bytes)
	I0918 21:05:17.077465   62061 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem (1123 bytes)
	I0918 21:05:17.077499   62061 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem (1679 bytes)
	I0918 21:05:17.077585   62061 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem (1708 bytes)
	I0918 21:05:17.078553   62061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0918 21:05:17.116000   62061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0918 21:05:17.150848   62061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0918 21:05:17.190616   62061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0918 21:05:17.222411   62061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/old-k8s-version-740194/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0918 21:05:17.266923   62061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/old-k8s-version-740194/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0918 21:05:17.303973   62061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/old-k8s-version-740194/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0918 21:05:17.334390   62061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/old-k8s-version-740194/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0918 21:05:17.358732   62061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0918 21:05:17.385693   62061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878.pem --> /usr/share/ca-certificates/14878.pem (1338 bytes)
	I0918 21:05:17.411396   62061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem --> /usr/share/ca-certificates/148782.pem (1708 bytes)
	I0918 21:05:17.440205   62061 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0918 21:05:17.459639   62061 ssh_runner.go:195] Run: openssl version
	I0918 21:05:17.465846   62061 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0918 21:05:17.477482   62061 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0918 21:05:17.482579   62061 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 18 19:39 /usr/share/ca-certificates/minikubeCA.pem
	I0918 21:05:17.482650   62061 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0918 21:05:17.488822   62061 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0918 21:05:17.499729   62061 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14878.pem && ln -fs /usr/share/ca-certificates/14878.pem /etc/ssl/certs/14878.pem"
	I0918 21:05:17.511718   62061 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14878.pem
	I0918 21:05:17.516361   62061 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 18 19:57 /usr/share/ca-certificates/14878.pem
	I0918 21:05:17.516438   62061 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14878.pem
	I0918 21:05:17.522542   62061 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14878.pem /etc/ssl/certs/51391683.0"
	I0918 21:05:17.534450   62061 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148782.pem && ln -fs /usr/share/ca-certificates/148782.pem /etc/ssl/certs/148782.pem"
	I0918 21:05:17.545916   62061 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148782.pem
	I0918 21:05:17.550672   62061 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 18 19:57 /usr/share/ca-certificates/148782.pem
	I0918 21:05:17.550737   62061 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148782.pem
	I0918 21:05:17.556802   62061 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148782.pem /etc/ssl/certs/3ec20f2e.0"
	I0918 21:05:17.568991   62061 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0918 21:05:17.574098   62061 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0918 21:05:17.581458   62061 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0918 21:05:17.588242   62061 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0918 21:05:17.594889   62061 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0918 21:05:17.601499   62061 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0918 21:05:17.608193   62061 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0918 21:05:17.614196   62061 kubeadm.go:392] StartCluster: {Name:old-k8s-version-740194 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-740194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.53 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 21:05:17.614298   62061 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0918 21:05:17.614363   62061 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0918 21:05:17.661551   62061 cri.go:89] found id: ""
	I0918 21:05:17.661627   62061 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0918 21:05:17.673087   62061 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0918 21:05:17.673110   62061 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0918 21:05:17.673153   62061 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0918 21:05:17.683213   62061 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0918 21:05:17.684537   62061 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-740194" does not appear in /home/jenkins/minikube-integration/19667-7671/kubeconfig
	I0918 21:05:17.685136   62061 kubeconfig.go:62] /home/jenkins/minikube-integration/19667-7671/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-740194" cluster setting kubeconfig missing "old-k8s-version-740194" context setting]
	I0918 21:05:17.686023   62061 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/kubeconfig: {Name:mk3a3eecadb0164dfdd5d3c4392b5b473a2a9bb0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 21:05:17.711337   62061 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0918 21:05:17.723894   62061 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.53
	I0918 21:05:17.723936   62061 kubeadm.go:1160] stopping kube-system containers ...
	I0918 21:05:17.723949   62061 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0918 21:05:17.724005   62061 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0918 21:05:17.761087   62061 cri.go:89] found id: ""
	I0918 21:05:17.761166   62061 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0918 21:05:17.779689   62061 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0918 21:05:17.791699   62061 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0918 21:05:17.791723   62061 kubeadm.go:157] found existing configuration files:
	
	I0918 21:05:17.791773   62061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0918 21:05:17.801804   62061 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0918 21:05:17.801879   62061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0918 21:05:17.812806   62061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0918 21:05:17.822813   62061 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0918 21:05:17.822902   62061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0918 21:05:17.833267   62061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0918 21:05:17.845148   62061 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0918 21:05:17.845230   62061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0918 21:05:17.855974   62061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0918 21:05:17.866490   62061 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0918 21:05:17.866593   62061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0918 21:05:17.877339   62061 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0918 21:05:17.887825   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:05:18.021691   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:05:18.726750   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:05:18.970635   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:05:19.081869   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:05:19.203071   62061 api_server.go:52] waiting for apiserver process to appear ...
	I0918 21:05:19.203166   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:17.156934   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:17.157481   61273 main.go:141] libmachine: (no-preload-331658) DBG | unable to find current IP address of domain no-preload-331658 in network mk-no-preload-331658
	I0918 21:05:17.157509   61273 main.go:141] libmachine: (no-preload-331658) DBG | I0918 21:05:17.157429   63200 retry.go:31] will retry after 1.586399944s: waiting for machine to come up
	I0918 21:05:18.746155   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:18.746620   61273 main.go:141] libmachine: (no-preload-331658) DBG | unable to find current IP address of domain no-preload-331658 in network mk-no-preload-331658
	I0918 21:05:18.746650   61273 main.go:141] libmachine: (no-preload-331658) DBG | I0918 21:05:18.746571   63200 retry.go:31] will retry after 2.204220189s: waiting for machine to come up
	I0918 21:05:20.953669   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:20.954223   61273 main.go:141] libmachine: (no-preload-331658) DBG | unable to find current IP address of domain no-preload-331658 in network mk-no-preload-331658
	I0918 21:05:20.954287   61273 main.go:141] libmachine: (no-preload-331658) DBG | I0918 21:05:20.954209   63200 retry.go:31] will retry after 2.418479665s: waiting for machine to come up
	I0918 21:05:18.634113   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:21.133516   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:18.365915   61740 pod_ready.go:93] pod "kube-apiserver-embed-certs-255556" in "kube-system" namespace has status "Ready":"True"
	I0918 21:05:18.365943   61740 pod_ready.go:82] duration metric: took 4.580799395s for pod "kube-apiserver-embed-certs-255556" in "kube-system" namespace to be "Ready" ...
	I0918 21:05:18.365956   61740 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-255556" in "kube-system" namespace to be "Ready" ...
	I0918 21:05:18.371010   61740 pod_ready.go:93] pod "kube-controller-manager-embed-certs-255556" in "kube-system" namespace has status "Ready":"True"
	I0918 21:05:18.371035   61740 pod_ready.go:82] duration metric: took 5.070331ms for pod "kube-controller-manager-embed-certs-255556" in "kube-system" namespace to be "Ready" ...
	I0918 21:05:18.371046   61740 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-v8szm" in "kube-system" namespace to be "Ready" ...
	I0918 21:05:18.375632   61740 pod_ready.go:93] pod "kube-proxy-v8szm" in "kube-system" namespace has status "Ready":"True"
	I0918 21:05:18.375658   61740 pod_ready.go:82] duration metric: took 4.603787ms for pod "kube-proxy-v8szm" in "kube-system" namespace to be "Ready" ...
	I0918 21:05:18.375671   61740 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-255556" in "kube-system" namespace to be "Ready" ...
	I0918 21:05:18.380527   61740 pod_ready.go:93] pod "kube-scheduler-embed-certs-255556" in "kube-system" namespace has status "Ready":"True"
	I0918 21:05:18.380551   61740 pod_ready.go:82] duration metric: took 4.872699ms for pod "kube-scheduler-embed-certs-255556" in "kube-system" namespace to be "Ready" ...
	I0918 21:05:18.380563   61740 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace to be "Ready" ...
	I0918 21:05:20.388600   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:22.887122   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:19.704128   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:20.203595   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:20.703244   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:21.204288   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:21.703469   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:22.203224   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:22.703968   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:23.204097   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:23.703933   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:24.204272   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:23.375904   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:23.376450   61273 main.go:141] libmachine: (no-preload-331658) DBG | unable to find current IP address of domain no-preload-331658 in network mk-no-preload-331658
	I0918 21:05:23.376476   61273 main.go:141] libmachine: (no-preload-331658) DBG | I0918 21:05:23.376397   63200 retry.go:31] will retry after 4.431211335s: waiting for machine to come up
	I0918 21:05:23.633093   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:25.633913   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:24.887771   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:27.386891   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:24.704240   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:25.203880   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:25.703983   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:26.204273   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:26.703861   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:27.204064   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:27.703276   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:28.204289   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:28.703701   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:29.203604   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:27.811234   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:27.811698   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has current primary IP address 192.168.61.31 and MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:27.811719   61273 main.go:141] libmachine: (no-preload-331658) Found IP for machine: 192.168.61.31
	I0918 21:05:27.811729   61273 main.go:141] libmachine: (no-preload-331658) Reserving static IP address...
	I0918 21:05:27.812131   61273 main.go:141] libmachine: (no-preload-331658) DBG | found host DHCP lease matching {name: "no-preload-331658", mac: "52:54:00:2a:47:d0", ip: "192.168.61.31"} in network mk-no-preload-331658: {Iface:virbr2 ExpiryTime:2024-09-18 21:55:52 +0000 UTC Type:0 Mac:52:54:00:2a:47:d0 Iaid: IPaddr:192.168.61.31 Prefix:24 Hostname:no-preload-331658 Clientid:01:52:54:00:2a:47:d0}
	I0918 21:05:27.812150   61273 main.go:141] libmachine: (no-preload-331658) Reserved static IP address: 192.168.61.31
	I0918 21:05:27.812163   61273 main.go:141] libmachine: (no-preload-331658) DBG | skip adding static IP to network mk-no-preload-331658 - found existing host DHCP lease matching {name: "no-preload-331658", mac: "52:54:00:2a:47:d0", ip: "192.168.61.31"}
	I0918 21:05:27.812170   61273 main.go:141] libmachine: (no-preload-331658) Waiting for SSH to be available...
	I0918 21:05:27.812178   61273 main.go:141] libmachine: (no-preload-331658) DBG | Getting to WaitForSSH function...
	I0918 21:05:27.814300   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:27.814735   61273 main.go:141] libmachine: (no-preload-331658) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:47:d0", ip: ""} in network mk-no-preload-331658: {Iface:virbr2 ExpiryTime:2024-09-18 21:55:52 +0000 UTC Type:0 Mac:52:54:00:2a:47:d0 Iaid: IPaddr:192.168.61.31 Prefix:24 Hostname:no-preload-331658 Clientid:01:52:54:00:2a:47:d0}
	I0918 21:05:27.814767   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined IP address 192.168.61.31 and MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:27.814891   61273 main.go:141] libmachine: (no-preload-331658) DBG | Using SSH client type: external
	I0918 21:05:27.814922   61273 main.go:141] libmachine: (no-preload-331658) DBG | Using SSH private key: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/no-preload-331658/id_rsa (-rw-------)
	I0918 21:05:27.814945   61273 main.go:141] libmachine: (no-preload-331658) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.31 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19667-7671/.minikube/machines/no-preload-331658/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0918 21:05:27.814972   61273 main.go:141] libmachine: (no-preload-331658) DBG | About to run SSH command:
	I0918 21:05:27.814985   61273 main.go:141] libmachine: (no-preload-331658) DBG | exit 0
	I0918 21:05:27.939949   61273 main.go:141] libmachine: (no-preload-331658) DBG | SSH cmd err, output: <nil>: 
	I0918 21:05:27.940365   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetConfigRaw
	I0918 21:05:27.941187   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetIP
	I0918 21:05:27.943976   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:27.944375   61273 main.go:141] libmachine: (no-preload-331658) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:47:d0", ip: ""} in network mk-no-preload-331658: {Iface:virbr2 ExpiryTime:2024-09-18 21:55:52 +0000 UTC Type:0 Mac:52:54:00:2a:47:d0 Iaid: IPaddr:192.168.61.31 Prefix:24 Hostname:no-preload-331658 Clientid:01:52:54:00:2a:47:d0}
	I0918 21:05:27.944399   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined IP address 192.168.61.31 and MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:27.944670   61273 profile.go:143] Saving config to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/no-preload-331658/config.json ...
	I0918 21:05:27.944942   61273 machine.go:93] provisionDockerMachine start ...
	I0918 21:05:27.944963   61273 main.go:141] libmachine: (no-preload-331658) Calling .DriverName
	I0918 21:05:27.945228   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHHostname
	I0918 21:05:27.947444   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:27.947810   61273 main.go:141] libmachine: (no-preload-331658) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:47:d0", ip: ""} in network mk-no-preload-331658: {Iface:virbr2 ExpiryTime:2024-09-18 21:55:52 +0000 UTC Type:0 Mac:52:54:00:2a:47:d0 Iaid: IPaddr:192.168.61.31 Prefix:24 Hostname:no-preload-331658 Clientid:01:52:54:00:2a:47:d0}
	I0918 21:05:27.947843   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined IP address 192.168.61.31 and MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:27.947974   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHPort
	I0918 21:05:27.948196   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHKeyPath
	I0918 21:05:27.948404   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHKeyPath
	I0918 21:05:27.948664   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHUsername
	I0918 21:05:27.948845   61273 main.go:141] libmachine: Using SSH client type: native
	I0918 21:05:27.949078   61273 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.31 22 <nil> <nil>}
	I0918 21:05:27.949099   61273 main.go:141] libmachine: About to run SSH command:
	hostname
	I0918 21:05:28.052352   61273 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0918 21:05:28.052378   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetMachineName
	I0918 21:05:28.052638   61273 buildroot.go:166] provisioning hostname "no-preload-331658"
	I0918 21:05:28.052668   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetMachineName
	I0918 21:05:28.052923   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHHostname
	I0918 21:05:28.056168   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:28.056599   61273 main.go:141] libmachine: (no-preload-331658) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:47:d0", ip: ""} in network mk-no-preload-331658: {Iface:virbr2 ExpiryTime:2024-09-18 21:55:52 +0000 UTC Type:0 Mac:52:54:00:2a:47:d0 Iaid: IPaddr:192.168.61.31 Prefix:24 Hostname:no-preload-331658 Clientid:01:52:54:00:2a:47:d0}
	I0918 21:05:28.056631   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined IP address 192.168.61.31 and MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:28.056805   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHPort
	I0918 21:05:28.057009   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHKeyPath
	I0918 21:05:28.057168   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHKeyPath
	I0918 21:05:28.057305   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHUsername
	I0918 21:05:28.057478   61273 main.go:141] libmachine: Using SSH client type: native
	I0918 21:05:28.057652   61273 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.31 22 <nil> <nil>}
	I0918 21:05:28.057665   61273 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-331658 && echo "no-preload-331658" | sudo tee /etc/hostname
	I0918 21:05:28.174245   61273 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-331658
	
	I0918 21:05:28.174282   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHHostname
	I0918 21:05:28.177373   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:28.177753   61273 main.go:141] libmachine: (no-preload-331658) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:47:d0", ip: ""} in network mk-no-preload-331658: {Iface:virbr2 ExpiryTime:2024-09-18 21:55:52 +0000 UTC Type:0 Mac:52:54:00:2a:47:d0 Iaid: IPaddr:192.168.61.31 Prefix:24 Hostname:no-preload-331658 Clientid:01:52:54:00:2a:47:d0}
	I0918 21:05:28.177781   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined IP address 192.168.61.31 and MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:28.177981   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHPort
	I0918 21:05:28.178202   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHKeyPath
	I0918 21:05:28.178380   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHKeyPath
	I0918 21:05:28.178523   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHUsername
	I0918 21:05:28.178752   61273 main.go:141] libmachine: Using SSH client type: native
	I0918 21:05:28.178948   61273 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.31 22 <nil> <nil>}
	I0918 21:05:28.178965   61273 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-331658' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-331658/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-331658' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0918 21:05:28.292659   61273 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0918 21:05:28.292691   61273 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19667-7671/.minikube CaCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19667-7671/.minikube}
	I0918 21:05:28.292714   61273 buildroot.go:174] setting up certificates
	I0918 21:05:28.292725   61273 provision.go:84] configureAuth start
	I0918 21:05:28.292734   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetMachineName
	I0918 21:05:28.293091   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetIP
	I0918 21:05:28.295792   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:28.296192   61273 main.go:141] libmachine: (no-preload-331658) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:47:d0", ip: ""} in network mk-no-preload-331658: {Iface:virbr2 ExpiryTime:2024-09-18 21:55:52 +0000 UTC Type:0 Mac:52:54:00:2a:47:d0 Iaid: IPaddr:192.168.61.31 Prefix:24 Hostname:no-preload-331658 Clientid:01:52:54:00:2a:47:d0}
	I0918 21:05:28.296219   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined IP address 192.168.61.31 and MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:28.296405   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHHostname
	I0918 21:05:28.298446   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:28.298788   61273 main.go:141] libmachine: (no-preload-331658) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:47:d0", ip: ""} in network mk-no-preload-331658: {Iface:virbr2 ExpiryTime:2024-09-18 21:55:52 +0000 UTC Type:0 Mac:52:54:00:2a:47:d0 Iaid: IPaddr:192.168.61.31 Prefix:24 Hostname:no-preload-331658 Clientid:01:52:54:00:2a:47:d0}
	I0918 21:05:28.298815   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined IP address 192.168.61.31 and MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:28.298938   61273 provision.go:143] copyHostCerts
	I0918 21:05:28.299013   61273 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem, removing ...
	I0918 21:05:28.299026   61273 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem
	I0918 21:05:28.299078   61273 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem (1082 bytes)
	I0918 21:05:28.299170   61273 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem, removing ...
	I0918 21:05:28.299178   61273 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem
	I0918 21:05:28.299199   61273 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem (1123 bytes)
	I0918 21:05:28.299252   61273 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem, removing ...
	I0918 21:05:28.299258   61273 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem
	I0918 21:05:28.299278   61273 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem (1679 bytes)
	I0918 21:05:28.299325   61273 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem org=jenkins.no-preload-331658 san=[127.0.0.1 192.168.61.31 localhost minikube no-preload-331658]
	I0918 21:05:28.606565   61273 provision.go:177] copyRemoteCerts
	I0918 21:05:28.606629   61273 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0918 21:05:28.606653   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHHostname
	I0918 21:05:28.609156   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:28.609533   61273 main.go:141] libmachine: (no-preload-331658) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:47:d0", ip: ""} in network mk-no-preload-331658: {Iface:virbr2 ExpiryTime:2024-09-18 21:55:52 +0000 UTC Type:0 Mac:52:54:00:2a:47:d0 Iaid: IPaddr:192.168.61.31 Prefix:24 Hostname:no-preload-331658 Clientid:01:52:54:00:2a:47:d0}
	I0918 21:05:28.609564   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined IP address 192.168.61.31 and MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:28.609690   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHPort
	I0918 21:05:28.609891   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHKeyPath
	I0918 21:05:28.610102   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHUsername
	I0918 21:05:28.610332   61273 sshutil.go:53] new ssh client: &{IP:192.168.61.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/no-preload-331658/id_rsa Username:docker}
	I0918 21:05:28.690571   61273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0918 21:05:28.719257   61273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0918 21:05:28.744119   61273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0918 21:05:28.768692   61273 provision.go:87] duration metric: took 475.955066ms to configureAuth
	I0918 21:05:28.768720   61273 buildroot.go:189] setting minikube options for container-runtime
	I0918 21:05:28.768941   61273 config.go:182] Loaded profile config "no-preload-331658": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0918 21:05:28.769031   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHHostname
	I0918 21:05:28.771437   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:28.771747   61273 main.go:141] libmachine: (no-preload-331658) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:47:d0", ip: ""} in network mk-no-preload-331658: {Iface:virbr2 ExpiryTime:2024-09-18 21:55:52 +0000 UTC Type:0 Mac:52:54:00:2a:47:d0 Iaid: IPaddr:192.168.61.31 Prefix:24 Hostname:no-preload-331658 Clientid:01:52:54:00:2a:47:d0}
	I0918 21:05:28.771786   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined IP address 192.168.61.31 and MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:28.771906   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHPort
	I0918 21:05:28.772127   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHKeyPath
	I0918 21:05:28.772330   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHKeyPath
	I0918 21:05:28.772496   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHUsername
	I0918 21:05:28.772717   61273 main.go:141] libmachine: Using SSH client type: native
	I0918 21:05:28.772886   61273 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.31 22 <nil> <nil>}
	I0918 21:05:28.772902   61273 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0918 21:05:29.001137   61273 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0918 21:05:29.001160   61273 machine.go:96] duration metric: took 1.056205004s to provisionDockerMachine
	I0918 21:05:29.001171   61273 start.go:293] postStartSetup for "no-preload-331658" (driver="kvm2")
	I0918 21:05:29.001181   61273 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0918 21:05:29.001194   61273 main.go:141] libmachine: (no-preload-331658) Calling .DriverName
	I0918 21:05:29.001531   61273 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0918 21:05:29.001556   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHHostname
	I0918 21:05:29.004307   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:29.004656   61273 main.go:141] libmachine: (no-preload-331658) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:47:d0", ip: ""} in network mk-no-preload-331658: {Iface:virbr2 ExpiryTime:2024-09-18 21:55:52 +0000 UTC Type:0 Mac:52:54:00:2a:47:d0 Iaid: IPaddr:192.168.61.31 Prefix:24 Hostname:no-preload-331658 Clientid:01:52:54:00:2a:47:d0}
	I0918 21:05:29.004686   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined IP address 192.168.61.31 and MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:29.004877   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHPort
	I0918 21:05:29.005128   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHKeyPath
	I0918 21:05:29.005379   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHUsername
	I0918 21:05:29.005556   61273 sshutil.go:53] new ssh client: &{IP:192.168.61.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/no-preload-331658/id_rsa Username:docker}
	I0918 21:05:29.087453   61273 ssh_runner.go:195] Run: cat /etc/os-release
	I0918 21:05:29.091329   61273 info.go:137] Remote host: Buildroot 2023.02.9
	I0918 21:05:29.091356   61273 filesync.go:126] Scanning /home/jenkins/minikube-integration/19667-7671/.minikube/addons for local assets ...
	I0918 21:05:29.091422   61273 filesync.go:126] Scanning /home/jenkins/minikube-integration/19667-7671/.minikube/files for local assets ...
	I0918 21:05:29.091493   61273 filesync.go:149] local asset: /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem -> 148782.pem in /etc/ssl/certs
	I0918 21:05:29.091578   61273 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0918 21:05:29.101039   61273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem --> /etc/ssl/certs/148782.pem (1708 bytes)
	I0918 21:05:29.125451   61273 start.go:296] duration metric: took 124.264463ms for postStartSetup
	I0918 21:05:29.125492   61273 fix.go:56] duration metric: took 19.880181743s for fixHost
	I0918 21:05:29.125514   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHHostname
	I0918 21:05:29.128543   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:29.128968   61273 main.go:141] libmachine: (no-preload-331658) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:47:d0", ip: ""} in network mk-no-preload-331658: {Iface:virbr2 ExpiryTime:2024-09-18 21:55:52 +0000 UTC Type:0 Mac:52:54:00:2a:47:d0 Iaid: IPaddr:192.168.61.31 Prefix:24 Hostname:no-preload-331658 Clientid:01:52:54:00:2a:47:d0}
	I0918 21:05:29.129022   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined IP address 192.168.61.31 and MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:29.129185   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHPort
	I0918 21:05:29.129385   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHKeyPath
	I0918 21:05:29.129580   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHKeyPath
	I0918 21:05:29.129739   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHUsername
	I0918 21:05:29.129919   61273 main.go:141] libmachine: Using SSH client type: native
	I0918 21:05:29.130155   61273 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.31 22 <nil> <nil>}
	I0918 21:05:29.130172   61273 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0918 21:05:29.240857   61273 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726693529.214864261
	
	I0918 21:05:29.240886   61273 fix.go:216] guest clock: 1726693529.214864261
	I0918 21:05:29.240897   61273 fix.go:229] Guest: 2024-09-18 21:05:29.214864261 +0000 UTC Remote: 2024-09-18 21:05:29.125495769 +0000 UTC m=+357.666326175 (delta=89.368492ms)
	I0918 21:05:29.240943   61273 fix.go:200] guest clock delta is within tolerance: 89.368492ms
	I0918 21:05:29.240949   61273 start.go:83] releasing machines lock for "no-preload-331658", held for 19.99567651s
	I0918 21:05:29.240969   61273 main.go:141] libmachine: (no-preload-331658) Calling .DriverName
	I0918 21:05:29.241256   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetIP
	I0918 21:05:29.243922   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:29.244347   61273 main.go:141] libmachine: (no-preload-331658) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:47:d0", ip: ""} in network mk-no-preload-331658: {Iface:virbr2 ExpiryTime:2024-09-18 21:55:52 +0000 UTC Type:0 Mac:52:54:00:2a:47:d0 Iaid: IPaddr:192.168.61.31 Prefix:24 Hostname:no-preload-331658 Clientid:01:52:54:00:2a:47:d0}
	I0918 21:05:29.244376   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined IP address 192.168.61.31 and MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:29.244575   61273 main.go:141] libmachine: (no-preload-331658) Calling .DriverName
	I0918 21:05:29.245157   61273 main.go:141] libmachine: (no-preload-331658) Calling .DriverName
	I0918 21:05:29.245380   61273 main.go:141] libmachine: (no-preload-331658) Calling .DriverName
	I0918 21:05:29.245492   61273 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0918 21:05:29.245548   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHHostname
	I0918 21:05:29.245640   61273 ssh_runner.go:195] Run: cat /version.json
	I0918 21:05:29.245665   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHHostname
	I0918 21:05:29.248511   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:29.248927   61273 main.go:141] libmachine: (no-preload-331658) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:47:d0", ip: ""} in network mk-no-preload-331658: {Iface:virbr2 ExpiryTime:2024-09-18 21:55:52 +0000 UTC Type:0 Mac:52:54:00:2a:47:d0 Iaid: IPaddr:192.168.61.31 Prefix:24 Hostname:no-preload-331658 Clientid:01:52:54:00:2a:47:d0}
	I0918 21:05:29.248954   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:29.248984   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined IP address 192.168.61.31 and MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:29.249198   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHPort
	I0918 21:05:29.249423   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHKeyPath
	I0918 21:05:29.249506   61273 main.go:141] libmachine: (no-preload-331658) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:47:d0", ip: ""} in network mk-no-preload-331658: {Iface:virbr2 ExpiryTime:2024-09-18 21:55:52 +0000 UTC Type:0 Mac:52:54:00:2a:47:d0 Iaid: IPaddr:192.168.61.31 Prefix:24 Hostname:no-preload-331658 Clientid:01:52:54:00:2a:47:d0}
	I0918 21:05:29.249538   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined IP address 192.168.61.31 and MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:29.249608   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHUsername
	I0918 21:05:29.249692   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHPort
	I0918 21:05:29.249791   61273 sshutil.go:53] new ssh client: &{IP:192.168.61.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/no-preload-331658/id_rsa Username:docker}
	I0918 21:05:29.249899   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHKeyPath
	I0918 21:05:29.250076   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHUsername
	I0918 21:05:29.250228   61273 sshutil.go:53] new ssh client: &{IP:192.168.61.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/no-preload-331658/id_rsa Username:docker}
	I0918 21:05:29.365104   61273 ssh_runner.go:195] Run: systemctl --version
	I0918 21:05:29.371202   61273 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0918 21:05:29.518067   61273 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0918 21:05:29.524126   61273 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0918 21:05:29.524207   61273 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0918 21:05:29.540977   61273 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0918 21:05:29.541007   61273 start.go:495] detecting cgroup driver to use...
	I0918 21:05:29.541072   61273 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0918 21:05:29.558893   61273 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0918 21:05:29.576084   61273 docker.go:217] disabling cri-docker service (if available) ...
	I0918 21:05:29.576161   61273 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0918 21:05:29.591212   61273 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0918 21:05:29.605765   61273 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0918 21:05:29.734291   61273 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0918 21:05:29.892707   61273 docker.go:233] disabling docker service ...
	I0918 21:05:29.892771   61273 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0918 21:05:29.907575   61273 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0918 21:05:29.920545   61273 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0918 21:05:30.058604   61273 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0918 21:05:30.196896   61273 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0918 21:05:30.211398   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0918 21:05:30.231791   61273 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0918 21:05:30.231917   61273 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:05:30.243369   61273 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0918 21:05:30.243465   61273 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:05:30.254911   61273 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:05:30.266839   61273 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:05:30.278532   61273 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0918 21:05:30.290173   61273 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:05:30.301068   61273 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:05:30.318589   61273 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:05:30.329022   61273 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0918 21:05:30.338645   61273 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0918 21:05:30.338720   61273 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0918 21:05:30.351797   61273 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0918 21:05:30.363412   61273 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 21:05:30.504035   61273 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0918 21:05:30.606470   61273 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0918 21:05:30.606547   61273 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0918 21:05:30.611499   61273 start.go:563] Will wait 60s for crictl version
	I0918 21:05:30.611559   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:05:30.615485   61273 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0918 21:05:30.659735   61273 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0918 21:05:30.659835   61273 ssh_runner.go:195] Run: crio --version
	I0918 21:05:30.690573   61273 ssh_runner.go:195] Run: crio --version
	I0918 21:05:30.723342   61273 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0918 21:05:30.724604   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetIP
	I0918 21:05:30.727445   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:30.727885   61273 main.go:141] libmachine: (no-preload-331658) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:47:d0", ip: ""} in network mk-no-preload-331658: {Iface:virbr2 ExpiryTime:2024-09-18 21:55:52 +0000 UTC Type:0 Mac:52:54:00:2a:47:d0 Iaid: IPaddr:192.168.61.31 Prefix:24 Hostname:no-preload-331658 Clientid:01:52:54:00:2a:47:d0}
	I0918 21:05:30.727919   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined IP address 192.168.61.31 and MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:30.728132   61273 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0918 21:05:30.732134   61273 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0918 21:05:30.745695   61273 kubeadm.go:883] updating cluster {Name:no-preload-331658 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-331658 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.31 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0918 21:05:30.745813   61273 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0918 21:05:30.745849   61273 ssh_runner.go:195] Run: sudo crictl images --output json
	I0918 21:05:30.788504   61273 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0918 21:05:30.788537   61273 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.1 registry.k8s.io/kube-controller-manager:v1.31.1 registry.k8s.io/kube-scheduler:v1.31.1 registry.k8s.io/kube-proxy:v1.31.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0918 21:05:30.788634   61273 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 21:05:30.788655   61273 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0918 21:05:30.788673   61273 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I0918 21:05:30.788685   61273 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0918 21:05:30.788655   61273 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I0918 21:05:30.788796   61273 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I0918 21:05:30.788804   61273 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0918 21:05:30.788655   61273 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0918 21:05:30.790173   61273 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 21:05:30.790181   61273 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I0918 21:05:30.790199   61273 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0918 21:05:30.790170   61273 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I0918 21:05:30.790222   61273 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0918 21:05:30.790237   61273 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0918 21:05:30.790268   61273 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0918 21:05:30.790542   61273 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I0918 21:05:31.049150   61273 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0918 21:05:31.052046   61273 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.1
	I0918 21:05:31.099660   61273 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I0918 21:05:31.099861   61273 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.1
	I0918 21:05:31.111308   61273 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.1
	I0918 21:05:31.111439   61273 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0918 21:05:31.112293   61273 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.1
	I0918 21:05:31.203873   61273 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.1" needs transfer: "registry.k8s.io/kube-proxy:v1.31.1" does not exist at hash "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561" in container runtime
	I0918 21:05:31.203934   61273 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.1
	I0918 21:05:31.204042   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:05:31.208912   61273 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I0918 21:05:31.208937   61273 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.1" does not exist at hash "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b" in container runtime
	I0918 21:05:31.208968   61273 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.1
	I0918 21:05:31.208960   61273 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I0918 21:05:31.209020   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:05:31.209029   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:05:31.249355   61273 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0918 21:05:31.249408   61273 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0918 21:05:31.249459   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:05:31.253214   61273 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.1" does not exist at hash "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1" in container runtime
	I0918 21:05:31.253244   61273 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.1" does not exist at hash "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee" in container runtime
	I0918 21:05:31.253286   61273 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.1
	I0918 21:05:31.253274   61273 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0918 21:05:31.253335   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:05:31.253339   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:05:31.253351   61273 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0918 21:05:31.253405   61273 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0918 21:05:31.253419   61273 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0918 21:05:31.255163   61273 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0918 21:05:31.330929   61273 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0918 21:05:31.330999   61273 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0918 21:05:31.349540   61273 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0918 21:05:31.349558   61273 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0918 21:05:31.350088   61273 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0918 21:05:31.353763   61273 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0918 21:05:31.447057   61273 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0918 21:05:31.457171   61273 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0918 21:05:31.457239   61273 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0918 21:05:31.483087   61273 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0918 21:05:31.483097   61273 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0918 21:05:31.483210   61273 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0918 21:05:28.131874   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:30.133067   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:32.134557   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:29.389052   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:31.887032   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:29.704274   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:30.203372   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:30.703751   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:31.203670   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:31.704097   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:32.203611   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:32.703968   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:33.203356   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:33.704260   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:34.204082   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:31.573784   61273 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I0918 21:05:31.573906   61273 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1
	I0918 21:05:31.573927   61273 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0918 21:05:31.573951   61273 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I0918 21:05:31.574038   61273 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0918 21:05:31.605972   61273 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I0918 21:05:31.606077   61273 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0918 21:05:31.606086   61273 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I0918 21:05:31.613640   61273 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0918 21:05:31.613769   61273 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0918 21:05:31.641105   61273 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I0918 21:05:31.641109   61273 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.1 (exists)
	I0918 21:05:31.641199   61273 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.1
	I0918 21:05:31.641223   61273 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0918 21:05:31.641244   61273 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1
	I0918 21:05:31.641175   61273 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.1 (exists)
	I0918 21:05:31.666586   61273 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I0918 21:05:31.666661   61273 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I0918 21:05:31.666792   61273 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0918 21:05:31.666821   61273 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.1 (exists)
	I0918 21:05:31.666795   61273 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0918 21:05:32.009797   61273 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 21:05:33.610028   61273 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1: (1.968756977s)
	I0918 21:05:33.610065   61273 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 from cache
	I0918 21:05:33.610080   61273 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1: (1.943261692s)
	I0918 21:05:33.610111   61273 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.1 (exists)
	I0918 21:05:33.610090   61273 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0918 21:05:33.610122   61273 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.600294362s)
	I0918 21:05:33.610161   61273 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0918 21:05:33.610174   61273 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0918 21:05:33.610193   61273 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 21:05:33.610242   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:05:35.571685   61273 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1: (1.96147024s)
	I0918 21:05:35.571722   61273 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 from cache
	I0918 21:05:35.571748   61273 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I0918 21:05:35.571802   61273 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I0918 21:05:35.571802   61273 ssh_runner.go:235] Completed: which crictl: (1.961540517s)
	I0918 21:05:35.571882   61273 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 21:05:34.632853   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:36.633341   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:33.887591   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:36.387534   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:34.703445   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:35.203660   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:35.703374   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:36.203304   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:36.704191   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:37.204129   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:37.703912   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:38.203310   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:38.703616   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:39.203258   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:37.536622   61273 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.96470192s)
	I0918 21:05:37.536666   61273 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (1.96484474s)
	I0918 21:05:37.536690   61273 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I0918 21:05:37.536713   61273 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 21:05:37.536721   61273 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0918 21:05:37.536766   61273 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0918 21:05:39.615751   61273 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1: (2.078954836s)
	I0918 21:05:39.615791   61273 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 from cache
	I0918 21:05:39.615823   61273 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.079084749s)
	I0918 21:05:39.615902   61273 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 21:05:39.615829   61273 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0918 21:05:39.615972   61273 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0918 21:05:39.676258   61273 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0918 21:05:39.676355   61273 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0918 21:05:38.634158   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:40.634292   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:38.888255   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:41.387766   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:39.704105   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:40.204102   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:40.704073   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:41.203654   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:41.703947   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:42.203722   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:42.703303   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:43.203847   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:43.704163   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:44.203216   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:42.909577   61273 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (3.233201912s)
	I0918 21:05:42.909617   61273 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0918 21:05:42.909722   61273 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.293701319s)
	I0918 21:05:42.909748   61273 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0918 21:05:42.909781   61273 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0918 21:05:42.909859   61273 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0918 21:05:44.767646   61273 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1: (1.857764218s)
	I0918 21:05:44.767673   61273 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 from cache
	I0918 21:05:44.767705   61273 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0918 21:05:44.767787   61273 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0918 21:05:45.419210   61273 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0918 21:05:45.419257   61273 cache_images.go:123] Successfully loaded all cached images
	I0918 21:05:45.419265   61273 cache_images.go:92] duration metric: took 14.630712818s to LoadCachedImages
	I0918 21:05:45.419278   61273 kubeadm.go:934] updating node { 192.168.61.31 8443 v1.31.1 crio true true} ...
	I0918 21:05:45.419399   61273 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-331658 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.31
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-331658 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0918 21:05:45.419479   61273 ssh_runner.go:195] Run: crio config
	I0918 21:05:45.468525   61273 cni.go:84] Creating CNI manager for ""
	I0918 21:05:45.468549   61273 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0918 21:05:45.468558   61273 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0918 21:05:45.468579   61273 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.31 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-331658 NodeName:no-preload-331658 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.31"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.31 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0918 21:05:45.468706   61273 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.31
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-331658"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.31
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.31"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0918 21:05:45.468781   61273 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0918 21:05:45.479592   61273 binaries.go:44] Found k8s binaries, skipping transfer
	I0918 21:05:45.479662   61273 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0918 21:05:45.488586   61273 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0918 21:05:45.507027   61273 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0918 21:05:45.525430   61273 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0918 21:05:45.543854   61273 ssh_runner.go:195] Run: grep 192.168.61.31	control-plane.minikube.internal$ /etc/hosts
	I0918 21:05:45.547792   61273 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.31	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0918 21:05:45.559968   61273 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 21:05:45.686602   61273 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0918 21:05:45.702793   61273 certs.go:68] Setting up /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/no-preload-331658 for IP: 192.168.61.31
	I0918 21:05:45.702814   61273 certs.go:194] generating shared ca certs ...
	I0918 21:05:45.702829   61273 certs.go:226] acquiring lock for ca certs: {Name:mkf54e0e71ed884c4372f9d3d4cb308ea4600185 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 21:05:45.703005   61273 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.key
	I0918 21:05:45.703071   61273 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.key
	I0918 21:05:45.703085   61273 certs.go:256] generating profile certs ...
	I0918 21:05:45.703159   61273 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/no-preload-331658/client.key
	I0918 21:05:45.703228   61273 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/no-preload-331658/apiserver.key.1a336b78
	I0918 21:05:45.703263   61273 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/no-preload-331658/proxy-client.key
	I0918 21:05:45.703384   61273 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878.pem (1338 bytes)
	W0918 21:05:45.703417   61273 certs.go:480] ignoring /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878_empty.pem, impossibly tiny 0 bytes
	I0918 21:05:45.703430   61273 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem (1675 bytes)
	I0918 21:05:45.703463   61273 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem (1082 bytes)
	I0918 21:05:45.703493   61273 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem (1123 bytes)
	I0918 21:05:45.703521   61273 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem (1679 bytes)
	I0918 21:05:45.703582   61273 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem (1708 bytes)
	I0918 21:05:45.704338   61273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0918 21:05:45.757217   61273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0918 21:05:45.791588   61273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0918 21:05:45.825543   61273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0918 21:05:45.859322   61273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/no-preload-331658/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0918 21:05:45.892890   61273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/no-preload-331658/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0918 21:05:45.922841   61273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/no-preload-331658/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0918 21:05:45.947670   61273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/no-preload-331658/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0918 21:05:45.973315   61273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem --> /usr/share/ca-certificates/148782.pem (1708 bytes)
	I0918 21:05:45.997699   61273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0918 21:05:46.022802   61273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878.pem --> /usr/share/ca-certificates/14878.pem (1338 bytes)
	I0918 21:05:46.046646   61273 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0918 21:05:46.063329   61273 ssh_runner.go:195] Run: openssl version
	I0918 21:05:46.069432   61273 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148782.pem && ln -fs /usr/share/ca-certificates/148782.pem /etc/ssl/certs/148782.pem"
	I0918 21:05:46.081104   61273 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148782.pem
	I0918 21:05:46.086180   61273 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 18 19:57 /usr/share/ca-certificates/148782.pem
	I0918 21:05:46.086241   61273 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148782.pem
	I0918 21:05:46.092527   61273 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148782.pem /etc/ssl/certs/3ec20f2e.0"
	I0918 21:05:46.103601   61273 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0918 21:05:46.114656   61273 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0918 21:05:46.118788   61273 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 18 19:39 /usr/share/ca-certificates/minikubeCA.pem
	I0918 21:05:46.118855   61273 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0918 21:05:46.124094   61273 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0918 21:05:46.135442   61273 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14878.pem && ln -fs /usr/share/ca-certificates/14878.pem /etc/ssl/certs/14878.pem"
	I0918 21:05:46.146105   61273 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14878.pem
	I0918 21:05:46.150661   61273 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 18 19:57 /usr/share/ca-certificates/14878.pem
	I0918 21:05:46.150714   61273 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14878.pem
	I0918 21:05:46.156247   61273 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14878.pem /etc/ssl/certs/51391683.0"
	I0918 21:05:46.167475   61273 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0918 21:05:46.172172   61273 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0918 21:05:46.178638   61273 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0918 21:05:46.184644   61273 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0918 21:05:46.190704   61273 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0918 21:05:46.196414   61273 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0918 21:05:46.202467   61273 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0918 21:05:46.208306   61273 kubeadm.go:392] StartCluster: {Name:no-preload-331658 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-331658 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.31 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 21:05:46.208405   61273 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0918 21:05:46.208472   61273 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0918 21:05:46.247189   61273 cri.go:89] found id: ""
	I0918 21:05:46.247267   61273 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0918 21:05:46.258228   61273 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0918 21:05:46.258253   61273 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0918 21:05:46.258309   61273 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0918 21:05:46.268703   61273 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0918 21:05:46.269728   61273 kubeconfig.go:125] found "no-preload-331658" server: "https://192.168.61.31:8443"
	I0918 21:05:46.271749   61273 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0918 21:05:46.282051   61273 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.31
	I0918 21:05:46.282105   61273 kubeadm.go:1160] stopping kube-system containers ...
	I0918 21:05:46.282122   61273 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0918 21:05:46.282191   61273 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0918 21:05:46.319805   61273 cri.go:89] found id: ""
	I0918 21:05:46.319880   61273 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0918 21:05:46.336130   61273 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0918 21:05:46.345940   61273 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0918 21:05:46.345962   61273 kubeadm.go:157] found existing configuration files:
	
	I0918 21:05:46.346008   61273 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0918 21:05:46.355577   61273 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0918 21:05:46.355658   61273 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0918 21:05:46.367154   61273 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0918 21:05:46.377062   61273 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0918 21:05:46.377126   61273 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0918 21:05:46.387180   61273 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0918 21:05:46.396578   61273 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0918 21:05:46.396642   61273 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0918 21:05:46.406687   61273 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0918 21:05:46.416545   61273 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0918 21:05:46.416617   61273 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0918 21:05:46.426405   61273 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0918 21:05:46.436343   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:05:43.132484   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:45.132905   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:47.132942   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:43.890245   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:46.386955   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:44.703267   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:45.203924   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:45.703945   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:46.203386   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:46.703674   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:47.203387   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:47.704034   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:48.203348   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:48.703715   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:49.203984   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:46.563094   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:05:47.663823   61273 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.100694645s)
	I0918 21:05:47.663857   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:05:47.895962   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:05:47.978862   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:05:48.095438   61273 api_server.go:52] waiting for apiserver process to appear ...
	I0918 21:05:48.095530   61273 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:48.595581   61273 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:49.095761   61273 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:49.122304   61273 api_server.go:72] duration metric: took 1.026867171s to wait for apiserver process to appear ...
	I0918 21:05:49.122343   61273 api_server.go:88] waiting for apiserver healthz status ...
	I0918 21:05:49.122361   61273 api_server.go:253] Checking apiserver healthz at https://192.168.61.31:8443/healthz ...
	I0918 21:05:49.133503   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:51.133761   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:48.386996   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:50.387697   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:52.886989   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:52.253818   61273 api_server.go:279] https://192.168.61.31:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0918 21:05:52.253850   61273 api_server.go:103] status: https://192.168.61.31:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0918 21:05:52.253864   61273 api_server.go:253] Checking apiserver healthz at https://192.168.61.31:8443/healthz ...
	I0918 21:05:52.290586   61273 api_server.go:279] https://192.168.61.31:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0918 21:05:52.290617   61273 api_server.go:103] status: https://192.168.61.31:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0918 21:05:52.623078   61273 api_server.go:253] Checking apiserver healthz at https://192.168.61.31:8443/healthz ...
	I0918 21:05:52.631774   61273 api_server.go:279] https://192.168.61.31:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0918 21:05:52.631811   61273 api_server.go:103] status: https://192.168.61.31:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0918 21:05:53.123498   61273 api_server.go:253] Checking apiserver healthz at https://192.168.61.31:8443/healthz ...
	I0918 21:05:53.132091   61273 api_server.go:279] https://192.168.61.31:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0918 21:05:53.132120   61273 api_server.go:103] status: https://192.168.61.31:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0918 21:05:53.622597   61273 api_server.go:253] Checking apiserver healthz at https://192.168.61.31:8443/healthz ...
	I0918 21:05:53.628896   61273 api_server.go:279] https://192.168.61.31:8443/healthz returned 200:
	ok
	I0918 21:05:53.638315   61273 api_server.go:141] control plane version: v1.31.1
	I0918 21:05:53.638354   61273 api_server.go:131] duration metric: took 4.516002991s to wait for apiserver health ...
	I0918 21:05:53.638367   61273 cni.go:84] Creating CNI manager for ""
	I0918 21:05:53.638376   61273 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0918 21:05:53.639948   61273 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0918 21:05:49.703565   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:50.204192   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:50.704248   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:51.203335   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:51.703761   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:52.203474   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:52.703901   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:53.203856   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:53.704192   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:54.204243   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:53.641376   61273 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0918 21:05:53.667828   61273 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0918 21:05:53.701667   61273 system_pods.go:43] waiting for kube-system pods to appear ...
	I0918 21:05:53.714053   61273 system_pods.go:59] 8 kube-system pods found
	I0918 21:05:53.714101   61273 system_pods.go:61] "coredns-7c65d6cfc9-dgnw2" [085d5a98-0a61-4678-830f-384780a0d7ef] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0918 21:05:53.714113   61273 system_pods.go:61] "etcd-no-preload-331658" [d6a53ff4-fcf0-46c0-9e77-9af6d0363e08] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0918 21:05:53.714126   61273 system_pods.go:61] "kube-apiserver-no-preload-331658" [1953598f-4c3f-4a98-ab75-b8ca1b8093ad] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0918 21:05:53.714135   61273 system_pods.go:61] "kube-controller-manager-no-preload-331658" [8fc655be-a192-4250-a31d-2e533a2b5e41] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0918 21:05:53.714145   61273 system_pods.go:61] "kube-proxy-hx25w" [a26512ff-f695-4452-8974-577479257160] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0918 21:05:53.714157   61273 system_pods.go:61] "kube-scheduler-no-preload-331658" [64e5347e-e7fd-4a0d-ae41-539c3989c29e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0918 21:05:53.714169   61273 system_pods.go:61] "metrics-server-6867b74b74-n27vc" [b1de76ec-8987-49ce-ae66-eedda2705cde] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0918 21:05:53.714181   61273 system_pods.go:61] "storage-provisioner" [e110aeb3-e9bc-4bb9-9b49-5579558bdda2] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0918 21:05:53.714191   61273 system_pods.go:74] duration metric: took 12.499195ms to wait for pod list to return data ...
	I0918 21:05:53.714206   61273 node_conditions.go:102] verifying NodePressure condition ...
	I0918 21:05:53.720251   61273 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0918 21:05:53.720283   61273 node_conditions.go:123] node cpu capacity is 2
	I0918 21:05:53.720296   61273 node_conditions.go:105] duration metric: took 6.082637ms to run NodePressure ...
	I0918 21:05:53.720317   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:05:54.056981   61273 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0918 21:05:54.062413   61273 kubeadm.go:739] kubelet initialised
	I0918 21:05:54.062436   61273 kubeadm.go:740] duration metric: took 5.424693ms waiting for restarted kubelet to initialise ...
	I0918 21:05:54.062443   61273 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0918 21:05:54.069721   61273 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-dgnw2" in "kube-system" namespace to be "Ready" ...
	I0918 21:05:54.089970   61273 pod_ready.go:98] node "no-preload-331658" hosting pod "coredns-7c65d6cfc9-dgnw2" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-331658" has status "Ready":"False"
	I0918 21:05:54.090005   61273 pod_ready.go:82] duration metric: took 20.250586ms for pod "coredns-7c65d6cfc9-dgnw2" in "kube-system" namespace to be "Ready" ...
	E0918 21:05:54.090017   61273 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-331658" hosting pod "coredns-7c65d6cfc9-dgnw2" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-331658" has status "Ready":"False"
	I0918 21:05:54.090046   61273 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-331658" in "kube-system" namespace to be "Ready" ...
	I0918 21:05:54.105121   61273 pod_ready.go:98] node "no-preload-331658" hosting pod "etcd-no-preload-331658" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-331658" has status "Ready":"False"
	I0918 21:05:54.105156   61273 pod_ready.go:82] duration metric: took 15.097714ms for pod "etcd-no-preload-331658" in "kube-system" namespace to be "Ready" ...
	E0918 21:05:54.105170   61273 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-331658" hosting pod "etcd-no-preload-331658" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-331658" has status "Ready":"False"
	I0918 21:05:54.105180   61273 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-331658" in "kube-system" namespace to be "Ready" ...
	I0918 21:05:54.112687   61273 pod_ready.go:98] node "no-preload-331658" hosting pod "kube-apiserver-no-preload-331658" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-331658" has status "Ready":"False"
	I0918 21:05:54.112711   61273 pod_ready.go:82] duration metric: took 7.523191ms for pod "kube-apiserver-no-preload-331658" in "kube-system" namespace to be "Ready" ...
	E0918 21:05:54.112722   61273 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-331658" hosting pod "kube-apiserver-no-preload-331658" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-331658" has status "Ready":"False"
	I0918 21:05:54.112730   61273 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-331658" in "kube-system" namespace to be "Ready" ...
	I0918 21:05:54.119681   61273 pod_ready.go:98] node "no-preload-331658" hosting pod "kube-controller-manager-no-preload-331658" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-331658" has status "Ready":"False"
	I0918 21:05:54.119707   61273 pod_ready.go:82] duration metric: took 6.967275ms for pod "kube-controller-manager-no-preload-331658" in "kube-system" namespace to be "Ready" ...
	E0918 21:05:54.119716   61273 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-331658" hosting pod "kube-controller-manager-no-preload-331658" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-331658" has status "Ready":"False"
	I0918 21:05:54.119723   61273 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-hx25w" in "kube-system" namespace to be "Ready" ...
	I0918 21:05:54.505099   61273 pod_ready.go:98] node "no-preload-331658" hosting pod "kube-proxy-hx25w" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-331658" has status "Ready":"False"
	I0918 21:05:54.505127   61273 pod_ready.go:82] duration metric: took 385.395528ms for pod "kube-proxy-hx25w" in "kube-system" namespace to be "Ready" ...
	E0918 21:05:54.505140   61273 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-331658" hosting pod "kube-proxy-hx25w" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-331658" has status "Ready":"False"
	I0918 21:05:54.505147   61273 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-331658" in "kube-system" namespace to be "Ready" ...
	I0918 21:05:54.905748   61273 pod_ready.go:98] node "no-preload-331658" hosting pod "kube-scheduler-no-preload-331658" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-331658" has status "Ready":"False"
	I0918 21:05:54.905774   61273 pod_ready.go:82] duration metric: took 400.618175ms for pod "kube-scheduler-no-preload-331658" in "kube-system" namespace to be "Ready" ...
	E0918 21:05:54.905785   61273 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-331658" hosting pod "kube-scheduler-no-preload-331658" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-331658" has status "Ready":"False"
	I0918 21:05:54.905794   61273 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace to be "Ready" ...
	I0918 21:05:55.305077   61273 pod_ready.go:98] node "no-preload-331658" hosting pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-331658" has status "Ready":"False"
	I0918 21:05:55.305106   61273 pod_ready.go:82] duration metric: took 399.301293ms for pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace to be "Ready" ...
	E0918 21:05:55.305118   61273 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-331658" hosting pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-331658" has status "Ready":"False"
	I0918 21:05:55.305126   61273 pod_ready.go:39] duration metric: took 1.242662699s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0918 21:05:55.305150   61273 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0918 21:05:55.317568   61273 ops.go:34] apiserver oom_adj: -16
	I0918 21:05:55.317597   61273 kubeadm.go:597] duration metric: took 9.0593375s to restartPrimaryControlPlane
	I0918 21:05:55.317616   61273 kubeadm.go:394] duration metric: took 9.109322119s to StartCluster
	I0918 21:05:55.317643   61273 settings.go:142] acquiring lock: {Name:mk6ae95a69dcbe00c28846409e9f46945a88de2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 21:05:55.317720   61273 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19667-7671/kubeconfig
	I0918 21:05:55.320228   61273 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/kubeconfig: {Name:mk3a3eecadb0164dfdd5d3c4392b5b473a2a9bb0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 21:05:55.320552   61273 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.31 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0918 21:05:55.320609   61273 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0918 21:05:55.320716   61273 addons.go:69] Setting storage-provisioner=true in profile "no-preload-331658"
	I0918 21:05:55.320725   61273 addons.go:69] Setting default-storageclass=true in profile "no-preload-331658"
	I0918 21:05:55.320739   61273 addons.go:234] Setting addon storage-provisioner=true in "no-preload-331658"
	W0918 21:05:55.320747   61273 addons.go:243] addon storage-provisioner should already be in state true
	I0918 21:05:55.320765   61273 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-331658"
	I0918 21:05:55.320785   61273 host.go:66] Checking if "no-preload-331658" exists ...
	I0918 21:05:55.320769   61273 addons.go:69] Setting metrics-server=true in profile "no-preload-331658"
	I0918 21:05:55.320799   61273 config.go:182] Loaded profile config "no-preload-331658": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0918 21:05:55.320808   61273 addons.go:234] Setting addon metrics-server=true in "no-preload-331658"
	W0918 21:05:55.320863   61273 addons.go:243] addon metrics-server should already be in state true
	I0918 21:05:55.320889   61273 host.go:66] Checking if "no-preload-331658" exists ...
	I0918 21:05:55.321208   61273 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:05:55.321228   61273 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:05:55.321208   61273 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:05:55.321262   61273 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:05:55.321282   61273 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:05:55.321357   61273 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:05:55.323762   61273 out.go:177] * Verifying Kubernetes components...
	I0918 21:05:55.325718   61273 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 21:05:55.348485   61273 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43725
	I0918 21:05:55.349072   61273 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:05:55.349611   61273 main.go:141] libmachine: Using API Version  1
	I0918 21:05:55.349641   61273 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:05:55.349978   61273 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:05:55.350556   61273 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:05:55.350606   61273 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:05:55.368807   61273 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34285
	I0918 21:05:55.369340   61273 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:05:55.369826   61273 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35389
	I0918 21:05:55.369908   61273 main.go:141] libmachine: Using API Version  1
	I0918 21:05:55.369928   61273 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:05:55.369949   61273 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42891
	I0918 21:05:55.370195   61273 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:05:55.370303   61273 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:05:55.370408   61273 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:05:55.370494   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetState
	I0918 21:05:55.370772   61273 main.go:141] libmachine: Using API Version  1
	I0918 21:05:55.370797   61273 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:05:55.370908   61273 main.go:141] libmachine: Using API Version  1
	I0918 21:05:55.370929   61273 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:05:55.371790   61273 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:05:55.371833   61273 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:05:55.371996   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetState
	I0918 21:05:55.372415   61273 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:05:55.372470   61273 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:05:55.372532   61273 main.go:141] libmachine: (no-preload-331658) Calling .DriverName
	I0918 21:05:55.375524   61273 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 21:05:55.375574   61273 addons.go:234] Setting addon default-storageclass=true in "no-preload-331658"
	W0918 21:05:55.375593   61273 addons.go:243] addon default-storageclass should already be in state true
	I0918 21:05:55.375626   61273 host.go:66] Checking if "no-preload-331658" exists ...
	I0918 21:05:55.376008   61273 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:05:55.376097   61273 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:05:55.377828   61273 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0918 21:05:55.377848   61273 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0918 21:05:55.377864   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHHostname
	I0918 21:05:55.381877   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:55.382379   61273 main.go:141] libmachine: (no-preload-331658) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:47:d0", ip: ""} in network mk-no-preload-331658: {Iface:virbr2 ExpiryTime:2024-09-18 21:55:52 +0000 UTC Type:0 Mac:52:54:00:2a:47:d0 Iaid: IPaddr:192.168.61.31 Prefix:24 Hostname:no-preload-331658 Clientid:01:52:54:00:2a:47:d0}
	I0918 21:05:55.382438   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined IP address 192.168.61.31 and MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:55.382767   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHPort
	I0918 21:05:55.384470   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHKeyPath
	I0918 21:05:55.384700   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHUsername
	I0918 21:05:55.384863   61273 sshutil.go:53] new ssh client: &{IP:192.168.61.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/no-preload-331658/id_rsa Username:docker}
	I0918 21:05:55.399531   61273 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38229
	I0918 21:05:55.400009   61273 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:05:55.400532   61273 main.go:141] libmachine: Using API Version  1
	I0918 21:05:55.400552   61273 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:05:55.400918   61273 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:05:55.401097   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetState
	I0918 21:05:55.403124   61273 main.go:141] libmachine: (no-preload-331658) Calling .DriverName
	I0918 21:05:55.404237   61273 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45209
	I0918 21:05:55.404637   61273 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:05:55.405088   61273 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0918 21:05:55.405422   61273 main.go:141] libmachine: Using API Version  1
	I0918 21:05:55.405443   61273 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:05:55.405906   61273 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:05:55.406570   61273 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:05:55.406620   61273 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:05:55.406959   61273 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0918 21:05:55.406973   61273 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0918 21:05:55.407380   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHHostname
	I0918 21:05:55.411410   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:55.411430   61273 main.go:141] libmachine: (no-preload-331658) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:47:d0", ip: ""} in network mk-no-preload-331658: {Iface:virbr2 ExpiryTime:2024-09-18 21:55:52 +0000 UTC Type:0 Mac:52:54:00:2a:47:d0 Iaid: IPaddr:192.168.61.31 Prefix:24 Hostname:no-preload-331658 Clientid:01:52:54:00:2a:47:d0}
	I0918 21:05:55.411440   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined IP address 192.168.61.31 and MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:55.411727   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHPort
	I0918 21:05:55.411965   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHKeyPath
	I0918 21:05:55.412171   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHUsername
	I0918 21:05:55.412377   61273 sshutil.go:53] new ssh client: &{IP:192.168.61.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/no-preload-331658/id_rsa Username:docker}
	I0918 21:05:55.426166   61273 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38277
	I0918 21:05:55.426704   61273 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:05:55.427211   61273 main.go:141] libmachine: Using API Version  1
	I0918 21:05:55.427232   61273 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:05:55.427610   61273 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:05:55.427805   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetState
	I0918 21:05:55.429864   61273 main.go:141] libmachine: (no-preload-331658) Calling .DriverName
	I0918 21:05:55.430238   61273 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0918 21:05:55.430256   61273 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0918 21:05:55.430278   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHHostname
	I0918 21:05:55.433576   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:55.433894   61273 main.go:141] libmachine: (no-preload-331658) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:47:d0", ip: ""} in network mk-no-preload-331658: {Iface:virbr2 ExpiryTime:2024-09-18 21:55:52 +0000 UTC Type:0 Mac:52:54:00:2a:47:d0 Iaid: IPaddr:192.168.61.31 Prefix:24 Hostname:no-preload-331658 Clientid:01:52:54:00:2a:47:d0}
	I0918 21:05:55.433918   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined IP address 192.168.61.31 and MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:55.434411   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHPort
	I0918 21:05:55.434650   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHKeyPath
	I0918 21:05:55.434798   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHUsername
	I0918 21:05:55.434942   61273 sshutil.go:53] new ssh client: &{IP:192.168.61.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/no-preload-331658/id_rsa Username:docker}
	I0918 21:05:55.528033   61273 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0918 21:05:55.545524   61273 node_ready.go:35] waiting up to 6m0s for node "no-preload-331658" to be "Ready" ...
	I0918 21:05:55.606477   61273 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0918 21:05:55.606498   61273 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0918 21:05:55.628256   61273 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0918 21:05:55.636122   61273 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0918 21:05:55.636154   61273 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0918 21:05:55.663081   61273 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0918 21:05:55.663108   61273 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0918 21:05:55.715011   61273 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0918 21:05:55.738192   61273 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0918 21:05:56.247539   61273 main.go:141] libmachine: Making call to close driver server
	I0918 21:05:56.247568   61273 main.go:141] libmachine: (no-preload-331658) Calling .Close
	I0918 21:05:56.247900   61273 main.go:141] libmachine: (no-preload-331658) DBG | Closing plugin on server side
	I0918 21:05:56.247922   61273 main.go:141] libmachine: Successfully made call to close driver server
	I0918 21:05:56.247937   61273 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 21:05:56.247948   61273 main.go:141] libmachine: Making call to close driver server
	I0918 21:05:56.247960   61273 main.go:141] libmachine: (no-preload-331658) Calling .Close
	I0918 21:05:56.248225   61273 main.go:141] libmachine: Successfully made call to close driver server
	I0918 21:05:56.248240   61273 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 21:05:56.248273   61273 main.go:141] libmachine: (no-preload-331658) DBG | Closing plugin on server side
	I0918 21:05:56.261942   61273 main.go:141] libmachine: Making call to close driver server
	I0918 21:05:56.261972   61273 main.go:141] libmachine: (no-preload-331658) Calling .Close
	I0918 21:05:56.262269   61273 main.go:141] libmachine: (no-preload-331658) DBG | Closing plugin on server side
	I0918 21:05:56.262344   61273 main.go:141] libmachine: Successfully made call to close driver server
	I0918 21:05:56.262361   61273 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 21:05:56.944008   61273 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.22895695s)
	I0918 21:05:56.944084   61273 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.205856091s)
	I0918 21:05:56.944121   61273 main.go:141] libmachine: Making call to close driver server
	I0918 21:05:56.944138   61273 main.go:141] libmachine: (no-preload-331658) Calling .Close
	I0918 21:05:56.944087   61273 main.go:141] libmachine: Making call to close driver server
	I0918 21:05:56.944186   61273 main.go:141] libmachine: (no-preload-331658) Calling .Close
	I0918 21:05:56.944489   61273 main.go:141] libmachine: (no-preload-331658) DBG | Closing plugin on server side
	I0918 21:05:56.944539   61273 main.go:141] libmachine: Successfully made call to close driver server
	I0918 21:05:56.944553   61273 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 21:05:56.944561   61273 main.go:141] libmachine: Making call to close driver server
	I0918 21:05:56.944572   61273 main.go:141] libmachine: (no-preload-331658) Calling .Close
	I0918 21:05:56.944559   61273 main.go:141] libmachine: (no-preload-331658) DBG | Closing plugin on server side
	I0918 21:05:56.944570   61273 main.go:141] libmachine: Successfully made call to close driver server
	I0918 21:05:56.944654   61273 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 21:05:56.944669   61273 main.go:141] libmachine: Making call to close driver server
	I0918 21:05:56.944678   61273 main.go:141] libmachine: (no-preload-331658) Calling .Close
	I0918 21:05:56.944794   61273 main.go:141] libmachine: Successfully made call to close driver server
	I0918 21:05:56.944808   61273 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 21:05:56.944823   61273 main.go:141] libmachine: (no-preload-331658) DBG | Closing plugin on server side
	I0918 21:05:56.944965   61273 main.go:141] libmachine: (no-preload-331658) DBG | Closing plugin on server side
	I0918 21:05:56.944988   61273 main.go:141] libmachine: Successfully made call to close driver server
	I0918 21:05:56.944998   61273 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 21:05:56.945010   61273 addons.go:475] Verifying addon metrics-server=true in "no-preload-331658"
	I0918 21:05:56.946962   61273 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0918 21:05:53.135068   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:55.633160   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:55.393859   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:57.888366   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:54.703227   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:55.204057   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:55.704178   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:56.203443   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:56.703517   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:57.203499   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:57.703598   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:58.203660   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:58.703897   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:59.203256   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:56.948595   61273 addons.go:510] duration metric: took 1.627989207s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0918 21:05:57.549092   61273 node_ready.go:53] node "no-preload-331658" has status "Ready":"False"
	I0918 21:06:00.050199   61273 node_ready.go:53] node "no-preload-331658" has status "Ready":"False"
	I0918 21:05:58.134289   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:00.632302   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:00.386644   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:02.387972   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:59.704149   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:00.203356   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:00.703750   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:01.203765   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:01.704295   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:02.203759   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:02.703342   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:03.204083   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:03.703777   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:04.203340   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:02.549111   61273 node_ready.go:49] node "no-preload-331658" has status "Ready":"True"
	I0918 21:06:02.549153   61273 node_ready.go:38] duration metric: took 7.003597589s for node "no-preload-331658" to be "Ready" ...
	I0918 21:06:02.549162   61273 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0918 21:06:02.554487   61273 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-dgnw2" in "kube-system" namespace to be "Ready" ...
	I0918 21:06:02.560130   61273 pod_ready.go:93] pod "coredns-7c65d6cfc9-dgnw2" in "kube-system" namespace has status "Ready":"True"
	I0918 21:06:02.560160   61273 pod_ready.go:82] duration metric: took 5.643145ms for pod "coredns-7c65d6cfc9-dgnw2" in "kube-system" namespace to be "Ready" ...
	I0918 21:06:02.560173   61273 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-331658" in "kube-system" namespace to be "Ready" ...
	I0918 21:06:02.567971   61273 pod_ready.go:93] pod "etcd-no-preload-331658" in "kube-system" namespace has status "Ready":"True"
	I0918 21:06:02.567992   61273 pod_ready.go:82] duration metric: took 7.811385ms for pod "etcd-no-preload-331658" in "kube-system" namespace to be "Ready" ...
	I0918 21:06:02.568001   61273 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-331658" in "kube-system" namespace to be "Ready" ...
	I0918 21:06:02.572606   61273 pod_ready.go:93] pod "kube-apiserver-no-preload-331658" in "kube-system" namespace has status "Ready":"True"
	I0918 21:06:02.572633   61273 pod_ready.go:82] duration metric: took 4.625414ms for pod "kube-apiserver-no-preload-331658" in "kube-system" namespace to be "Ready" ...
	I0918 21:06:02.572644   61273 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-331658" in "kube-system" namespace to be "Ready" ...
	I0918 21:06:02.577222   61273 pod_ready.go:93] pod "kube-controller-manager-no-preload-331658" in "kube-system" namespace has status "Ready":"True"
	I0918 21:06:02.577243   61273 pod_ready.go:82] duration metric: took 4.591499ms for pod "kube-controller-manager-no-preload-331658" in "kube-system" namespace to be "Ready" ...
	I0918 21:06:02.577252   61273 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-hx25w" in "kube-system" namespace to be "Ready" ...
	I0918 21:06:02.949682   61273 pod_ready.go:93] pod "kube-proxy-hx25w" in "kube-system" namespace has status "Ready":"True"
	I0918 21:06:02.949707   61273 pod_ready.go:82] duration metric: took 372.449094ms for pod "kube-proxy-hx25w" in "kube-system" namespace to be "Ready" ...
	I0918 21:06:02.949716   61273 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-331658" in "kube-system" namespace to be "Ready" ...
	I0918 21:06:03.350071   61273 pod_ready.go:93] pod "kube-scheduler-no-preload-331658" in "kube-system" namespace has status "Ready":"True"
	I0918 21:06:03.350104   61273 pod_ready.go:82] duration metric: took 400.380059ms for pod "kube-scheduler-no-preload-331658" in "kube-system" namespace to be "Ready" ...
	I0918 21:06:03.350118   61273 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace to be "Ready" ...
	I0918 21:06:05.357041   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:02.634105   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:05.132860   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:04.887184   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:06.887596   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:04.703762   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:05.203639   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:05.703335   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:06.204156   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:06.703735   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:07.203278   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:07.704072   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:08.203299   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:08.703528   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:09.203725   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:07.857844   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:10.356822   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:07.633985   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:10.133861   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:08.887695   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:11.387735   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:09.703712   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:10.203930   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:10.704216   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:11.203706   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:11.703494   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:12.204098   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:12.703927   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:13.203379   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:13.703248   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:14.204000   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:12.356878   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:14.360285   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:12.631731   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:15.132229   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:17.132802   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:13.887296   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:16.386306   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:14.704159   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:15.203401   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:15.703942   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:16.204043   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:16.703691   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:17.203508   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:17.703445   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:18.203689   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:18.704249   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:19.203290   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:06:19.203401   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:06:19.239033   62061 cri.go:89] found id: ""
	I0918 21:06:19.239065   62061 logs.go:276] 0 containers: []
	W0918 21:06:19.239073   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:06:19.239079   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:06:19.239141   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:06:19.274781   62061 cri.go:89] found id: ""
	I0918 21:06:19.274809   62061 logs.go:276] 0 containers: []
	W0918 21:06:19.274819   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:06:19.274833   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:06:19.274895   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:06:19.307894   62061 cri.go:89] found id: ""
	I0918 21:06:19.307928   62061 logs.go:276] 0 containers: []
	W0918 21:06:19.307940   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:06:19.307948   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:06:19.308002   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:06:19.340572   62061 cri.go:89] found id: ""
	I0918 21:06:19.340602   62061 logs.go:276] 0 containers: []
	W0918 21:06:19.340610   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:06:19.340615   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:06:19.340672   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:06:19.375448   62061 cri.go:89] found id: ""
	I0918 21:06:19.375475   62061 logs.go:276] 0 containers: []
	W0918 21:06:19.375483   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:06:19.375489   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:06:19.375536   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:06:19.413102   62061 cri.go:89] found id: ""
	I0918 21:06:19.413133   62061 logs.go:276] 0 containers: []
	W0918 21:06:19.413158   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:06:19.413166   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:06:19.413238   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:06:19.447497   62061 cri.go:89] found id: ""
	I0918 21:06:19.447526   62061 logs.go:276] 0 containers: []
	W0918 21:06:19.447536   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:06:19.447544   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:06:19.447605   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:06:19.480848   62061 cri.go:89] found id: ""
	I0918 21:06:19.480880   62061 logs.go:276] 0 containers: []
	W0918 21:06:19.480892   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:06:19.480903   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:06:19.480916   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:06:16.856845   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:19.358010   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:19.632608   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:22.132792   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:18.387488   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:20.887832   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:19.533573   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:06:19.533625   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:06:19.546578   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:06:19.546604   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:06:19.667439   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:06:19.667459   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:06:19.667472   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:06:19.739525   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:06:19.739563   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:06:22.280913   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:22.296045   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:06:22.296132   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:06:22.332361   62061 cri.go:89] found id: ""
	I0918 21:06:22.332401   62061 logs.go:276] 0 containers: []
	W0918 21:06:22.332413   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:06:22.332421   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:06:22.332485   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:06:22.392138   62061 cri.go:89] found id: ""
	I0918 21:06:22.392170   62061 logs.go:276] 0 containers: []
	W0918 21:06:22.392178   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:06:22.392184   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:06:22.392237   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:06:22.431265   62061 cri.go:89] found id: ""
	I0918 21:06:22.431296   62061 logs.go:276] 0 containers: []
	W0918 21:06:22.431306   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:06:22.431313   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:06:22.431376   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:06:22.463685   62061 cri.go:89] found id: ""
	I0918 21:06:22.463718   62061 logs.go:276] 0 containers: []
	W0918 21:06:22.463730   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:06:22.463738   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:06:22.463808   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:06:22.498031   62061 cri.go:89] found id: ""
	I0918 21:06:22.498058   62061 logs.go:276] 0 containers: []
	W0918 21:06:22.498069   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:06:22.498076   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:06:22.498140   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:06:22.537701   62061 cri.go:89] found id: ""
	I0918 21:06:22.537732   62061 logs.go:276] 0 containers: []
	W0918 21:06:22.537740   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:06:22.537746   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:06:22.537803   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:06:22.574085   62061 cri.go:89] found id: ""
	I0918 21:06:22.574118   62061 logs.go:276] 0 containers: []
	W0918 21:06:22.574130   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:06:22.574138   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:06:22.574202   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:06:22.606313   62061 cri.go:89] found id: ""
	I0918 21:06:22.606341   62061 logs.go:276] 0 containers: []
	W0918 21:06:22.606349   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:06:22.606359   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:06:22.606372   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:06:22.678484   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:06:22.678511   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:06:22.678524   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:06:22.756730   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:06:22.756767   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:06:22.793537   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:06:22.793573   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:06:22.845987   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:06:22.846021   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:06:21.857010   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:23.857823   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:26.358268   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:24.133063   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:26.632474   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:23.387764   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:25.886548   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:27.887108   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:25.360980   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:25.374930   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:06:25.375010   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:06:25.413100   62061 cri.go:89] found id: ""
	I0918 21:06:25.413135   62061 logs.go:276] 0 containers: []
	W0918 21:06:25.413147   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:06:25.413155   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:06:25.413221   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:06:25.450999   62061 cri.go:89] found id: ""
	I0918 21:06:25.451024   62061 logs.go:276] 0 containers: []
	W0918 21:06:25.451032   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:06:25.451038   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:06:25.451087   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:06:25.496114   62061 cri.go:89] found id: ""
	I0918 21:06:25.496147   62061 logs.go:276] 0 containers: []
	W0918 21:06:25.496155   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:06:25.496161   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:06:25.496218   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:06:25.529186   62061 cri.go:89] found id: ""
	I0918 21:06:25.529211   62061 logs.go:276] 0 containers: []
	W0918 21:06:25.529218   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:06:25.529223   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:06:25.529294   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:06:25.569710   62061 cri.go:89] found id: ""
	I0918 21:06:25.569735   62061 logs.go:276] 0 containers: []
	W0918 21:06:25.569743   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:06:25.569749   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:06:25.569796   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:06:25.606915   62061 cri.go:89] found id: ""
	I0918 21:06:25.606951   62061 logs.go:276] 0 containers: []
	W0918 21:06:25.606964   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:06:25.606972   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:06:25.607034   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:06:25.642705   62061 cri.go:89] found id: ""
	I0918 21:06:25.642736   62061 logs.go:276] 0 containers: []
	W0918 21:06:25.642744   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:06:25.642750   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:06:25.642801   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:06:25.676418   62061 cri.go:89] found id: ""
	I0918 21:06:25.676448   62061 logs.go:276] 0 containers: []
	W0918 21:06:25.676457   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:06:25.676466   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:06:25.676477   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:06:25.689913   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:06:25.689945   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:06:25.771579   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:06:25.771607   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:06:25.771622   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:06:25.850904   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:06:25.850945   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:06:25.890820   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:06:25.890849   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:06:28.438958   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:28.451961   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:06:28.452043   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:06:28.485000   62061 cri.go:89] found id: ""
	I0918 21:06:28.485059   62061 logs.go:276] 0 containers: []
	W0918 21:06:28.485070   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:06:28.485077   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:06:28.485138   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:06:28.517852   62061 cri.go:89] found id: ""
	I0918 21:06:28.517885   62061 logs.go:276] 0 containers: []
	W0918 21:06:28.517895   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:06:28.517903   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:06:28.517957   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:06:28.549914   62061 cri.go:89] found id: ""
	I0918 21:06:28.549940   62061 logs.go:276] 0 containers: []
	W0918 21:06:28.549949   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:06:28.549955   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:06:28.550000   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:06:28.584133   62061 cri.go:89] found id: ""
	I0918 21:06:28.584156   62061 logs.go:276] 0 containers: []
	W0918 21:06:28.584163   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:06:28.584169   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:06:28.584213   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:06:28.620031   62061 cri.go:89] found id: ""
	I0918 21:06:28.620061   62061 logs.go:276] 0 containers: []
	W0918 21:06:28.620071   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:06:28.620078   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:06:28.620143   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:06:28.653393   62061 cri.go:89] found id: ""
	I0918 21:06:28.653424   62061 logs.go:276] 0 containers: []
	W0918 21:06:28.653435   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:06:28.653443   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:06:28.653505   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:06:28.687696   62061 cri.go:89] found id: ""
	I0918 21:06:28.687729   62061 logs.go:276] 0 containers: []
	W0918 21:06:28.687741   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:06:28.687752   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:06:28.687809   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:06:28.724824   62061 cri.go:89] found id: ""
	I0918 21:06:28.724854   62061 logs.go:276] 0 containers: []
	W0918 21:06:28.724864   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:06:28.724875   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:06:28.724890   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:06:28.785489   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:06:28.785529   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:06:28.804170   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:06:28.804211   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:06:28.895508   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:06:28.895538   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:06:28.895554   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:06:28.974643   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:06:28.974685   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:06:28.858259   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:31.356644   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:28.633851   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:31.133612   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:30.392038   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:32.886708   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:31.517305   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:31.530374   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:06:31.530435   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:06:31.568335   62061 cri.go:89] found id: ""
	I0918 21:06:31.568374   62061 logs.go:276] 0 containers: []
	W0918 21:06:31.568385   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:06:31.568393   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:06:31.568462   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:06:31.603597   62061 cri.go:89] found id: ""
	I0918 21:06:31.603624   62061 logs.go:276] 0 containers: []
	W0918 21:06:31.603633   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:06:31.603639   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:06:31.603694   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:06:31.639355   62061 cri.go:89] found id: ""
	I0918 21:06:31.639380   62061 logs.go:276] 0 containers: []
	W0918 21:06:31.639388   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:06:31.639393   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:06:31.639445   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:06:31.674197   62061 cri.go:89] found id: ""
	I0918 21:06:31.674223   62061 logs.go:276] 0 containers: []
	W0918 21:06:31.674231   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:06:31.674237   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:06:31.674286   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:06:31.710563   62061 cri.go:89] found id: ""
	I0918 21:06:31.710592   62061 logs.go:276] 0 containers: []
	W0918 21:06:31.710606   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:06:31.710612   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:06:31.710663   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:06:31.746923   62061 cri.go:89] found id: ""
	I0918 21:06:31.746958   62061 logs.go:276] 0 containers: []
	W0918 21:06:31.746969   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:06:31.746976   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:06:31.747039   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:06:31.781913   62061 cri.go:89] found id: ""
	I0918 21:06:31.781946   62061 logs.go:276] 0 containers: []
	W0918 21:06:31.781961   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:06:31.781969   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:06:31.782023   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:06:31.817293   62061 cri.go:89] found id: ""
	I0918 21:06:31.817318   62061 logs.go:276] 0 containers: []
	W0918 21:06:31.817327   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:06:31.817338   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:06:31.817375   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:06:31.869479   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:06:31.869518   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:06:31.884187   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:06:31.884219   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:06:31.967159   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:06:31.967189   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:06:31.967202   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:06:32.050404   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:06:32.050458   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:06:33.357380   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:35.856960   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:33.633434   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:36.133740   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:34.888738   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:37.386351   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:34.593135   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:34.613917   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:06:34.614001   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:06:34.649649   62061 cri.go:89] found id: ""
	I0918 21:06:34.649676   62061 logs.go:276] 0 containers: []
	W0918 21:06:34.649684   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:06:34.649690   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:06:34.649738   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:06:34.684898   62061 cri.go:89] found id: ""
	I0918 21:06:34.684932   62061 logs.go:276] 0 containers: []
	W0918 21:06:34.684940   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:06:34.684948   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:06:34.685018   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:06:34.717519   62061 cri.go:89] found id: ""
	I0918 21:06:34.717549   62061 logs.go:276] 0 containers: []
	W0918 21:06:34.717559   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:06:34.717573   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:06:34.717626   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:06:34.751242   62061 cri.go:89] found id: ""
	I0918 21:06:34.751275   62061 logs.go:276] 0 containers: []
	W0918 21:06:34.751289   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:06:34.751298   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:06:34.751368   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:06:34.786700   62061 cri.go:89] found id: ""
	I0918 21:06:34.786729   62061 logs.go:276] 0 containers: []
	W0918 21:06:34.786737   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:06:34.786742   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:06:34.786792   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:06:34.822400   62061 cri.go:89] found id: ""
	I0918 21:06:34.822446   62061 logs.go:276] 0 containers: []
	W0918 21:06:34.822459   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:06:34.822468   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:06:34.822537   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:06:34.860509   62061 cri.go:89] found id: ""
	I0918 21:06:34.860540   62061 logs.go:276] 0 containers: []
	W0918 21:06:34.860549   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:06:34.860554   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:06:34.860626   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:06:34.900682   62061 cri.go:89] found id: ""
	I0918 21:06:34.900712   62061 logs.go:276] 0 containers: []
	W0918 21:06:34.900719   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:06:34.900727   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:06:34.900739   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:06:34.975975   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:06:34.976009   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:06:34.976043   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:06:35.058972   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:06:35.059012   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:06:35.101690   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:06:35.101716   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:06:35.154486   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:06:35.154525   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:06:37.669814   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:37.683235   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:06:37.683294   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:06:37.720666   62061 cri.go:89] found id: ""
	I0918 21:06:37.720697   62061 logs.go:276] 0 containers: []
	W0918 21:06:37.720708   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:06:37.720715   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:06:37.720777   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:06:37.758288   62061 cri.go:89] found id: ""
	I0918 21:06:37.758327   62061 logs.go:276] 0 containers: []
	W0918 21:06:37.758335   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:06:37.758341   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:06:37.758436   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:06:37.793394   62061 cri.go:89] found id: ""
	I0918 21:06:37.793421   62061 logs.go:276] 0 containers: []
	W0918 21:06:37.793430   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:06:37.793437   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:06:37.793501   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:06:37.828258   62061 cri.go:89] found id: ""
	I0918 21:06:37.828291   62061 logs.go:276] 0 containers: []
	W0918 21:06:37.828300   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:06:37.828307   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:06:37.828361   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:06:37.863164   62061 cri.go:89] found id: ""
	I0918 21:06:37.863189   62061 logs.go:276] 0 containers: []
	W0918 21:06:37.863197   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:06:37.863203   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:06:37.863251   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:06:37.899585   62061 cri.go:89] found id: ""
	I0918 21:06:37.899614   62061 logs.go:276] 0 containers: []
	W0918 21:06:37.899622   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:06:37.899628   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:06:37.899675   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:06:37.936246   62061 cri.go:89] found id: ""
	I0918 21:06:37.936282   62061 logs.go:276] 0 containers: []
	W0918 21:06:37.936292   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:06:37.936299   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:06:37.936362   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:06:37.969913   62061 cri.go:89] found id: ""
	I0918 21:06:37.969942   62061 logs.go:276] 0 containers: []
	W0918 21:06:37.969950   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:06:37.969958   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:06:37.969968   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:06:38.023055   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:06:38.023094   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:06:38.036006   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:06:38.036047   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:06:38.116926   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:06:38.116957   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:06:38.116972   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:06:38.192632   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:06:38.192668   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:06:37.860654   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:40.357107   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:38.633432   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:41.131957   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:39.387927   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:41.886904   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:40.729602   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:40.743291   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:06:40.743368   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:06:40.778138   62061 cri.go:89] found id: ""
	I0918 21:06:40.778166   62061 logs.go:276] 0 containers: []
	W0918 21:06:40.778176   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:06:40.778184   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:06:40.778248   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:06:40.813026   62061 cri.go:89] found id: ""
	I0918 21:06:40.813062   62061 logs.go:276] 0 containers: []
	W0918 21:06:40.813073   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:06:40.813081   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:06:40.813146   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:06:40.847609   62061 cri.go:89] found id: ""
	I0918 21:06:40.847641   62061 logs.go:276] 0 containers: []
	W0918 21:06:40.847651   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:06:40.847658   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:06:40.847727   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:06:40.885403   62061 cri.go:89] found id: ""
	I0918 21:06:40.885432   62061 logs.go:276] 0 containers: []
	W0918 21:06:40.885441   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:06:40.885448   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:06:40.885515   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:06:40.922679   62061 cri.go:89] found id: ""
	I0918 21:06:40.922705   62061 logs.go:276] 0 containers: []
	W0918 21:06:40.922714   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:06:40.922719   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:06:40.922776   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:06:40.957187   62061 cri.go:89] found id: ""
	I0918 21:06:40.957215   62061 logs.go:276] 0 containers: []
	W0918 21:06:40.957222   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:06:40.957228   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:06:40.957291   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:06:40.991722   62061 cri.go:89] found id: ""
	I0918 21:06:40.991751   62061 logs.go:276] 0 containers: []
	W0918 21:06:40.991762   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:06:40.991769   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:06:40.991835   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:06:41.027210   62061 cri.go:89] found id: ""
	I0918 21:06:41.027234   62061 logs.go:276] 0 containers: []
	W0918 21:06:41.027242   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:06:41.027250   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:06:41.027275   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:06:41.077183   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:06:41.077221   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:06:41.090707   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:06:41.090734   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:06:41.166206   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:06:41.166228   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:06:41.166241   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:06:41.240308   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:06:41.240346   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:06:43.777517   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:43.790901   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:06:43.790970   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:06:43.827127   62061 cri.go:89] found id: ""
	I0918 21:06:43.827156   62061 logs.go:276] 0 containers: []
	W0918 21:06:43.827164   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:06:43.827170   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:06:43.827218   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:06:43.861218   62061 cri.go:89] found id: ""
	I0918 21:06:43.861244   62061 logs.go:276] 0 containers: []
	W0918 21:06:43.861252   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:06:43.861257   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:06:43.861308   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:06:43.899669   62061 cri.go:89] found id: ""
	I0918 21:06:43.899694   62061 logs.go:276] 0 containers: []
	W0918 21:06:43.899701   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:06:43.899707   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:06:43.899755   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:06:43.934698   62061 cri.go:89] found id: ""
	I0918 21:06:43.934731   62061 logs.go:276] 0 containers: []
	W0918 21:06:43.934741   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:06:43.934748   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:06:43.934819   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:06:43.971715   62061 cri.go:89] found id: ""
	I0918 21:06:43.971742   62061 logs.go:276] 0 containers: []
	W0918 21:06:43.971755   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:06:43.971760   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:06:43.971817   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:06:44.005880   62061 cri.go:89] found id: ""
	I0918 21:06:44.005915   62061 logs.go:276] 0 containers: []
	W0918 21:06:44.005927   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:06:44.005935   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:06:44.006003   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:06:44.038144   62061 cri.go:89] found id: ""
	I0918 21:06:44.038171   62061 logs.go:276] 0 containers: []
	W0918 21:06:44.038180   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:06:44.038186   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:06:44.038239   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:06:44.073920   62061 cri.go:89] found id: ""
	I0918 21:06:44.073953   62061 logs.go:276] 0 containers: []
	W0918 21:06:44.073966   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:06:44.073978   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:06:44.073992   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:06:44.142881   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:06:44.142910   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:06:44.142926   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:06:44.222302   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:06:44.222341   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:06:44.265914   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:06:44.265952   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:06:44.316229   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:06:44.316266   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:06:42.856192   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:44.857673   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:43.132992   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:45.134509   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:43.888282   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:45.889414   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:46.830793   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:46.843829   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:06:46.843886   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:06:46.878566   62061 cri.go:89] found id: ""
	I0918 21:06:46.878607   62061 logs.go:276] 0 containers: []
	W0918 21:06:46.878621   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:06:46.878630   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:06:46.878710   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:06:46.915576   62061 cri.go:89] found id: ""
	I0918 21:06:46.915603   62061 logs.go:276] 0 containers: []
	W0918 21:06:46.915611   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:06:46.915616   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:06:46.915663   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:06:46.952982   62061 cri.go:89] found id: ""
	I0918 21:06:46.953014   62061 logs.go:276] 0 containers: []
	W0918 21:06:46.953032   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:06:46.953039   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:06:46.953104   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:06:46.987718   62061 cri.go:89] found id: ""
	I0918 21:06:46.987749   62061 logs.go:276] 0 containers: []
	W0918 21:06:46.987757   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:06:46.987763   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:06:46.987815   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:06:47.022695   62061 cri.go:89] found id: ""
	I0918 21:06:47.022726   62061 logs.go:276] 0 containers: []
	W0918 21:06:47.022735   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:06:47.022741   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:06:47.022799   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:06:47.058058   62061 cri.go:89] found id: ""
	I0918 21:06:47.058086   62061 logs.go:276] 0 containers: []
	W0918 21:06:47.058094   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:06:47.058101   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:06:47.058159   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:06:47.093314   62061 cri.go:89] found id: ""
	I0918 21:06:47.093352   62061 logs.go:276] 0 containers: []
	W0918 21:06:47.093361   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:06:47.093367   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:06:47.093424   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:06:47.126997   62061 cri.go:89] found id: ""
	I0918 21:06:47.127023   62061 logs.go:276] 0 containers: []
	W0918 21:06:47.127033   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:06:47.127041   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:06:47.127053   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:06:47.203239   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:06:47.203282   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:06:47.240982   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:06:47.241013   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:06:47.293553   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:06:47.293591   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:06:47.307462   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:06:47.307493   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:06:47.379500   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:06:47.357136   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:49.359981   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:47.633023   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:49.633350   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:52.134627   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:48.387568   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:50.886679   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:52.887065   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:49.880441   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:49.895816   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:06:49.895903   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:06:49.931916   62061 cri.go:89] found id: ""
	I0918 21:06:49.931941   62061 logs.go:276] 0 containers: []
	W0918 21:06:49.931950   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:06:49.931955   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:06:49.932007   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:06:49.967053   62061 cri.go:89] found id: ""
	I0918 21:06:49.967083   62061 logs.go:276] 0 containers: []
	W0918 21:06:49.967093   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:06:49.967101   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:06:49.967194   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:06:50.002943   62061 cri.go:89] found id: ""
	I0918 21:06:50.003004   62061 logs.go:276] 0 containers: []
	W0918 21:06:50.003015   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:06:50.003023   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:06:50.003085   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:06:50.039445   62061 cri.go:89] found id: ""
	I0918 21:06:50.039478   62061 logs.go:276] 0 containers: []
	W0918 21:06:50.039489   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:06:50.039497   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:06:50.039561   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:06:50.073529   62061 cri.go:89] found id: ""
	I0918 21:06:50.073562   62061 logs.go:276] 0 containers: []
	W0918 21:06:50.073572   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:06:50.073578   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:06:50.073645   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:06:50.109363   62061 cri.go:89] found id: ""
	I0918 21:06:50.109394   62061 logs.go:276] 0 containers: []
	W0918 21:06:50.109406   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:06:50.109413   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:06:50.109485   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:06:50.144387   62061 cri.go:89] found id: ""
	I0918 21:06:50.144418   62061 logs.go:276] 0 containers: []
	W0918 21:06:50.144429   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:06:50.144450   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:06:50.144524   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:06:50.178079   62061 cri.go:89] found id: ""
	I0918 21:06:50.178110   62061 logs.go:276] 0 containers: []
	W0918 21:06:50.178119   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:06:50.178128   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:06:50.178140   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:06:50.228857   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:06:50.228893   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:06:50.242077   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:06:50.242108   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:06:50.312868   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:06:50.312903   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:06:50.312920   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:06:50.392880   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:06:50.392924   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:06:52.931623   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:52.945298   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:06:52.945394   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:06:52.980129   62061 cri.go:89] found id: ""
	I0918 21:06:52.980162   62061 logs.go:276] 0 containers: []
	W0918 21:06:52.980171   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:06:52.980176   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:06:52.980224   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:06:53.017899   62061 cri.go:89] found id: ""
	I0918 21:06:53.017932   62061 logs.go:276] 0 containers: []
	W0918 21:06:53.017941   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:06:53.017947   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:06:53.018015   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:06:53.056594   62061 cri.go:89] found id: ""
	I0918 21:06:53.056619   62061 logs.go:276] 0 containers: []
	W0918 21:06:53.056627   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:06:53.056635   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:06:53.056684   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:06:53.089876   62061 cri.go:89] found id: ""
	I0918 21:06:53.089907   62061 logs.go:276] 0 containers: []
	W0918 21:06:53.089915   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:06:53.089920   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:06:53.089967   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:06:53.122845   62061 cri.go:89] found id: ""
	I0918 21:06:53.122881   62061 logs.go:276] 0 containers: []
	W0918 21:06:53.122890   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:06:53.122904   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:06:53.122956   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:06:53.162103   62061 cri.go:89] found id: ""
	I0918 21:06:53.162137   62061 logs.go:276] 0 containers: []
	W0918 21:06:53.162148   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:06:53.162156   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:06:53.162218   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:06:53.195034   62061 cri.go:89] found id: ""
	I0918 21:06:53.195067   62061 logs.go:276] 0 containers: []
	W0918 21:06:53.195078   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:06:53.195085   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:06:53.195144   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:06:53.229473   62061 cri.go:89] found id: ""
	I0918 21:06:53.229518   62061 logs.go:276] 0 containers: []
	W0918 21:06:53.229542   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:06:53.229556   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:06:53.229577   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:06:53.283929   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:06:53.283973   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:06:53.296932   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:06:53.296965   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:06:53.379339   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:06:53.379369   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:06:53.379410   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:06:53.469608   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:06:53.469652   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:06:51.855788   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:53.857284   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:55.860982   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:54.633423   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:56.633695   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:54.888052   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:57.387393   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:56.009227   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:56.023451   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:06:56.023510   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:06:56.056839   62061 cri.go:89] found id: ""
	I0918 21:06:56.056866   62061 logs.go:276] 0 containers: []
	W0918 21:06:56.056875   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:06:56.056881   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:06:56.056931   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:06:56.091893   62061 cri.go:89] found id: ""
	I0918 21:06:56.091919   62061 logs.go:276] 0 containers: []
	W0918 21:06:56.091928   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:06:56.091933   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:06:56.091980   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:06:56.127432   62061 cri.go:89] found id: ""
	I0918 21:06:56.127456   62061 logs.go:276] 0 containers: []
	W0918 21:06:56.127464   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:06:56.127470   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:06:56.127518   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:06:56.162085   62061 cri.go:89] found id: ""
	I0918 21:06:56.162109   62061 logs.go:276] 0 containers: []
	W0918 21:06:56.162118   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:06:56.162123   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:06:56.162176   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:06:56.195711   62061 cri.go:89] found id: ""
	I0918 21:06:56.195743   62061 logs.go:276] 0 containers: []
	W0918 21:06:56.195753   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:06:56.195759   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:06:56.195809   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:06:56.234481   62061 cri.go:89] found id: ""
	I0918 21:06:56.234507   62061 logs.go:276] 0 containers: []
	W0918 21:06:56.234515   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:06:56.234522   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:06:56.234567   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:06:56.268566   62061 cri.go:89] found id: ""
	I0918 21:06:56.268596   62061 logs.go:276] 0 containers: []
	W0918 21:06:56.268608   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:06:56.268617   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:06:56.268681   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:06:56.309785   62061 cri.go:89] found id: ""
	I0918 21:06:56.309813   62061 logs.go:276] 0 containers: []
	W0918 21:06:56.309824   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:06:56.309835   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:06:56.309874   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:06:56.364834   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:06:56.364880   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:06:56.378424   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:06:56.378451   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:06:56.450482   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:06:56.450511   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:06:56.450522   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:06:56.536261   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:06:56.536305   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:06:59.074494   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:59.087553   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:06:59.087622   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:06:59.128323   62061 cri.go:89] found id: ""
	I0918 21:06:59.128357   62061 logs.go:276] 0 containers: []
	W0918 21:06:59.128368   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:06:59.128375   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:06:59.128436   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:06:59.161135   62061 cri.go:89] found id: ""
	I0918 21:06:59.161161   62061 logs.go:276] 0 containers: []
	W0918 21:06:59.161170   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:06:59.161178   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:06:59.161240   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:06:59.201558   62061 cri.go:89] found id: ""
	I0918 21:06:59.201595   62061 logs.go:276] 0 containers: []
	W0918 21:06:59.201607   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:06:59.201614   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:06:59.201678   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:06:59.235330   62061 cri.go:89] found id: ""
	I0918 21:06:59.235356   62061 logs.go:276] 0 containers: []
	W0918 21:06:59.235378   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:06:59.235385   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:06:59.235450   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:06:59.270957   62061 cri.go:89] found id: ""
	I0918 21:06:59.270994   62061 logs.go:276] 0 containers: []
	W0918 21:06:59.271007   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:06:59.271016   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:06:59.271088   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:06:59.305068   62061 cri.go:89] found id: ""
	I0918 21:06:59.305103   62061 logs.go:276] 0 containers: []
	W0918 21:06:59.305115   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:06:59.305123   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:06:59.305177   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:06:59.338763   62061 cri.go:89] found id: ""
	I0918 21:06:59.338796   62061 logs.go:276] 0 containers: []
	W0918 21:06:59.338809   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:06:59.338818   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:06:59.338887   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:06:59.376557   62061 cri.go:89] found id: ""
	I0918 21:06:59.376585   62061 logs.go:276] 0 containers: []
	W0918 21:06:59.376593   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:06:59.376602   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:06:59.376615   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:06:59.455228   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:06:59.455264   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:06:58.356648   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:00.357253   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:59.133274   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:01.632548   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:59.388183   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:01.886834   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:59.494461   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:06:59.494488   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:06:59.543143   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:06:59.543177   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:06:59.556031   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:06:59.556062   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:06:59.623888   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:02.124743   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:07:02.139392   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:07:02.139451   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:07:02.172176   62061 cri.go:89] found id: ""
	I0918 21:07:02.172201   62061 logs.go:276] 0 containers: []
	W0918 21:07:02.172209   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:07:02.172215   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:07:02.172263   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:07:02.206480   62061 cri.go:89] found id: ""
	I0918 21:07:02.206507   62061 logs.go:276] 0 containers: []
	W0918 21:07:02.206518   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:07:02.206525   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:07:02.206586   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:07:02.240256   62061 cri.go:89] found id: ""
	I0918 21:07:02.240281   62061 logs.go:276] 0 containers: []
	W0918 21:07:02.240289   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:07:02.240295   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:07:02.240353   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:07:02.274001   62061 cri.go:89] found id: ""
	I0918 21:07:02.274034   62061 logs.go:276] 0 containers: []
	W0918 21:07:02.274046   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:07:02.274056   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:07:02.274115   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:07:02.307491   62061 cri.go:89] found id: ""
	I0918 21:07:02.307520   62061 logs.go:276] 0 containers: []
	W0918 21:07:02.307528   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:07:02.307534   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:07:02.307597   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:07:02.344688   62061 cri.go:89] found id: ""
	I0918 21:07:02.344720   62061 logs.go:276] 0 containers: []
	W0918 21:07:02.344731   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:07:02.344739   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:07:02.344805   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:07:02.384046   62061 cri.go:89] found id: ""
	I0918 21:07:02.384077   62061 logs.go:276] 0 containers: []
	W0918 21:07:02.384088   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:07:02.384095   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:07:02.384154   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:07:02.422530   62061 cri.go:89] found id: ""
	I0918 21:07:02.422563   62061 logs.go:276] 0 containers: []
	W0918 21:07:02.422575   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:07:02.422586   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:07:02.422604   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:07:02.461236   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:07:02.461266   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:07:02.512280   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:07:02.512320   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:07:02.525404   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:07:02.525435   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:07:02.599746   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:02.599773   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:07:02.599785   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:07:02.856077   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:04.858098   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:04.133240   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:06.135937   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:03.887306   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:05.888675   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:05.183459   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:07:05.197615   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:07:05.197681   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:07:05.234313   62061 cri.go:89] found id: ""
	I0918 21:07:05.234354   62061 logs.go:276] 0 containers: []
	W0918 21:07:05.234365   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:07:05.234371   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:07:05.234429   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:07:05.271436   62061 cri.go:89] found id: ""
	I0918 21:07:05.271466   62061 logs.go:276] 0 containers: []
	W0918 21:07:05.271477   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:07:05.271484   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:07:05.271554   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:07:05.307007   62061 cri.go:89] found id: ""
	I0918 21:07:05.307038   62061 logs.go:276] 0 containers: []
	W0918 21:07:05.307050   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:07:05.307056   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:07:05.307120   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:07:05.341764   62061 cri.go:89] found id: ""
	I0918 21:07:05.341810   62061 logs.go:276] 0 containers: []
	W0918 21:07:05.341831   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:07:05.341840   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:07:05.341908   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:07:05.381611   62061 cri.go:89] found id: ""
	I0918 21:07:05.381646   62061 logs.go:276] 0 containers: []
	W0918 21:07:05.381658   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:07:05.381666   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:07:05.381747   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:07:05.421468   62061 cri.go:89] found id: ""
	I0918 21:07:05.421499   62061 logs.go:276] 0 containers: []
	W0918 21:07:05.421520   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:07:05.421528   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:07:05.421590   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:07:05.457320   62061 cri.go:89] found id: ""
	I0918 21:07:05.457348   62061 logs.go:276] 0 containers: []
	W0918 21:07:05.457359   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:07:05.457367   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:07:05.457425   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:07:05.493879   62061 cri.go:89] found id: ""
	I0918 21:07:05.493915   62061 logs.go:276] 0 containers: []
	W0918 21:07:05.493928   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:07:05.493943   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:07:05.493955   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:07:05.543825   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:07:05.543867   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:07:05.558254   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:07:05.558285   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:07:05.627065   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:05.627091   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:07:05.627103   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:07:05.703838   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:07:05.703876   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:07:08.244087   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:07:08.257807   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:07:08.257897   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:07:08.292034   62061 cri.go:89] found id: ""
	I0918 21:07:08.292064   62061 logs.go:276] 0 containers: []
	W0918 21:07:08.292076   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:07:08.292084   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:07:08.292154   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:07:08.325858   62061 cri.go:89] found id: ""
	I0918 21:07:08.325897   62061 logs.go:276] 0 containers: []
	W0918 21:07:08.325910   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:07:08.325918   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:07:08.325987   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:07:08.369427   62061 cri.go:89] found id: ""
	I0918 21:07:08.369457   62061 logs.go:276] 0 containers: []
	W0918 21:07:08.369468   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:07:08.369475   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:07:08.369536   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:07:08.409402   62061 cri.go:89] found id: ""
	I0918 21:07:08.409434   62061 logs.go:276] 0 containers: []
	W0918 21:07:08.409444   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:07:08.409451   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:07:08.409515   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:07:08.445530   62061 cri.go:89] found id: ""
	I0918 21:07:08.445565   62061 logs.go:276] 0 containers: []
	W0918 21:07:08.445575   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:07:08.445584   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:07:08.445646   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:07:08.481905   62061 cri.go:89] found id: ""
	I0918 21:07:08.481935   62061 logs.go:276] 0 containers: []
	W0918 21:07:08.481945   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:07:08.481952   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:07:08.482020   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:07:08.516710   62061 cri.go:89] found id: ""
	I0918 21:07:08.516738   62061 logs.go:276] 0 containers: []
	W0918 21:07:08.516746   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:07:08.516752   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:07:08.516802   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:07:08.550707   62061 cri.go:89] found id: ""
	I0918 21:07:08.550740   62061 logs.go:276] 0 containers: []
	W0918 21:07:08.550760   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:07:08.550768   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:07:08.550782   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:07:08.624821   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:08.624843   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:07:08.624854   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:07:08.705347   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:07:08.705383   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:07:08.744394   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:07:08.744425   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:07:08.799336   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:07:08.799375   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:07:07.358154   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:09.857118   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:08.633211   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:11.132676   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:08.388884   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:10.887356   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:11.312920   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:07:11.327524   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:07:11.327598   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:07:11.366177   62061 cri.go:89] found id: ""
	I0918 21:07:11.366209   62061 logs.go:276] 0 containers: []
	W0918 21:07:11.366221   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:07:11.366227   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:07:11.366274   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:07:11.405296   62061 cri.go:89] found id: ""
	I0918 21:07:11.405326   62061 logs.go:276] 0 containers: []
	W0918 21:07:11.405344   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:07:11.405351   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:07:11.405413   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:07:11.441786   62061 cri.go:89] found id: ""
	I0918 21:07:11.441818   62061 logs.go:276] 0 containers: []
	W0918 21:07:11.441829   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:07:11.441837   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:07:11.441891   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:07:11.476144   62061 cri.go:89] found id: ""
	I0918 21:07:11.476167   62061 logs.go:276] 0 containers: []
	W0918 21:07:11.476175   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:07:11.476181   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:07:11.476224   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:07:11.512115   62061 cri.go:89] found id: ""
	I0918 21:07:11.512143   62061 logs.go:276] 0 containers: []
	W0918 21:07:11.512154   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:07:11.512160   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:07:11.512221   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:07:11.545466   62061 cri.go:89] found id: ""
	I0918 21:07:11.545494   62061 logs.go:276] 0 containers: []
	W0918 21:07:11.545504   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:07:11.545512   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:07:11.545575   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:07:11.579940   62061 cri.go:89] found id: ""
	I0918 21:07:11.579965   62061 logs.go:276] 0 containers: []
	W0918 21:07:11.579975   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:07:11.579994   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:07:11.580080   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:07:11.613454   62061 cri.go:89] found id: ""
	I0918 21:07:11.613480   62061 logs.go:276] 0 containers: []
	W0918 21:07:11.613491   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:07:11.613512   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:07:11.613535   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:07:11.669079   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:07:11.669140   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:07:11.682614   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:07:11.682639   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:07:11.756876   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:11.756901   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:07:11.756914   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:07:11.835046   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:07:11.835086   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:07:14.377541   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:07:14.392083   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:07:14.392161   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:07:14.431752   62061 cri.go:89] found id: ""
	I0918 21:07:14.431786   62061 logs.go:276] 0 containers: []
	W0918 21:07:14.431795   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:07:14.431800   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:07:14.431856   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:07:14.468530   62061 cri.go:89] found id: ""
	I0918 21:07:14.468562   62061 logs.go:276] 0 containers: []
	W0918 21:07:14.468573   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:07:14.468580   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:07:14.468643   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:07:11.857763   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:14.357253   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:13.132895   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:15.133426   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:13.386537   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:15.387844   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:17.888743   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:14.506510   62061 cri.go:89] found id: ""
	I0918 21:07:14.506540   62061 logs.go:276] 0 containers: []
	W0918 21:07:14.506550   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:07:14.506557   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:07:14.506625   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:07:14.538986   62061 cri.go:89] found id: ""
	I0918 21:07:14.539021   62061 logs.go:276] 0 containers: []
	W0918 21:07:14.539032   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:07:14.539039   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:07:14.539103   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:07:14.572390   62061 cri.go:89] found id: ""
	I0918 21:07:14.572421   62061 logs.go:276] 0 containers: []
	W0918 21:07:14.572432   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:07:14.572440   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:07:14.572499   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:07:14.607875   62061 cri.go:89] found id: ""
	I0918 21:07:14.607905   62061 logs.go:276] 0 containers: []
	W0918 21:07:14.607917   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:07:14.607924   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:07:14.607988   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:07:14.642584   62061 cri.go:89] found id: ""
	I0918 21:07:14.642616   62061 logs.go:276] 0 containers: []
	W0918 21:07:14.642625   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:07:14.642630   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:07:14.642685   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:07:14.682117   62061 cri.go:89] found id: ""
	I0918 21:07:14.682144   62061 logs.go:276] 0 containers: []
	W0918 21:07:14.682152   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:07:14.682163   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:07:14.682177   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:07:14.694780   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:07:14.694808   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:07:14.767988   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:14.768036   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:07:14.768055   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:07:14.860620   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:07:14.860655   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:07:14.905071   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:07:14.905102   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:07:17.455582   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:07:17.468713   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:07:17.468774   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:07:17.503682   62061 cri.go:89] found id: ""
	I0918 21:07:17.503709   62061 logs.go:276] 0 containers: []
	W0918 21:07:17.503721   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:07:17.503729   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:07:17.503794   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:07:17.536581   62061 cri.go:89] found id: ""
	I0918 21:07:17.536611   62061 logs.go:276] 0 containers: []
	W0918 21:07:17.536619   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:07:17.536625   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:07:17.536673   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:07:17.570483   62061 cri.go:89] found id: ""
	I0918 21:07:17.570510   62061 logs.go:276] 0 containers: []
	W0918 21:07:17.570518   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:07:17.570523   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:07:17.570591   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:07:17.604102   62061 cri.go:89] found id: ""
	I0918 21:07:17.604140   62061 logs.go:276] 0 containers: []
	W0918 21:07:17.604152   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:07:17.604160   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:07:17.604229   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:07:17.638633   62061 cri.go:89] found id: ""
	I0918 21:07:17.638661   62061 logs.go:276] 0 containers: []
	W0918 21:07:17.638672   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:07:17.638678   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:07:17.638725   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:07:17.673016   62061 cri.go:89] found id: ""
	I0918 21:07:17.673048   62061 logs.go:276] 0 containers: []
	W0918 21:07:17.673057   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:07:17.673063   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:07:17.673116   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:07:17.708576   62061 cri.go:89] found id: ""
	I0918 21:07:17.708601   62061 logs.go:276] 0 containers: []
	W0918 21:07:17.708609   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:07:17.708620   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:07:17.708672   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:07:17.742820   62061 cri.go:89] found id: ""
	I0918 21:07:17.742843   62061 logs.go:276] 0 containers: []
	W0918 21:07:17.742851   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:07:17.742859   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:07:17.742870   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:07:17.757091   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:07:17.757120   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:07:17.824116   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:17.824137   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:07:17.824151   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:07:17.911013   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:07:17.911065   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:07:17.950517   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:07:17.950553   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:07:16.857284   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:19.357336   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:17.635033   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:20.134331   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:20.388498   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:22.887115   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:20.502423   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:07:20.529214   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:07:20.529297   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:07:20.583618   62061 cri.go:89] found id: ""
	I0918 21:07:20.583660   62061 logs.go:276] 0 containers: []
	W0918 21:07:20.583672   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:07:20.583682   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:07:20.583751   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:07:20.633051   62061 cri.go:89] found id: ""
	I0918 21:07:20.633079   62061 logs.go:276] 0 containers: []
	W0918 21:07:20.633091   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:07:20.633099   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:07:20.633158   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:07:20.668513   62061 cri.go:89] found id: ""
	I0918 21:07:20.668543   62061 logs.go:276] 0 containers: []
	W0918 21:07:20.668552   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:07:20.668558   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:07:20.668611   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:07:20.702648   62061 cri.go:89] found id: ""
	I0918 21:07:20.702683   62061 logs.go:276] 0 containers: []
	W0918 21:07:20.702698   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:07:20.702706   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:07:20.702767   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:07:20.740639   62061 cri.go:89] found id: ""
	I0918 21:07:20.740666   62061 logs.go:276] 0 containers: []
	W0918 21:07:20.740674   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:07:20.740680   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:07:20.740731   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:07:20.771819   62061 cri.go:89] found id: ""
	I0918 21:07:20.771847   62061 logs.go:276] 0 containers: []
	W0918 21:07:20.771855   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:07:20.771863   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:07:20.771913   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:07:20.803613   62061 cri.go:89] found id: ""
	I0918 21:07:20.803639   62061 logs.go:276] 0 containers: []
	W0918 21:07:20.803647   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:07:20.803653   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:07:20.803708   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:07:20.835671   62061 cri.go:89] found id: ""
	I0918 21:07:20.835695   62061 logs.go:276] 0 containers: []
	W0918 21:07:20.835702   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:07:20.835710   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:07:20.835720   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:07:20.886159   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:07:20.886191   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:07:20.901412   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:07:20.901443   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:07:20.979009   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:20.979034   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:07:20.979045   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:07:21.059895   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:07:21.059930   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:07:23.598133   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:07:23.611038   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:07:23.611102   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:07:23.648037   62061 cri.go:89] found id: ""
	I0918 21:07:23.648067   62061 logs.go:276] 0 containers: []
	W0918 21:07:23.648075   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:07:23.648081   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:07:23.648135   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:07:23.683956   62061 cri.go:89] found id: ""
	I0918 21:07:23.683985   62061 logs.go:276] 0 containers: []
	W0918 21:07:23.683995   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:07:23.684003   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:07:23.684083   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:07:23.719274   62061 cri.go:89] found id: ""
	I0918 21:07:23.719301   62061 logs.go:276] 0 containers: []
	W0918 21:07:23.719309   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:07:23.719315   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:07:23.719378   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:07:23.757101   62061 cri.go:89] found id: ""
	I0918 21:07:23.757133   62061 logs.go:276] 0 containers: []
	W0918 21:07:23.757145   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:07:23.757152   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:07:23.757202   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:07:23.790115   62061 cri.go:89] found id: ""
	I0918 21:07:23.790149   62061 logs.go:276] 0 containers: []
	W0918 21:07:23.790160   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:07:23.790168   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:07:23.790231   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:07:23.823089   62061 cri.go:89] found id: ""
	I0918 21:07:23.823120   62061 logs.go:276] 0 containers: []
	W0918 21:07:23.823131   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:07:23.823140   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:07:23.823192   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:07:23.858566   62061 cri.go:89] found id: ""
	I0918 21:07:23.858591   62061 logs.go:276] 0 containers: []
	W0918 21:07:23.858601   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:07:23.858613   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:07:23.858674   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:07:23.892650   62061 cri.go:89] found id: ""
	I0918 21:07:23.892679   62061 logs.go:276] 0 containers: []
	W0918 21:07:23.892690   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:07:23.892699   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:07:23.892713   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:07:23.941087   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:07:23.941123   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:07:23.955674   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:07:23.955712   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:07:24.026087   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:24.026115   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:07:24.026132   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:07:24.105237   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:07:24.105272   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:07:21.857391   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:23.857954   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:26.356553   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:22.633058   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:25.133773   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:25.387123   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:27.886688   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:26.646822   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:07:26.659978   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:07:26.660087   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:07:26.695776   62061 cri.go:89] found id: ""
	I0918 21:07:26.695804   62061 logs.go:276] 0 containers: []
	W0918 21:07:26.695817   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:07:26.695824   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:07:26.695875   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:07:26.729717   62061 cri.go:89] found id: ""
	I0918 21:07:26.729747   62061 logs.go:276] 0 containers: []
	W0918 21:07:26.729756   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:07:26.729762   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:07:26.729830   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:07:26.763848   62061 cri.go:89] found id: ""
	I0918 21:07:26.763886   62061 logs.go:276] 0 containers: []
	W0918 21:07:26.763907   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:07:26.763915   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:07:26.763974   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:07:26.798670   62061 cri.go:89] found id: ""
	I0918 21:07:26.798703   62061 logs.go:276] 0 containers: []
	W0918 21:07:26.798713   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:07:26.798720   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:07:26.798789   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:07:26.833778   62061 cri.go:89] found id: ""
	I0918 21:07:26.833804   62061 logs.go:276] 0 containers: []
	W0918 21:07:26.833815   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:07:26.833822   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:07:26.833905   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:07:26.869657   62061 cri.go:89] found id: ""
	I0918 21:07:26.869688   62061 logs.go:276] 0 containers: []
	W0918 21:07:26.869699   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:07:26.869707   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:07:26.869772   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:07:26.908163   62061 cri.go:89] found id: ""
	I0918 21:07:26.908194   62061 logs.go:276] 0 containers: []
	W0918 21:07:26.908205   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:07:26.908212   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:07:26.908269   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:07:26.943416   62061 cri.go:89] found id: ""
	I0918 21:07:26.943442   62061 logs.go:276] 0 containers: []
	W0918 21:07:26.943451   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:07:26.943459   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:07:26.943471   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:07:26.993796   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:07:26.993833   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:07:27.007619   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:07:27.007661   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:07:27.072880   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:27.072904   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:07:27.072919   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:07:27.148984   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:07:27.149031   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:07:28.357006   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:30.857527   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:27.632697   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:30.133718   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:29.887981   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:32.387478   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:29.690106   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:07:29.702853   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:07:29.702932   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:07:29.736427   62061 cri.go:89] found id: ""
	I0918 21:07:29.736461   62061 logs.go:276] 0 containers: []
	W0918 21:07:29.736473   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:07:29.736480   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:07:29.736537   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:07:29.771287   62061 cri.go:89] found id: ""
	I0918 21:07:29.771317   62061 logs.go:276] 0 containers: []
	W0918 21:07:29.771328   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:07:29.771334   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:07:29.771398   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:07:29.804826   62061 cri.go:89] found id: ""
	I0918 21:07:29.804861   62061 logs.go:276] 0 containers: []
	W0918 21:07:29.804875   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:07:29.804882   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:07:29.804934   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:07:29.838570   62061 cri.go:89] found id: ""
	I0918 21:07:29.838598   62061 logs.go:276] 0 containers: []
	W0918 21:07:29.838608   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:07:29.838614   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:07:29.838659   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:07:29.871591   62061 cri.go:89] found id: ""
	I0918 21:07:29.871621   62061 logs.go:276] 0 containers: []
	W0918 21:07:29.871631   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:07:29.871638   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:07:29.871699   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:07:29.905789   62061 cri.go:89] found id: ""
	I0918 21:07:29.905824   62061 logs.go:276] 0 containers: []
	W0918 21:07:29.905835   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:07:29.905846   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:07:29.905910   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:07:29.945914   62061 cri.go:89] found id: ""
	I0918 21:07:29.945941   62061 logs.go:276] 0 containers: []
	W0918 21:07:29.945950   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:07:29.945955   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:07:29.946004   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:07:29.979365   62061 cri.go:89] found id: ""
	I0918 21:07:29.979396   62061 logs.go:276] 0 containers: []
	W0918 21:07:29.979405   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:07:29.979413   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:07:29.979425   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:07:30.026925   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:07:30.026956   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:07:30.040589   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:07:30.040623   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:07:30.112900   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:30.112928   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:07:30.112948   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:07:30.195208   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:07:30.195256   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:07:32.735787   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:07:32.748969   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:07:32.749049   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:07:32.782148   62061 cri.go:89] found id: ""
	I0918 21:07:32.782179   62061 logs.go:276] 0 containers: []
	W0918 21:07:32.782189   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:07:32.782196   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:07:32.782262   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:07:32.816119   62061 cri.go:89] found id: ""
	I0918 21:07:32.816144   62061 logs.go:276] 0 containers: []
	W0918 21:07:32.816152   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:07:32.816158   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:07:32.816203   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:07:32.849817   62061 cri.go:89] found id: ""
	I0918 21:07:32.849844   62061 logs.go:276] 0 containers: []
	W0918 21:07:32.849853   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:07:32.849859   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:07:32.849911   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:07:32.884464   62061 cri.go:89] found id: ""
	I0918 21:07:32.884494   62061 logs.go:276] 0 containers: []
	W0918 21:07:32.884506   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:07:32.884513   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:07:32.884576   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:07:32.917942   62061 cri.go:89] found id: ""
	I0918 21:07:32.917973   62061 logs.go:276] 0 containers: []
	W0918 21:07:32.917983   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:07:32.917990   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:07:32.918051   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:07:32.950748   62061 cri.go:89] found id: ""
	I0918 21:07:32.950780   62061 logs.go:276] 0 containers: []
	W0918 21:07:32.950791   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:07:32.950800   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:07:32.950864   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:07:32.985059   62061 cri.go:89] found id: ""
	I0918 21:07:32.985092   62061 logs.go:276] 0 containers: []
	W0918 21:07:32.985100   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:07:32.985106   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:07:32.985167   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:07:33.021496   62061 cri.go:89] found id: ""
	I0918 21:07:33.021526   62061 logs.go:276] 0 containers: []
	W0918 21:07:33.021536   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:07:33.021546   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:07:33.021560   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:07:33.071744   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:07:33.071793   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:07:33.086533   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:07:33.086565   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:07:33.155274   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:33.155307   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:07:33.155326   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:07:33.238301   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:07:33.238342   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:07:33.356874   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:35.357445   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:32.631814   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:34.631954   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:36.633057   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:34.387725   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:36.887031   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:35.777905   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:07:35.792442   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:07:35.792510   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:07:35.827310   62061 cri.go:89] found id: ""
	I0918 21:07:35.827339   62061 logs.go:276] 0 containers: []
	W0918 21:07:35.827350   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:07:35.827357   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:07:35.827422   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:07:35.861873   62061 cri.go:89] found id: ""
	I0918 21:07:35.861897   62061 logs.go:276] 0 containers: []
	W0918 21:07:35.861905   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:07:35.861916   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:07:35.861966   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:07:35.895295   62061 cri.go:89] found id: ""
	I0918 21:07:35.895324   62061 logs.go:276] 0 containers: []
	W0918 21:07:35.895345   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:07:35.895353   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:07:35.895410   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:07:35.927690   62061 cri.go:89] found id: ""
	I0918 21:07:35.927717   62061 logs.go:276] 0 containers: []
	W0918 21:07:35.927728   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:07:35.927736   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:07:35.927788   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:07:35.963954   62061 cri.go:89] found id: ""
	I0918 21:07:35.963979   62061 logs.go:276] 0 containers: []
	W0918 21:07:35.963986   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:07:35.963992   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:07:35.964059   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:07:36.002287   62061 cri.go:89] found id: ""
	I0918 21:07:36.002313   62061 logs.go:276] 0 containers: []
	W0918 21:07:36.002322   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:07:36.002328   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:07:36.002380   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:07:36.036748   62061 cri.go:89] found id: ""
	I0918 21:07:36.036778   62061 logs.go:276] 0 containers: []
	W0918 21:07:36.036790   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:07:36.036797   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:07:36.036861   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:07:36.072616   62061 cri.go:89] found id: ""
	I0918 21:07:36.072647   62061 logs.go:276] 0 containers: []
	W0918 21:07:36.072658   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:07:36.072668   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:07:36.072683   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:07:36.121723   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:07:36.121762   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:07:36.136528   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:07:36.136560   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:07:36.202878   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:36.202911   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:07:36.202926   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:07:36.287882   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:07:36.287924   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:07:38.825888   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:07:38.838437   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:07:38.838510   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:07:38.872312   62061 cri.go:89] found id: ""
	I0918 21:07:38.872348   62061 logs.go:276] 0 containers: []
	W0918 21:07:38.872358   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:07:38.872366   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:07:38.872417   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:07:38.905927   62061 cri.go:89] found id: ""
	I0918 21:07:38.905962   62061 logs.go:276] 0 containers: []
	W0918 21:07:38.905975   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:07:38.905982   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:07:38.906042   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:07:38.938245   62061 cri.go:89] found id: ""
	I0918 21:07:38.938274   62061 logs.go:276] 0 containers: []
	W0918 21:07:38.938286   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:07:38.938293   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:07:38.938358   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:07:38.971497   62061 cri.go:89] found id: ""
	I0918 21:07:38.971536   62061 logs.go:276] 0 containers: []
	W0918 21:07:38.971548   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:07:38.971555   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:07:38.971625   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:07:39.004755   62061 cri.go:89] found id: ""
	I0918 21:07:39.004784   62061 logs.go:276] 0 containers: []
	W0918 21:07:39.004793   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:07:39.004799   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:07:39.004854   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:07:39.036237   62061 cri.go:89] found id: ""
	I0918 21:07:39.036265   62061 logs.go:276] 0 containers: []
	W0918 21:07:39.036273   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:07:39.036279   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:07:39.036328   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:07:39.071504   62061 cri.go:89] found id: ""
	I0918 21:07:39.071537   62061 logs.go:276] 0 containers: []
	W0918 21:07:39.071549   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:07:39.071557   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:07:39.071623   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:07:39.107035   62061 cri.go:89] found id: ""
	I0918 21:07:39.107063   62061 logs.go:276] 0 containers: []
	W0918 21:07:39.107071   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:07:39.107080   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:07:39.107090   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:07:39.158078   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:07:39.158113   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:07:39.172846   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:07:39.172875   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:07:39.240577   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:39.240602   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:07:39.240618   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:07:39.319762   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:07:39.319797   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:07:37.857371   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:40.356710   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:39.133586   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:41.632538   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:38.887485   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:41.386252   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:41.856586   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:07:41.870308   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:07:41.870388   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:07:41.905657   62061 cri.go:89] found id: ""
	I0918 21:07:41.905688   62061 logs.go:276] 0 containers: []
	W0918 21:07:41.905699   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:07:41.905706   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:07:41.905766   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:07:41.944507   62061 cri.go:89] found id: ""
	I0918 21:07:41.944544   62061 logs.go:276] 0 containers: []
	W0918 21:07:41.944555   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:07:41.944566   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:07:41.944634   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:07:41.979217   62061 cri.go:89] found id: ""
	I0918 21:07:41.979252   62061 logs.go:276] 0 containers: []
	W0918 21:07:41.979271   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:07:41.979279   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:07:41.979346   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:07:42.013613   62061 cri.go:89] found id: ""
	I0918 21:07:42.013641   62061 logs.go:276] 0 containers: []
	W0918 21:07:42.013652   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:07:42.013659   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:07:42.013725   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:07:42.049225   62061 cri.go:89] found id: ""
	I0918 21:07:42.049259   62061 logs.go:276] 0 containers: []
	W0918 21:07:42.049271   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:07:42.049279   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:07:42.049364   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:07:42.085737   62061 cri.go:89] found id: ""
	I0918 21:07:42.085763   62061 logs.go:276] 0 containers: []
	W0918 21:07:42.085775   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:07:42.085782   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:07:42.085843   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:07:42.121326   62061 cri.go:89] found id: ""
	I0918 21:07:42.121356   62061 logs.go:276] 0 containers: []
	W0918 21:07:42.121365   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:07:42.121371   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:07:42.121428   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:07:42.157070   62061 cri.go:89] found id: ""
	I0918 21:07:42.157097   62061 logs.go:276] 0 containers: []
	W0918 21:07:42.157107   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:07:42.157118   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:07:42.157133   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:07:42.232110   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:07:42.232162   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:07:42.270478   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:07:42.270517   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:07:42.324545   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:07:42.324586   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:07:42.339928   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:07:42.339962   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:07:42.415124   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:42.356847   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:44.856845   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:43.633029   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:46.134786   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:43.387596   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:45.887071   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:44.915316   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:07:44.928261   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:07:44.928331   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:07:44.959641   62061 cri.go:89] found id: ""
	I0918 21:07:44.959673   62061 logs.go:276] 0 containers: []
	W0918 21:07:44.959682   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:07:44.959689   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:07:44.959791   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:07:44.991744   62061 cri.go:89] found id: ""
	I0918 21:07:44.991778   62061 logs.go:276] 0 containers: []
	W0918 21:07:44.991787   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:07:44.991803   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:07:44.991877   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:07:45.024228   62061 cri.go:89] found id: ""
	I0918 21:07:45.024261   62061 logs.go:276] 0 containers: []
	W0918 21:07:45.024272   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:07:45.024280   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:07:45.024355   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:07:45.061905   62061 cri.go:89] found id: ""
	I0918 21:07:45.061931   62061 logs.go:276] 0 containers: []
	W0918 21:07:45.061940   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:07:45.061946   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:07:45.061994   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:07:45.099224   62061 cri.go:89] found id: ""
	I0918 21:07:45.099251   62061 logs.go:276] 0 containers: []
	W0918 21:07:45.099259   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:07:45.099266   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:07:45.099327   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:07:45.137235   62061 cri.go:89] found id: ""
	I0918 21:07:45.137262   62061 logs.go:276] 0 containers: []
	W0918 21:07:45.137270   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:07:45.137278   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:07:45.137333   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:07:45.174343   62061 cri.go:89] found id: ""
	I0918 21:07:45.174370   62061 logs.go:276] 0 containers: []
	W0918 21:07:45.174379   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:07:45.174385   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:07:45.174432   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:07:45.209278   62061 cri.go:89] found id: ""
	I0918 21:07:45.209306   62061 logs.go:276] 0 containers: []
	W0918 21:07:45.209316   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:07:45.209326   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:07:45.209341   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:07:45.222376   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:07:45.222409   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:07:45.294562   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:45.294595   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:07:45.294607   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:07:45.382582   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:07:45.382631   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:07:45.424743   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:07:45.424780   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:07:47.978171   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:07:47.990485   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:07:47.990554   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:07:48.024465   62061 cri.go:89] found id: ""
	I0918 21:07:48.024494   62061 logs.go:276] 0 containers: []
	W0918 21:07:48.024505   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:07:48.024512   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:07:48.024575   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:07:48.058838   62061 cri.go:89] found id: ""
	I0918 21:07:48.058871   62061 logs.go:276] 0 containers: []
	W0918 21:07:48.058899   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:07:48.058905   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:07:48.058959   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:07:48.094307   62061 cri.go:89] found id: ""
	I0918 21:07:48.094335   62061 logs.go:276] 0 containers: []
	W0918 21:07:48.094343   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:07:48.094349   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:07:48.094398   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:07:48.127497   62061 cri.go:89] found id: ""
	I0918 21:07:48.127529   62061 logs.go:276] 0 containers: []
	W0918 21:07:48.127538   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:07:48.127544   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:07:48.127590   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:07:48.160440   62061 cri.go:89] found id: ""
	I0918 21:07:48.160465   62061 logs.go:276] 0 containers: []
	W0918 21:07:48.160473   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:07:48.160478   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:07:48.160527   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:07:48.195581   62061 cri.go:89] found id: ""
	I0918 21:07:48.195615   62061 logs.go:276] 0 containers: []
	W0918 21:07:48.195624   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:07:48.195631   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:07:48.195675   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:07:48.226478   62061 cri.go:89] found id: ""
	I0918 21:07:48.226506   62061 logs.go:276] 0 containers: []
	W0918 21:07:48.226514   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:07:48.226519   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:07:48.226566   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:07:48.259775   62061 cri.go:89] found id: ""
	I0918 21:07:48.259804   62061 logs.go:276] 0 containers: []
	W0918 21:07:48.259815   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:07:48.259825   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:07:48.259839   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:07:48.274364   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:07:48.274391   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:07:48.347131   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:48.347151   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:07:48.347163   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:07:48.430569   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:07:48.430607   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:07:48.472972   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:07:48.473007   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:07:47.356907   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:49.857984   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:48.633550   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:51.133639   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:48.388136   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:50.888317   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:51.024429   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:07:51.037249   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:07:51.037328   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:07:51.070137   62061 cri.go:89] found id: ""
	I0918 21:07:51.070173   62061 logs.go:276] 0 containers: []
	W0918 21:07:51.070195   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:07:51.070203   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:07:51.070276   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:07:51.105014   62061 cri.go:89] found id: ""
	I0918 21:07:51.105039   62061 logs.go:276] 0 containers: []
	W0918 21:07:51.105048   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:07:51.105054   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:07:51.105101   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:07:51.143287   62061 cri.go:89] found id: ""
	I0918 21:07:51.143310   62061 logs.go:276] 0 containers: []
	W0918 21:07:51.143318   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:07:51.143325   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:07:51.143372   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:07:51.176811   62061 cri.go:89] found id: ""
	I0918 21:07:51.176838   62061 logs.go:276] 0 containers: []
	W0918 21:07:51.176846   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:07:51.176852   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:07:51.176898   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:07:51.210817   62061 cri.go:89] found id: ""
	I0918 21:07:51.210842   62061 logs.go:276] 0 containers: []
	W0918 21:07:51.210850   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:07:51.210856   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:07:51.210916   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:07:51.243968   62061 cri.go:89] found id: ""
	I0918 21:07:51.244002   62061 logs.go:276] 0 containers: []
	W0918 21:07:51.244035   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:07:51.244043   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:07:51.244104   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:07:51.279078   62061 cri.go:89] found id: ""
	I0918 21:07:51.279108   62061 logs.go:276] 0 containers: []
	W0918 21:07:51.279117   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:07:51.279122   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:07:51.279173   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:07:51.315033   62061 cri.go:89] found id: ""
	I0918 21:07:51.315082   62061 logs.go:276] 0 containers: []
	W0918 21:07:51.315091   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:07:51.315100   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:07:51.315111   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:07:51.391927   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:07:51.391976   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:07:51.435515   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:07:51.435542   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:07:51.488404   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:07:51.488442   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:07:51.502019   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:07:51.502065   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:07:51.567893   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:54.068142   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:07:54.082544   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:07:54.082619   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:07:54.118600   62061 cri.go:89] found id: ""
	I0918 21:07:54.118629   62061 logs.go:276] 0 containers: []
	W0918 21:07:54.118638   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:07:54.118644   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:07:54.118695   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:07:54.154086   62061 cri.go:89] found id: ""
	I0918 21:07:54.154115   62061 logs.go:276] 0 containers: []
	W0918 21:07:54.154127   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:07:54.154135   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:07:54.154187   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:07:54.188213   62061 cri.go:89] found id: ""
	I0918 21:07:54.188246   62061 logs.go:276] 0 containers: []
	W0918 21:07:54.188257   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:07:54.188266   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:07:54.188329   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:07:54.221682   62061 cri.go:89] found id: ""
	I0918 21:07:54.221712   62061 logs.go:276] 0 containers: []
	W0918 21:07:54.221721   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:07:54.221728   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:07:54.221791   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:07:54.254746   62061 cri.go:89] found id: ""
	I0918 21:07:54.254770   62061 logs.go:276] 0 containers: []
	W0918 21:07:54.254778   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:07:54.254783   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:07:54.254844   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:07:54.288249   62061 cri.go:89] found id: ""
	I0918 21:07:54.288273   62061 logs.go:276] 0 containers: []
	W0918 21:07:54.288281   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:07:54.288288   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:07:54.288349   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:07:54.323390   62061 cri.go:89] found id: ""
	I0918 21:07:54.323422   62061 logs.go:276] 0 containers: []
	W0918 21:07:54.323433   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:07:54.323440   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:07:54.323507   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:07:54.357680   62061 cri.go:89] found id: ""
	I0918 21:07:54.357709   62061 logs.go:276] 0 containers: []
	W0918 21:07:54.357719   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:07:54.357734   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:07:54.357748   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:07:54.396644   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:07:54.396683   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:07:54.469136   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:54.469175   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:07:54.469191   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:07:52.357187   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:54.857437   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:53.633161   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:56.132554   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:53.386646   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:55.387377   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:57.387524   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:54.547848   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:07:54.547884   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:07:54.585758   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:07:54.585788   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:07:57.139198   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:07:57.152342   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:07:57.152429   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:07:57.186747   62061 cri.go:89] found id: ""
	I0918 21:07:57.186778   62061 logs.go:276] 0 containers: []
	W0918 21:07:57.186789   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:07:57.186796   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:07:57.186855   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:07:57.219211   62061 cri.go:89] found id: ""
	I0918 21:07:57.219239   62061 logs.go:276] 0 containers: []
	W0918 21:07:57.219248   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:07:57.219256   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:07:57.219315   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:07:57.251361   62061 cri.go:89] found id: ""
	I0918 21:07:57.251388   62061 logs.go:276] 0 containers: []
	W0918 21:07:57.251396   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:07:57.251401   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:07:57.251450   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:07:57.285467   62061 cri.go:89] found id: ""
	I0918 21:07:57.285501   62061 logs.go:276] 0 containers: []
	W0918 21:07:57.285512   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:07:57.285519   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:07:57.285580   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:07:57.317973   62061 cri.go:89] found id: ""
	I0918 21:07:57.317999   62061 logs.go:276] 0 containers: []
	W0918 21:07:57.318006   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:07:57.318012   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:07:57.318071   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:07:57.352172   62061 cri.go:89] found id: ""
	I0918 21:07:57.352202   62061 logs.go:276] 0 containers: []
	W0918 21:07:57.352215   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:07:57.352223   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:07:57.352277   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:07:57.388117   62061 cri.go:89] found id: ""
	I0918 21:07:57.388137   62061 logs.go:276] 0 containers: []
	W0918 21:07:57.388144   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:07:57.388150   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:07:57.388205   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:07:57.424814   62061 cri.go:89] found id: ""
	I0918 21:07:57.424846   62061 logs.go:276] 0 containers: []
	W0918 21:07:57.424857   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:07:57.424868   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:07:57.424882   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:07:57.437317   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:07:57.437351   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:07:57.503393   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:57.503417   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:07:57.503429   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:07:57.584167   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:07:57.584203   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:07:57.620761   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:07:57.620792   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:07:57.357989   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:59.856413   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:58.133077   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:00.633233   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:59.886455   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:01.887882   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:00.174933   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:00.188098   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:00.188177   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:00.221852   62061 cri.go:89] found id: ""
	I0918 21:08:00.221877   62061 logs.go:276] 0 containers: []
	W0918 21:08:00.221890   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:00.221895   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:00.221947   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:00.255947   62061 cri.go:89] found id: ""
	I0918 21:08:00.255974   62061 logs.go:276] 0 containers: []
	W0918 21:08:00.255982   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:00.255987   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:00.256056   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:00.290919   62061 cri.go:89] found id: ""
	I0918 21:08:00.290953   62061 logs.go:276] 0 containers: []
	W0918 21:08:00.290961   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:00.290966   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:00.291017   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:00.328175   62061 cri.go:89] found id: ""
	I0918 21:08:00.328200   62061 logs.go:276] 0 containers: []
	W0918 21:08:00.328208   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:00.328214   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:00.328261   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:00.364863   62061 cri.go:89] found id: ""
	I0918 21:08:00.364892   62061 logs.go:276] 0 containers: []
	W0918 21:08:00.364903   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:00.364912   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:00.364967   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:00.400368   62061 cri.go:89] found id: ""
	I0918 21:08:00.400397   62061 logs.go:276] 0 containers: []
	W0918 21:08:00.400408   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:00.400415   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:00.400480   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:00.435341   62061 cri.go:89] found id: ""
	I0918 21:08:00.435375   62061 logs.go:276] 0 containers: []
	W0918 21:08:00.435386   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:00.435394   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:00.435459   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:00.469981   62061 cri.go:89] found id: ""
	I0918 21:08:00.470010   62061 logs.go:276] 0 containers: []
	W0918 21:08:00.470019   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:00.470028   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:00.470041   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:00.538006   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:00.538037   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:00.538053   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:00.618497   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:00.618543   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:00.656884   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:00.656912   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:00.708836   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:00.708870   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:03.222489   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:03.236904   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:03.236972   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:03.270559   62061 cri.go:89] found id: ""
	I0918 21:08:03.270588   62061 logs.go:276] 0 containers: []
	W0918 21:08:03.270596   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:03.270602   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:03.270649   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:03.305908   62061 cri.go:89] found id: ""
	I0918 21:08:03.305933   62061 logs.go:276] 0 containers: []
	W0918 21:08:03.305940   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:03.305946   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:03.306004   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:03.339442   62061 cri.go:89] found id: ""
	I0918 21:08:03.339468   62061 logs.go:276] 0 containers: []
	W0918 21:08:03.339476   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:03.339482   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:03.339550   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:03.377460   62061 cri.go:89] found id: ""
	I0918 21:08:03.377486   62061 logs.go:276] 0 containers: []
	W0918 21:08:03.377495   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:03.377501   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:03.377552   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:03.414815   62061 cri.go:89] found id: ""
	I0918 21:08:03.414850   62061 logs.go:276] 0 containers: []
	W0918 21:08:03.414861   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:03.414869   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:03.414930   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:03.448654   62061 cri.go:89] found id: ""
	I0918 21:08:03.448680   62061 logs.go:276] 0 containers: []
	W0918 21:08:03.448690   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:03.448698   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:03.448759   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:03.483598   62061 cri.go:89] found id: ""
	I0918 21:08:03.483628   62061 logs.go:276] 0 containers: []
	W0918 21:08:03.483639   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:03.483646   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:03.483717   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:03.518557   62061 cri.go:89] found id: ""
	I0918 21:08:03.518585   62061 logs.go:276] 0 containers: []
	W0918 21:08:03.518601   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:03.518612   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:03.518627   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:03.555922   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:03.555958   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:03.608173   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:03.608208   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:03.621251   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:03.621278   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:03.688773   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:03.688796   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:03.688812   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:01.857289   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:03.857768   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:06.356504   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:03.132376   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:05.134169   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:04.386905   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:06.891459   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:06.272727   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:06.286033   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:06.286115   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:06.319058   62061 cri.go:89] found id: ""
	I0918 21:08:06.319084   62061 logs.go:276] 0 containers: []
	W0918 21:08:06.319092   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:06.319099   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:06.319167   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:06.352572   62061 cri.go:89] found id: ""
	I0918 21:08:06.352606   62061 logs.go:276] 0 containers: []
	W0918 21:08:06.352627   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:06.352638   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:06.352709   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:06.389897   62061 cri.go:89] found id: ""
	I0918 21:08:06.389922   62061 logs.go:276] 0 containers: []
	W0918 21:08:06.389929   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:06.389935   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:06.389993   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:06.427265   62061 cri.go:89] found id: ""
	I0918 21:08:06.427294   62061 logs.go:276] 0 containers: []
	W0918 21:08:06.427306   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:06.427314   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:06.427375   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:06.462706   62061 cri.go:89] found id: ""
	I0918 21:08:06.462738   62061 logs.go:276] 0 containers: []
	W0918 21:08:06.462746   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:06.462753   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:06.462816   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:06.499672   62061 cri.go:89] found id: ""
	I0918 21:08:06.499707   62061 logs.go:276] 0 containers: []
	W0918 21:08:06.499719   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:06.499727   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:06.499781   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:06.540386   62061 cri.go:89] found id: ""
	I0918 21:08:06.540415   62061 logs.go:276] 0 containers: []
	W0918 21:08:06.540426   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:06.540433   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:06.540492   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:06.582919   62061 cri.go:89] found id: ""
	I0918 21:08:06.582946   62061 logs.go:276] 0 containers: []
	W0918 21:08:06.582957   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:06.582966   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:06.582982   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:06.637315   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:06.637355   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:06.650626   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:06.650662   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:06.725605   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:06.725641   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:06.725656   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:06.805431   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:06.805471   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:09.343498   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:09.357396   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:09.357472   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:09.391561   62061 cri.go:89] found id: ""
	I0918 21:08:09.391590   62061 logs.go:276] 0 containers: []
	W0918 21:08:09.391600   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:09.391605   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:09.391655   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:09.429154   62061 cri.go:89] found id: ""
	I0918 21:08:09.429181   62061 logs.go:276] 0 containers: []
	W0918 21:08:09.429190   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:09.429196   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:09.429259   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:09.464807   62061 cri.go:89] found id: ""
	I0918 21:08:09.464848   62061 logs.go:276] 0 containers: []
	W0918 21:08:09.464859   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:09.464866   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:09.464927   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:08.856578   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:10.856650   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:07.633438   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:10.132651   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:12.132903   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:09.387482   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:11.886885   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:09.499512   62061 cri.go:89] found id: ""
	I0918 21:08:09.499540   62061 logs.go:276] 0 containers: []
	W0918 21:08:09.499549   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:09.499556   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:09.499619   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:09.534569   62061 cri.go:89] found id: ""
	I0918 21:08:09.534593   62061 logs.go:276] 0 containers: []
	W0918 21:08:09.534601   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:09.534607   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:09.534660   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:09.570387   62061 cri.go:89] found id: ""
	I0918 21:08:09.570414   62061 logs.go:276] 0 containers: []
	W0918 21:08:09.570422   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:09.570428   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:09.570489   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:09.603832   62061 cri.go:89] found id: ""
	I0918 21:08:09.603863   62061 logs.go:276] 0 containers: []
	W0918 21:08:09.603871   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:09.603877   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:09.603923   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:09.638887   62061 cri.go:89] found id: ""
	I0918 21:08:09.638918   62061 logs.go:276] 0 containers: []
	W0918 21:08:09.638930   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:09.638940   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:09.638953   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:09.691197   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:09.691237   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:09.705470   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:09.705500   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:09.776826   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:09.776858   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:09.776871   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:09.851287   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:09.851321   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:12.390895   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:12.406734   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:12.406809   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:12.459302   62061 cri.go:89] found id: ""
	I0918 21:08:12.459331   62061 logs.go:276] 0 containers: []
	W0918 21:08:12.459349   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:12.459360   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:12.459420   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:12.517527   62061 cri.go:89] found id: ""
	I0918 21:08:12.517557   62061 logs.go:276] 0 containers: []
	W0918 21:08:12.517567   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:12.517573   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:12.517628   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:12.549661   62061 cri.go:89] found id: ""
	I0918 21:08:12.549689   62061 logs.go:276] 0 containers: []
	W0918 21:08:12.549696   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:12.549702   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:12.549747   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:12.582970   62061 cri.go:89] found id: ""
	I0918 21:08:12.582996   62061 logs.go:276] 0 containers: []
	W0918 21:08:12.583004   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:12.583009   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:12.583054   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:12.617059   62061 cri.go:89] found id: ""
	I0918 21:08:12.617089   62061 logs.go:276] 0 containers: []
	W0918 21:08:12.617098   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:12.617103   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:12.617153   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:12.651115   62061 cri.go:89] found id: ""
	I0918 21:08:12.651143   62061 logs.go:276] 0 containers: []
	W0918 21:08:12.651156   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:12.651164   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:12.651217   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:12.684727   62061 cri.go:89] found id: ""
	I0918 21:08:12.684758   62061 logs.go:276] 0 containers: []
	W0918 21:08:12.684765   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:12.684771   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:12.684829   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:12.718181   62061 cri.go:89] found id: ""
	I0918 21:08:12.718215   62061 logs.go:276] 0 containers: []
	W0918 21:08:12.718226   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:12.718237   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:12.718252   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:12.760350   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:12.760379   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:12.810028   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:12.810067   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:12.823785   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:12.823815   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:12.898511   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:12.898532   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:12.898545   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:12.856697   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:15.356381   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:14.632694   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:17.131888   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:13.887157   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:15.887190   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:17.890618   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:15.476840   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:15.489386   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:15.489470   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:15.524549   62061 cri.go:89] found id: ""
	I0918 21:08:15.524578   62061 logs.go:276] 0 containers: []
	W0918 21:08:15.524585   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:15.524591   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:15.524642   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:15.557834   62061 cri.go:89] found id: ""
	I0918 21:08:15.557867   62061 logs.go:276] 0 containers: []
	W0918 21:08:15.557876   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:15.557883   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:15.557933   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:15.598774   62061 cri.go:89] found id: ""
	I0918 21:08:15.598805   62061 logs.go:276] 0 containers: []
	W0918 21:08:15.598818   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:15.598828   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:15.598882   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:15.633123   62061 cri.go:89] found id: ""
	I0918 21:08:15.633147   62061 logs.go:276] 0 containers: []
	W0918 21:08:15.633155   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:15.633161   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:15.633208   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:15.667281   62061 cri.go:89] found id: ""
	I0918 21:08:15.667307   62061 logs.go:276] 0 containers: []
	W0918 21:08:15.667317   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:15.667323   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:15.667381   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:15.705945   62061 cri.go:89] found id: ""
	I0918 21:08:15.705977   62061 logs.go:276] 0 containers: []
	W0918 21:08:15.705990   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:15.706015   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:15.706093   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:15.739795   62061 cri.go:89] found id: ""
	I0918 21:08:15.739826   62061 logs.go:276] 0 containers: []
	W0918 21:08:15.739836   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:15.739843   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:15.739904   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:15.778520   62061 cri.go:89] found id: ""
	I0918 21:08:15.778556   62061 logs.go:276] 0 containers: []
	W0918 21:08:15.778567   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:15.778579   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:15.778592   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:15.829357   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:15.829394   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:15.842852   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:15.842882   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:15.922438   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:15.922471   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:15.922483   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:16.001687   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:16.001726   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:18.542067   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:18.554783   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:18.554870   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:18.589555   62061 cri.go:89] found id: ""
	I0918 21:08:18.589581   62061 logs.go:276] 0 containers: []
	W0918 21:08:18.589592   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:18.589604   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:18.589667   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:18.623035   62061 cri.go:89] found id: ""
	I0918 21:08:18.623059   62061 logs.go:276] 0 containers: []
	W0918 21:08:18.623067   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:18.623073   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:18.623127   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:18.655875   62061 cri.go:89] found id: ""
	I0918 21:08:18.655901   62061 logs.go:276] 0 containers: []
	W0918 21:08:18.655909   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:18.655915   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:18.655973   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:18.688964   62061 cri.go:89] found id: ""
	I0918 21:08:18.688997   62061 logs.go:276] 0 containers: []
	W0918 21:08:18.689008   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:18.689016   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:18.689080   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:18.723164   62061 cri.go:89] found id: ""
	I0918 21:08:18.723186   62061 logs.go:276] 0 containers: []
	W0918 21:08:18.723196   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:18.723201   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:18.723246   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:18.755022   62061 cri.go:89] found id: ""
	I0918 21:08:18.755048   62061 logs.go:276] 0 containers: []
	W0918 21:08:18.755057   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:18.755063   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:18.755113   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:18.791614   62061 cri.go:89] found id: ""
	I0918 21:08:18.791645   62061 logs.go:276] 0 containers: []
	W0918 21:08:18.791655   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:18.791663   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:18.791731   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:18.823553   62061 cri.go:89] found id: ""
	I0918 21:08:18.823583   62061 logs.go:276] 0 containers: []
	W0918 21:08:18.823590   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:18.823597   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:18.823609   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:18.864528   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:18.864567   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:18.916590   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:18.916628   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:18.930077   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:18.930107   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:18.999958   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:18.999982   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:18.999997   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:17.358190   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:19.856605   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:19.132382   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:21.634433   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:20.387223   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:22.387374   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:21.573718   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:21.588164   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:21.588252   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:21.630044   62061 cri.go:89] found id: ""
	I0918 21:08:21.630075   62061 logs.go:276] 0 containers: []
	W0918 21:08:21.630086   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:21.630094   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:21.630154   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:21.666992   62061 cri.go:89] found id: ""
	I0918 21:08:21.667021   62061 logs.go:276] 0 containers: []
	W0918 21:08:21.667029   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:21.667035   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:21.667083   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:21.702379   62061 cri.go:89] found id: ""
	I0918 21:08:21.702403   62061 logs.go:276] 0 containers: []
	W0918 21:08:21.702411   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:21.702416   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:21.702463   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:21.739877   62061 cri.go:89] found id: ""
	I0918 21:08:21.739908   62061 logs.go:276] 0 containers: []
	W0918 21:08:21.739918   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:21.739923   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:21.739974   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:21.777536   62061 cri.go:89] found id: ""
	I0918 21:08:21.777573   62061 logs.go:276] 0 containers: []
	W0918 21:08:21.777584   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:21.777592   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:21.777652   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:21.812284   62061 cri.go:89] found id: ""
	I0918 21:08:21.812316   62061 logs.go:276] 0 containers: []
	W0918 21:08:21.812325   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:21.812332   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:21.812401   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:21.848143   62061 cri.go:89] found id: ""
	I0918 21:08:21.848176   62061 logs.go:276] 0 containers: []
	W0918 21:08:21.848185   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:21.848191   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:21.848250   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:21.887151   62061 cri.go:89] found id: ""
	I0918 21:08:21.887177   62061 logs.go:276] 0 containers: []
	W0918 21:08:21.887188   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:21.887199   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:21.887213   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:21.939969   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:21.940008   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:21.954128   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:21.954164   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:22.022827   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:22.022853   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:22.022865   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:22.103131   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:22.103172   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:22.356641   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:24.358204   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:24.133101   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:26.633701   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:24.888715   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:27.386901   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:24.642045   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:24.655273   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:24.655343   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:24.687816   62061 cri.go:89] found id: ""
	I0918 21:08:24.687847   62061 logs.go:276] 0 containers: []
	W0918 21:08:24.687858   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:24.687865   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:24.687927   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:24.721276   62061 cri.go:89] found id: ""
	I0918 21:08:24.721303   62061 logs.go:276] 0 containers: []
	W0918 21:08:24.721311   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:24.721316   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:24.721366   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:24.753874   62061 cri.go:89] found id: ""
	I0918 21:08:24.753904   62061 logs.go:276] 0 containers: []
	W0918 21:08:24.753911   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:24.753917   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:24.753967   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:24.789107   62061 cri.go:89] found id: ""
	I0918 21:08:24.789148   62061 logs.go:276] 0 containers: []
	W0918 21:08:24.789163   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:24.789170   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:24.789219   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:24.826283   62061 cri.go:89] found id: ""
	I0918 21:08:24.826316   62061 logs.go:276] 0 containers: []
	W0918 21:08:24.826329   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:24.826337   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:24.826401   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:24.859878   62061 cri.go:89] found id: ""
	I0918 21:08:24.859907   62061 logs.go:276] 0 containers: []
	W0918 21:08:24.859917   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:24.859924   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:24.859982   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:24.895717   62061 cri.go:89] found id: ""
	I0918 21:08:24.895747   62061 logs.go:276] 0 containers: []
	W0918 21:08:24.895758   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:24.895766   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:24.895830   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:24.930207   62061 cri.go:89] found id: ""
	I0918 21:08:24.930239   62061 logs.go:276] 0 containers: []
	W0918 21:08:24.930250   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:24.930262   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:24.930279   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:24.967939   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:24.967981   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:25.022526   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:25.022569   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:25.036175   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:25.036214   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:25.104251   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:25.104277   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:25.104294   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:27.686943   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:27.700071   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:27.700147   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:27.735245   62061 cri.go:89] found id: ""
	I0918 21:08:27.735277   62061 logs.go:276] 0 containers: []
	W0918 21:08:27.735286   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:27.735291   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:27.735349   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:27.768945   62061 cri.go:89] found id: ""
	I0918 21:08:27.768974   62061 logs.go:276] 0 containers: []
	W0918 21:08:27.768985   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:27.768993   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:27.769055   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:27.805425   62061 cri.go:89] found id: ""
	I0918 21:08:27.805457   62061 logs.go:276] 0 containers: []
	W0918 21:08:27.805468   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:27.805474   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:27.805523   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:27.838049   62061 cri.go:89] found id: ""
	I0918 21:08:27.838081   62061 logs.go:276] 0 containers: []
	W0918 21:08:27.838091   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:27.838098   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:27.838163   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:27.873961   62061 cri.go:89] found id: ""
	I0918 21:08:27.873986   62061 logs.go:276] 0 containers: []
	W0918 21:08:27.873994   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:27.874001   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:27.874064   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:27.908822   62061 cri.go:89] found id: ""
	I0918 21:08:27.908846   62061 logs.go:276] 0 containers: []
	W0918 21:08:27.908854   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:27.908860   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:27.908915   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:27.943758   62061 cri.go:89] found id: ""
	I0918 21:08:27.943794   62061 logs.go:276] 0 containers: []
	W0918 21:08:27.943802   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:27.943808   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:27.943875   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:27.978136   62061 cri.go:89] found id: ""
	I0918 21:08:27.978168   62061 logs.go:276] 0 containers: []
	W0918 21:08:27.978179   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:27.978189   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:27.978202   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:27.992744   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:27.992773   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:28.070339   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:28.070362   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:28.070374   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:28.152405   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:28.152452   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:28.191190   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:28.191220   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:26.857256   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:29.356662   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:29.132577   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:31.133108   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:29.387068   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:31.886962   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:30.742507   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:30.756451   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:30.756553   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:30.790746   62061 cri.go:89] found id: ""
	I0918 21:08:30.790771   62061 logs.go:276] 0 containers: []
	W0918 21:08:30.790781   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:30.790787   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:30.790851   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:30.823633   62061 cri.go:89] found id: ""
	I0918 21:08:30.823670   62061 logs.go:276] 0 containers: []
	W0918 21:08:30.823682   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:30.823689   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:30.823754   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:30.861912   62061 cri.go:89] found id: ""
	I0918 21:08:30.861936   62061 logs.go:276] 0 containers: []
	W0918 21:08:30.861943   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:30.861949   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:30.862000   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:30.899452   62061 cri.go:89] found id: ""
	I0918 21:08:30.899481   62061 logs.go:276] 0 containers: []
	W0918 21:08:30.899489   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:30.899495   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:30.899562   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:30.935871   62061 cri.go:89] found id: ""
	I0918 21:08:30.935898   62061 logs.go:276] 0 containers: []
	W0918 21:08:30.935906   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:30.935912   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:30.935969   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:30.974608   62061 cri.go:89] found id: ""
	I0918 21:08:30.974643   62061 logs.go:276] 0 containers: []
	W0918 21:08:30.974655   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:30.974663   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:30.974722   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:31.008249   62061 cri.go:89] found id: ""
	I0918 21:08:31.008279   62061 logs.go:276] 0 containers: []
	W0918 21:08:31.008290   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:31.008297   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:31.008366   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:31.047042   62061 cri.go:89] found id: ""
	I0918 21:08:31.047075   62061 logs.go:276] 0 containers: []
	W0918 21:08:31.047083   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:31.047093   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:31.047107   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:31.098961   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:31.099001   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:31.113116   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:31.113147   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:31.179609   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:31.179650   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:31.179664   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:31.258299   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:31.258335   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:33.798360   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:33.812105   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:33.812172   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:33.848120   62061 cri.go:89] found id: ""
	I0918 21:08:33.848149   62061 logs.go:276] 0 containers: []
	W0918 21:08:33.848160   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:33.848169   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:33.848231   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:33.884974   62061 cri.go:89] found id: ""
	I0918 21:08:33.885007   62061 logs.go:276] 0 containers: []
	W0918 21:08:33.885019   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:33.885028   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:33.885104   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:33.919613   62061 cri.go:89] found id: ""
	I0918 21:08:33.919658   62061 logs.go:276] 0 containers: []
	W0918 21:08:33.919667   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:33.919673   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:33.919721   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:33.953074   62061 cri.go:89] found id: ""
	I0918 21:08:33.953112   62061 logs.go:276] 0 containers: []
	W0918 21:08:33.953120   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:33.953125   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:33.953185   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:33.985493   62061 cri.go:89] found id: ""
	I0918 21:08:33.985521   62061 logs.go:276] 0 containers: []
	W0918 21:08:33.985532   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:33.985539   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:33.985630   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:34.020938   62061 cri.go:89] found id: ""
	I0918 21:08:34.020962   62061 logs.go:276] 0 containers: []
	W0918 21:08:34.020972   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:34.020981   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:34.021047   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:34.053947   62061 cri.go:89] found id: ""
	I0918 21:08:34.053977   62061 logs.go:276] 0 containers: []
	W0918 21:08:34.053988   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:34.053996   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:34.054060   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:34.090072   62061 cri.go:89] found id: ""
	I0918 21:08:34.090110   62061 logs.go:276] 0 containers: []
	W0918 21:08:34.090123   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:34.090133   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:34.090145   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:34.142069   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:34.142107   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:34.156709   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:34.156740   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:34.232644   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:34.232672   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:34.232685   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:34.311833   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:34.311878   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:31.859360   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:34.357056   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:33.133212   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:35.632885   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:33.888487   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:36.386571   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:36.850811   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:36.864479   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:36.864558   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:36.902595   62061 cri.go:89] found id: ""
	I0918 21:08:36.902628   62061 logs.go:276] 0 containers: []
	W0918 21:08:36.902640   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:36.902649   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:36.902706   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:36.940336   62061 cri.go:89] found id: ""
	I0918 21:08:36.940385   62061 logs.go:276] 0 containers: []
	W0918 21:08:36.940394   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:36.940400   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:36.940465   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:36.973909   62061 cri.go:89] found id: ""
	I0918 21:08:36.973942   62061 logs.go:276] 0 containers: []
	W0918 21:08:36.973952   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:36.973958   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:36.974013   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:37.008766   62061 cri.go:89] found id: ""
	I0918 21:08:37.008791   62061 logs.go:276] 0 containers: []
	W0918 21:08:37.008799   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:37.008805   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:37.008852   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:37.041633   62061 cri.go:89] found id: ""
	I0918 21:08:37.041669   62061 logs.go:276] 0 containers: []
	W0918 21:08:37.041681   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:37.041688   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:37.041750   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:37.075154   62061 cri.go:89] found id: ""
	I0918 21:08:37.075188   62061 logs.go:276] 0 containers: []
	W0918 21:08:37.075197   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:37.075204   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:37.075268   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:37.111083   62061 cri.go:89] found id: ""
	I0918 21:08:37.111119   62061 logs.go:276] 0 containers: []
	W0918 21:08:37.111130   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:37.111138   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:37.111192   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:37.147894   62061 cri.go:89] found id: ""
	I0918 21:08:37.147925   62061 logs.go:276] 0 containers: []
	W0918 21:08:37.147936   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:37.147948   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:37.147962   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:37.200102   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:37.200141   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:37.213511   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:37.213537   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:37.286978   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:37.287012   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:37.287027   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:37.368153   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:37.368191   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:36.857508   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:39.357177   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:41.357329   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:38.134332   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:40.633274   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:38.387121   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:40.387310   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:42.887614   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:39.907600   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:39.922325   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:39.922395   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:39.958150   62061 cri.go:89] found id: ""
	I0918 21:08:39.958175   62061 logs.go:276] 0 containers: []
	W0918 21:08:39.958183   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:39.958189   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:39.958254   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:39.993916   62061 cri.go:89] found id: ""
	I0918 21:08:39.993945   62061 logs.go:276] 0 containers: []
	W0918 21:08:39.993956   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:39.993963   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:39.994026   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:40.031078   62061 cri.go:89] found id: ""
	I0918 21:08:40.031114   62061 logs.go:276] 0 containers: []
	W0918 21:08:40.031126   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:40.031133   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:40.031194   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:40.065028   62061 cri.go:89] found id: ""
	I0918 21:08:40.065054   62061 logs.go:276] 0 containers: []
	W0918 21:08:40.065065   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:40.065072   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:40.065129   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:40.099437   62061 cri.go:89] found id: ""
	I0918 21:08:40.099466   62061 logs.go:276] 0 containers: []
	W0918 21:08:40.099474   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:40.099480   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:40.099544   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:40.139841   62061 cri.go:89] found id: ""
	I0918 21:08:40.139866   62061 logs.go:276] 0 containers: []
	W0918 21:08:40.139874   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:40.139880   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:40.139936   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:40.175287   62061 cri.go:89] found id: ""
	I0918 21:08:40.175316   62061 logs.go:276] 0 containers: []
	W0918 21:08:40.175327   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:40.175334   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:40.175397   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:40.208646   62061 cri.go:89] found id: ""
	I0918 21:08:40.208677   62061 logs.go:276] 0 containers: []
	W0918 21:08:40.208690   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:40.208701   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:40.208712   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:40.262944   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:40.262982   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:40.276192   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:40.276222   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:40.346393   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:40.346414   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:40.346426   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:40.433797   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:40.433848   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:42.976574   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:42.990198   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:42.990263   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:43.031744   62061 cri.go:89] found id: ""
	I0918 21:08:43.031774   62061 logs.go:276] 0 containers: []
	W0918 21:08:43.031784   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:43.031791   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:43.031854   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:43.065198   62061 cri.go:89] found id: ""
	I0918 21:08:43.065240   62061 logs.go:276] 0 containers: []
	W0918 21:08:43.065248   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:43.065261   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:43.065319   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:43.099292   62061 cri.go:89] found id: ""
	I0918 21:08:43.099320   62061 logs.go:276] 0 containers: []
	W0918 21:08:43.099328   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:43.099333   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:43.099388   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:43.135085   62061 cri.go:89] found id: ""
	I0918 21:08:43.135110   62061 logs.go:276] 0 containers: []
	W0918 21:08:43.135119   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:43.135131   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:43.135190   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:43.173255   62061 cri.go:89] found id: ""
	I0918 21:08:43.173312   62061 logs.go:276] 0 containers: []
	W0918 21:08:43.173326   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:43.173335   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:43.173433   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:43.209249   62061 cri.go:89] found id: ""
	I0918 21:08:43.209282   62061 logs.go:276] 0 containers: []
	W0918 21:08:43.209294   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:43.209303   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:43.209377   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:43.242333   62061 cri.go:89] found id: ""
	I0918 21:08:43.242366   62061 logs.go:276] 0 containers: []
	W0918 21:08:43.242376   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:43.242383   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:43.242449   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:43.278099   62061 cri.go:89] found id: ""
	I0918 21:08:43.278128   62061 logs.go:276] 0 containers: []
	W0918 21:08:43.278136   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:43.278146   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:43.278164   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:43.329621   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:43.329661   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:43.343357   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:43.343402   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:43.415392   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:43.415419   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:43.415435   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:43.501634   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:43.501670   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:43.357675   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:45.857212   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:43.133389   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:45.134057   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:44.887763   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:47.387221   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:46.043925   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:46.057457   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:46.057540   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:46.091987   62061 cri.go:89] found id: ""
	I0918 21:08:46.092036   62061 logs.go:276] 0 containers: []
	W0918 21:08:46.092047   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:46.092055   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:46.092118   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:46.131171   62061 cri.go:89] found id: ""
	I0918 21:08:46.131195   62061 logs.go:276] 0 containers: []
	W0918 21:08:46.131203   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:46.131209   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:46.131266   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:46.167324   62061 cri.go:89] found id: ""
	I0918 21:08:46.167360   62061 logs.go:276] 0 containers: []
	W0918 21:08:46.167369   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:46.167375   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:46.167433   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:46.200940   62061 cri.go:89] found id: ""
	I0918 21:08:46.200969   62061 logs.go:276] 0 containers: []
	W0918 21:08:46.200978   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:46.200983   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:46.201042   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:46.233736   62061 cri.go:89] found id: ""
	I0918 21:08:46.233765   62061 logs.go:276] 0 containers: []
	W0918 21:08:46.233773   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:46.233779   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:46.233841   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:46.267223   62061 cri.go:89] found id: ""
	I0918 21:08:46.267250   62061 logs.go:276] 0 containers: []
	W0918 21:08:46.267261   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:46.267268   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:46.267331   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:46.302218   62061 cri.go:89] found id: ""
	I0918 21:08:46.302245   62061 logs.go:276] 0 containers: []
	W0918 21:08:46.302255   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:46.302262   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:46.302326   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:46.334979   62061 cri.go:89] found id: ""
	I0918 21:08:46.335005   62061 logs.go:276] 0 containers: []
	W0918 21:08:46.335015   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:46.335024   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:46.335038   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:46.422905   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:46.422929   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:46.422948   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:46.504831   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:46.504884   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:46.543954   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:46.543981   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:46.595817   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:46.595854   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:49.109984   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:49.124966   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:49.125052   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:49.163272   62061 cri.go:89] found id: ""
	I0918 21:08:49.163302   62061 logs.go:276] 0 containers: []
	W0918 21:08:49.163334   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:49.163342   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:49.163411   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:49.199973   62061 cri.go:89] found id: ""
	I0918 21:08:49.200003   62061 logs.go:276] 0 containers: []
	W0918 21:08:49.200037   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:49.200046   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:49.200111   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:49.235980   62061 cri.go:89] found id: ""
	I0918 21:08:49.236036   62061 logs.go:276] 0 containers: []
	W0918 21:08:49.236050   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:49.236061   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:49.236123   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:49.271162   62061 cri.go:89] found id: ""
	I0918 21:08:49.271386   62061 logs.go:276] 0 containers: []
	W0918 21:08:49.271404   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:49.271413   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:49.271483   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:49.305799   62061 cri.go:89] found id: ""
	I0918 21:08:49.305831   62061 logs.go:276] 0 containers: []
	W0918 21:08:49.305842   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:49.305848   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:49.305899   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:49.342170   62061 cri.go:89] found id: ""
	I0918 21:08:49.342195   62061 logs.go:276] 0 containers: []
	W0918 21:08:49.342202   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:49.342208   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:49.342265   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:49.374071   62061 cri.go:89] found id: ""
	I0918 21:08:49.374098   62061 logs.go:276] 0 containers: []
	W0918 21:08:49.374108   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:49.374116   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:49.374186   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:49.407614   62061 cri.go:89] found id: ""
	I0918 21:08:49.407645   62061 logs.go:276] 0 containers: []
	W0918 21:08:49.407657   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:49.407669   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:49.407685   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:49.458433   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:49.458473   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:49.471490   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:49.471519   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 21:08:47.857798   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:50.355748   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:47.634158   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:49.627085   61659 pod_ready.go:82] duration metric: took 4m0.000936582s for pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace to be "Ready" ...
	E0918 21:08:49.627133   61659 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace to be "Ready" (will not retry!)
	I0918 21:08:49.627156   61659 pod_ready.go:39] duration metric: took 4m7.542795536s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0918 21:08:49.627192   61659 kubeadm.go:597] duration metric: took 4m15.452827752s to restartPrimaryControlPlane
	W0918 21:08:49.627251   61659 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0918 21:08:49.627290   61659 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0918 21:08:49.387560   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:51.887591   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	W0918 21:08:49.533874   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:49.533901   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:49.533916   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:49.611711   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:49.611744   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:52.157839   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:52.170690   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:52.170770   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:52.208737   62061 cri.go:89] found id: ""
	I0918 21:08:52.208767   62061 logs.go:276] 0 containers: []
	W0918 21:08:52.208779   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:52.208787   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:52.208846   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:52.242624   62061 cri.go:89] found id: ""
	I0918 21:08:52.242658   62061 logs.go:276] 0 containers: []
	W0918 21:08:52.242669   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:52.242677   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:52.242742   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:52.280686   62061 cri.go:89] found id: ""
	I0918 21:08:52.280717   62061 logs.go:276] 0 containers: []
	W0918 21:08:52.280728   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:52.280736   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:52.280798   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:52.313748   62061 cri.go:89] found id: ""
	I0918 21:08:52.313777   62061 logs.go:276] 0 containers: []
	W0918 21:08:52.313785   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:52.313791   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:52.313840   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:52.353072   62061 cri.go:89] found id: ""
	I0918 21:08:52.353102   62061 logs.go:276] 0 containers: []
	W0918 21:08:52.353124   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:52.353132   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:52.353195   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:52.390358   62061 cri.go:89] found id: ""
	I0918 21:08:52.390384   62061 logs.go:276] 0 containers: []
	W0918 21:08:52.390392   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:52.390398   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:52.390448   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:52.430039   62061 cri.go:89] found id: ""
	I0918 21:08:52.430068   62061 logs.go:276] 0 containers: []
	W0918 21:08:52.430081   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:52.430088   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:52.430146   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:52.466096   62061 cri.go:89] found id: ""
	I0918 21:08:52.466125   62061 logs.go:276] 0 containers: []
	W0918 21:08:52.466137   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:52.466149   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:52.466162   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:52.518643   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:52.518678   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:52.531768   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:52.531797   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:52.605130   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:52.605163   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:52.605181   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:52.686510   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:52.686553   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:52.356535   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:54.356671   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:54.387306   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:56.887745   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:55.225867   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:55.240537   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:55.240618   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:55.276461   62061 cri.go:89] found id: ""
	I0918 21:08:55.276490   62061 logs.go:276] 0 containers: []
	W0918 21:08:55.276498   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:55.276504   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:55.276564   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:55.313449   62061 cri.go:89] found id: ""
	I0918 21:08:55.313482   62061 logs.go:276] 0 containers: []
	W0918 21:08:55.313493   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:55.313499   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:55.313551   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:55.352436   62061 cri.go:89] found id: ""
	I0918 21:08:55.352475   62061 logs.go:276] 0 containers: []
	W0918 21:08:55.352485   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:55.352492   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:55.352560   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:55.390433   62061 cri.go:89] found id: ""
	I0918 21:08:55.390458   62061 logs.go:276] 0 containers: []
	W0918 21:08:55.390466   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:55.390472   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:55.390529   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:55.428426   62061 cri.go:89] found id: ""
	I0918 21:08:55.428455   62061 logs.go:276] 0 containers: []
	W0918 21:08:55.428465   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:55.428473   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:55.428540   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:55.465587   62061 cri.go:89] found id: ""
	I0918 21:08:55.465622   62061 logs.go:276] 0 containers: []
	W0918 21:08:55.465633   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:55.465641   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:55.465710   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:55.502137   62061 cri.go:89] found id: ""
	I0918 21:08:55.502185   62061 logs.go:276] 0 containers: []
	W0918 21:08:55.502196   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:55.502203   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:55.502266   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:55.535992   62061 cri.go:89] found id: ""
	I0918 21:08:55.536037   62061 logs.go:276] 0 containers: []
	W0918 21:08:55.536050   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:55.536060   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:55.536078   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:55.549267   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:55.549296   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:55.616522   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:55.616556   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:55.616572   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:55.698822   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:55.698874   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:55.740234   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:55.740264   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:58.294876   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:58.307543   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:58.307630   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:58.340435   62061 cri.go:89] found id: ""
	I0918 21:08:58.340467   62061 logs.go:276] 0 containers: []
	W0918 21:08:58.340479   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:58.340486   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:58.340545   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:58.374759   62061 cri.go:89] found id: ""
	I0918 21:08:58.374792   62061 logs.go:276] 0 containers: []
	W0918 21:08:58.374804   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:58.374810   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:58.374863   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:58.411756   62061 cri.go:89] found id: ""
	I0918 21:08:58.411787   62061 logs.go:276] 0 containers: []
	W0918 21:08:58.411797   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:58.411804   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:58.411867   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:58.449787   62061 cri.go:89] found id: ""
	I0918 21:08:58.449820   62061 logs.go:276] 0 containers: []
	W0918 21:08:58.449832   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:58.449839   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:58.449903   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:58.485212   62061 cri.go:89] found id: ""
	I0918 21:08:58.485243   62061 logs.go:276] 0 containers: []
	W0918 21:08:58.485254   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:58.485261   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:58.485331   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:58.528667   62061 cri.go:89] found id: ""
	I0918 21:08:58.528696   62061 logs.go:276] 0 containers: []
	W0918 21:08:58.528706   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:58.528714   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:58.528775   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:58.568201   62061 cri.go:89] found id: ""
	I0918 21:08:58.568231   62061 logs.go:276] 0 containers: []
	W0918 21:08:58.568241   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:58.568247   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:58.568305   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:58.623952   62061 cri.go:89] found id: ""
	I0918 21:08:58.623982   62061 logs.go:276] 0 containers: []
	W0918 21:08:58.623991   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:58.624000   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:58.624011   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:58.665418   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:58.665457   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:58.713464   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:58.713504   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:58.727511   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:58.727552   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:58.799004   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:58.799035   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:58.799050   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:56.856428   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:58.856632   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:00.857301   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:59.386076   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:01.387016   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:01.389457   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:09:01.404658   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:09:01.404734   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:09:01.439139   62061 cri.go:89] found id: ""
	I0918 21:09:01.439170   62061 logs.go:276] 0 containers: []
	W0918 21:09:01.439180   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:09:01.439187   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:09:01.439251   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:09:01.473866   62061 cri.go:89] found id: ""
	I0918 21:09:01.473896   62061 logs.go:276] 0 containers: []
	W0918 21:09:01.473907   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:09:01.473915   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:09:01.473978   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:09:01.506734   62061 cri.go:89] found id: ""
	I0918 21:09:01.506767   62061 logs.go:276] 0 containers: []
	W0918 21:09:01.506777   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:09:01.506783   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:09:01.506836   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:09:01.540123   62061 cri.go:89] found id: ""
	I0918 21:09:01.540152   62061 logs.go:276] 0 containers: []
	W0918 21:09:01.540162   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:09:01.540169   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:09:01.540236   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:09:01.575002   62061 cri.go:89] found id: ""
	I0918 21:09:01.575037   62061 logs.go:276] 0 containers: []
	W0918 21:09:01.575048   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:09:01.575071   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:09:01.575159   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:09:01.612285   62061 cri.go:89] found id: ""
	I0918 21:09:01.612316   62061 logs.go:276] 0 containers: []
	W0918 21:09:01.612327   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:09:01.612335   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:09:01.612399   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:09:01.648276   62061 cri.go:89] found id: ""
	I0918 21:09:01.648304   62061 logs.go:276] 0 containers: []
	W0918 21:09:01.648318   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:09:01.648325   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:09:01.648397   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:09:01.686192   62061 cri.go:89] found id: ""
	I0918 21:09:01.686220   62061 logs.go:276] 0 containers: []
	W0918 21:09:01.686228   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:09:01.686236   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:09:01.686247   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:09:01.738366   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:09:01.738408   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:09:01.752650   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:09:01.752683   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:09:01.825010   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:09:01.825114   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:09:01.825156   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:09:01.907401   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:09:01.907448   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:09:04.448316   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:09:04.461242   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:09:04.461304   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:09:03.357089   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:05.856126   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:03.387563   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:05.389665   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:07.886523   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:04.495951   62061 cri.go:89] found id: ""
	I0918 21:09:04.495984   62061 logs.go:276] 0 containers: []
	W0918 21:09:04.495997   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:09:04.496006   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:09:04.496090   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:09:04.536874   62061 cri.go:89] found id: ""
	I0918 21:09:04.536906   62061 logs.go:276] 0 containers: []
	W0918 21:09:04.536917   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:09:04.536935   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:09:04.537010   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:09:04.572601   62061 cri.go:89] found id: ""
	I0918 21:09:04.572634   62061 logs.go:276] 0 containers: []
	W0918 21:09:04.572646   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:09:04.572653   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:09:04.572716   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:09:04.609783   62061 cri.go:89] found id: ""
	I0918 21:09:04.609817   62061 logs.go:276] 0 containers: []
	W0918 21:09:04.609826   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:09:04.609832   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:09:04.609891   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:09:04.645124   62061 cri.go:89] found id: ""
	I0918 21:09:04.645156   62061 logs.go:276] 0 containers: []
	W0918 21:09:04.645167   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:09:04.645175   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:09:04.645241   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:09:04.680927   62061 cri.go:89] found id: ""
	I0918 21:09:04.680959   62061 logs.go:276] 0 containers: []
	W0918 21:09:04.680971   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:09:04.680978   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:09:04.681038   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:09:04.718920   62061 cri.go:89] found id: ""
	I0918 21:09:04.718954   62061 logs.go:276] 0 containers: []
	W0918 21:09:04.718972   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:09:04.718979   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:09:04.719039   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:09:04.751450   62061 cri.go:89] found id: ""
	I0918 21:09:04.751488   62061 logs.go:276] 0 containers: []
	W0918 21:09:04.751500   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:09:04.751511   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:09:04.751529   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:09:04.788969   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:09:04.789001   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:09:04.837638   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:09:04.837673   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:09:04.853673   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:09:04.853706   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:09:04.923670   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:09:04.923703   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:09:04.923717   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:09:07.507979   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:09:07.521581   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:09:07.521656   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:09:07.556280   62061 cri.go:89] found id: ""
	I0918 21:09:07.556310   62061 logs.go:276] 0 containers: []
	W0918 21:09:07.556321   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:09:07.556329   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:09:07.556397   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:09:07.590775   62061 cri.go:89] found id: ""
	I0918 21:09:07.590802   62061 logs.go:276] 0 containers: []
	W0918 21:09:07.590810   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:09:07.590815   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:09:07.590862   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:09:07.625971   62061 cri.go:89] found id: ""
	I0918 21:09:07.626000   62061 logs.go:276] 0 containers: []
	W0918 21:09:07.626010   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:09:07.626018   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:09:07.626080   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:09:07.660083   62061 cri.go:89] found id: ""
	I0918 21:09:07.660116   62061 logs.go:276] 0 containers: []
	W0918 21:09:07.660128   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:09:07.660136   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:09:07.660201   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:09:07.694165   62061 cri.go:89] found id: ""
	I0918 21:09:07.694195   62061 logs.go:276] 0 containers: []
	W0918 21:09:07.694204   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:09:07.694211   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:09:07.694269   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:09:07.728298   62061 cri.go:89] found id: ""
	I0918 21:09:07.728328   62061 logs.go:276] 0 containers: []
	W0918 21:09:07.728338   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:09:07.728349   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:09:07.728409   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:09:07.762507   62061 cri.go:89] found id: ""
	I0918 21:09:07.762546   62061 logs.go:276] 0 containers: []
	W0918 21:09:07.762555   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:09:07.762565   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:09:07.762745   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:09:07.798006   62061 cri.go:89] found id: ""
	I0918 21:09:07.798038   62061 logs.go:276] 0 containers: []
	W0918 21:09:07.798049   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:09:07.798059   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:09:07.798074   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:09:07.849222   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:09:07.849267   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:09:07.865023   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:09:07.865055   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:09:07.938810   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:09:07.938830   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:09:07.938842   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:09:08.018885   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:09:08.018924   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:09:07.856987   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:10.356244   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:09.886563   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:12.386922   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:10.562565   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:09:10.575854   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:09:10.575941   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:09:10.612853   62061 cri.go:89] found id: ""
	I0918 21:09:10.612884   62061 logs.go:276] 0 containers: []
	W0918 21:09:10.612896   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:09:10.612906   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:09:10.612966   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:09:10.648672   62061 cri.go:89] found id: ""
	I0918 21:09:10.648703   62061 logs.go:276] 0 containers: []
	W0918 21:09:10.648713   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:09:10.648720   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:09:10.648780   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:09:10.682337   62061 cri.go:89] found id: ""
	I0918 21:09:10.682370   62061 logs.go:276] 0 containers: []
	W0918 21:09:10.682388   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:09:10.682394   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:09:10.682445   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:09:10.718221   62061 cri.go:89] found id: ""
	I0918 21:09:10.718257   62061 logs.go:276] 0 containers: []
	W0918 21:09:10.718269   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:09:10.718277   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:09:10.718345   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:09:10.751570   62061 cri.go:89] found id: ""
	I0918 21:09:10.751600   62061 logs.go:276] 0 containers: []
	W0918 21:09:10.751609   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:09:10.751615   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:09:10.751706   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:09:10.792125   62061 cri.go:89] found id: ""
	I0918 21:09:10.792158   62061 logs.go:276] 0 containers: []
	W0918 21:09:10.792170   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:09:10.792178   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:09:10.792245   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:09:10.830699   62061 cri.go:89] found id: ""
	I0918 21:09:10.830733   62061 logs.go:276] 0 containers: []
	W0918 21:09:10.830742   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:09:10.830748   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:09:10.830820   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:09:10.869625   62061 cri.go:89] found id: ""
	I0918 21:09:10.869655   62061 logs.go:276] 0 containers: []
	W0918 21:09:10.869663   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:09:10.869672   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:09:10.869684   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:09:10.921340   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:09:10.921378   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:09:10.937032   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:09:10.937071   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:09:11.006248   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:09:11.006276   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:09:11.006291   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:09:11.086458   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:09:11.086496   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:09:13.628824   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:09:13.642499   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:09:13.642578   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:09:13.677337   62061 cri.go:89] found id: ""
	I0918 21:09:13.677368   62061 logs.go:276] 0 containers: []
	W0918 21:09:13.677378   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:09:13.677385   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:09:13.677449   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:09:13.717317   62061 cri.go:89] found id: ""
	I0918 21:09:13.717341   62061 logs.go:276] 0 containers: []
	W0918 21:09:13.717353   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:09:13.717358   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:09:13.717419   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:09:13.752151   62061 cri.go:89] found id: ""
	I0918 21:09:13.752181   62061 logs.go:276] 0 containers: []
	W0918 21:09:13.752189   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:09:13.752195   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:09:13.752253   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:09:13.791946   62061 cri.go:89] found id: ""
	I0918 21:09:13.791975   62061 logs.go:276] 0 containers: []
	W0918 21:09:13.791983   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:09:13.791989   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:09:13.792069   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:09:13.825173   62061 cri.go:89] found id: ""
	I0918 21:09:13.825198   62061 logs.go:276] 0 containers: []
	W0918 21:09:13.825209   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:09:13.825216   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:09:13.825276   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:09:13.859801   62061 cri.go:89] found id: ""
	I0918 21:09:13.859834   62061 logs.go:276] 0 containers: []
	W0918 21:09:13.859846   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:09:13.859853   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:09:13.859907   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:09:13.895413   62061 cri.go:89] found id: ""
	I0918 21:09:13.895445   62061 logs.go:276] 0 containers: []
	W0918 21:09:13.895456   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:09:13.895463   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:09:13.895515   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:09:13.929048   62061 cri.go:89] found id: ""
	I0918 21:09:13.929075   62061 logs.go:276] 0 containers: []
	W0918 21:09:13.929083   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:09:13.929092   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:09:13.929104   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:09:13.981579   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:09:13.981613   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:09:13.995642   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:09:13.995679   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:09:14.061762   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:09:14.061782   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:09:14.061793   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:09:14.139623   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:09:14.139659   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:09:16.001617   61659 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.374302262s)
	I0918 21:09:16.001692   61659 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0918 21:09:16.019307   61659 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0918 21:09:16.029547   61659 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0918 21:09:16.039132   61659 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0918 21:09:16.039154   61659 kubeadm.go:157] found existing configuration files:
	
	I0918 21:09:16.039196   61659 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0918 21:09:16.048506   61659 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0918 21:09:16.048567   61659 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0918 21:09:16.058120   61659 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0918 21:09:16.067686   61659 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0918 21:09:16.067746   61659 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0918 21:09:16.077707   61659 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0918 21:09:16.087089   61659 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0918 21:09:16.087149   61659 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0918 21:09:16.097040   61659 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0918 21:09:16.106448   61659 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0918 21:09:16.106514   61659 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0918 21:09:16.116060   61659 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0918 21:09:16.159721   61659 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0918 21:09:16.159797   61659 kubeadm.go:310] [preflight] Running pre-flight checks
	I0918 21:09:16.266821   61659 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0918 21:09:16.266968   61659 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0918 21:09:16.267122   61659 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0918 21:09:16.275249   61659 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0918 21:09:12.855996   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:14.857296   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:16.277228   61659 out.go:235]   - Generating certificates and keys ...
	I0918 21:09:16.277333   61659 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0918 21:09:16.277419   61659 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0918 21:09:16.277534   61659 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0918 21:09:16.277617   61659 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0918 21:09:16.277709   61659 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0918 21:09:16.277790   61659 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0918 21:09:16.277904   61659 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0918 21:09:16.278013   61659 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0918 21:09:16.278131   61659 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0918 21:09:16.278265   61659 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0918 21:09:16.278331   61659 kubeadm.go:310] [certs] Using the existing "sa" key
	I0918 21:09:16.278401   61659 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0918 21:09:16.516263   61659 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0918 21:09:16.708220   61659 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0918 21:09:17.009820   61659 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0918 21:09:17.108871   61659 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0918 21:09:17.211014   61659 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0918 21:09:17.211658   61659 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0918 21:09:17.216626   61659 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0918 21:09:14.887068   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:16.888350   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:16.686071   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:09:16.699769   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:09:16.699844   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:09:16.735242   62061 cri.go:89] found id: ""
	I0918 21:09:16.735277   62061 logs.go:276] 0 containers: []
	W0918 21:09:16.735288   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:09:16.735298   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:09:16.735371   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:09:16.770027   62061 cri.go:89] found id: ""
	I0918 21:09:16.770052   62061 logs.go:276] 0 containers: []
	W0918 21:09:16.770060   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:09:16.770066   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:09:16.770114   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:09:16.806525   62061 cri.go:89] found id: ""
	I0918 21:09:16.806555   62061 logs.go:276] 0 containers: []
	W0918 21:09:16.806563   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:09:16.806569   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:09:16.806636   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:09:16.851146   62061 cri.go:89] found id: ""
	I0918 21:09:16.851183   62061 logs.go:276] 0 containers: []
	W0918 21:09:16.851194   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:09:16.851204   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:09:16.851271   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:09:16.890718   62061 cri.go:89] found id: ""
	I0918 21:09:16.890748   62061 logs.go:276] 0 containers: []
	W0918 21:09:16.890760   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:09:16.890767   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:09:16.890824   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:09:16.928971   62061 cri.go:89] found id: ""
	I0918 21:09:16.929002   62061 logs.go:276] 0 containers: []
	W0918 21:09:16.929012   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:09:16.929020   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:09:16.929079   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:09:16.970965   62061 cri.go:89] found id: ""
	I0918 21:09:16.970999   62061 logs.go:276] 0 containers: []
	W0918 21:09:16.971011   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:09:16.971019   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:09:16.971089   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:09:17.006427   62061 cri.go:89] found id: ""
	I0918 21:09:17.006453   62061 logs.go:276] 0 containers: []
	W0918 21:09:17.006461   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:09:17.006468   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:09:17.006480   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:09:17.058690   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:09:17.058733   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:09:17.072593   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:09:17.072623   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:09:17.143046   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:09:17.143071   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:09:17.143082   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:09:17.236943   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:09:17.236989   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:09:17.357978   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:19.858268   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:17.218406   61659 out.go:235]   - Booting up control plane ...
	I0918 21:09:17.218544   61659 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0918 21:09:17.218662   61659 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0918 21:09:17.218765   61659 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0918 21:09:17.238076   61659 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0918 21:09:17.248123   61659 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0918 21:09:17.248226   61659 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0918 21:09:17.379685   61659 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0918 21:09:17.379840   61659 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0918 21:09:18.380791   61659 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001279947s
	I0918 21:09:18.380906   61659 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0918 21:09:18.380783   61740 pod_ready.go:82] duration metric: took 4m0.000205104s for pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace to be "Ready" ...
	E0918 21:09:18.380812   61740 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace to be "Ready" (will not retry!)
	I0918 21:09:18.380832   61740 pod_ready.go:39] duration metric: took 4m15.618837854s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0918 21:09:18.380875   61740 kubeadm.go:597] duration metric: took 4m23.646410044s to restartPrimaryControlPlane
	W0918 21:09:18.380936   61740 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0918 21:09:18.380966   61740 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0918 21:09:23.386705   61659 kubeadm.go:310] [api-check] The API server is healthy after 5.005706581s
	I0918 21:09:23.402316   61659 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0918 21:09:23.422786   61659 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0918 21:09:23.462099   61659 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0918 21:09:23.462373   61659 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-828868 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0918 21:09:23.484276   61659 kubeadm.go:310] [bootstrap-token] Using token: 2vcil8.e13zhc1806da8knq
	I0918 21:09:19.782266   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:09:19.799433   62061 kubeadm.go:597] duration metric: took 4m2.126311085s to restartPrimaryControlPlane
	W0918 21:09:19.799513   62061 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0918 21:09:19.799543   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0918 21:09:20.910192   62061 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.110625463s)
	I0918 21:09:20.910273   62061 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0918 21:09:20.925992   62061 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0918 21:09:20.936876   62061 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0918 21:09:20.947170   62061 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0918 21:09:20.947199   62061 kubeadm.go:157] found existing configuration files:
	
	I0918 21:09:20.947255   62061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0918 21:09:20.958140   62061 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0918 21:09:20.958240   62061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0918 21:09:20.968351   62061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0918 21:09:20.978669   62061 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0918 21:09:20.978735   62061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0918 21:09:20.989765   62061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0918 21:09:20.999842   62061 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0918 21:09:20.999903   62061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0918 21:09:21.009945   62061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0918 21:09:21.020229   62061 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0918 21:09:21.020289   62061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0918 21:09:21.030583   62061 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0918 21:09:21.271399   62061 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0918 21:09:23.485978   61659 out.go:235]   - Configuring RBAC rules ...
	I0918 21:09:23.486112   61659 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0918 21:09:23.499163   61659 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0918 21:09:23.510754   61659 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0918 21:09:23.514794   61659 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0918 21:09:23.519247   61659 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0918 21:09:23.530424   61659 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0918 21:09:23.799778   61659 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0918 21:09:24.223469   61659 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0918 21:09:24.794852   61659 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0918 21:09:24.794886   61659 kubeadm.go:310] 
	I0918 21:09:24.794951   61659 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0918 21:09:24.794963   61659 kubeadm.go:310] 
	I0918 21:09:24.795058   61659 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0918 21:09:24.795073   61659 kubeadm.go:310] 
	I0918 21:09:24.795105   61659 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0918 21:09:24.795192   61659 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0918 21:09:24.795255   61659 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0918 21:09:24.795285   61659 kubeadm.go:310] 
	I0918 21:09:24.795366   61659 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0918 21:09:24.795376   61659 kubeadm.go:310] 
	I0918 21:09:24.795416   61659 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0918 21:09:24.795425   61659 kubeadm.go:310] 
	I0918 21:09:24.795497   61659 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0918 21:09:24.795580   61659 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0918 21:09:24.795678   61659 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0918 21:09:24.795692   61659 kubeadm.go:310] 
	I0918 21:09:24.795779   61659 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0918 21:09:24.795891   61659 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0918 21:09:24.795901   61659 kubeadm.go:310] 
	I0918 21:09:24.796174   61659 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token 2vcil8.e13zhc1806da8knq \
	I0918 21:09:24.796299   61659 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:8ad46bf288e8cc2226f5a929a768e6c95cddecc97ffc4016be27ef0987b1e36f \
	I0918 21:09:24.796350   61659 kubeadm.go:310] 	--control-plane 
	I0918 21:09:24.796367   61659 kubeadm.go:310] 
	I0918 21:09:24.796479   61659 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0918 21:09:24.796487   61659 kubeadm.go:310] 
	I0918 21:09:24.796594   61659 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token 2vcil8.e13zhc1806da8knq \
	I0918 21:09:24.796738   61659 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:8ad46bf288e8cc2226f5a929a768e6c95cddecc97ffc4016be27ef0987b1e36f 
	I0918 21:09:24.797359   61659 kubeadm.go:310] W0918 21:09:16.134048    2561 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0918 21:09:24.797679   61659 kubeadm.go:310] W0918 21:09:16.134873    2561 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0918 21:09:24.797832   61659 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0918 21:09:24.797858   61659 cni.go:84] Creating CNI manager for ""
	I0918 21:09:24.797872   61659 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0918 21:09:24.799953   61659 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0918 21:09:22.357582   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:24.857037   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:24.801259   61659 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0918 21:09:24.812277   61659 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0918 21:09:24.834749   61659 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0918 21:09:24.834855   61659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 21:09:24.834871   61659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-828868 minikube.k8s.io/updated_at=2024_09_18T21_09_24_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=85073601a832bd4bbda5d11fa91feafff6ec6b91 minikube.k8s.io/name=default-k8s-diff-port-828868 minikube.k8s.io/primary=true
	I0918 21:09:25.022861   61659 ops.go:34] apiserver oom_adj: -16
	I0918 21:09:25.022930   61659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 21:09:25.523400   61659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 21:09:26.023075   61659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 21:09:26.523330   61659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 21:09:27.023179   61659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 21:09:27.523363   61659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 21:09:28.023150   61659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 21:09:28.523941   61659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 21:09:29.023542   61659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 21:09:29.143581   61659 kubeadm.go:1113] duration metric: took 4.308796493s to wait for elevateKubeSystemPrivileges
	I0918 21:09:29.143614   61659 kubeadm.go:394] duration metric: took 4m55.024616229s to StartCluster
	I0918 21:09:29.143632   61659 settings.go:142] acquiring lock: {Name:mk6ae95a69dcbe00c28846409e9f46945a88de2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 21:09:29.143727   61659 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19667-7671/kubeconfig
	I0918 21:09:29.145397   61659 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/kubeconfig: {Name:mk3a3eecadb0164dfdd5d3c4392b5b473a2a9bb0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 21:09:29.145680   61659 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.109 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0918 21:09:29.145767   61659 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0918 21:09:29.145851   61659 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-828868"
	I0918 21:09:29.145869   61659 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-828868"
	I0918 21:09:29.145877   61659 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-828868"
	W0918 21:09:29.145885   61659 addons.go:243] addon storage-provisioner should already be in state true
	I0918 21:09:29.145896   61659 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-828868"
	I0918 21:09:29.145898   61659 config.go:182] Loaded profile config "default-k8s-diff-port-828868": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0918 21:09:29.145900   61659 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-828868"
	I0918 21:09:29.145920   61659 host.go:66] Checking if "default-k8s-diff-port-828868" exists ...
	I0918 21:09:29.145932   61659 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-828868"
	W0918 21:09:29.145946   61659 addons.go:243] addon metrics-server should already be in state true
	I0918 21:09:29.145980   61659 host.go:66] Checking if "default-k8s-diff-port-828868" exists ...
	I0918 21:09:29.146234   61659 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:09:29.146238   61659 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:09:29.146282   61659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:09:29.146297   61659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:09:29.146372   61659 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:09:29.146389   61659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:09:29.147645   61659 out.go:177] * Verifying Kubernetes components...
	I0918 21:09:29.149574   61659 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 21:09:29.164779   61659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43359
	I0918 21:09:29.165002   61659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37857
	I0918 21:09:29.165390   61659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44573
	I0918 21:09:29.165682   61659 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:09:29.165749   61659 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:09:29.166233   61659 main.go:141] libmachine: Using API Version  1
	I0918 21:09:29.166254   61659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:09:29.166270   61659 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:09:29.166388   61659 main.go:141] libmachine: Using API Version  1
	I0918 21:09:29.166414   61659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:09:29.166544   61659 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:09:29.166711   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetState
	I0918 21:09:29.166730   61659 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:09:29.166894   61659 main.go:141] libmachine: Using API Version  1
	I0918 21:09:29.166918   61659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:09:29.167381   61659 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:09:29.167425   61659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:09:29.168144   61659 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:09:29.168578   61659 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:09:29.168614   61659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:09:29.171072   61659 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-828868"
	W0918 21:09:29.171101   61659 addons.go:243] addon default-storageclass should already be in state true
	I0918 21:09:29.171133   61659 host.go:66] Checking if "default-k8s-diff-port-828868" exists ...
	I0918 21:09:29.171534   61659 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:09:29.171597   61659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:09:29.186305   61659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42211
	I0918 21:09:29.186318   61659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45153
	I0918 21:09:29.186838   61659 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:09:29.186847   61659 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:09:29.187353   61659 main.go:141] libmachine: Using API Version  1
	I0918 21:09:29.187367   61659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:09:29.187373   61659 main.go:141] libmachine: Using API Version  1
	I0918 21:09:29.187403   61659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:09:29.187840   61659 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:09:29.187855   61659 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:09:29.188085   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetState
	I0918 21:09:29.188106   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetState
	I0918 21:09:29.193453   61659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33471
	I0918 21:09:29.193905   61659 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:09:29.194477   61659 main.go:141] libmachine: Using API Version  1
	I0918 21:09:29.194513   61659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:09:29.194981   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .DriverName
	I0918 21:09:29.195155   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .DriverName
	I0918 21:09:29.195254   61659 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:09:29.195807   61659 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:09:29.195839   61659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:09:29.197102   61659 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0918 21:09:29.197111   61659 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 21:09:29.198425   61659 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0918 21:09:29.198458   61659 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0918 21:09:29.198486   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHHostname
	I0918 21:09:29.198589   61659 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0918 21:09:29.198605   61659 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0918 21:09:29.198622   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHHostname
	I0918 21:09:29.202110   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:09:29.202236   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:09:29.202634   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:39:06", ip: ""} in network mk-default-k8s-diff-port-828868: {Iface:virbr4 ExpiryTime:2024-09-18 22:04:19 +0000 UTC Type:0 Mac:52:54:00:c0:39:06 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:default-k8s-diff-port-828868 Clientid:01:52:54:00:c0:39:06}
	I0918 21:09:29.202656   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:39:06", ip: ""} in network mk-default-k8s-diff-port-828868: {Iface:virbr4 ExpiryTime:2024-09-18 22:04:19 +0000 UTC Type:0 Mac:52:54:00:c0:39:06 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:default-k8s-diff-port-828868 Clientid:01:52:54:00:c0:39:06}
	I0918 21:09:29.202661   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined IP address 192.168.50.109 and MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:09:29.202677   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined IP address 192.168.50.109 and MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:09:29.202895   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHPort
	I0918 21:09:29.202942   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHPort
	I0918 21:09:29.203084   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHKeyPath
	I0918 21:09:29.203129   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHKeyPath
	I0918 21:09:29.203268   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHUsername
	I0918 21:09:29.203275   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHUsername
	I0918 21:09:29.203393   61659 sshutil.go:53] new ssh client: &{IP:192.168.50.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/default-k8s-diff-port-828868/id_rsa Username:docker}
	I0918 21:09:29.203407   61659 sshutil.go:53] new ssh client: &{IP:192.168.50.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/default-k8s-diff-port-828868/id_rsa Username:docker}
	I0918 21:09:29.215178   61659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41565
	I0918 21:09:29.215727   61659 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:09:29.216301   61659 main.go:141] libmachine: Using API Version  1
	I0918 21:09:29.216325   61659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:09:29.216669   61659 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:09:29.216873   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetState
	I0918 21:09:29.218689   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .DriverName
	I0918 21:09:29.218980   61659 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0918 21:09:29.218994   61659 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0918 21:09:29.219009   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHHostname
	I0918 21:09:29.222542   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:09:29.222963   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:39:06", ip: ""} in network mk-default-k8s-diff-port-828868: {Iface:virbr4 ExpiryTime:2024-09-18 22:04:19 +0000 UTC Type:0 Mac:52:54:00:c0:39:06 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:default-k8s-diff-port-828868 Clientid:01:52:54:00:c0:39:06}
	I0918 21:09:29.222985   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined IP address 192.168.50.109 and MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:09:29.223398   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHPort
	I0918 21:09:29.223632   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHKeyPath
	I0918 21:09:29.223820   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHUsername
	I0918 21:09:29.224004   61659 sshutil.go:53] new ssh client: &{IP:192.168.50.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/default-k8s-diff-port-828868/id_rsa Username:docker}
	I0918 21:09:29.360595   61659 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0918 21:09:29.381254   61659 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-828868" to be "Ready" ...
	I0918 21:09:29.390526   61659 node_ready.go:49] node "default-k8s-diff-port-828868" has status "Ready":"True"
	I0918 21:09:29.390554   61659 node_ready.go:38] duration metric: took 9.264338ms for node "default-k8s-diff-port-828868" to be "Ready" ...
	I0918 21:09:29.390565   61659 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0918 21:09:29.395433   61659 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-828868" in "kube-system" namespace to be "Ready" ...
	I0918 21:09:29.468492   61659 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0918 21:09:29.526515   61659 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0918 21:09:29.527137   61659 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0918 21:09:29.527162   61659 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0918 21:09:29.570619   61659 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0918 21:09:29.570651   61659 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0918 21:09:29.631944   61659 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0918 21:09:29.631975   61659 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0918 21:09:29.653905   61659 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0918 21:09:30.402107   61659 main.go:141] libmachine: Making call to close driver server
	I0918 21:09:30.402145   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .Close
	I0918 21:09:30.402142   61659 main.go:141] libmachine: Making call to close driver server
	I0918 21:09:30.402167   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .Close
	I0918 21:09:30.402466   61659 main.go:141] libmachine: Successfully made call to close driver server
	I0918 21:09:30.402480   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | Closing plugin on server side
	I0918 21:09:30.402493   61659 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 21:09:30.402503   61659 main.go:141] libmachine: Making call to close driver server
	I0918 21:09:30.402512   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .Close
	I0918 21:09:30.402537   61659 main.go:141] libmachine: Successfully made call to close driver server
	I0918 21:09:30.402546   61659 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 21:09:30.402555   61659 main.go:141] libmachine: Making call to close driver server
	I0918 21:09:30.402571   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .Close
	I0918 21:09:30.402733   61659 main.go:141] libmachine: Successfully made call to close driver server
	I0918 21:09:30.402773   61659 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 21:09:30.402921   61659 main.go:141] libmachine: Successfully made call to close driver server
	I0918 21:09:30.402941   61659 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 21:09:30.435323   61659 main.go:141] libmachine: Making call to close driver server
	I0918 21:09:30.435366   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .Close
	I0918 21:09:30.435659   61659 main.go:141] libmachine: Successfully made call to close driver server
	I0918 21:09:30.435683   61659 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 21:09:30.975630   61659 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.321677798s)
	I0918 21:09:30.975716   61659 main.go:141] libmachine: Making call to close driver server
	I0918 21:09:30.975733   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .Close
	I0918 21:09:30.976074   61659 main.go:141] libmachine: Successfully made call to close driver server
	I0918 21:09:30.976094   61659 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 21:09:30.976105   61659 main.go:141] libmachine: Making call to close driver server
	I0918 21:09:30.976113   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .Close
	I0918 21:09:30.976369   61659 main.go:141] libmachine: Successfully made call to close driver server
	I0918 21:09:30.976395   61659 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 21:09:30.976406   61659 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-828868"
	I0918 21:09:30.978345   61659 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0918 21:09:26.857486   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:29.356533   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:31.358269   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:30.979731   61659 addons.go:510] duration metric: took 1.833970994s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0918 21:09:31.403620   61659 pod_ready.go:103] pod "etcd-default-k8s-diff-port-828868" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:33.857960   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:36.357454   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:33.902436   61659 pod_ready.go:103] pod "etcd-default-k8s-diff-port-828868" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:36.401889   61659 pod_ready.go:103] pod "etcd-default-k8s-diff-port-828868" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:36.902002   61659 pod_ready.go:93] pod "etcd-default-k8s-diff-port-828868" in "kube-system" namespace has status "Ready":"True"
	I0918 21:09:36.902026   61659 pod_ready.go:82] duration metric: took 7.506563242s for pod "etcd-default-k8s-diff-port-828868" in "kube-system" namespace to be "Ready" ...
	I0918 21:09:36.902035   61659 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-828868" in "kube-system" namespace to be "Ready" ...
	I0918 21:09:36.907689   61659 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-828868" in "kube-system" namespace has status "Ready":"True"
	I0918 21:09:36.907713   61659 pod_ready.go:82] duration metric: took 5.672631ms for pod "kube-apiserver-default-k8s-diff-port-828868" in "kube-system" namespace to be "Ready" ...
	I0918 21:09:36.907722   61659 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-828868" in "kube-system" namespace to be "Ready" ...
	I0918 21:09:38.914521   61659 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-828868" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:39.414168   61659 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-828868" in "kube-system" namespace has status "Ready":"True"
	I0918 21:09:39.414196   61659 pod_ready.go:82] duration metric: took 2.506467297s for pod "kube-controller-manager-default-k8s-diff-port-828868" in "kube-system" namespace to be "Ready" ...
	I0918 21:09:39.414207   61659 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-hf5mm" in "kube-system" namespace to be "Ready" ...
	I0918 21:09:39.419030   61659 pod_ready.go:93] pod "kube-proxy-hf5mm" in "kube-system" namespace has status "Ready":"True"
	I0918 21:09:39.419053   61659 pod_ready.go:82] duration metric: took 4.838856ms for pod "kube-proxy-hf5mm" in "kube-system" namespace to be "Ready" ...
	I0918 21:09:39.419061   61659 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-828868" in "kube-system" namespace to be "Ready" ...
	I0918 21:09:39.423321   61659 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-828868" in "kube-system" namespace has status "Ready":"True"
	I0918 21:09:39.423341   61659 pod_ready.go:82] duration metric: took 4.274601ms for pod "kube-scheduler-default-k8s-diff-port-828868" in "kube-system" namespace to be "Ready" ...
	I0918 21:09:39.423348   61659 pod_ready.go:39] duration metric: took 10.03277208s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0918 21:09:39.423360   61659 api_server.go:52] waiting for apiserver process to appear ...
	I0918 21:09:39.423407   61659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:09:39.438272   61659 api_server.go:72] duration metric: took 10.292559807s to wait for apiserver process to appear ...
	I0918 21:09:39.438297   61659 api_server.go:88] waiting for apiserver healthz status ...
	I0918 21:09:39.438315   61659 api_server.go:253] Checking apiserver healthz at https://192.168.50.109:8444/healthz ...
	I0918 21:09:39.443342   61659 api_server.go:279] https://192.168.50.109:8444/healthz returned 200:
	ok
	I0918 21:09:39.444238   61659 api_server.go:141] control plane version: v1.31.1
	I0918 21:09:39.444262   61659 api_server.go:131] duration metric: took 5.958748ms to wait for apiserver health ...
	I0918 21:09:39.444270   61659 system_pods.go:43] waiting for kube-system pods to appear ...
	I0918 21:09:39.449914   61659 system_pods.go:59] 9 kube-system pods found
	I0918 21:09:39.449938   61659 system_pods.go:61] "coredns-7c65d6cfc9-8gz5v" [bfcbe99d-9a3d-4da8-a976-58cb9d9bf7ac] Running
	I0918 21:09:39.449942   61659 system_pods.go:61] "coredns-7c65d6cfc9-shx5p" [2d6d25ab-9a90-490a-911b-bf396605fa88] Running
	I0918 21:09:39.449947   61659 system_pods.go:61] "etcd-default-k8s-diff-port-828868" [d9772e35-df33-4b90-af88-7479a81776a2] Running
	I0918 21:09:39.449950   61659 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-828868" [9ca05dc8-1e15-4f6e-9855-1e1b6376fbfc] Running
	I0918 21:09:39.449954   61659 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-828868" [b4196fd3-4882-4e31-b418-2f5f24d2610d] Running
	I0918 21:09:39.449957   61659 system_pods.go:61] "kube-proxy-hf5mm" [3fb0a166-a925-4486-9695-6db05ae704b8] Running
	I0918 21:09:39.449962   61659 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-828868" [161b8693-3400-43a4-b361-cce5749dd2eb] Running
	I0918 21:09:39.449969   61659 system_pods.go:61] "metrics-server-6867b74b74-hdt52" [bf3aa7d0-a121-4a2c-92dd-2c79c6cbaa4a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0918 21:09:39.449976   61659 system_pods.go:61] "storage-provisioner" [8b4e1077-e23f-4262-b83e-989506798531] Running
	I0918 21:09:39.449983   61659 system_pods.go:74] duration metric: took 5.708013ms to wait for pod list to return data ...
	I0918 21:09:39.449992   61659 default_sa.go:34] waiting for default service account to be created ...
	I0918 21:09:39.453256   61659 default_sa.go:45] found service account: "default"
	I0918 21:09:39.453278   61659 default_sa.go:55] duration metric: took 3.281012ms for default service account to be created ...
	I0918 21:09:39.453287   61659 system_pods.go:116] waiting for k8s-apps to be running ...
	I0918 21:09:39.502200   61659 system_pods.go:86] 9 kube-system pods found
	I0918 21:09:39.502231   61659 system_pods.go:89] "coredns-7c65d6cfc9-8gz5v" [bfcbe99d-9a3d-4da8-a976-58cb9d9bf7ac] Running
	I0918 21:09:39.502237   61659 system_pods.go:89] "coredns-7c65d6cfc9-shx5p" [2d6d25ab-9a90-490a-911b-bf396605fa88] Running
	I0918 21:09:39.502241   61659 system_pods.go:89] "etcd-default-k8s-diff-port-828868" [d9772e35-df33-4b90-af88-7479a81776a2] Running
	I0918 21:09:39.502246   61659 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-828868" [9ca05dc8-1e15-4f6e-9855-1e1b6376fbfc] Running
	I0918 21:09:39.502250   61659 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-828868" [b4196fd3-4882-4e31-b418-2f5f24d2610d] Running
	I0918 21:09:39.502253   61659 system_pods.go:89] "kube-proxy-hf5mm" [3fb0a166-a925-4486-9695-6db05ae704b8] Running
	I0918 21:09:39.502256   61659 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-828868" [161b8693-3400-43a4-b361-cce5749dd2eb] Running
	I0918 21:09:39.502262   61659 system_pods.go:89] "metrics-server-6867b74b74-hdt52" [bf3aa7d0-a121-4a2c-92dd-2c79c6cbaa4a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0918 21:09:39.502266   61659 system_pods.go:89] "storage-provisioner" [8b4e1077-e23f-4262-b83e-989506798531] Running
	I0918 21:09:39.502276   61659 system_pods.go:126] duration metric: took 48.981872ms to wait for k8s-apps to be running ...
	I0918 21:09:39.502291   61659 system_svc.go:44] waiting for kubelet service to be running ....
	I0918 21:09:39.502367   61659 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0918 21:09:39.517514   61659 system_svc.go:56] duration metric: took 15.213443ms WaitForService to wait for kubelet
	I0918 21:09:39.517549   61659 kubeadm.go:582] duration metric: took 10.37183977s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0918 21:09:39.517573   61659 node_conditions.go:102] verifying NodePressure condition ...
	I0918 21:09:39.700593   61659 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0918 21:09:39.700616   61659 node_conditions.go:123] node cpu capacity is 2
	I0918 21:09:39.700626   61659 node_conditions.go:105] duration metric: took 183.048537ms to run NodePressure ...
	I0918 21:09:39.700637   61659 start.go:241] waiting for startup goroutines ...
	I0918 21:09:39.700643   61659 start.go:246] waiting for cluster config update ...
	I0918 21:09:39.700653   61659 start.go:255] writing updated cluster config ...
	I0918 21:09:39.700899   61659 ssh_runner.go:195] Run: rm -f paused
	I0918 21:09:39.750890   61659 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0918 21:09:39.753015   61659 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-828868" cluster and "default" namespace by default
	I0918 21:09:38.857481   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:41.356307   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:44.581125   61740 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.200138695s)
	I0918 21:09:44.581198   61740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0918 21:09:44.597051   61740 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0918 21:09:44.607195   61740 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0918 21:09:44.617135   61740 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0918 21:09:44.617160   61740 kubeadm.go:157] found existing configuration files:
	
	I0918 21:09:44.617203   61740 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0918 21:09:44.626216   61740 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0918 21:09:44.626278   61740 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0918 21:09:44.635161   61740 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0918 21:09:44.643767   61740 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0918 21:09:44.643828   61740 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0918 21:09:44.652663   61740 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0918 21:09:44.662045   61740 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0918 21:09:44.662107   61740 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0918 21:09:44.671165   61740 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0918 21:09:44.680397   61740 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0918 21:09:44.680469   61740 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0918 21:09:44.689168   61740 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0918 21:09:44.733425   61740 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0918 21:09:44.733528   61740 kubeadm.go:310] [preflight] Running pre-flight checks
	I0918 21:09:44.846369   61740 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0918 21:09:44.846477   61740 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0918 21:09:44.846612   61740 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0918 21:09:44.855581   61740 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0918 21:09:44.857599   61740 out.go:235]   - Generating certificates and keys ...
	I0918 21:09:44.857709   61740 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0918 21:09:44.857777   61740 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0918 21:09:44.857851   61740 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0918 21:09:44.857942   61740 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0918 21:09:44.858061   61740 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0918 21:09:44.858137   61740 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0918 21:09:44.858243   61740 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0918 21:09:44.858339   61740 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0918 21:09:44.858409   61740 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0918 21:09:44.858509   61740 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0918 21:09:44.858547   61740 kubeadm.go:310] [certs] Using the existing "sa" key
	I0918 21:09:44.858615   61740 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0918 21:09:45.048967   61740 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0918 21:09:45.229640   61740 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0918 21:09:45.397078   61740 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0918 21:09:45.722116   61740 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0918 21:09:45.850285   61740 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0918 21:09:45.850902   61740 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0918 21:09:45.853909   61740 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0918 21:09:43.357136   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:45.858056   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:45.855803   61740 out.go:235]   - Booting up control plane ...
	I0918 21:09:45.855931   61740 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0918 21:09:45.857227   61740 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0918 21:09:45.858855   61740 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0918 21:09:45.877299   61740 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0918 21:09:45.883953   61740 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0918 21:09:45.884043   61740 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0918 21:09:46.015368   61740 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0918 21:09:46.015509   61740 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0918 21:09:47.016371   61740 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001062473s
	I0918 21:09:47.016465   61740 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0918 21:09:48.357057   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:50.856124   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:51.518808   61740 kubeadm.go:310] [api-check] The API server is healthy after 4.502250914s
	I0918 21:09:51.532148   61740 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0918 21:09:51.549560   61740 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0918 21:09:51.579801   61740 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0918 21:09:51.580053   61740 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-255556 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0918 21:09:51.598605   61740 kubeadm.go:310] [bootstrap-token] Using token: iilbxo.n0c6mbjmeqehlfso
	I0918 21:09:51.600035   61740 out.go:235]   - Configuring RBAC rules ...
	I0918 21:09:51.600200   61740 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0918 21:09:51.614672   61740 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0918 21:09:51.626186   61740 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0918 21:09:51.629722   61740 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0918 21:09:51.634757   61740 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0918 21:09:51.642778   61740 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0918 21:09:51.931051   61740 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0918 21:09:52.359085   61740 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0918 21:09:52.930191   61740 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0918 21:09:52.931033   61740 kubeadm.go:310] 
	I0918 21:09:52.931100   61740 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0918 21:09:52.931108   61740 kubeadm.go:310] 
	I0918 21:09:52.931178   61740 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0918 21:09:52.931186   61740 kubeadm.go:310] 
	I0918 21:09:52.931208   61740 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0918 21:09:52.931313   61740 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0918 21:09:52.931400   61740 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0918 21:09:52.931435   61740 kubeadm.go:310] 
	I0918 21:09:52.931524   61740 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0918 21:09:52.931537   61740 kubeadm.go:310] 
	I0918 21:09:52.931601   61740 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0918 21:09:52.931627   61740 kubeadm.go:310] 
	I0918 21:09:52.931721   61740 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0918 21:09:52.931825   61740 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0918 21:09:52.931896   61740 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0918 21:09:52.931903   61740 kubeadm.go:310] 
	I0918 21:09:52.931974   61740 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0918 21:09:52.932073   61740 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0918 21:09:52.932081   61740 kubeadm.go:310] 
	I0918 21:09:52.932154   61740 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token iilbxo.n0c6mbjmeqehlfso \
	I0918 21:09:52.932243   61740 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:8ad46bf288e8cc2226f5a929a768e6c95cddecc97ffc4016be27ef0987b1e36f \
	I0918 21:09:52.932289   61740 kubeadm.go:310] 	--control-plane 
	I0918 21:09:52.932296   61740 kubeadm.go:310] 
	I0918 21:09:52.932365   61740 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0918 21:09:52.932372   61740 kubeadm.go:310] 
	I0918 21:09:52.932438   61740 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token iilbxo.n0c6mbjmeqehlfso \
	I0918 21:09:52.932568   61740 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:8ad46bf288e8cc2226f5a929a768e6c95cddecc97ffc4016be27ef0987b1e36f 
	I0918 21:09:52.934280   61740 kubeadm.go:310] W0918 21:09:44.705437    2512 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0918 21:09:52.934656   61740 kubeadm.go:310] W0918 21:09:44.706219    2512 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0918 21:09:52.934841   61740 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
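Both warnings are actionable, and the fixes are the ones kubeadm itself suggests; a sketch, assuming the old config is the file minikube passed to kubeadm init above (/var/tmp/minikube/kubeadm.yaml) and the output path is arbitrary:

	# rewrite the deprecated kubeadm.k8s.io/v1beta3 spec with the current API version
	sudo kubeadm config migrate --old-config /var/tmp/minikube/kubeadm.yaml --new-config /tmp/kubeadm-migrated.yaml
	# enable the kubelet unit so it is started on boot
	sudo systemctl enable kubelet.service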
	I0918 21:09:52.934861   61740 cni.go:84] Creating CNI manager for ""
	I0918 21:09:52.934871   61740 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0918 21:09:52.937656   61740 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0918 21:09:52.939150   61740 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0918 21:09:52.950774   61740 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
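The 496-byte conflist itself is not printed in the log. Purely as an illustration of what a bridge CNI config at that path typically looks like (the subnet and field values below are generic assumptions, not minikube's actual file):

	sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<-'EOF'
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF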
	I0918 21:09:52.973081   61740 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0918 21:09:52.973161   61740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 21:09:52.973210   61740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-255556 minikube.k8s.io/updated_at=2024_09_18T21_09_52_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=85073601a832bd4bbda5d11fa91feafff6ec6b91 minikube.k8s.io/name=embed-certs-255556 minikube.k8s.io/primary=true
	I0918 21:09:53.012402   61740 ops.go:34] apiserver oom_adj: -16
	I0918 21:09:53.180983   61740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 21:09:52.857161   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:55.357515   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:53.681852   61740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 21:09:54.181892   61740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 21:09:54.681768   61740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 21:09:55.181353   61740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 21:09:55.681336   61740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 21:09:56.181389   61740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 21:09:56.681574   61740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 21:09:57.181050   61740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 21:09:57.258766   61740 kubeadm.go:1113] duration metric: took 4.285672952s to wait for elevateKubeSystemPrivileges
	I0918 21:09:57.258809   61740 kubeadm.go:394] duration metric: took 5m2.572577294s to StartCluster
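The 4.3 s elevateKubeSystemPrivileges step is the block of commands above: read the apiserver's oom_adj, bind cluster-admin to the kube-system default service account, label the node, then poll kubectl get sa until the default service account exists. A condensed sketch using the binary and kubeconfig paths from the log:

	KUBECTL=/var/lib/minikube/binaries/v1.31.1/kubectl
	KUBECONFIG_PATH=/var/lib/minikube/kubeconfig
	# grant cluster-admin to the kube-system default service account
	sudo "$KUBECTL" --kubeconfig="$KUBECONFIG_PATH" create clusterrolebinding minikube-rbac \
	  --clusterrole=cluster-admin --serviceaccount=kube-system:default
	# wait until the default service account has been created by the controller manager
	until sudo "$KUBECTL" --kubeconfig="$KUBECONFIG_PATH" get sa default >/dev/null 2>&1; do sleep 0.5; done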
	I0918 21:09:57.258831   61740 settings.go:142] acquiring lock: {Name:mk6ae95a69dcbe00c28846409e9f46945a88de2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 21:09:57.258925   61740 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19667-7671/kubeconfig
	I0918 21:09:57.260757   61740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/kubeconfig: {Name:mk3a3eecadb0164dfdd5d3c4392b5b473a2a9bb0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 21:09:57.261072   61740 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.21 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0918 21:09:57.261168   61740 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0918 21:09:57.261275   61740 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-255556"
	I0918 21:09:57.261302   61740 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-255556"
	W0918 21:09:57.261314   61740 addons.go:243] addon storage-provisioner should already be in state true
	I0918 21:09:57.261344   61740 host.go:66] Checking if "embed-certs-255556" exists ...
	I0918 21:09:57.261337   61740 addons.go:69] Setting default-storageclass=true in profile "embed-certs-255556"
	I0918 21:09:57.261366   61740 config.go:182] Loaded profile config "embed-certs-255556": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0918 21:09:57.261363   61740 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-255556"
	I0918 21:09:57.261354   61740 addons.go:69] Setting metrics-server=true in profile "embed-certs-255556"
	I0918 21:09:57.261413   61740 addons.go:234] Setting addon metrics-server=true in "embed-certs-255556"
	W0918 21:09:57.261423   61740 addons.go:243] addon metrics-server should already be in state true
	I0918 21:09:57.261450   61740 host.go:66] Checking if "embed-certs-255556" exists ...
	I0918 21:09:57.261751   61740 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:09:57.261773   61740 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:09:57.261797   61740 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:09:57.261805   61740 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:09:57.261827   61740 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:09:57.261913   61740 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:09:57.263016   61740 out.go:177] * Verifying Kubernetes components...
	I0918 21:09:57.264732   61740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 21:09:57.279143   61740 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41499
	I0918 21:09:57.279741   61740 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38525
	I0918 21:09:57.279948   61740 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:09:57.280150   61740 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:09:57.280518   61740 main.go:141] libmachine: Using API Version  1
	I0918 21:09:57.280536   61740 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:09:57.280662   61740 main.go:141] libmachine: Using API Version  1
	I0918 21:09:57.280699   61740 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:09:57.280899   61740 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:09:57.281014   61740 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:09:57.281224   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetState
	I0918 21:09:57.281401   61740 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45081
	I0918 21:09:57.281609   61740 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:09:57.281669   61740 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:09:57.281824   61740 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:09:57.282291   61740 main.go:141] libmachine: Using API Version  1
	I0918 21:09:57.282316   61740 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:09:57.282655   61740 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:09:57.283166   61740 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:09:57.283198   61740 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:09:57.284993   61740 addons.go:234] Setting addon default-storageclass=true in "embed-certs-255556"
	W0918 21:09:57.285013   61740 addons.go:243] addon default-storageclass should already be in state true
	I0918 21:09:57.285042   61740 host.go:66] Checking if "embed-certs-255556" exists ...
	I0918 21:09:57.285400   61740 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:09:57.285441   61740 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:09:57.298996   61740 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39709
	I0918 21:09:57.299572   61740 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:09:57.300427   61740 main.go:141] libmachine: Using API Version  1
	I0918 21:09:57.300453   61740 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:09:57.300865   61740 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:09:57.301062   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetState
	I0918 21:09:57.301827   61740 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40441
	I0918 21:09:57.302410   61740 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:09:57.302948   61740 main.go:141] libmachine: Using API Version  1
	I0918 21:09:57.302968   61740 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:09:57.303284   61740 main.go:141] libmachine: (embed-certs-255556) Calling .DriverName
	I0918 21:09:57.303333   61740 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:09:57.303512   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetState
	I0918 21:09:57.304409   61740 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43125
	I0918 21:09:57.304836   61740 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:09:57.305379   61740 main.go:141] libmachine: Using API Version  1
	I0918 21:09:57.305393   61740 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:09:57.305423   61740 main.go:141] libmachine: (embed-certs-255556) Calling .DriverName
	I0918 21:09:57.305449   61740 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 21:09:57.305705   61740 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:09:57.306221   61740 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:09:57.306270   61740 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:09:57.306972   61740 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0918 21:09:57.307226   61740 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0918 21:09:57.307247   61740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0918 21:09:57.307261   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHHostname
	I0918 21:09:57.308757   61740 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0918 21:09:57.308778   61740 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0918 21:09:57.308798   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHHostname
	I0918 21:09:57.311608   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:09:57.312311   61740 main.go:141] libmachine: (embed-certs-255556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:c2:b7", ip: ""} in network mk-embed-certs-255556: {Iface:virbr1 ExpiryTime:2024-09-18 22:04:39 +0000 UTC Type:0 Mac:52:54:00:e8:c2:b7 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:embed-certs-255556 Clientid:01:52:54:00:e8:c2:b7}
	I0918 21:09:57.312346   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined IP address 192.168.39.21 and MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:09:57.312529   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHPort
	I0918 21:09:57.313308   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:09:57.313344   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHKeyPath
	I0918 21:09:57.313533   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHUsername
	I0918 21:09:57.313707   61740 sshutil.go:53] new ssh client: &{IP:192.168.39.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/embed-certs-255556/id_rsa Username:docker}
	I0918 21:09:57.313964   61740 main.go:141] libmachine: (embed-certs-255556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:c2:b7", ip: ""} in network mk-embed-certs-255556: {Iface:virbr1 ExpiryTime:2024-09-18 22:04:39 +0000 UTC Type:0 Mac:52:54:00:e8:c2:b7 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:embed-certs-255556 Clientid:01:52:54:00:e8:c2:b7}
	I0918 21:09:57.313991   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined IP address 192.168.39.21 and MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:09:57.314181   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHPort
	I0918 21:09:57.314357   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHKeyPath
	I0918 21:09:57.314517   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHUsername
	I0918 21:09:57.314644   61740 sshutil.go:53] new ssh client: &{IP:192.168.39.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/embed-certs-255556/id_rsa Username:docker}
	I0918 21:09:57.325307   61740 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45641
	I0918 21:09:57.325800   61740 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:09:57.326390   61740 main.go:141] libmachine: Using API Version  1
	I0918 21:09:57.326416   61740 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:09:57.326850   61740 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:09:57.327116   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetState
	I0918 21:09:57.328954   61740 main.go:141] libmachine: (embed-certs-255556) Calling .DriverName
	I0918 21:09:57.329179   61740 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0918 21:09:57.329197   61740 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0918 21:09:57.329216   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHHostname
	I0918 21:09:57.332176   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:09:57.332608   61740 main.go:141] libmachine: (embed-certs-255556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:c2:b7", ip: ""} in network mk-embed-certs-255556: {Iface:virbr1 ExpiryTime:2024-09-18 22:04:39 +0000 UTC Type:0 Mac:52:54:00:e8:c2:b7 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:embed-certs-255556 Clientid:01:52:54:00:e8:c2:b7}
	I0918 21:09:57.332633   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined IP address 192.168.39.21 and MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:09:57.332803   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHPort
	I0918 21:09:57.332991   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHKeyPath
	I0918 21:09:57.333132   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHUsername
	I0918 21:09:57.333254   61740 sshutil.go:53] new ssh client: &{IP:192.168.39.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/embed-certs-255556/id_rsa Username:docker}
	I0918 21:09:57.463767   61740 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0918 21:09:57.480852   61740 node_ready.go:35] waiting up to 6m0s for node "embed-certs-255556" to be "Ready" ...
	I0918 21:09:57.492198   61740 node_ready.go:49] node "embed-certs-255556" has status "Ready":"True"
	I0918 21:09:57.492221   61740 node_ready.go:38] duration metric: took 11.335784ms for node "embed-certs-255556" to be "Ready" ...
	I0918 21:09:57.492229   61740 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0918 21:09:57.496607   61740 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-255556" in "kube-system" namespace to be "Ready" ...
	I0918 21:09:57.627581   61740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0918 21:09:57.631704   61740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0918 21:09:57.647778   61740 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0918 21:09:57.647799   61740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0918 21:09:57.686558   61740 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0918 21:09:57.686589   61740 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0918 21:09:57.726206   61740 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0918 21:09:57.726230   61740 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0918 21:09:57.831932   61740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0918 21:09:58.026530   61740 main.go:141] libmachine: Making call to close driver server
	I0918 21:09:58.026554   61740 main.go:141] libmachine: (embed-certs-255556) Calling .Close
	I0918 21:09:58.026862   61740 main.go:141] libmachine: Successfully made call to close driver server
	I0918 21:09:58.026885   61740 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 21:09:58.026895   61740 main.go:141] libmachine: Making call to close driver server
	I0918 21:09:58.026903   61740 main.go:141] libmachine: (embed-certs-255556) Calling .Close
	I0918 21:09:58.027205   61740 main.go:141] libmachine: Successfully made call to close driver server
	I0918 21:09:58.027260   61740 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 21:09:58.027269   61740 main.go:141] libmachine: (embed-certs-255556) DBG | Closing plugin on server side
	I0918 21:09:58.038140   61740 main.go:141] libmachine: Making call to close driver server
	I0918 21:09:58.038172   61740 main.go:141] libmachine: (embed-certs-255556) Calling .Close
	I0918 21:09:58.038506   61740 main.go:141] libmachine: Successfully made call to close driver server
	I0918 21:09:58.038555   61740 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 21:09:58.038512   61740 main.go:141] libmachine: (embed-certs-255556) DBG | Closing plugin on server side
	I0918 21:09:58.551479   61740 main.go:141] libmachine: Making call to close driver server
	I0918 21:09:58.551518   61740 main.go:141] libmachine: (embed-certs-255556) Calling .Close
	I0918 21:09:58.551851   61740 main.go:141] libmachine: Successfully made call to close driver server
	I0918 21:09:58.551870   61740 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 21:09:58.551885   61740 main.go:141] libmachine: Making call to close driver server
	I0918 21:09:58.551893   61740 main.go:141] libmachine: (embed-certs-255556) Calling .Close
	I0918 21:09:58.552242   61740 main.go:141] libmachine: Successfully made call to close driver server
	I0918 21:09:58.552307   61740 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 21:09:58.552326   61740 main.go:141] libmachine: (embed-certs-255556) DBG | Closing plugin on server side
	I0918 21:09:59.078469   61740 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.246485041s)
	I0918 21:09:59.078532   61740 main.go:141] libmachine: Making call to close driver server
	I0918 21:09:59.078550   61740 main.go:141] libmachine: (embed-certs-255556) Calling .Close
	I0918 21:09:59.078883   61740 main.go:141] libmachine: Successfully made call to close driver server
	I0918 21:09:59.078906   61740 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 21:09:59.078917   61740 main.go:141] libmachine: Making call to close driver server
	I0918 21:09:59.078924   61740 main.go:141] libmachine: (embed-certs-255556) Calling .Close
	I0918 21:09:59.079143   61740 main.go:141] libmachine: Successfully made call to close driver server
	I0918 21:09:59.079157   61740 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 21:09:59.079168   61740 addons.go:475] Verifying addon metrics-server=true in "embed-certs-255556"
	I0918 21:09:59.080861   61740 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0918 21:09:57.357619   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:59.357838   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:59.082145   61740 addons.go:510] duration metric: took 1.82098849s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
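With the manifests applied, the usual way to confirm metrics-server is actually serving is to wait for its rollout and check the aggregated APIService; a sketch, assuming the kubectl context carries the profile name and that the addon uses the conventional deployment and APIService names:

	kubectl --context embed-certs-255556 -n kube-system rollout status deploy/metrics-server --timeout=120s
	kubectl --context embed-certs-255556 get apiservice v1beta1.metrics.k8s.io
	kubectl --context embed-certs-255556 top nodes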
	I0918 21:09:59.526424   61740 pod_ready.go:93] pod "etcd-embed-certs-255556" in "kube-system" namespace has status "Ready":"True"
	I0918 21:09:59.526445   61740 pod_ready.go:82] duration metric: took 2.02981732s for pod "etcd-embed-certs-255556" in "kube-system" namespace to be "Ready" ...
	I0918 21:09:59.526455   61740 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-255556" in "kube-system" namespace to be "Ready" ...
	I0918 21:10:00.033589   61740 pod_ready.go:93] pod "kube-apiserver-embed-certs-255556" in "kube-system" namespace has status "Ready":"True"
	I0918 21:10:00.033616   61740 pod_ready.go:82] duration metric: took 507.155125ms for pod "kube-apiserver-embed-certs-255556" in "kube-system" namespace to be "Ready" ...
	I0918 21:10:00.033630   61740 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-255556" in "kube-system" namespace to be "Ready" ...
	I0918 21:10:02.039884   61740 pod_ready.go:103] pod "kube-controller-manager-embed-certs-255556" in "kube-system" namespace has status "Ready":"False"
	I0918 21:10:04.040760   61740 pod_ready.go:103] pod "kube-controller-manager-embed-certs-255556" in "kube-system" namespace has status "Ready":"False"
	I0918 21:10:04.541799   61740 pod_ready.go:93] pod "kube-controller-manager-embed-certs-255556" in "kube-system" namespace has status "Ready":"True"
	I0918 21:10:04.541821   61740 pod_ready.go:82] duration metric: took 4.508184279s for pod "kube-controller-manager-embed-certs-255556" in "kube-system" namespace to be "Ready" ...
	I0918 21:10:04.541830   61740 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-255556" in "kube-system" namespace to be "Ready" ...
	I0918 21:10:04.550008   61740 pod_ready.go:93] pod "kube-scheduler-embed-certs-255556" in "kube-system" namespace has status "Ready":"True"
	I0918 21:10:04.550038   61740 pod_ready.go:82] duration metric: took 8.201765ms for pod "kube-scheduler-embed-certs-255556" in "kube-system" namespace to be "Ready" ...
	I0918 21:10:04.550046   61740 pod_ready.go:39] duration metric: took 7.057808243s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0918 21:10:04.550060   61740 api_server.go:52] waiting for apiserver process to appear ...
	I0918 21:10:04.550110   61740 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:10:04.566882   61740 api_server.go:72] duration metric: took 7.305767858s to wait for apiserver process to appear ...
	I0918 21:10:04.566914   61740 api_server.go:88] waiting for apiserver healthz status ...
	I0918 21:10:04.566937   61740 api_server.go:253] Checking apiserver healthz at https://192.168.39.21:8443/healthz ...
	I0918 21:10:04.571495   61740 api_server.go:279] https://192.168.39.21:8443/healthz returned 200:
	ok
	I0918 21:10:04.572590   61740 api_server.go:141] control plane version: v1.31.1
	I0918 21:10:04.572618   61740 api_server.go:131] duration metric: took 5.69747ms to wait for apiserver health ...
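The healthz probe above (and the earlier kubelet check on port 10248) can be reproduced directly against the endpoints shown in the log; -k skips CA verification, which is fine for a quick liveness poke:

	# kubelet healthz, plain HTTP on localhost
	curl -sf http://127.0.0.1:10248/healthz && echo "kubelet ok"
	# apiserver healthz over TLS
	curl -skf https://192.168.39.21:8443/healthz && echo "apiserver ok"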
	I0918 21:10:04.572625   61740 system_pods.go:43] waiting for kube-system pods to appear ...
	I0918 21:10:04.578979   61740 system_pods.go:59] 9 kube-system pods found
	I0918 21:10:04.579019   61740 system_pods.go:61] "coredns-7c65d6cfc9-ptxbt" [798665e6-6f4a-4ba5-b4f9-3192d3f76f03] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0918 21:10:04.579030   61740 system_pods.go:61] "coredns-7c65d6cfc9-vgmtd" [1224ebf9-1b24-413a-b779-093acfcfb61e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0918 21:10:04.579039   61740 system_pods.go:61] "etcd-embed-certs-255556" [a796cd6d-5b0c-4f73-9dfe-23c552cb2a43] Running
	I0918 21:10:04.579046   61740 system_pods.go:61] "kube-apiserver-embed-certs-255556" [f1926542-940b-4ba9-94cf-23265d42874c] Running
	I0918 21:10:04.579051   61740 system_pods.go:61] "kube-controller-manager-embed-certs-255556" [cd0bc9bb-ca2c-435f-81d5-2d4ea9f32d85] Running
	I0918 21:10:04.579057   61740 system_pods.go:61] "kube-proxy-m7gxh" [47d72a32-7efc-4155-a890-0ddc620af6e0] Running
	I0918 21:10:04.579067   61740 system_pods.go:61] "kube-scheduler-embed-certs-255556" [091c79cd-bc76-42b3-9635-96fdbe3ecfff] Running
	I0918 21:10:04.579076   61740 system_pods.go:61] "metrics-server-6867b74b74-sr6hq" [8867f8fa-687b-4105-8ace-18af50195726] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0918 21:10:04.579085   61740 system_pods.go:61] "storage-provisioner" [dcc9b789-237c-4d92-96c6-2c23d2c401c0] Running
	I0918 21:10:04.579095   61740 system_pods.go:74] duration metric: took 6.462809ms to wait for pod list to return data ...
	I0918 21:10:04.579106   61740 default_sa.go:34] waiting for default service account to be created ...
	I0918 21:10:04.583020   61740 default_sa.go:45] found service account: "default"
	I0918 21:10:04.583059   61740 default_sa.go:55] duration metric: took 3.946388ms for default service account to be created ...
	I0918 21:10:04.583072   61740 system_pods.go:116] waiting for k8s-apps to be running ...
	I0918 21:10:04.589946   61740 system_pods.go:86] 9 kube-system pods found
	I0918 21:10:04.589991   61740 system_pods.go:89] "coredns-7c65d6cfc9-ptxbt" [798665e6-6f4a-4ba5-b4f9-3192d3f76f03] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0918 21:10:04.590004   61740 system_pods.go:89] "coredns-7c65d6cfc9-vgmtd" [1224ebf9-1b24-413a-b779-093acfcfb61e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0918 21:10:04.590012   61740 system_pods.go:89] "etcd-embed-certs-255556" [a796cd6d-5b0c-4f73-9dfe-23c552cb2a43] Running
	I0918 21:10:04.590019   61740 system_pods.go:89] "kube-apiserver-embed-certs-255556" [f1926542-940b-4ba9-94cf-23265d42874c] Running
	I0918 21:10:04.590025   61740 system_pods.go:89] "kube-controller-manager-embed-certs-255556" [cd0bc9bb-ca2c-435f-81d5-2d4ea9f32d85] Running
	I0918 21:10:04.590030   61740 system_pods.go:89] "kube-proxy-m7gxh" [47d72a32-7efc-4155-a890-0ddc620af6e0] Running
	I0918 21:10:04.590035   61740 system_pods.go:89] "kube-scheduler-embed-certs-255556" [091c79cd-bc76-42b3-9635-96fdbe3ecfff] Running
	I0918 21:10:04.590044   61740 system_pods.go:89] "metrics-server-6867b74b74-sr6hq" [8867f8fa-687b-4105-8ace-18af50195726] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0918 21:10:04.590051   61740 system_pods.go:89] "storage-provisioner" [dcc9b789-237c-4d92-96c6-2c23d2c401c0] Running
	I0918 21:10:04.590061   61740 system_pods.go:126] duration metric: took 6.981726ms to wait for k8s-apps to be running ...
	I0918 21:10:04.590070   61740 system_svc.go:44] waiting for kubelet service to be running ....
	I0918 21:10:04.590127   61740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0918 21:10:04.605893   61740 system_svc.go:56] duration metric: took 15.815591ms WaitForService to wait for kubelet
	I0918 21:10:04.605921   61740 kubeadm.go:582] duration metric: took 7.344815015s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0918 21:10:04.605939   61740 node_conditions.go:102] verifying NodePressure condition ...
	I0918 21:10:04.609551   61740 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0918 21:10:04.609577   61740 node_conditions.go:123] node cpu capacity is 2
	I0918 21:10:04.609588   61740 node_conditions.go:105] duration metric: took 3.645116ms to run NodePressure ...
	I0918 21:10:04.609598   61740 start.go:241] waiting for startup goroutines ...
	I0918 21:10:04.609605   61740 start.go:246] waiting for cluster config update ...
	I0918 21:10:04.609614   61740 start.go:255] writing updated cluster config ...
	I0918 21:10:04.609870   61740 ssh_runner.go:195] Run: rm -f paused
	I0918 21:10:04.664479   61740 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0918 21:10:04.666589   61740 out.go:177] * Done! kubectl is now configured to use "embed-certs-255556" cluster and "default" namespace by default
	I0918 21:10:01.858109   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:10:03.356912   61273 pod_ready.go:82] duration metric: took 4m0.006778464s for pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace to be "Ready" ...
	E0918 21:10:03.356944   61273 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0918 21:10:03.356952   61273 pod_ready.go:39] duration metric: took 4m0.807781101s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
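When a pod_ready wait times out like this, the next step is normally to inspect the pod and recent namespace events directly. The kubectl context for this profile is not shown in this part of the log, so it is left as a placeholder below:

	# <context> is the profile whose metrics-server pod never became Ready
	kubectl --context <context> -n kube-system describe pod metrics-server-6867b74b74-n27vc
	kubectl --context <context> -n kube-system get events --sort-by=.lastTimestamp | tail -n 20
	kubectl --context <context> -n kube-system logs deploy/metrics-server --tail=50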
	I0918 21:10:03.356967   61273 api_server.go:52] waiting for apiserver process to appear ...
	I0918 21:10:03.356994   61273 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:10:03.357047   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:10:03.410066   61273 cri.go:89] found id: "a70652dce4d80c6fac020d63cff7054a835d183ac4b910157a5bf4eb8cabcaa1"
	I0918 21:10:03.410096   61273 cri.go:89] found id: ""
	I0918 21:10:03.410104   61273 logs.go:276] 1 containers: [a70652dce4d80c6fac020d63cff7054a835d183ac4b910157a5bf4eb8cabcaa1]
	I0918 21:10:03.410168   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:03.414236   61273 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:10:03.414309   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:10:03.449405   61273 cri.go:89] found id: "a913074a00723bc43f97ba625ebbdfded7dca1ce664ace9a13173adb96d2068f"
	I0918 21:10:03.449426   61273 cri.go:89] found id: ""
	I0918 21:10:03.449434   61273 logs.go:276] 1 containers: [a913074a00723bc43f97ba625ebbdfded7dca1ce664ace9a13173adb96d2068f]
	I0918 21:10:03.449492   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:03.453335   61273 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:10:03.453403   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:10:03.487057   61273 cri.go:89] found id: "76b9e08a213468abdd23d7b3998b55d6544e2fbae9205ec6076ef0cd7a80e7be"
	I0918 21:10:03.487081   61273 cri.go:89] found id: ""
	I0918 21:10:03.487089   61273 logs.go:276] 1 containers: [76b9e08a213468abdd23d7b3998b55d6544e2fbae9205ec6076ef0cd7a80e7be]
	I0918 21:10:03.487137   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:03.491027   61273 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:10:03.491101   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:10:03.529636   61273 cri.go:89] found id: "c372970fdf265433e1668377d0214de6608cb39ebfd706cfad9624cba2f9b481"
	I0918 21:10:03.529665   61273 cri.go:89] found id: ""
	I0918 21:10:03.529675   61273 logs.go:276] 1 containers: [c372970fdf265433e1668377d0214de6608cb39ebfd706cfad9624cba2f9b481]
	I0918 21:10:03.529738   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:03.535042   61273 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:10:03.535121   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:10:03.572913   61273 cri.go:89] found id: "0257280a0d21d52e2a996c2f717880bdb9d9f6fe90a4b82cc62222c941ba7d84"
	I0918 21:10:03.572942   61273 cri.go:89] found id: ""
	I0918 21:10:03.572952   61273 logs.go:276] 1 containers: [0257280a0d21d52e2a996c2f717880bdb9d9f6fe90a4b82cc62222c941ba7d84]
	I0918 21:10:03.573012   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:03.576945   61273 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:10:03.577021   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:10:03.612785   61273 cri.go:89] found id: "785dc83056153e5eeb22216ac7520f529b71f1b3ae42e9540a5c8930f1fe62c2"
	I0918 21:10:03.612805   61273 cri.go:89] found id: ""
	I0918 21:10:03.612812   61273 logs.go:276] 1 containers: [785dc83056153e5eeb22216ac7520f529b71f1b3ae42e9540a5c8930f1fe62c2]
	I0918 21:10:03.612868   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:03.616855   61273 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:10:03.616924   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:10:03.650330   61273 cri.go:89] found id: ""
	I0918 21:10:03.650359   61273 logs.go:276] 0 containers: []
	W0918 21:10:03.650370   61273 logs.go:278] No container was found matching "kindnet"
	I0918 21:10:03.650378   61273 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0918 21:10:03.650446   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0918 21:10:03.698078   61273 cri.go:89] found id: "b44d6f4b44928aa498c4375f783c165714b4a5959a1fa709712ad39a412d6d9f"
	I0918 21:10:03.698106   61273 cri.go:89] found id: "38c14df0554157969a97390590d6e9c46765fd6fc3e286f7d2e3094838e0aff5"
	I0918 21:10:03.698113   61273 cri.go:89] found id: ""
	I0918 21:10:03.698122   61273 logs.go:276] 2 containers: [b44d6f4b44928aa498c4375f783c165714b4a5959a1fa709712ad39a412d6d9f 38c14df0554157969a97390590d6e9c46765fd6fc3e286f7d2e3094838e0aff5]
	I0918 21:10:03.698184   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:03.702311   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:03.705974   61273 logs.go:123] Gathering logs for kubelet ...
	I0918 21:10:03.705996   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:10:03.771043   61273 logs.go:123] Gathering logs for kube-scheduler [c372970fdf265433e1668377d0214de6608cb39ebfd706cfad9624cba2f9b481] ...
	I0918 21:10:03.771097   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c372970fdf265433e1668377d0214de6608cb39ebfd706cfad9624cba2f9b481"
	I0918 21:10:03.813148   61273 logs.go:123] Gathering logs for storage-provisioner [b44d6f4b44928aa498c4375f783c165714b4a5959a1fa709712ad39a412d6d9f] ...
	I0918 21:10:03.813175   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b44d6f4b44928aa498c4375f783c165714b4a5959a1fa709712ad39a412d6d9f"
	I0918 21:10:03.864553   61273 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:10:03.864580   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:10:04.345484   61273 logs.go:123] Gathering logs for container status ...
	I0918 21:10:04.345531   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:10:04.390777   61273 logs.go:123] Gathering logs for dmesg ...
	I0918 21:10:04.390818   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:10:04.409877   61273 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:10:04.409918   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 21:10:04.536579   61273 logs.go:123] Gathering logs for kube-apiserver [a70652dce4d80c6fac020d63cff7054a835d183ac4b910157a5bf4eb8cabcaa1] ...
	I0918 21:10:04.536609   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a70652dce4d80c6fac020d63cff7054a835d183ac4b910157a5bf4eb8cabcaa1"
	I0918 21:10:04.595640   61273 logs.go:123] Gathering logs for etcd [a913074a00723bc43f97ba625ebbdfded7dca1ce664ace9a13173adb96d2068f] ...
	I0918 21:10:04.595680   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a913074a00723bc43f97ba625ebbdfded7dca1ce664ace9a13173adb96d2068f"
	I0918 21:10:04.642332   61273 logs.go:123] Gathering logs for coredns [76b9e08a213468abdd23d7b3998b55d6544e2fbae9205ec6076ef0cd7a80e7be] ...
	I0918 21:10:04.642377   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 76b9e08a213468abdd23d7b3998b55d6544e2fbae9205ec6076ef0cd7a80e7be"
	I0918 21:10:04.679525   61273 logs.go:123] Gathering logs for kube-proxy [0257280a0d21d52e2a996c2f717880bdb9d9f6fe90a4b82cc62222c941ba7d84] ...
	I0918 21:10:04.679551   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0257280a0d21d52e2a996c2f717880bdb9d9f6fe90a4b82cc62222c941ba7d84"
	I0918 21:10:04.721130   61273 logs.go:123] Gathering logs for kube-controller-manager [785dc83056153e5eeb22216ac7520f529b71f1b3ae42e9540a5c8930f1fe62c2] ...
	I0918 21:10:04.721164   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 785dc83056153e5eeb22216ac7520f529b71f1b3ae42e9540a5c8930f1fe62c2"
	I0918 21:10:04.789527   61273 logs.go:123] Gathering logs for storage-provisioner [38c14df0554157969a97390590d6e9c46765fd6fc3e286f7d2e3094838e0aff5] ...
	I0918 21:10:04.789558   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38c14df0554157969a97390590d6e9c46765fd6fc3e286f7d2e3094838e0aff5"
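The log-gathering pass above reduces to resolving each component's container ID with crictl and tailing its logs, plus journalctl for the kubelet and CRI-O units. The same commands, condensed:

	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager storage-provisioner; do
	  for id in $(sudo crictl ps -a --quiet --name="$name"); do
	    echo "== $name $id =="
	    sudo crictl logs --tail 400 "$id"
	  done
	done
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u crio -n 400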
	I0918 21:10:07.334989   61273 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:10:07.352382   61273 api_server.go:72] duration metric: took 4m12.031791528s to wait for apiserver process to appear ...
	I0918 21:10:07.352411   61273 api_server.go:88] waiting for apiserver healthz status ...
	I0918 21:10:07.352446   61273 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:10:07.352494   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:10:07.404709   61273 cri.go:89] found id: "a70652dce4d80c6fac020d63cff7054a835d183ac4b910157a5bf4eb8cabcaa1"
	I0918 21:10:07.404739   61273 cri.go:89] found id: ""
	I0918 21:10:07.404748   61273 logs.go:276] 1 containers: [a70652dce4d80c6fac020d63cff7054a835d183ac4b910157a5bf4eb8cabcaa1]
	I0918 21:10:07.404815   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:07.409205   61273 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:10:07.409273   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:10:07.450409   61273 cri.go:89] found id: "a913074a00723bc43f97ba625ebbdfded7dca1ce664ace9a13173adb96d2068f"
	I0918 21:10:07.450429   61273 cri.go:89] found id: ""
	I0918 21:10:07.450438   61273 logs.go:276] 1 containers: [a913074a00723bc43f97ba625ebbdfded7dca1ce664ace9a13173adb96d2068f]
	I0918 21:10:07.450498   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:07.454623   61273 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:10:07.454692   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:10:07.498344   61273 cri.go:89] found id: "76b9e08a213468abdd23d7b3998b55d6544e2fbae9205ec6076ef0cd7a80e7be"
	I0918 21:10:07.498370   61273 cri.go:89] found id: ""
	I0918 21:10:07.498379   61273 logs.go:276] 1 containers: [76b9e08a213468abdd23d7b3998b55d6544e2fbae9205ec6076ef0cd7a80e7be]
	I0918 21:10:07.498443   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:07.503900   61273 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:10:07.503986   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:10:07.543438   61273 cri.go:89] found id: "c372970fdf265433e1668377d0214de6608cb39ebfd706cfad9624cba2f9b481"
	I0918 21:10:07.543469   61273 cri.go:89] found id: ""
	I0918 21:10:07.543478   61273 logs.go:276] 1 containers: [c372970fdf265433e1668377d0214de6608cb39ebfd706cfad9624cba2f9b481]
	I0918 21:10:07.543538   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:07.548439   61273 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:10:07.548518   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:10:07.592109   61273 cri.go:89] found id: "0257280a0d21d52e2a996c2f717880bdb9d9f6fe90a4b82cc62222c941ba7d84"
	I0918 21:10:07.592140   61273 cri.go:89] found id: ""
	I0918 21:10:07.592150   61273 logs.go:276] 1 containers: [0257280a0d21d52e2a996c2f717880bdb9d9f6fe90a4b82cc62222c941ba7d84]
	I0918 21:10:07.592202   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:07.596127   61273 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:10:07.596200   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:10:07.630588   61273 cri.go:89] found id: "785dc83056153e5eeb22216ac7520f529b71f1b3ae42e9540a5c8930f1fe62c2"
	I0918 21:10:07.630623   61273 cri.go:89] found id: ""
	I0918 21:10:07.630633   61273 logs.go:276] 1 containers: [785dc83056153e5eeb22216ac7520f529b71f1b3ae42e9540a5c8930f1fe62c2]
	I0918 21:10:07.630699   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:07.635130   61273 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:10:07.635214   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:10:07.672446   61273 cri.go:89] found id: ""
	I0918 21:10:07.672475   61273 logs.go:276] 0 containers: []
	W0918 21:10:07.672487   61273 logs.go:278] No container was found matching "kindnet"
	I0918 21:10:07.672494   61273 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0918 21:10:07.672554   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0918 21:10:07.710660   61273 cri.go:89] found id: "b44d6f4b44928aa498c4375f783c165714b4a5959a1fa709712ad39a412d6d9f"
	I0918 21:10:07.710693   61273 cri.go:89] found id: "38c14df0554157969a97390590d6e9c46765fd6fc3e286f7d2e3094838e0aff5"
	I0918 21:10:07.710700   61273 cri.go:89] found id: ""
	I0918 21:10:07.710709   61273 logs.go:276] 2 containers: [b44d6f4b44928aa498c4375f783c165714b4a5959a1fa709712ad39a412d6d9f 38c14df0554157969a97390590d6e9c46765fd6fc3e286f7d2e3094838e0aff5]
	I0918 21:10:07.710761   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:07.714772   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:07.718402   61273 logs.go:123] Gathering logs for etcd [a913074a00723bc43f97ba625ebbdfded7dca1ce664ace9a13173adb96d2068f] ...
	I0918 21:10:07.718423   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a913074a00723bc43f97ba625ebbdfded7dca1ce664ace9a13173adb96d2068f"
	I0918 21:10:07.756682   61273 logs.go:123] Gathering logs for coredns [76b9e08a213468abdd23d7b3998b55d6544e2fbae9205ec6076ef0cd7a80e7be] ...
	I0918 21:10:07.756717   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 76b9e08a213468abdd23d7b3998b55d6544e2fbae9205ec6076ef0cd7a80e7be"
	I0918 21:10:07.792784   61273 logs.go:123] Gathering logs for kube-proxy [0257280a0d21d52e2a996c2f717880bdb9d9f6fe90a4b82cc62222c941ba7d84] ...
	I0918 21:10:07.792813   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0257280a0d21d52e2a996c2f717880bdb9d9f6fe90a4b82cc62222c941ba7d84"
	I0918 21:10:07.829746   61273 logs.go:123] Gathering logs for kube-controller-manager [785dc83056153e5eeb22216ac7520f529b71f1b3ae42e9540a5c8930f1fe62c2] ...
	I0918 21:10:07.829779   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 785dc83056153e5eeb22216ac7520f529b71f1b3ae42e9540a5c8930f1fe62c2"
	I0918 21:10:07.882151   61273 logs.go:123] Gathering logs for storage-provisioner [38c14df0554157969a97390590d6e9c46765fd6fc3e286f7d2e3094838e0aff5] ...
	I0918 21:10:07.882190   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38c14df0554157969a97390590d6e9c46765fd6fc3e286f7d2e3094838e0aff5"
	I0918 21:10:07.921948   61273 logs.go:123] Gathering logs for container status ...
	I0918 21:10:07.921973   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:10:07.969080   61273 logs.go:123] Gathering logs for kubelet ...
	I0918 21:10:07.969110   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:10:08.036341   61273 logs.go:123] Gathering logs for dmesg ...
	I0918 21:10:08.036376   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:10:08.050690   61273 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:10:08.050722   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 21:10:08.177111   61273 logs.go:123] Gathering logs for kube-apiserver [a70652dce4d80c6fac020d63cff7054a835d183ac4b910157a5bf4eb8cabcaa1] ...
	I0918 21:10:08.177154   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a70652dce4d80c6fac020d63cff7054a835d183ac4b910157a5bf4eb8cabcaa1"
	I0918 21:10:08.224169   61273 logs.go:123] Gathering logs for kube-scheduler [c372970fdf265433e1668377d0214de6608cb39ebfd706cfad9624cba2f9b481] ...
	I0918 21:10:08.224203   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c372970fdf265433e1668377d0214de6608cb39ebfd706cfad9624cba2f9b481"
	I0918 21:10:08.264412   61273 logs.go:123] Gathering logs for storage-provisioner [b44d6f4b44928aa498c4375f783c165714b4a5959a1fa709712ad39a412d6d9f] ...
	I0918 21:10:08.264437   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b44d6f4b44928aa498c4375f783c165714b4a5959a1fa709712ad39a412d6d9f"
	I0918 21:10:08.309190   61273 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:10:08.309215   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:10:11.209439   61273 api_server.go:253] Checking apiserver healthz at https://192.168.61.31:8443/healthz ...
	I0918 21:10:11.214345   61273 api_server.go:279] https://192.168.61.31:8443/healthz returned 200:
	ok
	I0918 21:10:11.215424   61273 api_server.go:141] control plane version: v1.31.1
	I0918 21:10:11.215446   61273 api_server.go:131] duration metric: took 3.863027585s to wait for apiserver health ...
	I0918 21:10:11.215456   61273 system_pods.go:43] waiting for kube-system pods to appear ...
	I0918 21:10:11.215485   61273 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:10:11.215545   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:10:11.251158   61273 cri.go:89] found id: "a70652dce4d80c6fac020d63cff7054a835d183ac4b910157a5bf4eb8cabcaa1"
	I0918 21:10:11.251182   61273 cri.go:89] found id: ""
	I0918 21:10:11.251190   61273 logs.go:276] 1 containers: [a70652dce4d80c6fac020d63cff7054a835d183ac4b910157a5bf4eb8cabcaa1]
	I0918 21:10:11.251246   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:11.255090   61273 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:10:11.255177   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:10:11.290504   61273 cri.go:89] found id: "a913074a00723bc43f97ba625ebbdfded7dca1ce664ace9a13173adb96d2068f"
	I0918 21:10:11.290526   61273 cri.go:89] found id: ""
	I0918 21:10:11.290534   61273 logs.go:276] 1 containers: [a913074a00723bc43f97ba625ebbdfded7dca1ce664ace9a13173adb96d2068f]
	I0918 21:10:11.290593   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:11.295141   61273 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:10:11.295224   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:10:11.340273   61273 cri.go:89] found id: "76b9e08a213468abdd23d7b3998b55d6544e2fbae9205ec6076ef0cd7a80e7be"
	I0918 21:10:11.340300   61273 cri.go:89] found id: ""
	I0918 21:10:11.340310   61273 logs.go:276] 1 containers: [76b9e08a213468abdd23d7b3998b55d6544e2fbae9205ec6076ef0cd7a80e7be]
	I0918 21:10:11.340362   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:11.344823   61273 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:10:11.344903   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:10:11.384145   61273 cri.go:89] found id: "c372970fdf265433e1668377d0214de6608cb39ebfd706cfad9624cba2f9b481"
	I0918 21:10:11.384172   61273 cri.go:89] found id: ""
	I0918 21:10:11.384187   61273 logs.go:276] 1 containers: [c372970fdf265433e1668377d0214de6608cb39ebfd706cfad9624cba2f9b481]
	I0918 21:10:11.384251   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:11.388594   61273 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:10:11.388673   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:10:11.434881   61273 cri.go:89] found id: "0257280a0d21d52e2a996c2f717880bdb9d9f6fe90a4b82cc62222c941ba7d84"
	I0918 21:10:11.434915   61273 cri.go:89] found id: ""
	I0918 21:10:11.434925   61273 logs.go:276] 1 containers: [0257280a0d21d52e2a996c2f717880bdb9d9f6fe90a4b82cc62222c941ba7d84]
	I0918 21:10:11.434984   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:11.439048   61273 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:10:11.439124   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:10:11.474786   61273 cri.go:89] found id: "785dc83056153e5eeb22216ac7520f529b71f1b3ae42e9540a5c8930f1fe62c2"
	I0918 21:10:11.474812   61273 cri.go:89] found id: ""
	I0918 21:10:11.474820   61273 logs.go:276] 1 containers: [785dc83056153e5eeb22216ac7520f529b71f1b3ae42e9540a5c8930f1fe62c2]
	I0918 21:10:11.474871   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:11.478907   61273 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:10:11.478961   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:10:11.521522   61273 cri.go:89] found id: ""
	I0918 21:10:11.521550   61273 logs.go:276] 0 containers: []
	W0918 21:10:11.521561   61273 logs.go:278] No container was found matching "kindnet"
	I0918 21:10:11.521568   61273 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0918 21:10:11.521642   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0918 21:10:11.560406   61273 cri.go:89] found id: "b44d6f4b44928aa498c4375f783c165714b4a5959a1fa709712ad39a412d6d9f"
	I0918 21:10:11.560428   61273 cri.go:89] found id: "38c14df0554157969a97390590d6e9c46765fd6fc3e286f7d2e3094838e0aff5"
	I0918 21:10:11.560432   61273 cri.go:89] found id: ""
	I0918 21:10:11.560439   61273 logs.go:276] 2 containers: [b44d6f4b44928aa498c4375f783c165714b4a5959a1fa709712ad39a412d6d9f 38c14df0554157969a97390590d6e9c46765fd6fc3e286f7d2e3094838e0aff5]
	I0918 21:10:11.560489   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:11.564559   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:11.568380   61273 logs.go:123] Gathering logs for kube-proxy [0257280a0d21d52e2a996c2f717880bdb9d9f6fe90a4b82cc62222c941ba7d84] ...
	I0918 21:10:11.568405   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0257280a0d21d52e2a996c2f717880bdb9d9f6fe90a4b82cc62222c941ba7d84"
	I0918 21:10:11.614927   61273 logs.go:123] Gathering logs for kube-controller-manager [785dc83056153e5eeb22216ac7520f529b71f1b3ae42e9540a5c8930f1fe62c2] ...
	I0918 21:10:11.614959   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 785dc83056153e5eeb22216ac7520f529b71f1b3ae42e9540a5c8930f1fe62c2"
	I0918 21:10:11.668337   61273 logs.go:123] Gathering logs for storage-provisioner [b44d6f4b44928aa498c4375f783c165714b4a5959a1fa709712ad39a412d6d9f] ...
	I0918 21:10:11.668372   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b44d6f4b44928aa498c4375f783c165714b4a5959a1fa709712ad39a412d6d9f"
	I0918 21:10:11.705574   61273 logs.go:123] Gathering logs for kubelet ...
	I0918 21:10:11.705604   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:10:11.772691   61273 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:10:11.772731   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 21:10:11.885001   61273 logs.go:123] Gathering logs for etcd [a913074a00723bc43f97ba625ebbdfded7dca1ce664ace9a13173adb96d2068f] ...
	I0918 21:10:11.885043   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a913074a00723bc43f97ba625ebbdfded7dca1ce664ace9a13173adb96d2068f"
	I0918 21:10:11.929585   61273 logs.go:123] Gathering logs for coredns [76b9e08a213468abdd23d7b3998b55d6544e2fbae9205ec6076ef0cd7a80e7be] ...
	I0918 21:10:11.929623   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 76b9e08a213468abdd23d7b3998b55d6544e2fbae9205ec6076ef0cd7a80e7be"
	I0918 21:10:11.967540   61273 logs.go:123] Gathering logs for kube-scheduler [c372970fdf265433e1668377d0214de6608cb39ebfd706cfad9624cba2f9b481] ...
	I0918 21:10:11.967566   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c372970fdf265433e1668377d0214de6608cb39ebfd706cfad9624cba2f9b481"
	I0918 21:10:12.007037   61273 logs.go:123] Gathering logs for storage-provisioner [38c14df0554157969a97390590d6e9c46765fd6fc3e286f7d2e3094838e0aff5] ...
	I0918 21:10:12.007076   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38c14df0554157969a97390590d6e9c46765fd6fc3e286f7d2e3094838e0aff5"
	I0918 21:10:12.045764   61273 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:10:12.045805   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:10:12.434993   61273 logs.go:123] Gathering logs for dmesg ...
	I0918 21:10:12.435042   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:10:12.449422   61273 logs.go:123] Gathering logs for kube-apiserver [a70652dce4d80c6fac020d63cff7054a835d183ac4b910157a5bf4eb8cabcaa1] ...
	I0918 21:10:12.449453   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a70652dce4d80c6fac020d63cff7054a835d183ac4b910157a5bf4eb8cabcaa1"
	I0918 21:10:12.500491   61273 logs.go:123] Gathering logs for container status ...
	I0918 21:10:12.500522   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:10:15.053164   61273 system_pods.go:59] 8 kube-system pods found
	I0918 21:10:15.053203   61273 system_pods.go:61] "coredns-7c65d6cfc9-dgnw2" [085d5a98-0a61-4678-830f-384780a0d7ef] Running
	I0918 21:10:15.053211   61273 system_pods.go:61] "etcd-no-preload-331658" [d6a53ff4-fcf0-46c0-9e77-9af6d0363e08] Running
	I0918 21:10:15.053218   61273 system_pods.go:61] "kube-apiserver-no-preload-331658" [1953598f-4c3f-4a98-ab75-b8ca1b8093ad] Running
	I0918 21:10:15.053223   61273 system_pods.go:61] "kube-controller-manager-no-preload-331658" [8fc655be-a192-4250-a31d-2e533a2b5e41] Running
	I0918 21:10:15.053228   61273 system_pods.go:61] "kube-proxy-hx25w" [a26512ff-f695-4452-8974-577479257160] Running
	I0918 21:10:15.053232   61273 system_pods.go:61] "kube-scheduler-no-preload-331658" [64e5347e-e7fd-4a0d-ae41-539c3989c29e] Running
	I0918 21:10:15.053243   61273 system_pods.go:61] "metrics-server-6867b74b74-n27vc" [b1de76ec-8987-49ce-ae66-eedda2705cde] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0918 21:10:15.053254   61273 system_pods.go:61] "storage-provisioner" [e110aeb3-e9bc-4bb9-9b49-5579558bdda2] Running
	I0918 21:10:15.053264   61273 system_pods.go:74] duration metric: took 3.837800115s to wait for pod list to return data ...
	I0918 21:10:15.053273   61273 default_sa.go:34] waiting for default service account to be created ...
	I0918 21:10:15.056865   61273 default_sa.go:45] found service account: "default"
	I0918 21:10:15.056900   61273 default_sa.go:55] duration metric: took 3.619144ms for default service account to be created ...
	I0918 21:10:15.056912   61273 system_pods.go:116] waiting for k8s-apps to be running ...
	I0918 21:10:15.061835   61273 system_pods.go:86] 8 kube-system pods found
	I0918 21:10:15.061864   61273 system_pods.go:89] "coredns-7c65d6cfc9-dgnw2" [085d5a98-0a61-4678-830f-384780a0d7ef] Running
	I0918 21:10:15.061870   61273 system_pods.go:89] "etcd-no-preload-331658" [d6a53ff4-fcf0-46c0-9e77-9af6d0363e08] Running
	I0918 21:10:15.061875   61273 system_pods.go:89] "kube-apiserver-no-preload-331658" [1953598f-4c3f-4a98-ab75-b8ca1b8093ad] Running
	I0918 21:10:15.061880   61273 system_pods.go:89] "kube-controller-manager-no-preload-331658" [8fc655be-a192-4250-a31d-2e533a2b5e41] Running
	I0918 21:10:15.061884   61273 system_pods.go:89] "kube-proxy-hx25w" [a26512ff-f695-4452-8974-577479257160] Running
	I0918 21:10:15.061888   61273 system_pods.go:89] "kube-scheduler-no-preload-331658" [64e5347e-e7fd-4a0d-ae41-539c3989c29e] Running
	I0918 21:10:15.061894   61273 system_pods.go:89] "metrics-server-6867b74b74-n27vc" [b1de76ec-8987-49ce-ae66-eedda2705cde] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0918 21:10:15.061898   61273 system_pods.go:89] "storage-provisioner" [e110aeb3-e9bc-4bb9-9b49-5579558bdda2] Running
	I0918 21:10:15.061906   61273 system_pods.go:126] duration metric: took 4.987508ms to wait for k8s-apps to be running ...
	I0918 21:10:15.061912   61273 system_svc.go:44] waiting for kubelet service to be running ....
	I0918 21:10:15.061966   61273 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0918 21:10:15.079834   61273 system_svc.go:56] duration metric: took 17.908997ms WaitForService to wait for kubelet
	I0918 21:10:15.079875   61273 kubeadm.go:582] duration metric: took 4m19.759287892s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0918 21:10:15.079897   61273 node_conditions.go:102] verifying NodePressure condition ...
	I0918 21:10:15.083307   61273 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0918 21:10:15.083390   61273 node_conditions.go:123] node cpu capacity is 2
	I0918 21:10:15.083407   61273 node_conditions.go:105] duration metric: took 3.503352ms to run NodePressure ...
	I0918 21:10:15.083421   61273 start.go:241] waiting for startup goroutines ...
	I0918 21:10:15.083431   61273 start.go:246] waiting for cluster config update ...
	I0918 21:10:15.083444   61273 start.go:255] writing updated cluster config ...
	I0918 21:10:15.083788   61273 ssh_runner.go:195] Run: rm -f paused
	I0918 21:10:15.139144   61273 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0918 21:10:15.141198   61273 out.go:177] * Done! kubectl is now configured to use "no-preload-331658" cluster and "default" namespace by default
	I0918 21:11:17.441368   62061 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0918 21:11:17.441607   62061 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0918 21:11:17.442921   62061 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0918 21:11:17.443036   62061 kubeadm.go:310] [preflight] Running pre-flight checks
	I0918 21:11:17.443221   62061 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0918 21:11:17.443500   62061 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0918 21:11:17.443818   62061 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0918 21:11:17.444099   62061 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0918 21:11:17.446858   62061 out.go:235]   - Generating certificates and keys ...
	I0918 21:11:17.446965   62061 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0918 21:11:17.447048   62061 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0918 21:11:17.447160   62061 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0918 21:11:17.447248   62061 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0918 21:11:17.447349   62061 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0918 21:11:17.447425   62061 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0918 21:11:17.447507   62061 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0918 21:11:17.447587   62061 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0918 21:11:17.447742   62061 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0918 21:11:17.447847   62061 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0918 21:11:17.447911   62061 kubeadm.go:310] [certs] Using the existing "sa" key
	I0918 21:11:17.447984   62061 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0918 21:11:17.448085   62061 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0918 21:11:17.448163   62061 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0918 21:11:17.448255   62061 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0918 21:11:17.448339   62061 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0918 21:11:17.448486   62061 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0918 21:11:17.448590   62061 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0918 21:11:17.448645   62061 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0918 21:11:17.448733   62061 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0918 21:11:17.450203   62061 out.go:235]   - Booting up control plane ...
	I0918 21:11:17.450299   62061 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0918 21:11:17.450434   62061 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0918 21:11:17.450506   62061 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0918 21:11:17.450578   62061 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0918 21:11:17.450752   62061 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0918 21:11:17.450805   62061 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0918 21:11:17.450863   62061 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0918 21:11:17.451034   62061 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0918 21:11:17.451090   62061 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0918 21:11:17.451259   62061 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0918 21:11:17.451325   62061 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0918 21:11:17.451498   62061 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0918 21:11:17.451562   62061 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0918 21:11:17.451765   62061 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0918 21:11:17.451845   62061 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0918 21:11:17.452027   62061 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0918 21:11:17.452041   62061 kubeadm.go:310] 
	I0918 21:11:17.452083   62061 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0918 21:11:17.452118   62061 kubeadm.go:310] 		timed out waiting for the condition
	I0918 21:11:17.452127   62061 kubeadm.go:310] 
	I0918 21:11:17.452160   62061 kubeadm.go:310] 	This error is likely caused by:
	I0918 21:11:17.452189   62061 kubeadm.go:310] 		- The kubelet is not running
	I0918 21:11:17.452282   62061 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0918 21:11:17.452289   62061 kubeadm.go:310] 
	I0918 21:11:17.452385   62061 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0918 21:11:17.452425   62061 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0918 21:11:17.452473   62061 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0918 21:11:17.452482   62061 kubeadm.go:310] 
	I0918 21:11:17.452578   62061 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0918 21:11:17.452649   62061 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0918 21:11:17.452656   62061 kubeadm.go:310] 
	I0918 21:11:17.452770   62061 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0918 21:11:17.452849   62061 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0918 21:11:17.452920   62061 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0918 21:11:17.452983   62061 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0918 21:11:17.453029   62061 kubeadm.go:310] 
	W0918 21:11:17.453100   62061 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0918 21:11:17.453139   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0918 21:11:17.917211   62061 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0918 21:11:17.933265   62061 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0918 21:11:17.943909   62061 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0918 21:11:17.943937   62061 kubeadm.go:157] found existing configuration files:
	
	I0918 21:11:17.943980   62061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0918 21:11:17.953745   62061 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0918 21:11:17.953817   62061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0918 21:11:17.964411   62061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0918 21:11:17.974601   62061 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0918 21:11:17.974661   62061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0918 21:11:17.984631   62061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0918 21:11:17.994280   62061 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0918 21:11:17.994341   62061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0918 21:11:18.004341   62061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0918 21:11:18.014066   62061 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0918 21:11:18.014127   62061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0918 21:11:18.023592   62061 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0918 21:11:18.098831   62061 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0918 21:11:18.098908   62061 kubeadm.go:310] [preflight] Running pre-flight checks
	I0918 21:11:18.254904   62061 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0918 21:11:18.255012   62061 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0918 21:11:18.255102   62061 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0918 21:11:18.444152   62061 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0918 21:11:18.445967   62061 out.go:235]   - Generating certificates and keys ...
	I0918 21:11:18.446082   62061 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0918 21:11:18.446185   62061 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0918 21:11:18.446316   62061 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0918 21:11:18.446403   62061 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0918 21:11:18.446508   62061 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0918 21:11:18.446600   62061 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0918 21:11:18.446706   62061 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0918 21:11:18.446767   62061 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0918 21:11:18.446852   62061 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0918 21:11:18.446936   62061 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0918 21:11:18.446971   62061 kubeadm.go:310] [certs] Using the existing "sa" key
	I0918 21:11:18.447025   62061 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0918 21:11:18.621414   62061 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0918 21:11:18.973691   62061 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0918 21:11:19.177522   62061 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0918 21:11:19.351312   62061 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0918 21:11:19.375169   62061 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0918 21:11:19.376274   62061 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0918 21:11:19.376351   62061 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0918 21:11:19.505030   62061 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0918 21:11:19.507065   62061 out.go:235]   - Booting up control plane ...
	I0918 21:11:19.507197   62061 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0918 21:11:19.519523   62061 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0918 21:11:19.520747   62061 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0918 21:11:19.522723   62061 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0918 21:11:19.524851   62061 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0918 21:11:59.527084   62061 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0918 21:11:59.527421   62061 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0918 21:11:59.527674   62061 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0918 21:12:04.528288   62061 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0918 21:12:04.528530   62061 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0918 21:12:14.529180   62061 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0918 21:12:14.529367   62061 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0918 21:12:34.530066   62061 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0918 21:12:34.530335   62061 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0918 21:13:14.529030   62061 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0918 21:13:14.529305   62061 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0918 21:13:14.529326   62061 kubeadm.go:310] 
	I0918 21:13:14.529374   62061 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0918 21:13:14.529447   62061 kubeadm.go:310] 		timed out waiting for the condition
	I0918 21:13:14.529466   62061 kubeadm.go:310] 
	I0918 21:13:14.529521   62061 kubeadm.go:310] 	This error is likely caused by:
	I0918 21:13:14.529569   62061 kubeadm.go:310] 		- The kubelet is not running
	I0918 21:13:14.529716   62061 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0918 21:13:14.529726   62061 kubeadm.go:310] 
	I0918 21:13:14.529866   62061 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0918 21:13:14.529917   62061 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0918 21:13:14.529967   62061 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0918 21:13:14.529976   62061 kubeadm.go:310] 
	I0918 21:13:14.530120   62061 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0918 21:13:14.530220   62061 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0918 21:13:14.530230   62061 kubeadm.go:310] 
	I0918 21:13:14.530352   62061 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0918 21:13:14.530480   62061 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0918 21:13:14.530579   62061 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0918 21:13:14.530678   62061 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0918 21:13:14.530688   62061 kubeadm.go:310] 
	I0918 21:13:14.531356   62061 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0918 21:13:14.531463   62061 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0918 21:13:14.531520   62061 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0918 21:13:14.531592   62061 kubeadm.go:394] duration metric: took 7m56.917405378s to StartCluster
	I0918 21:13:14.531633   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:13:14.531689   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:13:14.578851   62061 cri.go:89] found id: ""
	I0918 21:13:14.578883   62061 logs.go:276] 0 containers: []
	W0918 21:13:14.578893   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:13:14.578901   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:13:14.578960   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:13:14.627137   62061 cri.go:89] found id: ""
	I0918 21:13:14.627168   62061 logs.go:276] 0 containers: []
	W0918 21:13:14.627179   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:13:14.627187   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:13:14.627245   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:13:14.670678   62061 cri.go:89] found id: ""
	I0918 21:13:14.670707   62061 logs.go:276] 0 containers: []
	W0918 21:13:14.670717   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:13:14.670724   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:13:14.670788   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:13:14.709610   62061 cri.go:89] found id: ""
	I0918 21:13:14.709641   62061 logs.go:276] 0 containers: []
	W0918 21:13:14.709651   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:13:14.709658   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:13:14.709722   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:13:14.743492   62061 cri.go:89] found id: ""
	I0918 21:13:14.743522   62061 logs.go:276] 0 containers: []
	W0918 21:13:14.743534   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:13:14.743540   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:13:14.743601   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:13:14.777577   62061 cri.go:89] found id: ""
	I0918 21:13:14.777602   62061 logs.go:276] 0 containers: []
	W0918 21:13:14.777610   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:13:14.777616   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:13:14.777679   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:13:14.815884   62061 cri.go:89] found id: ""
	I0918 21:13:14.815913   62061 logs.go:276] 0 containers: []
	W0918 21:13:14.815922   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:13:14.815937   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:13:14.815997   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:13:14.855426   62061 cri.go:89] found id: ""
	I0918 21:13:14.855456   62061 logs.go:276] 0 containers: []
	W0918 21:13:14.855464   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:13:14.855473   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:13:14.855484   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:13:14.868812   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:13:14.868843   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:13:14.955093   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:13:14.955120   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:13:14.955151   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:13:15.064722   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:13:15.064760   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:13:15.105430   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:13:15.105466   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0918 21:13:15.157878   62061 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0918 21:13:15.157956   62061 out.go:270] * 
	W0918 21:13:15.158036   62061 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0918 21:13:15.158052   62061 out.go:270] * 
	W0918 21:13:15.158934   62061 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0918 21:13:15.161604   62061 out.go:201] 
	W0918 21:13:15.162606   62061 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0918 21:13:15.162664   62061 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0918 21:13:15.162685   62061 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0918 21:13:15.163831   62061 out.go:201] 
	
	
	==> CRI-O <==
	Sep 18 21:13:17 old-k8s-version-740194 crio[636]: time="2024-09-18 21:13:17.124889778Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726693997124868910,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=95a45d6b-633e-4424-a729-ad80cfac134f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 21:13:17 old-k8s-version-740194 crio[636]: time="2024-09-18 21:13:17.125531831Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c5dcf987-2479-489b-a488-31e574588127 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 21:13:17 old-k8s-version-740194 crio[636]: time="2024-09-18 21:13:17.125599341Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c5dcf987-2479-489b-a488-31e574588127 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 21:13:17 old-k8s-version-740194 crio[636]: time="2024-09-18 21:13:17.125633075Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=c5dcf987-2479-489b-a488-31e574588127 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 21:13:17 old-k8s-version-740194 crio[636]: time="2024-09-18 21:13:17.157243953Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1a4650ca-f4ea-4c8f-91a8-71e5f70903f4 name=/runtime.v1.RuntimeService/Version
	Sep 18 21:13:17 old-k8s-version-740194 crio[636]: time="2024-09-18 21:13:17.157333799Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1a4650ca-f4ea-4c8f-91a8-71e5f70903f4 name=/runtime.v1.RuntimeService/Version
	Sep 18 21:13:17 old-k8s-version-740194 crio[636]: time="2024-09-18 21:13:17.158567225Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8f639e66-ac6f-4e0e-b02d-06dfdecc56ee name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 21:13:17 old-k8s-version-740194 crio[636]: time="2024-09-18 21:13:17.158969113Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726693997158941851,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8f639e66-ac6f-4e0e-b02d-06dfdecc56ee name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 21:13:17 old-k8s-version-740194 crio[636]: time="2024-09-18 21:13:17.159464368Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ea56b214-24a3-4deb-8d35-1727e8335793 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 21:13:17 old-k8s-version-740194 crio[636]: time="2024-09-18 21:13:17.159547730Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ea56b214-24a3-4deb-8d35-1727e8335793 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 21:13:17 old-k8s-version-740194 crio[636]: time="2024-09-18 21:13:17.159605886Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=ea56b214-24a3-4deb-8d35-1727e8335793 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 21:13:17 old-k8s-version-740194 crio[636]: time="2024-09-18 21:13:17.190772867Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9661794b-4477-48ee-ba34-5397d4d683e1 name=/runtime.v1.RuntimeService/Version
	Sep 18 21:13:17 old-k8s-version-740194 crio[636]: time="2024-09-18 21:13:17.190862633Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9661794b-4477-48ee-ba34-5397d4d683e1 name=/runtime.v1.RuntimeService/Version
	Sep 18 21:13:17 old-k8s-version-740194 crio[636]: time="2024-09-18 21:13:17.192232110Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5b264693-9f77-4278-ae24-38f5aa94c16d name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 21:13:17 old-k8s-version-740194 crio[636]: time="2024-09-18 21:13:17.192619013Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726693997192595524,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5b264693-9f77-4278-ae24-38f5aa94c16d name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 21:13:17 old-k8s-version-740194 crio[636]: time="2024-09-18 21:13:17.193220317Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=81bbe284-83c0-417b-9b3b-3831ea829c6c name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 21:13:17 old-k8s-version-740194 crio[636]: time="2024-09-18 21:13:17.193300100Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=81bbe284-83c0-417b-9b3b-3831ea829c6c name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 21:13:17 old-k8s-version-740194 crio[636]: time="2024-09-18 21:13:17.193351674Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=81bbe284-83c0-417b-9b3b-3831ea829c6c name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 21:13:17 old-k8s-version-740194 crio[636]: time="2024-09-18 21:13:17.224489551Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a90c2631-b87c-4ea3-bba3-be1c28e4ae9a name=/runtime.v1.RuntimeService/Version
	Sep 18 21:13:17 old-k8s-version-740194 crio[636]: time="2024-09-18 21:13:17.224585776Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a90c2631-b87c-4ea3-bba3-be1c28e4ae9a name=/runtime.v1.RuntimeService/Version
	Sep 18 21:13:17 old-k8s-version-740194 crio[636]: time="2024-09-18 21:13:17.225782555Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0a044f3b-4d3b-41e5-8697-9cc405b384c5 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 21:13:17 old-k8s-version-740194 crio[636]: time="2024-09-18 21:13:17.226214798Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726693997226182017,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0a044f3b-4d3b-41e5-8697-9cc405b384c5 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 21:13:17 old-k8s-version-740194 crio[636]: time="2024-09-18 21:13:17.226832879Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=150281f9-d1b6-40bf-acd7-4cf7a97bddcd name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 21:13:17 old-k8s-version-740194 crio[636]: time="2024-09-18 21:13:17.226920940Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=150281f9-d1b6-40bf-acd7-4cf7a97bddcd name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 21:13:17 old-k8s-version-740194 crio[636]: time="2024-09-18 21:13:17.226960325Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=150281f9-d1b6-40bf-acd7-4cf7a97bddcd name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Sep18 21:04] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052758] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039829] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.960759] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.971252] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[Sep18 21:05] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.490123] systemd-fstab-generator[559]: Ignoring "noauto" option for root device
	[  +0.066663] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.070792] systemd-fstab-generator[571]: Ignoring "noauto" option for root device
	[  +0.218400] systemd-fstab-generator[585]: Ignoring "noauto" option for root device
	[  +0.116531] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.277213] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +6.543535] systemd-fstab-generator[885]: Ignoring "noauto" option for root device
	[  +0.067666] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.830893] systemd-fstab-generator[1009]: Ignoring "noauto" option for root device
	[ +11.620626] kauditd_printk_skb: 46 callbacks suppressed
	[Sep18 21:09] systemd-fstab-generator[5030]: Ignoring "noauto" option for root device
	[Sep18 21:11] systemd-fstab-generator[5310]: Ignoring "noauto" option for root device
	[  +0.067380] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 21:13:17 up 8 min,  0 users,  load average: 0.00, 0.10, 0.08
	Linux old-k8s-version-740194 5.10.207 #1 SMP Mon Sep 16 15:00:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Sep 18 21:13:14 old-k8s-version-740194 kubelet[5487]:         /usr/local/go/src/net/ipsock.go:280 +0x4d4
	Sep 18 21:13:14 old-k8s-version-740194 kubelet[5487]: net.(*Resolver).resolveAddrList(0x70c5740, 0x4f7fe40, 0xc0001e7e00, 0x48abf6d, 0x4, 0x48ab5d6, 0x3, 0xc000b2d560, 0x24, 0x0, ...)
	Sep 18 21:13:14 old-k8s-version-740194 kubelet[5487]:         /usr/local/go/src/net/dial.go:221 +0x47d
	Sep 18 21:13:14 old-k8s-version-740194 kubelet[5487]: net.(*Dialer).DialContext(0xc000cab020, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000b2d560, 0x24, 0x0, 0x0, 0x0, ...)
	Sep 18 21:13:14 old-k8s-version-740194 kubelet[5487]:         /usr/local/go/src/net/dial.go:403 +0x22b
	Sep 18 21:13:14 old-k8s-version-740194 kubelet[5487]: k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation.(*Dialer).DialContext(0xc000cb3fe0, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000b2d560, 0x24, 0x60, 0x7fa689468c60, 0x118, ...)
	Sep 18 21:13:14 old-k8s-version-740194 kubelet[5487]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation/connrotation.go:73 +0x7e
	Sep 18 21:13:14 old-k8s-version-740194 kubelet[5487]: net/http.(*Transport).dial(0xc000cc2000, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000b2d560, 0x24, 0x0, 0x0, 0x0, ...)
	Sep 18 21:13:14 old-k8s-version-740194 kubelet[5487]:         /usr/local/go/src/net/http/transport.go:1141 +0x1fd
	Sep 18 21:13:14 old-k8s-version-740194 kubelet[5487]: net/http.(*Transport).dialConn(0xc000cc2000, 0x4f7fe00, 0xc000120018, 0x0, 0xc000c363c0, 0x5, 0xc000b2d560, 0x24, 0x0, 0xc000274120, ...)
	Sep 18 21:13:14 old-k8s-version-740194 kubelet[5487]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	Sep 18 21:13:14 old-k8s-version-740194 kubelet[5487]: net/http.(*Transport).dialConnFor(0xc000cc2000, 0xc0000f3760)
	Sep 18 21:13:14 old-k8s-version-740194 kubelet[5487]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Sep 18 21:13:14 old-k8s-version-740194 kubelet[5487]: created by net/http.(*Transport).queueForDial
	Sep 18 21:13:14 old-k8s-version-740194 kubelet[5487]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Sep 18 21:13:14 old-k8s-version-740194 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Sep 18 21:13:14 old-k8s-version-740194 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Sep 18 21:13:15 old-k8s-version-740194 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Sep 18 21:13:15 old-k8s-version-740194 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Sep 18 21:13:15 old-k8s-version-740194 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Sep 18 21:13:15 old-k8s-version-740194 kubelet[5553]: I0918 21:13:15.356020    5553 server.go:416] Version: v1.20.0
	Sep 18 21:13:15 old-k8s-version-740194 kubelet[5553]: I0918 21:13:15.356280    5553 server.go:837] Client rotation is on, will bootstrap in background
	Sep 18 21:13:15 old-k8s-version-740194 kubelet[5553]: I0918 21:13:15.359300    5553 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Sep 18 21:13:15 old-k8s-version-740194 kubelet[5553]: W0918 21:13:15.360685    5553 manager.go:159] Cannot detect current cgroup on cgroup v2
	Sep 18 21:13:15 old-k8s-version-740194 kubelet[5553]: I0918 21:13:15.360911    5553 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-740194 -n old-k8s-version-740194
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-740194 -n old-k8s-version-740194: exit status 2 (230.147787ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-740194" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (739.31s)
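The kubeadm and minikube output captured above already names the follow-ups for this kind of kubelet crash loop. As a minimal sketch (not part of the test run), those checks could be driven from the same minikube binary and profile named in the log; the commands below only restate what the log itself recommends, and assume the profile name old-k8s-version-740194 shown above:

	# Inspect the crash-looping kubelet on the node, as suggested by kubeadm above
	out/minikube-linux-amd64 -p old-k8s-version-740194 ssh "sudo journalctl -xeu kubelet | tail -n 50"
	# List any control-plane containers CRI-O may have started (command quoted in the log above)
	out/minikube-linux-amd64 -p old-k8s-version-740194 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
	# Retry the start with the cgroup-driver override that the minikube suggestion mentions
	out/minikube-linux-amd64 start -p old-k8s-version-740194 --extra-config=kubelet.cgroup-driver=systemd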

x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.23s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0918 21:10:01.286794   14878 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/functional-790989/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-828868 -n default-k8s-diff-port-828868
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-09-18 21:18:40.28527715 +0000 UTC m=+6043.055618392
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
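For reference only (not produced by the test), the same wait condition can be checked by hand with kubectl; this assumes minikube's default of naming the kubeconfig context after the profile, and reuses the namespace and label selector from the failure message above:

	# Manually list the dashboard pods the test was waiting for
	kubectl --context default-k8s-diff-port-828868 get pods -n kubernetes-dashboard -l k8s-app=kubernetes-dashboard -o wide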
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-828868 -n default-k8s-diff-port-828868
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-828868 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-828868 logs -n 25: (2.144005176s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p cert-options-347585                                 | cert-options-347585          | jenkins | v1.34.0 | 18 Sep 24 20:55 UTC | 18 Sep 24 20:55 UTC |
	| start   | -p no-preload-331658                                   | no-preload-331658            | jenkins | v1.34.0 | 18 Sep 24 20:55 UTC | 18 Sep 24 20:56 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-878094                           | kubernetes-upgrade-878094    | jenkins | v1.34.0 | 18 Sep 24 20:55 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-878094                           | kubernetes-upgrade-878094    | jenkins | v1.34.0 | 18 Sep 24 20:55 UTC | 18 Sep 24 20:56 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p pause-543700                                        | pause-543700                 | jenkins | v1.34.0 | 18 Sep 24 20:56 UTC | 18 Sep 24 20:56 UTC |
	| start   | -p embed-certs-255556                                  | embed-certs-255556           | jenkins | v1.34.0 | 18 Sep 24 20:56 UTC | 18 Sep 24 20:57 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-878094                           | kubernetes-upgrade-878094    | jenkins | v1.34.0 | 18 Sep 24 20:56 UTC | 18 Sep 24 20:56 UTC |
	| delete  | -p                                                     | disable-driver-mounts-335923 | jenkins | v1.34.0 | 18 Sep 24 20:56 UTC | 18 Sep 24 20:56 UTC |
	|         | disable-driver-mounts-335923                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-828868 | jenkins | v1.34.0 | 18 Sep 24 20:56 UTC | 18 Sep 24 20:57 UTC |
	|         | default-k8s-diff-port-828868                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-331658             | no-preload-331658            | jenkins | v1.34.0 | 18 Sep 24 20:56 UTC | 18 Sep 24 20:57 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-331658                                   | no-preload-331658            | jenkins | v1.34.0 | 18 Sep 24 20:57 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-828868  | default-k8s-diff-port-828868 | jenkins | v1.34.0 | 18 Sep 24 20:57 UTC | 18 Sep 24 20:57 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-828868 | jenkins | v1.34.0 | 18 Sep 24 20:57 UTC |                     |
	|         | default-k8s-diff-port-828868                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-255556            | embed-certs-255556           | jenkins | v1.34.0 | 18 Sep 24 20:57 UTC | 18 Sep 24 20:57 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-255556                                  | embed-certs-255556           | jenkins | v1.34.0 | 18 Sep 24 20:57 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-740194        | old-k8s-version-740194       | jenkins | v1.34.0 | 18 Sep 24 20:59 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-331658                  | no-preload-331658            | jenkins | v1.34.0 | 18 Sep 24 20:59 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-331658                                   | no-preload-331658            | jenkins | v1.34.0 | 18 Sep 24 20:59 UTC | 18 Sep 24 21:10 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-828868       | default-k8s-diff-port-828868 | jenkins | v1.34.0 | 18 Sep 24 21:00 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-255556                 | embed-certs-255556           | jenkins | v1.34.0 | 18 Sep 24 21:00 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-828868 | jenkins | v1.34.0 | 18 Sep 24 21:00 UTC | 18 Sep 24 21:09 UTC |
	|         | default-k8s-diff-port-828868                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-255556                                  | embed-certs-255556           | jenkins | v1.34.0 | 18 Sep 24 21:00 UTC | 18 Sep 24 21:10 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-740194                              | old-k8s-version-740194       | jenkins | v1.34.0 | 18 Sep 24 21:00 UTC | 18 Sep 24 21:00 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-740194             | old-k8s-version-740194       | jenkins | v1.34.0 | 18 Sep 24 21:00 UTC | 18 Sep 24 21:00 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-740194                              | old-k8s-version-740194       | jenkins | v1.34.0 | 18 Sep 24 21:00 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/18 21:00:59
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0918 21:00:59.486726   62061 out.go:345] Setting OutFile to fd 1 ...
	I0918 21:00:59.486839   62061 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 21:00:59.486848   62061 out.go:358] Setting ErrFile to fd 2...
	I0918 21:00:59.486854   62061 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 21:00:59.487063   62061 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19667-7671/.minikube/bin
	I0918 21:00:59.487631   62061 out.go:352] Setting JSON to false
	I0918 21:00:59.488570   62061 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6203,"bootTime":1726687056,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0918 21:00:59.488665   62061 start.go:139] virtualization: kvm guest
	I0918 21:00:59.490927   62061 out.go:177] * [old-k8s-version-740194] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0918 21:00:59.492124   62061 out.go:177]   - MINIKUBE_LOCATION=19667
	I0918 21:00:59.492192   62061 notify.go:220] Checking for updates...
	I0918 21:00:59.494436   62061 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 21:00:59.495887   62061 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19667-7671/kubeconfig
	I0918 21:00:59.496929   62061 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19667-7671/.minikube
	I0918 21:00:59.498172   62061 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0918 21:00:59.499384   62061 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0918 21:00:59.500900   62061 config.go:182] Loaded profile config "old-k8s-version-740194": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0918 21:00:59.501295   62061 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:00:59.501375   62061 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:00:59.516448   62061 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45397
	I0918 21:00:59.516855   62061 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:00:59.517347   62061 main.go:141] libmachine: Using API Version  1
	I0918 21:00:59.517362   62061 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:00:59.517673   62061 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:00:59.517848   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .DriverName
	I0918 21:00:59.519750   62061 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0918 21:00:59.521126   62061 driver.go:394] Setting default libvirt URI to qemu:///system
	I0918 21:00:59.521433   62061 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:00:59.521468   62061 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:00:59.537580   62061 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39433
	I0918 21:00:59.537973   62061 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:00:59.538452   62061 main.go:141] libmachine: Using API Version  1
	I0918 21:00:59.538471   62061 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:00:59.538852   62061 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:00:59.539083   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .DriverName
	I0918 21:00:59.575494   62061 out.go:177] * Using the kvm2 driver based on existing profile
	I0918 21:00:59.576555   62061 start.go:297] selected driver: kvm2
	I0918 21:00:59.576572   62061 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-740194 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-740194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.53 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280
h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 21:00:59.576682   62061 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 21:00:59.577339   62061 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 21:00:59.577400   62061 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19667-7671/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0918 21:00:59.593287   62061 install.go:137] /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I0918 21:00:59.593810   62061 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0918 21:00:59.593855   62061 cni.go:84] Creating CNI manager for ""
	I0918 21:00:59.593911   62061 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0918 21:00:59.593972   62061 start.go:340] cluster config:
	{Name:old-k8s-version-740194 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-740194 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.53 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p20
00.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 21:00:59.594104   62061 iso.go:125] acquiring lock: {Name:mkbcf4d84f3091567a39802cd5e773cb10b8c545 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 21:00:59.595936   62061 out.go:177] * Starting "old-k8s-version-740194" primary control-plane node in "old-k8s-version-740194" cluster
	I0918 21:00:59.932315   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:00:59.597210   62061 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0918 21:00:59.597243   62061 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0918 21:00:59.597250   62061 cache.go:56] Caching tarball of preloaded images
	I0918 21:00:59.597325   62061 preload.go:172] Found /home/jenkins/minikube-integration/19667-7671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0918 21:00:59.597338   62061 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0918 21:00:59.597439   62061 profile.go:143] Saving config to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/old-k8s-version-740194/config.json ...
	I0918 21:00:59.597658   62061 start.go:360] acquireMachinesLock for old-k8s-version-740194: {Name:mk1b0d68a0b7d9d4bc8204d5d81cb9eb77222526 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 21:01:03.004316   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:01:09.084327   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:01:12.156358   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:01:18.236353   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:01:21.308245   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:01:27.388302   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:01:30.460341   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:01:36.540285   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:01:39.612345   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:01:45.692338   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:01:48.764308   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:01:54.844344   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:01:57.916346   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:02:03.996351   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:02:07.068377   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:02:13.148269   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:02:16.220321   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:02:22.300282   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:02:25.372352   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:02:31.452275   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:02:34.524362   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:02:40.604332   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:02:43.676372   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:02:49.756305   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:02:52.828321   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:02:58.908358   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:03:01.980309   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:03:08.060301   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:03:11.132322   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:03:17.212232   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:03:20.284342   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:03:26.364312   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:03:29.436328   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:03:35.516323   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:03:38.588372   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:03:44.668300   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:03:47.740379   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:03:53.820363   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:03:56.892355   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:04:02.972312   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:04:06.044373   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:04:09.048392   61659 start.go:364] duration metric: took 3m56.738592157s to acquireMachinesLock for "default-k8s-diff-port-828868"
	I0918 21:04:09.048461   61659 start.go:96] Skipping create...Using existing machine configuration
	I0918 21:04:09.048469   61659 fix.go:54] fixHost starting: 
	I0918 21:04:09.048788   61659 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:04:09.048827   61659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:04:09.064428   61659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41181
	I0918 21:04:09.064856   61659 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:04:09.065395   61659 main.go:141] libmachine: Using API Version  1
	I0918 21:04:09.065421   61659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:04:09.065751   61659 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:04:09.065961   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .DriverName
	I0918 21:04:09.066108   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetState
	I0918 21:04:09.067874   61659 fix.go:112] recreateIfNeeded on default-k8s-diff-port-828868: state=Stopped err=<nil>
	I0918 21:04:09.067915   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .DriverName
	W0918 21:04:09.068096   61659 fix.go:138] unexpected machine state, will restart: <nil>
	I0918 21:04:09.069985   61659 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-828868" ...
	I0918 21:04:09.045944   61273 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0918 21:04:09.045978   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetMachineName
	I0918 21:04:09.046314   61273 buildroot.go:166] provisioning hostname "no-preload-331658"
	I0918 21:04:09.046350   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetMachineName
	I0918 21:04:09.046602   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHHostname
	I0918 21:04:09.048253   61273 machine.go:96] duration metric: took 4m37.423609251s to provisionDockerMachine
	I0918 21:04:09.048293   61273 fix.go:56] duration metric: took 4m37.446130108s for fixHost
	I0918 21:04:09.048301   61273 start.go:83] releasing machines lock for "no-preload-331658", held for 4m37.44629145s
	W0918 21:04:09.048329   61273 start.go:714] error starting host: provision: host is not running
	W0918 21:04:09.048451   61273 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0918 21:04:09.048465   61273 start.go:729] Will try again in 5 seconds ...
	I0918 21:04:09.071488   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .Start
	I0918 21:04:09.071699   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Ensuring networks are active...
	I0918 21:04:09.072473   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Ensuring network default is active
	I0918 21:04:09.072816   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Ensuring network mk-default-k8s-diff-port-828868 is active
	I0918 21:04:09.073204   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Getting domain xml...
	I0918 21:04:09.073977   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Creating domain...
	I0918 21:04:10.321507   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Waiting to get IP...
	I0918 21:04:10.322390   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:10.322863   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | unable to find current IP address of domain default-k8s-diff-port-828868 in network mk-default-k8s-diff-port-828868
	I0918 21:04:10.322907   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | I0918 21:04:10.322821   62722 retry.go:31] will retry after 272.805092ms: waiting for machine to come up
	I0918 21:04:10.597434   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:10.597861   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | unable to find current IP address of domain default-k8s-diff-port-828868 in network mk-default-k8s-diff-port-828868
	I0918 21:04:10.597888   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | I0918 21:04:10.597825   62722 retry.go:31] will retry after 302.631333ms: waiting for machine to come up
	I0918 21:04:10.902544   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:10.903002   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | unable to find current IP address of domain default-k8s-diff-port-828868 in network mk-default-k8s-diff-port-828868
	I0918 21:04:10.903030   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | I0918 21:04:10.902943   62722 retry.go:31] will retry after 325.769954ms: waiting for machine to come up
	I0918 21:04:11.230182   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:11.230602   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | unable to find current IP address of domain default-k8s-diff-port-828868 in network mk-default-k8s-diff-port-828868
	I0918 21:04:11.230652   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | I0918 21:04:11.230557   62722 retry.go:31] will retry after 396.395153ms: waiting for machine to come up
	I0918 21:04:11.628135   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:11.628520   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | unable to find current IP address of domain default-k8s-diff-port-828868 in network mk-default-k8s-diff-port-828868
	I0918 21:04:11.628544   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | I0918 21:04:11.628495   62722 retry.go:31] will retry after 578.74167ms: waiting for machine to come up
	I0918 21:04:14.050009   61273 start.go:360] acquireMachinesLock for no-preload-331658: {Name:mk1b0d68a0b7d9d4bc8204d5d81cb9eb77222526 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 21:04:12.209844   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:12.209911   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | unable to find current IP address of domain default-k8s-diff-port-828868 in network mk-default-k8s-diff-port-828868
	I0918 21:04:12.209937   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | I0918 21:04:12.209841   62722 retry.go:31] will retry after 779.0434ms: waiting for machine to come up
	I0918 21:04:12.990688   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:12.991106   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | unable to find current IP address of domain default-k8s-diff-port-828868 in network mk-default-k8s-diff-port-828868
	I0918 21:04:12.991141   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | I0918 21:04:12.991045   62722 retry.go:31] will retry after 772.165771ms: waiting for machine to come up
	I0918 21:04:13.764946   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:13.765460   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | unable to find current IP address of domain default-k8s-diff-port-828868 in network mk-default-k8s-diff-port-828868
	I0918 21:04:13.765493   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | I0918 21:04:13.765404   62722 retry.go:31] will retry after 1.017078101s: waiting for machine to come up
	I0918 21:04:14.783920   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:14.784320   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | unable to find current IP address of domain default-k8s-diff-port-828868 in network mk-default-k8s-diff-port-828868
	I0918 21:04:14.784348   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | I0918 21:04:14.784276   62722 retry.go:31] will retry after 1.775982574s: waiting for machine to come up
	I0918 21:04:16.562037   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:16.562413   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | unable to find current IP address of domain default-k8s-diff-port-828868 in network mk-default-k8s-diff-port-828868
	I0918 21:04:16.562451   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | I0918 21:04:16.562369   62722 retry.go:31] will retry after 1.609664062s: waiting for machine to come up
	I0918 21:04:18.174149   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:18.174759   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | unable to find current IP address of domain default-k8s-diff-port-828868 in network mk-default-k8s-diff-port-828868
	I0918 21:04:18.174788   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | I0918 21:04:18.174710   62722 retry.go:31] will retry after 2.26359536s: waiting for machine to come up
	I0918 21:04:20.440599   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:20.441000   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | unable to find current IP address of domain default-k8s-diff-port-828868 in network mk-default-k8s-diff-port-828868
	I0918 21:04:20.441027   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | I0918 21:04:20.440955   62722 retry.go:31] will retry after 3.387446315s: waiting for machine to come up
	I0918 21:04:23.832623   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:23.833134   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | unable to find current IP address of domain default-k8s-diff-port-828868 in network mk-default-k8s-diff-port-828868
	I0918 21:04:23.833162   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | I0918 21:04:23.833097   62722 retry.go:31] will retry after 3.312983418s: waiting for machine to come up
	I0918 21:04:27.150091   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:27.150658   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Found IP for machine: 192.168.50.109
	I0918 21:04:27.150682   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has current primary IP address 192.168.50.109 and MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:27.150703   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Reserving static IP address...
	I0918 21:04:27.151248   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-828868", mac: "52:54:00:c0:39:06", ip: "192.168.50.109"} in network mk-default-k8s-diff-port-828868: {Iface:virbr4 ExpiryTime:2024-09-18 22:04:19 +0000 UTC Type:0 Mac:52:54:00:c0:39:06 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:default-k8s-diff-port-828868 Clientid:01:52:54:00:c0:39:06}
	I0918 21:04:27.151276   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Reserved static IP address: 192.168.50.109
	I0918 21:04:27.151297   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | skip adding static IP to network mk-default-k8s-diff-port-828868 - found existing host DHCP lease matching {name: "default-k8s-diff-port-828868", mac: "52:54:00:c0:39:06", ip: "192.168.50.109"}
	I0918 21:04:27.151317   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | Getting to WaitForSSH function...
	I0918 21:04:27.151330   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Waiting for SSH to be available...
	I0918 21:04:27.153633   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:27.154006   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:39:06", ip: ""} in network mk-default-k8s-diff-port-828868: {Iface:virbr4 ExpiryTime:2024-09-18 22:04:19 +0000 UTC Type:0 Mac:52:54:00:c0:39:06 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:default-k8s-diff-port-828868 Clientid:01:52:54:00:c0:39:06}
	I0918 21:04:27.154036   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined IP address 192.168.50.109 and MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:27.154127   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | Using SSH client type: external
	I0918 21:04:27.154153   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | Using SSH private key: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/default-k8s-diff-port-828868/id_rsa (-rw-------)
	I0918 21:04:27.154196   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.109 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19667-7671/.minikube/machines/default-k8s-diff-port-828868/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0918 21:04:27.154211   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | About to run SSH command:
	I0918 21:04:27.154225   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | exit 0
	I0918 21:04:28.308967   61740 start.go:364] duration metric: took 4m9.856658805s to acquireMachinesLock for "embed-certs-255556"
	I0918 21:04:28.309052   61740 start.go:96] Skipping create...Using existing machine configuration
	I0918 21:04:28.309066   61740 fix.go:54] fixHost starting: 
	I0918 21:04:28.309548   61740 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:04:28.309609   61740 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:04:28.326972   61740 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41115
	I0918 21:04:28.327375   61740 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:04:28.327941   61740 main.go:141] libmachine: Using API Version  1
	I0918 21:04:28.327974   61740 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:04:28.328300   61740 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:04:28.328538   61740 main.go:141] libmachine: (embed-certs-255556) Calling .DriverName
	I0918 21:04:28.328676   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetState
	I0918 21:04:28.330265   61740 fix.go:112] recreateIfNeeded on embed-certs-255556: state=Stopped err=<nil>
	I0918 21:04:28.330312   61740 main.go:141] libmachine: (embed-certs-255556) Calling .DriverName
	W0918 21:04:28.330482   61740 fix.go:138] unexpected machine state, will restart: <nil>
	I0918 21:04:28.332680   61740 out.go:177] * Restarting existing kvm2 VM for "embed-certs-255556" ...
	I0918 21:04:28.333692   61740 main.go:141] libmachine: (embed-certs-255556) Calling .Start
	I0918 21:04:28.333865   61740 main.go:141] libmachine: (embed-certs-255556) Ensuring networks are active...
	I0918 21:04:28.334536   61740 main.go:141] libmachine: (embed-certs-255556) Ensuring network default is active
	I0918 21:04:28.334987   61740 main.go:141] libmachine: (embed-certs-255556) Ensuring network mk-embed-certs-255556 is active
	I0918 21:04:28.335491   61740 main.go:141] libmachine: (embed-certs-255556) Getting domain xml...
	I0918 21:04:28.336206   61740 main.go:141] libmachine: (embed-certs-255556) Creating domain...
	I0918 21:04:27.280056   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | SSH cmd err, output: <nil>: 
	I0918 21:04:27.280448   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetConfigRaw
	I0918 21:04:27.281097   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetIP
	I0918 21:04:27.283491   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:27.283933   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:39:06", ip: ""} in network mk-default-k8s-diff-port-828868: {Iface:virbr4 ExpiryTime:2024-09-18 22:04:19 +0000 UTC Type:0 Mac:52:54:00:c0:39:06 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:default-k8s-diff-port-828868 Clientid:01:52:54:00:c0:39:06}
	I0918 21:04:27.283968   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined IP address 192.168.50.109 and MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:27.284242   61659 profile.go:143] Saving config to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/default-k8s-diff-port-828868/config.json ...
	I0918 21:04:27.284483   61659 machine.go:93] provisionDockerMachine start ...
	I0918 21:04:27.284527   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .DriverName
	I0918 21:04:27.284740   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHHostname
	I0918 21:04:27.287263   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:27.287640   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:39:06", ip: ""} in network mk-default-k8s-diff-port-828868: {Iface:virbr4 ExpiryTime:2024-09-18 22:04:19 +0000 UTC Type:0 Mac:52:54:00:c0:39:06 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:default-k8s-diff-port-828868 Clientid:01:52:54:00:c0:39:06}
	I0918 21:04:27.287671   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined IP address 192.168.50.109 and MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:27.287831   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHPort
	I0918 21:04:27.288053   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHKeyPath
	I0918 21:04:27.288230   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHKeyPath
	I0918 21:04:27.288348   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHUsername
	I0918 21:04:27.288497   61659 main.go:141] libmachine: Using SSH client type: native
	I0918 21:04:27.288759   61659 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.109 22 <nil> <nil>}
	I0918 21:04:27.288774   61659 main.go:141] libmachine: About to run SSH command:
	hostname
	I0918 21:04:27.396110   61659 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0918 21:04:27.396140   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetMachineName
	I0918 21:04:27.396439   61659 buildroot.go:166] provisioning hostname "default-k8s-diff-port-828868"
	I0918 21:04:27.396472   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetMachineName
	I0918 21:04:27.396655   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHHostname
	I0918 21:04:27.399285   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:27.399629   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:39:06", ip: ""} in network mk-default-k8s-diff-port-828868: {Iface:virbr4 ExpiryTime:2024-09-18 22:04:19 +0000 UTC Type:0 Mac:52:54:00:c0:39:06 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:default-k8s-diff-port-828868 Clientid:01:52:54:00:c0:39:06}
	I0918 21:04:27.399670   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined IP address 192.168.50.109 and MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:27.399746   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHPort
	I0918 21:04:27.399947   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHKeyPath
	I0918 21:04:27.400158   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHKeyPath
	I0918 21:04:27.400295   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHUsername
	I0918 21:04:27.400476   61659 main.go:141] libmachine: Using SSH client type: native
	I0918 21:04:27.400701   61659 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.109 22 <nil> <nil>}
	I0918 21:04:27.400714   61659 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-828868 && echo "default-k8s-diff-port-828868" | sudo tee /etc/hostname
	I0918 21:04:27.518553   61659 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-828868
	
	I0918 21:04:27.518579   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHHostname
	I0918 21:04:27.521274   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:27.521714   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:39:06", ip: ""} in network mk-default-k8s-diff-port-828868: {Iface:virbr4 ExpiryTime:2024-09-18 22:04:19 +0000 UTC Type:0 Mac:52:54:00:c0:39:06 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:default-k8s-diff-port-828868 Clientid:01:52:54:00:c0:39:06}
	I0918 21:04:27.521746   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined IP address 192.168.50.109 and MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:27.521918   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHPort
	I0918 21:04:27.522085   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHKeyPath
	I0918 21:04:27.522298   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHKeyPath
	I0918 21:04:27.522469   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHUsername
	I0918 21:04:27.522689   61659 main.go:141] libmachine: Using SSH client type: native
	I0918 21:04:27.522867   61659 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.109 22 <nil> <nil>}
	I0918 21:04:27.522885   61659 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-828868' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-828868/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-828868' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0918 21:04:27.636264   61659 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0918 21:04:27.636296   61659 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19667-7671/.minikube CaCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19667-7671/.minikube}
	I0918 21:04:27.636325   61659 buildroot.go:174] setting up certificates
	I0918 21:04:27.636335   61659 provision.go:84] configureAuth start
	I0918 21:04:27.636343   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetMachineName
	I0918 21:04:27.636629   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetIP
	I0918 21:04:27.639186   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:27.639622   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:39:06", ip: ""} in network mk-default-k8s-diff-port-828868: {Iface:virbr4 ExpiryTime:2024-09-18 22:04:19 +0000 UTC Type:0 Mac:52:54:00:c0:39:06 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:default-k8s-diff-port-828868 Clientid:01:52:54:00:c0:39:06}
	I0918 21:04:27.639646   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined IP address 192.168.50.109 and MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:27.639858   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHHostname
	I0918 21:04:27.642158   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:27.642421   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:39:06", ip: ""} in network mk-default-k8s-diff-port-828868: {Iface:virbr4 ExpiryTime:2024-09-18 22:04:19 +0000 UTC Type:0 Mac:52:54:00:c0:39:06 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:default-k8s-diff-port-828868 Clientid:01:52:54:00:c0:39:06}
	I0918 21:04:27.642448   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined IP address 192.168.50.109 and MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:27.642626   61659 provision.go:143] copyHostCerts
	I0918 21:04:27.642706   61659 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem, removing ...
	I0918 21:04:27.642869   61659 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem
	I0918 21:04:27.642966   61659 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem (1082 bytes)
	I0918 21:04:27.643099   61659 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem, removing ...
	I0918 21:04:27.643111   61659 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem
	I0918 21:04:27.643150   61659 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem (1123 bytes)
	I0918 21:04:27.643270   61659 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem, removing ...
	I0918 21:04:27.643280   61659 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem
	I0918 21:04:27.643320   61659 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem (1679 bytes)
	I0918 21:04:27.643387   61659 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-828868 san=[127.0.0.1 192.168.50.109 default-k8s-diff-port-828868 localhost minikube]
	I0918 21:04:27.693367   61659 provision.go:177] copyRemoteCerts
	I0918 21:04:27.693426   61659 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0918 21:04:27.693463   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHHostname
	I0918 21:04:27.696331   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:27.696661   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:39:06", ip: ""} in network mk-default-k8s-diff-port-828868: {Iface:virbr4 ExpiryTime:2024-09-18 22:04:19 +0000 UTC Type:0 Mac:52:54:00:c0:39:06 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:default-k8s-diff-port-828868 Clientid:01:52:54:00:c0:39:06}
	I0918 21:04:27.696693   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined IP address 192.168.50.109 and MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:27.696835   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHPort
	I0918 21:04:27.697028   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHKeyPath
	I0918 21:04:27.697212   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHUsername
	I0918 21:04:27.697317   61659 sshutil.go:53] new ssh client: &{IP:192.168.50.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/default-k8s-diff-port-828868/id_rsa Username:docker}
	I0918 21:04:27.777944   61659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0918 21:04:27.801476   61659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0918 21:04:27.825025   61659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0918 21:04:27.848244   61659 provision.go:87] duration metric: took 211.897185ms to configureAuth
	I0918 21:04:27.848274   61659 buildroot.go:189] setting minikube options for container-runtime
	I0918 21:04:27.848434   61659 config.go:182] Loaded profile config "default-k8s-diff-port-828868": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0918 21:04:27.848513   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHHostname
	I0918 21:04:27.851119   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:27.851472   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:39:06", ip: ""} in network mk-default-k8s-diff-port-828868: {Iface:virbr4 ExpiryTime:2024-09-18 22:04:19 +0000 UTC Type:0 Mac:52:54:00:c0:39:06 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:default-k8s-diff-port-828868 Clientid:01:52:54:00:c0:39:06}
	I0918 21:04:27.851509   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined IP address 192.168.50.109 and MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:27.851788   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHPort
	I0918 21:04:27.852007   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHKeyPath
	I0918 21:04:27.852216   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHKeyPath
	I0918 21:04:27.852420   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHUsername
	I0918 21:04:27.852670   61659 main.go:141] libmachine: Using SSH client type: native
	I0918 21:04:27.852852   61659 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.109 22 <nil> <nil>}
	I0918 21:04:27.852870   61659 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0918 21:04:28.072808   61659 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0918 21:04:28.072843   61659 machine.go:96] duration metric: took 788.346091ms to provisionDockerMachine
	I0918 21:04:28.072858   61659 start.go:293] postStartSetup for "default-k8s-diff-port-828868" (driver="kvm2")
	I0918 21:04:28.072874   61659 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0918 21:04:28.072898   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .DriverName
	I0918 21:04:28.073246   61659 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0918 21:04:28.073287   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHHostname
	I0918 21:04:28.075998   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:28.076389   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:39:06", ip: ""} in network mk-default-k8s-diff-port-828868: {Iface:virbr4 ExpiryTime:2024-09-18 22:04:19 +0000 UTC Type:0 Mac:52:54:00:c0:39:06 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:default-k8s-diff-port-828868 Clientid:01:52:54:00:c0:39:06}
	I0918 21:04:28.076416   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined IP address 192.168.50.109 and MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:28.076561   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHPort
	I0918 21:04:28.076780   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHKeyPath
	I0918 21:04:28.076939   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHUsername
	I0918 21:04:28.077063   61659 sshutil.go:53] new ssh client: &{IP:192.168.50.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/default-k8s-diff-port-828868/id_rsa Username:docker}
	I0918 21:04:28.158946   61659 ssh_runner.go:195] Run: cat /etc/os-release
	I0918 21:04:28.163200   61659 info.go:137] Remote host: Buildroot 2023.02.9
	I0918 21:04:28.163231   61659 filesync.go:126] Scanning /home/jenkins/minikube-integration/19667-7671/.minikube/addons for local assets ...
	I0918 21:04:28.163290   61659 filesync.go:126] Scanning /home/jenkins/minikube-integration/19667-7671/.minikube/files for local assets ...
	I0918 21:04:28.163368   61659 filesync.go:149] local asset: /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem -> 148782.pem in /etc/ssl/certs
	I0918 21:04:28.163464   61659 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0918 21:04:28.172987   61659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem --> /etc/ssl/certs/148782.pem (1708 bytes)
	I0918 21:04:28.198647   61659 start.go:296] duration metric: took 125.77566ms for postStartSetup
	I0918 21:04:28.198685   61659 fix.go:56] duration metric: took 19.150217303s for fixHost
	I0918 21:04:28.198704   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHHostname
	I0918 21:04:28.201549   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:28.201904   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:39:06", ip: ""} in network mk-default-k8s-diff-port-828868: {Iface:virbr4 ExpiryTime:2024-09-18 22:04:19 +0000 UTC Type:0 Mac:52:54:00:c0:39:06 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:default-k8s-diff-port-828868 Clientid:01:52:54:00:c0:39:06}
	I0918 21:04:28.201934   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined IP address 192.168.50.109 and MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:28.202093   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHPort
	I0918 21:04:28.202278   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHKeyPath
	I0918 21:04:28.202435   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHKeyPath
	I0918 21:04:28.202588   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHUsername
	I0918 21:04:28.202714   61659 main.go:141] libmachine: Using SSH client type: native
	I0918 21:04:28.202871   61659 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.109 22 <nil> <nil>}
	I0918 21:04:28.202879   61659 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0918 21:04:28.308752   61659 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726693468.285343658
	
	I0918 21:04:28.308778   61659 fix.go:216] guest clock: 1726693468.285343658
	I0918 21:04:28.308786   61659 fix.go:229] Guest: 2024-09-18 21:04:28.285343658 +0000 UTC Remote: 2024-09-18 21:04:28.198688962 +0000 UTC m=+256.035220061 (delta=86.654696ms)
	I0918 21:04:28.308821   61659 fix.go:200] guest clock delta is within tolerance: 86.654696ms
	I0918 21:04:28.308829   61659 start.go:83] releasing machines lock for "default-k8s-diff-port-828868", held for 19.260404228s
	I0918 21:04:28.308857   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .DriverName
	I0918 21:04:28.309175   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetIP
	I0918 21:04:28.312346   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:28.312725   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:39:06", ip: ""} in network mk-default-k8s-diff-port-828868: {Iface:virbr4 ExpiryTime:2024-09-18 22:04:19 +0000 UTC Type:0 Mac:52:54:00:c0:39:06 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:default-k8s-diff-port-828868 Clientid:01:52:54:00:c0:39:06}
	I0918 21:04:28.312753   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined IP address 192.168.50.109 and MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:28.312951   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .DriverName
	I0918 21:04:28.313506   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .DriverName
	I0918 21:04:28.313702   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .DriverName
	I0918 21:04:28.313792   61659 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0918 21:04:28.313849   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHHostname
	I0918 21:04:28.313966   61659 ssh_runner.go:195] Run: cat /version.json
	I0918 21:04:28.314001   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHHostname
	I0918 21:04:28.316698   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:28.316882   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:28.317016   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:39:06", ip: ""} in network mk-default-k8s-diff-port-828868: {Iface:virbr4 ExpiryTime:2024-09-18 22:04:19 +0000 UTC Type:0 Mac:52:54:00:c0:39:06 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:default-k8s-diff-port-828868 Clientid:01:52:54:00:c0:39:06}
	I0918 21:04:28.317038   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined IP address 192.168.50.109 and MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:28.317239   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHPort
	I0918 21:04:28.317357   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:39:06", ip: ""} in network mk-default-k8s-diff-port-828868: {Iface:virbr4 ExpiryTime:2024-09-18 22:04:19 +0000 UTC Type:0 Mac:52:54:00:c0:39:06 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:default-k8s-diff-port-828868 Clientid:01:52:54:00:c0:39:06}
	I0918 21:04:28.317408   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined IP address 192.168.50.109 and MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:28.317410   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHKeyPath
	I0918 21:04:28.317596   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHPort
	I0918 21:04:28.317598   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHUsername
	I0918 21:04:28.317743   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHKeyPath
	I0918 21:04:28.317783   61659 sshutil.go:53] new ssh client: &{IP:192.168.50.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/default-k8s-diff-port-828868/id_rsa Username:docker}
	I0918 21:04:28.317905   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHUsername
	I0918 21:04:28.318060   61659 sshutil.go:53] new ssh client: &{IP:192.168.50.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/default-k8s-diff-port-828868/id_rsa Username:docker}
	I0918 21:04:28.439960   61659 ssh_runner.go:195] Run: systemctl --version
	I0918 21:04:28.446111   61659 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0918 21:04:28.593574   61659 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0918 21:04:28.599542   61659 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0918 21:04:28.599623   61659 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0918 21:04:28.615775   61659 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0918 21:04:28.615802   61659 start.go:495] detecting cgroup driver to use...
	I0918 21:04:28.615965   61659 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0918 21:04:28.636924   61659 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0918 21:04:28.655681   61659 docker.go:217] disabling cri-docker service (if available) ...
	I0918 21:04:28.655775   61659 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0918 21:04:28.670090   61659 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0918 21:04:28.684780   61659 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0918 21:04:28.807355   61659 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0918 21:04:28.941753   61659 docker.go:233] disabling docker service ...
	I0918 21:04:28.941836   61659 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0918 21:04:28.956786   61659 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0918 21:04:28.970301   61659 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0918 21:04:29.119605   61659 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0918 21:04:29.245330   61659 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0918 21:04:29.259626   61659 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0918 21:04:29.278104   61659 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0918 21:04:29.278162   61659 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:04:29.288761   61659 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0918 21:04:29.288837   61659 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:04:29.299631   61659 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:04:29.310244   61659 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:04:29.321220   61659 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0918 21:04:29.332722   61659 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:04:29.343590   61659 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:04:29.366099   61659 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:04:29.381180   61659 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0918 21:04:29.394427   61659 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0918 21:04:29.394494   61659 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0918 21:04:29.410069   61659 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0918 21:04:29.421207   61659 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 21:04:29.543870   61659 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0918 21:04:29.642149   61659 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0918 21:04:29.642205   61659 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0918 21:04:29.647336   61659 start.go:563] Will wait 60s for crictl version
	I0918 21:04:29.647400   61659 ssh_runner.go:195] Run: which crictl
	I0918 21:04:29.651148   61659 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0918 21:04:29.690903   61659 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0918 21:04:29.690992   61659 ssh_runner.go:195] Run: crio --version
	I0918 21:04:29.717176   61659 ssh_runner.go:195] Run: crio --version
	I0918 21:04:29.747416   61659 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0918 21:04:29.748825   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetIP
	I0918 21:04:29.751828   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:29.752238   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:39:06", ip: ""} in network mk-default-k8s-diff-port-828868: {Iface:virbr4 ExpiryTime:2024-09-18 22:04:19 +0000 UTC Type:0 Mac:52:54:00:c0:39:06 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:default-k8s-diff-port-828868 Clientid:01:52:54:00:c0:39:06}
	I0918 21:04:29.752288   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined IP address 192.168.50.109 and MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:29.752533   61659 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0918 21:04:29.756672   61659 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0918 21:04:29.768691   61659 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-828868 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-828868 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.109 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0918 21:04:29.768822   61659 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0918 21:04:29.768867   61659 ssh_runner.go:195] Run: sudo crictl images --output json
	I0918 21:04:29.803885   61659 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0918 21:04:29.803964   61659 ssh_runner.go:195] Run: which lz4
	I0918 21:04:29.808051   61659 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0918 21:04:29.812324   61659 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0918 21:04:29.812363   61659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0918 21:04:31.172721   61659 crio.go:462] duration metric: took 1.364736071s to copy over tarball
	I0918 21:04:31.172837   61659 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0918 21:04:29.637411   61740 main.go:141] libmachine: (embed-certs-255556) Waiting to get IP...
	I0918 21:04:29.638427   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:29.638877   61740 main.go:141] libmachine: (embed-certs-255556) DBG | unable to find current IP address of domain embed-certs-255556 in network mk-embed-certs-255556
	I0918 21:04:29.638973   61740 main.go:141] libmachine: (embed-certs-255556) DBG | I0918 21:04:29.638868   62857 retry.go:31] will retry after 298.087525ms: waiting for machine to come up
	I0918 21:04:29.938543   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:29.938923   61740 main.go:141] libmachine: (embed-certs-255556) DBG | unable to find current IP address of domain embed-certs-255556 in network mk-embed-certs-255556
	I0918 21:04:29.938946   61740 main.go:141] libmachine: (embed-certs-255556) DBG | I0918 21:04:29.938889   62857 retry.go:31] will retry after 362.887862ms: waiting for machine to come up
	I0918 21:04:30.303379   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:30.303867   61740 main.go:141] libmachine: (embed-certs-255556) DBG | unable to find current IP address of domain embed-certs-255556 in network mk-embed-certs-255556
	I0918 21:04:30.303898   61740 main.go:141] libmachine: (embed-certs-255556) DBG | I0918 21:04:30.303820   62857 retry.go:31] will retry after 452.771021ms: waiting for machine to come up
	I0918 21:04:30.758353   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:30.758897   61740 main.go:141] libmachine: (embed-certs-255556) DBG | unable to find current IP address of domain embed-certs-255556 in network mk-embed-certs-255556
	I0918 21:04:30.758928   61740 main.go:141] libmachine: (embed-certs-255556) DBG | I0918 21:04:30.758856   62857 retry.go:31] will retry after 506.010985ms: waiting for machine to come up
	I0918 21:04:31.266443   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:31.266934   61740 main.go:141] libmachine: (embed-certs-255556) DBG | unable to find current IP address of domain embed-certs-255556 in network mk-embed-certs-255556
	I0918 21:04:31.266964   61740 main.go:141] libmachine: (embed-certs-255556) DBG | I0918 21:04:31.266893   62857 retry.go:31] will retry after 584.679329ms: waiting for machine to come up
	I0918 21:04:31.853811   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:31.854371   61740 main.go:141] libmachine: (embed-certs-255556) DBG | unable to find current IP address of domain embed-certs-255556 in network mk-embed-certs-255556
	I0918 21:04:31.854402   61740 main.go:141] libmachine: (embed-certs-255556) DBG | I0918 21:04:31.854309   62857 retry.go:31] will retry after 786.010743ms: waiting for machine to come up
	I0918 21:04:32.642494   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:32.643068   61740 main.go:141] libmachine: (embed-certs-255556) DBG | unable to find current IP address of domain embed-certs-255556 in network mk-embed-certs-255556
	I0918 21:04:32.643100   61740 main.go:141] libmachine: (embed-certs-255556) DBG | I0918 21:04:32.643013   62857 retry.go:31] will retry after 1.010762944s: waiting for machine to come up
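
[editor's note] The "will retry after ..." lines above come from minikube's retry helper while it waits for the VM to obtain a DHCP lease: the delay between attempts grows and is jittered so repeated polls do not hammer libvirt in lockstep. A minimal sketch of that kind of jittered, growing backoff (this is an illustration, not minikube's actual retry.go):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls lookup until it returns an address, sleeping with a
// jittered, roughly exponential backoff between attempts, similar in spirit
// to the "will retry after ..." sequence in the log above.
func waitForIP(lookup func() (string, error), maxWait time.Duration) (string, error) {
	deadline := time.Now().Add(maxWait)
	delay := 300 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookup(); err == nil {
			return ip, nil
		}
		// add up to 50% jitter so concurrent waiters do not poll in lockstep
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
		time.Sleep(sleep)
		delay = delay * 3 / 2 // grow ~1.5x per attempt, as the log intervals do
	}
	return "", errors.New("timed out waiting for machine IP")
}

func main() {
	attempts := 0
	ip, err := waitForIP(func() (string, error) {
		attempts++
		if attempts < 4 {
			return "", errors.New("no DHCP lease yet")
		}
		return "192.168.39.21", nil
	}, time.Minute)
	fmt.Println(ip, err)
}
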
	I0918 21:04:33.299563   61659 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.126697598s)
	I0918 21:04:33.299596   61659 crio.go:469] duration metric: took 2.126840983s to extract the tarball
	I0918 21:04:33.299602   61659 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0918 21:04:33.336428   61659 ssh_runner.go:195] Run: sudo crictl images --output json
	I0918 21:04:33.377303   61659 crio.go:514] all images are preloaded for cri-o runtime.
	I0918 21:04:33.377342   61659 cache_images.go:84] Images are preloaded, skipping loading
	I0918 21:04:33.377352   61659 kubeadm.go:934] updating node { 192.168.50.109 8444 v1.31.1 crio true true} ...
	I0918 21:04:33.377490   61659 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-828868 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.109
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-828868 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0918 21:04:33.377574   61659 ssh_runner.go:195] Run: crio config
	I0918 21:04:33.423773   61659 cni.go:84] Creating CNI manager for ""
	I0918 21:04:33.423800   61659 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0918 21:04:33.423816   61659 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0918 21:04:33.423835   61659 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.109 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-828868 NodeName:default-k8s-diff-port-828868 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.109"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.109 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0918 21:04:33.423976   61659 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.109
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-828868"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.109
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.109"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0918 21:04:33.424058   61659 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
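
[editor's note] The kubeadm config dumped above is a multi-document YAML file (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration separated by "---") that gets written to /var/tmp/minikube/kubeadm.yaml.new. A minimal sketch of sanity-checking such a file by decoding each document with gopkg.in/yaml.v3; this is illustrative only and not how minikube itself validates the config:

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

// main decodes every "---"-separated document in the given file and prints
// its apiVersion and kind, a quick way to confirm a rendered kubeadm config
// is well-formed YAML with the expected four documents.
func main() {
	if len(os.Args) < 2 {
		fmt.Println("usage: checkcfg <path-to-kubeadm.yaml>")
		return
	}
	f, err := os.Open(os.Args[1])
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
	}
}
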
	I0918 21:04:33.434047   61659 binaries.go:44] Found k8s binaries, skipping transfer
	I0918 21:04:33.434119   61659 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0918 21:04:33.443535   61659 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0918 21:04:33.460116   61659 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0918 21:04:33.475883   61659 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0918 21:04:33.492311   61659 ssh_runner.go:195] Run: grep 192.168.50.109	control-plane.minikube.internal$ /etc/hosts
	I0918 21:04:33.495940   61659 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.109	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0918 21:04:33.507411   61659 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 21:04:33.625104   61659 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0918 21:04:33.641530   61659 certs.go:68] Setting up /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/default-k8s-diff-port-828868 for IP: 192.168.50.109
	I0918 21:04:33.641556   61659 certs.go:194] generating shared ca certs ...
	I0918 21:04:33.641572   61659 certs.go:226] acquiring lock for ca certs: {Name:mkf54e0e71ed884c4372f9d3d4cb308ea4600185 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 21:04:33.641757   61659 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.key
	I0918 21:04:33.641804   61659 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.key
	I0918 21:04:33.641822   61659 certs.go:256] generating profile certs ...
	I0918 21:04:33.641944   61659 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/default-k8s-diff-port-828868/client.key
	I0918 21:04:33.642036   61659 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/default-k8s-diff-port-828868/apiserver.key.df92be3a
	I0918 21:04:33.642087   61659 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/default-k8s-diff-port-828868/proxy-client.key
	I0918 21:04:33.642255   61659 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878.pem (1338 bytes)
	W0918 21:04:33.642297   61659 certs.go:480] ignoring /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878_empty.pem, impossibly tiny 0 bytes
	I0918 21:04:33.642306   61659 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem (1675 bytes)
	I0918 21:04:33.642337   61659 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem (1082 bytes)
	I0918 21:04:33.642370   61659 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem (1123 bytes)
	I0918 21:04:33.642404   61659 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem (1679 bytes)
	I0918 21:04:33.642454   61659 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem (1708 bytes)
	I0918 21:04:33.643116   61659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0918 21:04:33.682428   61659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0918 21:04:33.710444   61659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0918 21:04:33.759078   61659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0918 21:04:33.797727   61659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/default-k8s-diff-port-828868/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0918 21:04:33.821989   61659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/default-k8s-diff-port-828868/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0918 21:04:33.844210   61659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/default-k8s-diff-port-828868/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0918 21:04:33.866843   61659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/default-k8s-diff-port-828868/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0918 21:04:33.896125   61659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem --> /usr/share/ca-certificates/148782.pem (1708 bytes)
	I0918 21:04:33.918667   61659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0918 21:04:33.940790   61659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878.pem --> /usr/share/ca-certificates/14878.pem (1338 bytes)
	I0918 21:04:33.963660   61659 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0918 21:04:33.980348   61659 ssh_runner.go:195] Run: openssl version
	I0918 21:04:33.985856   61659 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148782.pem && ln -fs /usr/share/ca-certificates/148782.pem /etc/ssl/certs/148782.pem"
	I0918 21:04:33.996472   61659 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148782.pem
	I0918 21:04:34.000732   61659 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 18 19:57 /usr/share/ca-certificates/148782.pem
	I0918 21:04:34.000788   61659 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148782.pem
	I0918 21:04:34.006282   61659 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148782.pem /etc/ssl/certs/3ec20f2e.0"
	I0918 21:04:34.016612   61659 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0918 21:04:34.026689   61659 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0918 21:04:34.030650   61659 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 18 19:39 /usr/share/ca-certificates/minikubeCA.pem
	I0918 21:04:34.030705   61659 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0918 21:04:34.035940   61659 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0918 21:04:34.046516   61659 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14878.pem && ln -fs /usr/share/ca-certificates/14878.pem /etc/ssl/certs/14878.pem"
	I0918 21:04:34.056755   61659 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14878.pem
	I0918 21:04:34.061189   61659 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 18 19:57 /usr/share/ca-certificates/14878.pem
	I0918 21:04:34.061264   61659 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14878.pem
	I0918 21:04:34.066973   61659 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14878.pem /etc/ssl/certs/51391683.0"
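
[editor's note] The `openssl x509 -hash` and `ln -fs ... /etc/ssl/certs/<hash>.0` pairs above install each CA under the subject-hash symlink that OpenSSL-based clients use for lookup. A small Go sketch of that pattern, shelling out to openssl for the hash (paths are illustrative):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCACert creates the <subject-hash>.0 symlink under certsDir pointing at
// certPath, mirroring the openssl -hash / ln -fs pair in the log above.
func linkCACert(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // replace any stale link
	return os.Symlink(certPath, link)
}

func main() {
	fmt.Println(linkCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"))
}
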
	I0918 21:04:34.078781   61659 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0918 21:04:34.083129   61659 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0918 21:04:34.089249   61659 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0918 21:04:34.095211   61659 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0918 21:04:34.101350   61659 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0918 21:04:34.107269   61659 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0918 21:04:34.113177   61659 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
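
[editor's note] The `openssl x509 -checkend 86400` calls above ask whether each control-plane certificate expires within the next 24 hours; a cert close to expiry would force regeneration before restart. The same check can be done natively with crypto/x509, as in this sketch (the file path is illustrative):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// mirroring what `openssl x509 -checkend` does in the log above.
func expiresWithin(path string, d time.Duration) (bool, error) {
	raw, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	// illustrative path; the log checks several certs under /var/lib/minikube/certs
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(soon, err)
}
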
	I0918 21:04:34.119005   61659 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-828868 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-828868 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.109 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 21:04:34.119093   61659 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0918 21:04:34.119147   61659 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0918 21:04:34.162792   61659 cri.go:89] found id: ""
	I0918 21:04:34.162895   61659 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0918 21:04:34.174325   61659 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0918 21:04:34.174358   61659 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0918 21:04:34.174420   61659 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0918 21:04:34.183708   61659 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0918 21:04:34.184680   61659 kubeconfig.go:125] found "default-k8s-diff-port-828868" server: "https://192.168.50.109:8444"
	I0918 21:04:34.186781   61659 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0918 21:04:34.195823   61659 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.109
	I0918 21:04:34.195856   61659 kubeadm.go:1160] stopping kube-system containers ...
	I0918 21:04:34.195866   61659 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0918 21:04:34.195907   61659 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0918 21:04:34.235799   61659 cri.go:89] found id: ""
	I0918 21:04:34.235882   61659 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0918 21:04:34.251412   61659 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0918 21:04:34.261361   61659 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0918 21:04:34.261390   61659 kubeadm.go:157] found existing configuration files:
	
	I0918 21:04:34.261435   61659 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0918 21:04:34.272201   61659 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0918 21:04:34.272272   61659 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0918 21:04:34.283030   61659 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0918 21:04:34.293227   61659 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0918 21:04:34.293321   61659 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0918 21:04:34.303749   61659 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0918 21:04:34.314027   61659 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0918 21:04:34.314116   61659 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0918 21:04:34.324585   61659 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0918 21:04:34.334524   61659 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0918 21:04:34.334594   61659 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0918 21:04:34.344923   61659 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0918 21:04:34.355422   61659 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:04:34.480395   61659 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:04:35.320827   61659 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:04:35.542013   61659 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:04:35.610886   61659 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:04:35.694501   61659 api_server.go:52] waiting for apiserver process to appear ...
	I0918 21:04:35.694610   61659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:04:36.195441   61659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:04:36.694978   61659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:04:37.195220   61659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:04:33.655864   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:33.656375   61740 main.go:141] libmachine: (embed-certs-255556) DBG | unable to find current IP address of domain embed-certs-255556 in network mk-embed-certs-255556
	I0918 21:04:33.656407   61740 main.go:141] libmachine: (embed-certs-255556) DBG | I0918 21:04:33.656347   62857 retry.go:31] will retry after 1.375317123s: waiting for machine to come up
	I0918 21:04:35.033882   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:35.034266   61740 main.go:141] libmachine: (embed-certs-255556) DBG | unable to find current IP address of domain embed-certs-255556 in network mk-embed-certs-255556
	I0918 21:04:35.034293   61740 main.go:141] libmachine: (embed-certs-255556) DBG | I0918 21:04:35.034232   62857 retry.go:31] will retry after 1.142237895s: waiting for machine to come up
	I0918 21:04:36.178371   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:36.178837   61740 main.go:141] libmachine: (embed-certs-255556) DBG | unable to find current IP address of domain embed-certs-255556 in network mk-embed-certs-255556
	I0918 21:04:36.178865   61740 main.go:141] libmachine: (embed-certs-255556) DBG | I0918 21:04:36.178804   62857 retry.go:31] will retry after 1.983853904s: waiting for machine to come up
	I0918 21:04:38.165113   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:38.165662   61740 main.go:141] libmachine: (embed-certs-255556) DBG | unable to find current IP address of domain embed-certs-255556 in network mk-embed-certs-255556
	I0918 21:04:38.165697   61740 main.go:141] libmachine: (embed-certs-255556) DBG | I0918 21:04:38.165601   62857 retry.go:31] will retry after 2.407286782s: waiting for machine to come up
	I0918 21:04:37.694916   61659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:04:37.713724   61659 api_server.go:72] duration metric: took 2.019221095s to wait for apiserver process to appear ...
	I0918 21:04:37.713756   61659 api_server.go:88] waiting for apiserver healthz status ...
	I0918 21:04:37.713782   61659 api_server.go:253] Checking apiserver healthz at https://192.168.50.109:8444/healthz ...
	I0918 21:04:37.714297   61659 api_server.go:269] stopped: https://192.168.50.109:8444/healthz: Get "https://192.168.50.109:8444/healthz": dial tcp 192.168.50.109:8444: connect: connection refused
	I0918 21:04:38.213883   61659 api_server.go:253] Checking apiserver healthz at https://192.168.50.109:8444/healthz ...
	I0918 21:04:40.396513   61659 api_server.go:279] https://192.168.50.109:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0918 21:04:40.396564   61659 api_server.go:103] status: https://192.168.50.109:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0918 21:04:40.396584   61659 api_server.go:253] Checking apiserver healthz at https://192.168.50.109:8444/healthz ...
	I0918 21:04:40.409718   61659 api_server.go:279] https://192.168.50.109:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0918 21:04:40.409750   61659 api_server.go:103] status: https://192.168.50.109:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0918 21:04:40.714176   61659 api_server.go:253] Checking apiserver healthz at https://192.168.50.109:8444/healthz ...
	I0918 21:04:40.719353   61659 api_server.go:279] https://192.168.50.109:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0918 21:04:40.719391   61659 api_server.go:103] status: https://192.168.50.109:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0918 21:04:41.214596   61659 api_server.go:253] Checking apiserver healthz at https://192.168.50.109:8444/healthz ...
	I0918 21:04:41.219579   61659 api_server.go:279] https://192.168.50.109:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0918 21:04:41.219608   61659 api_server.go:103] status: https://192.168.50.109:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0918 21:04:41.713951   61659 api_server.go:253] Checking apiserver healthz at https://192.168.50.109:8444/healthz ...
	I0918 21:04:41.719212   61659 api_server.go:279] https://192.168.50.109:8444/healthz returned 200:
	ok
	I0918 21:04:41.726647   61659 api_server.go:141] control plane version: v1.31.1
	I0918 21:04:41.726679   61659 api_server.go:131] duration metric: took 4.012914861s to wait for apiserver health ...
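
[editor's note] The healthz polling above shows the typical restart sequence: connection refused while the apiserver starts, 403 from the anonymous probe while RBAC bootstrap roles are still being created, 500 while the rbac and priority-class poststarthooks finish, and finally 200 "ok". A minimal sketch of that wait loop (TLS verification is skipped here for brevity; the real code trusts the cluster CA):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns
// HTTP 200 or the timeout expires, roughly what the api_server.go lines
// above are doing.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", url)
}

func main() {
	fmt.Println(waitForHealthz("https://192.168.50.109:8444/healthz", 2*time.Minute))
}
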
	I0918 21:04:41.726689   61659 cni.go:84] Creating CNI manager for ""
	I0918 21:04:41.726707   61659 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0918 21:04:41.728312   61659 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0918 21:04:41.729613   61659 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0918 21:04:41.741932   61659 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0918 21:04:41.763195   61659 system_pods.go:43] waiting for kube-system pods to appear ...
	I0918 21:04:41.775167   61659 system_pods.go:59] 8 kube-system pods found
	I0918 21:04:41.775210   61659 system_pods.go:61] "coredns-7c65d6cfc9-xzjd7" [bd8252df-707c-41e6-84b7-cc74480177a8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0918 21:04:41.775219   61659 system_pods.go:61] "etcd-default-k8s-diff-port-828868" [aa8e221d-abba-48a5-8814-246df0776408] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0918 21:04:41.775227   61659 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-828868" [b44966ac-3478-40c4-b67f-1824bff2bec7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0918 21:04:41.775233   61659 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-828868" [7af8fbad-3aa2-497e-90df-33facaee6b17] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0918 21:04:41.775239   61659 system_pods.go:61] "kube-proxy-jz7ls" [f931ae9a-0b9c-4754-8b7b-d52c267b018c] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0918 21:04:41.775247   61659 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-828868" [ee89c713-c689-4de3-b1a5-4e08470ff6fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0918 21:04:41.775252   61659 system_pods.go:61] "metrics-server-6867b74b74-cqp47" [1ccf8c85-183a-4bea-abbc-eb7bcedca7f0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0918 21:04:41.775257   61659 system_pods.go:61] "storage-provisioner" [9744cbfa-6b9a-42f0-aa80-0821b87a33d4] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0918 21:04:41.775270   61659 system_pods.go:74] duration metric: took 12.058758ms to wait for pod list to return data ...
	I0918 21:04:41.775280   61659 node_conditions.go:102] verifying NodePressure condition ...
	I0918 21:04:41.779525   61659 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0918 21:04:41.779559   61659 node_conditions.go:123] node cpu capacity is 2
	I0918 21:04:41.779580   61659 node_conditions.go:105] duration metric: took 4.292138ms to run NodePressure ...
	I0918 21:04:41.779615   61659 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:04:42.079279   61659 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0918 21:04:42.084311   61659 kubeadm.go:739] kubelet initialised
	I0918 21:04:42.084338   61659 kubeadm.go:740] duration metric: took 5.024999ms waiting for restarted kubelet to initialise ...
	I0918 21:04:42.084351   61659 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0918 21:04:42.089113   61659 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-xzjd7" in "kube-system" namespace to be "Ready" ...
	I0918 21:04:42.095539   61659 pod_ready.go:98] node "default-k8s-diff-port-828868" hosting pod "coredns-7c65d6cfc9-xzjd7" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-828868" has status "Ready":"False"
	I0918 21:04:42.095565   61659 pod_ready.go:82] duration metric: took 6.405251ms for pod "coredns-7c65d6cfc9-xzjd7" in "kube-system" namespace to be "Ready" ...
	E0918 21:04:42.095575   61659 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-828868" hosting pod "coredns-7c65d6cfc9-xzjd7" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-828868" has status "Ready":"False"
	I0918 21:04:42.095581   61659 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-828868" in "kube-system" namespace to be "Ready" ...
	I0918 21:04:42.100447   61659 pod_ready.go:98] node "default-k8s-diff-port-828868" hosting pod "etcd-default-k8s-diff-port-828868" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-828868" has status "Ready":"False"
	I0918 21:04:42.100469   61659 pod_ready.go:82] duration metric: took 4.879955ms for pod "etcd-default-k8s-diff-port-828868" in "kube-system" namespace to be "Ready" ...
	E0918 21:04:42.100480   61659 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-828868" hosting pod "etcd-default-k8s-diff-port-828868" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-828868" has status "Ready":"False"
	I0918 21:04:42.100485   61659 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-828868" in "kube-system" namespace to be "Ready" ...
	I0918 21:04:42.104889   61659 pod_ready.go:98] node "default-k8s-diff-port-828868" hosting pod "kube-apiserver-default-k8s-diff-port-828868" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-828868" has status "Ready":"False"
	I0918 21:04:42.104914   61659 pod_ready.go:82] duration metric: took 4.421708ms for pod "kube-apiserver-default-k8s-diff-port-828868" in "kube-system" namespace to be "Ready" ...
	E0918 21:04:42.104926   61659 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-828868" hosting pod "kube-apiserver-default-k8s-diff-port-828868" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-828868" has status "Ready":"False"
	I0918 21:04:42.104934   61659 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-828868" in "kube-system" namespace to be "Ready" ...
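
[editor's note] The pod_ready.go lines above wait for each system-critical pod to report the Ready condition, and skip ahead while the hosting node itself is still NotReady. A minimal client-go sketch of the underlying check (the kubeconfig path and pod name are illustrative):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady returns true when the pod's Ready condition is True, which is
// the condition the pod_ready.go waits above poll for.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// illustrative kubeconfig path; minikube writes one per profile
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	for ctx.Err() == nil {
		pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "etcd-default-k8s-diff-port-828868", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to become Ready")
}
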
	I0918 21:04:40.574813   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:40.575265   61740 main.go:141] libmachine: (embed-certs-255556) DBG | unable to find current IP address of domain embed-certs-255556 in network mk-embed-certs-255556
	I0918 21:04:40.575295   61740 main.go:141] libmachine: (embed-certs-255556) DBG | I0918 21:04:40.575215   62857 retry.go:31] will retry after 2.249084169s: waiting for machine to come up
	I0918 21:04:42.827547   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:42.827966   61740 main.go:141] libmachine: (embed-certs-255556) DBG | unable to find current IP address of domain embed-certs-255556 in network mk-embed-certs-255556
	I0918 21:04:42.828028   61740 main.go:141] libmachine: (embed-certs-255556) DBG | I0918 21:04:42.827923   62857 retry.go:31] will retry after 4.512161859s: waiting for machine to come up
	I0918 21:04:44.113739   61659 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-828868" in "kube-system" namespace has status "Ready":"False"
	I0918 21:04:46.611013   61659 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-828868" in "kube-system" namespace has status "Ready":"False"
	I0918 21:04:47.345046   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:47.345426   61740 main.go:141] libmachine: (embed-certs-255556) Found IP for machine: 192.168.39.21
	I0918 21:04:47.345444   61740 main.go:141] libmachine: (embed-certs-255556) Reserving static IP address...
	I0918 21:04:47.345457   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has current primary IP address 192.168.39.21 and MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:47.345824   61740 main.go:141] libmachine: (embed-certs-255556) DBG | found host DHCP lease matching {name: "embed-certs-255556", mac: "52:54:00:e8:c2:b7", ip: "192.168.39.21"} in network mk-embed-certs-255556: {Iface:virbr1 ExpiryTime:2024-09-18 22:04:39 +0000 UTC Type:0 Mac:52:54:00:e8:c2:b7 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:embed-certs-255556 Clientid:01:52:54:00:e8:c2:b7}
	I0918 21:04:47.345846   61740 main.go:141] libmachine: (embed-certs-255556) DBG | skip adding static IP to network mk-embed-certs-255556 - found existing host DHCP lease matching {name: "embed-certs-255556", mac: "52:54:00:e8:c2:b7", ip: "192.168.39.21"}
	I0918 21:04:47.345856   61740 main.go:141] libmachine: (embed-certs-255556) Reserved static IP address: 192.168.39.21
	I0918 21:04:47.345866   61740 main.go:141] libmachine: (embed-certs-255556) Waiting for SSH to be available...
	I0918 21:04:47.345874   61740 main.go:141] libmachine: (embed-certs-255556) DBG | Getting to WaitForSSH function...
	I0918 21:04:47.347972   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:47.348327   61740 main.go:141] libmachine: (embed-certs-255556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:c2:b7", ip: ""} in network mk-embed-certs-255556: {Iface:virbr1 ExpiryTime:2024-09-18 22:04:39 +0000 UTC Type:0 Mac:52:54:00:e8:c2:b7 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:embed-certs-255556 Clientid:01:52:54:00:e8:c2:b7}
	I0918 21:04:47.348367   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined IP address 192.168.39.21 and MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:47.348437   61740 main.go:141] libmachine: (embed-certs-255556) DBG | Using SSH client type: external
	I0918 21:04:47.348469   61740 main.go:141] libmachine: (embed-certs-255556) DBG | Using SSH private key: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/embed-certs-255556/id_rsa (-rw-------)
	I0918 21:04:47.348511   61740 main.go:141] libmachine: (embed-certs-255556) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.21 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19667-7671/.minikube/machines/embed-certs-255556/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0918 21:04:47.348526   61740 main.go:141] libmachine: (embed-certs-255556) DBG | About to run SSH command:
	I0918 21:04:47.348554   61740 main.go:141] libmachine: (embed-certs-255556) DBG | exit 0
	I0918 21:04:47.476457   61740 main.go:141] libmachine: (embed-certs-255556) DBG | SSH cmd err, output: <nil>: 
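
[editor's note] The WaitForSSH exchange above simply runs `exit 0` over the external ssh client until the guest answers, so provisioning only proceeds once sshd is up. A simplified stand-in that just waits for the TCP port to accept connections (it does not authenticate or execute a command like the real driver does):

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForSSH dials the machine's SSH port until a TCP connection succeeds
// or the timeout expires.
func waitForSSH(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("ssh on %s not reachable within %s", addr, timeout)
}

func main() {
	fmt.Println(waitForSSH("192.168.39.21:22", time.Minute))
}
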
	I0918 21:04:47.476858   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetConfigRaw
	I0918 21:04:47.477533   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetIP
	I0918 21:04:47.480221   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:47.480601   61740 main.go:141] libmachine: (embed-certs-255556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:c2:b7", ip: ""} in network mk-embed-certs-255556: {Iface:virbr1 ExpiryTime:2024-09-18 22:04:39 +0000 UTC Type:0 Mac:52:54:00:e8:c2:b7 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:embed-certs-255556 Clientid:01:52:54:00:e8:c2:b7}
	I0918 21:04:47.480644   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined IP address 192.168.39.21 and MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:47.480966   61740 profile.go:143] Saving config to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/embed-certs-255556/config.json ...
	I0918 21:04:47.481172   61740 machine.go:93] provisionDockerMachine start ...
	I0918 21:04:47.481189   61740 main.go:141] libmachine: (embed-certs-255556) Calling .DriverName
	I0918 21:04:47.481440   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHHostname
	I0918 21:04:47.483916   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:47.484299   61740 main.go:141] libmachine: (embed-certs-255556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:c2:b7", ip: ""} in network mk-embed-certs-255556: {Iface:virbr1 ExpiryTime:2024-09-18 22:04:39 +0000 UTC Type:0 Mac:52:54:00:e8:c2:b7 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:embed-certs-255556 Clientid:01:52:54:00:e8:c2:b7}
	I0918 21:04:47.484328   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined IP address 192.168.39.21 and MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:47.484467   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHPort
	I0918 21:04:47.484703   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHKeyPath
	I0918 21:04:47.484898   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHKeyPath
	I0918 21:04:47.485043   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHUsername
	I0918 21:04:47.485185   61740 main.go:141] libmachine: Using SSH client type: native
	I0918 21:04:47.485386   61740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.21 22 <nil> <nil>}
	I0918 21:04:47.485399   61740 main.go:141] libmachine: About to run SSH command:
	hostname
	I0918 21:04:47.596243   61740 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0918 21:04:47.596272   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetMachineName
	I0918 21:04:47.596531   61740 buildroot.go:166] provisioning hostname "embed-certs-255556"
	I0918 21:04:47.596560   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetMachineName
	I0918 21:04:47.596775   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHHostname
	I0918 21:04:47.599159   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:47.599508   61740 main.go:141] libmachine: (embed-certs-255556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:c2:b7", ip: ""} in network mk-embed-certs-255556: {Iface:virbr1 ExpiryTime:2024-09-18 22:04:39 +0000 UTC Type:0 Mac:52:54:00:e8:c2:b7 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:embed-certs-255556 Clientid:01:52:54:00:e8:c2:b7}
	I0918 21:04:47.599532   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined IP address 192.168.39.21 and MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:47.599706   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHPort
	I0918 21:04:47.599888   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHKeyPath
	I0918 21:04:47.600090   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHKeyPath
	I0918 21:04:47.600229   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHUsername
	I0918 21:04:47.600406   61740 main.go:141] libmachine: Using SSH client type: native
	I0918 21:04:47.600589   61740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.21 22 <nil> <nil>}
	I0918 21:04:47.600602   61740 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-255556 && echo "embed-certs-255556" | sudo tee /etc/hostname
	I0918 21:04:47.726173   61740 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-255556
	
	I0918 21:04:47.726213   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHHostname
	I0918 21:04:47.729209   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:47.729575   61740 main.go:141] libmachine: (embed-certs-255556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:c2:b7", ip: ""} in network mk-embed-certs-255556: {Iface:virbr1 ExpiryTime:2024-09-18 22:04:39 +0000 UTC Type:0 Mac:52:54:00:e8:c2:b7 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:embed-certs-255556 Clientid:01:52:54:00:e8:c2:b7}
	I0918 21:04:47.729609   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined IP address 192.168.39.21 and MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:47.729775   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHPort
	I0918 21:04:47.729952   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHKeyPath
	I0918 21:04:47.730212   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHKeyPath
	I0918 21:04:47.730386   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHUsername
	I0918 21:04:47.730583   61740 main.go:141] libmachine: Using SSH client type: native
	I0918 21:04:47.730755   61740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.21 22 <nil> <nil>}
	I0918 21:04:47.730771   61740 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-255556' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-255556/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-255556' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0918 21:04:47.849894   61740 main.go:141] libmachine: SSH cmd err, output: <nil>: 
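	The two SSH commands above are the guest hostname provisioning for this profile: set the transient hostname, persist it to /etc/hostname, then make sure /etc/hosts resolves the name via 127.0.1.1. A condensed shell sketch of the same steps (the NAME variable is illustrative; "embed-certs-255556" is the profile name from this run, and the commands mirror the log rather than minikube's Go source):

	    NAME=embed-certs-255556
	    # transient + persistent hostname
	    sudo hostname "$NAME" && echo "$NAME" | sudo tee /etc/hostname
	    # make sure the name resolves locally via 127.0.1.1
	    if ! grep -xq ".*\s$NAME" /etc/hosts; then
	      if grep -xq '127.0.1.1\s.*' /etc/hosts; then
	        sudo sed -i "s/^127.0.1.1\s.*/127.0.1.1 $NAME/" /etc/hosts
	      else
	        echo "127.0.1.1 $NAME" | sudo tee -a /etc/hosts
	      fi
	    fi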
	I0918 21:04:47.849928   61740 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19667-7671/.minikube CaCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19667-7671/.minikube}
	I0918 21:04:47.849954   61740 buildroot.go:174] setting up certificates
	I0918 21:04:47.849961   61740 provision.go:84] configureAuth start
	I0918 21:04:47.849971   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetMachineName
	I0918 21:04:47.850307   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetIP
	I0918 21:04:47.852989   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:47.853389   61740 main.go:141] libmachine: (embed-certs-255556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:c2:b7", ip: ""} in network mk-embed-certs-255556: {Iface:virbr1 ExpiryTime:2024-09-18 22:04:39 +0000 UTC Type:0 Mac:52:54:00:e8:c2:b7 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:embed-certs-255556 Clientid:01:52:54:00:e8:c2:b7}
	I0918 21:04:47.853423   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined IP address 192.168.39.21 and MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:47.853555   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHHostname
	I0918 21:04:47.856032   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:47.856389   61740 main.go:141] libmachine: (embed-certs-255556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:c2:b7", ip: ""} in network mk-embed-certs-255556: {Iface:virbr1 ExpiryTime:2024-09-18 22:04:39 +0000 UTC Type:0 Mac:52:54:00:e8:c2:b7 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:embed-certs-255556 Clientid:01:52:54:00:e8:c2:b7}
	I0918 21:04:47.856410   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined IP address 192.168.39.21 and MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:47.856556   61740 provision.go:143] copyHostCerts
	I0918 21:04:47.856617   61740 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem, removing ...
	I0918 21:04:47.856627   61740 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem
	I0918 21:04:47.856686   61740 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem (1082 bytes)
	I0918 21:04:47.856778   61740 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem, removing ...
	I0918 21:04:47.856786   61740 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem
	I0918 21:04:47.856805   61740 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem (1123 bytes)
	I0918 21:04:47.856855   61740 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem, removing ...
	I0918 21:04:47.856862   61740 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem
	I0918 21:04:47.856881   61740 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem (1679 bytes)
	I0918 21:04:47.856929   61740 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem org=jenkins.embed-certs-255556 san=[127.0.0.1 192.168.39.21 embed-certs-255556 localhost minikube]
	I0918 21:04:48.145689   61740 provision.go:177] copyRemoteCerts
	I0918 21:04:48.145750   61740 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0918 21:04:48.145779   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHHostname
	I0918 21:04:48.148420   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:48.148785   61740 main.go:141] libmachine: (embed-certs-255556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:c2:b7", ip: ""} in network mk-embed-certs-255556: {Iface:virbr1 ExpiryTime:2024-09-18 22:04:39 +0000 UTC Type:0 Mac:52:54:00:e8:c2:b7 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:embed-certs-255556 Clientid:01:52:54:00:e8:c2:b7}
	I0918 21:04:48.148812   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined IP address 192.168.39.21 and MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:48.148983   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHPort
	I0918 21:04:48.149194   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHKeyPath
	I0918 21:04:48.149371   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHUsername
	I0918 21:04:48.149486   61740 sshutil.go:53] new ssh client: &{IP:192.168.39.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/embed-certs-255556/id_rsa Username:docker}
	I0918 21:04:48.234451   61740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0918 21:04:48.260660   61740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0918 21:04:48.283305   61740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0918 21:04:48.305919   61740 provision.go:87] duration metric: took 455.946794ms to configureAuth
	I0918 21:04:48.305954   61740 buildroot.go:189] setting minikube options for container-runtime
	I0918 21:04:48.306183   61740 config.go:182] Loaded profile config "embed-certs-255556": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0918 21:04:48.306284   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHHostname
	I0918 21:04:48.308853   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:48.309319   61740 main.go:141] libmachine: (embed-certs-255556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:c2:b7", ip: ""} in network mk-embed-certs-255556: {Iface:virbr1 ExpiryTime:2024-09-18 22:04:39 +0000 UTC Type:0 Mac:52:54:00:e8:c2:b7 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:embed-certs-255556 Clientid:01:52:54:00:e8:c2:b7}
	I0918 21:04:48.309359   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined IP address 192.168.39.21 and MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:48.309488   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHPort
	I0918 21:04:48.309706   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHKeyPath
	I0918 21:04:48.309860   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHKeyPath
	I0918 21:04:48.309976   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHUsername
	I0918 21:04:48.310134   61740 main.go:141] libmachine: Using SSH client type: native
	I0918 21:04:48.310349   61740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.21 22 <nil> <nil>}
	I0918 21:04:48.310372   61740 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0918 21:04:48.782438   62061 start.go:364] duration metric: took 3m49.184727821s to acquireMachinesLock for "old-k8s-version-740194"
	I0918 21:04:48.782503   62061 start.go:96] Skipping create...Using existing machine configuration
	I0918 21:04:48.782514   62061 fix.go:54] fixHost starting: 
	I0918 21:04:48.782993   62061 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:04:48.783053   62061 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:04:48.802299   62061 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44463
	I0918 21:04:48.802787   62061 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:04:48.803286   62061 main.go:141] libmachine: Using API Version  1
	I0918 21:04:48.803313   62061 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:04:48.803681   62061 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:04:48.803873   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .DriverName
	I0918 21:04:48.804007   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetState
	I0918 21:04:48.805714   62061 fix.go:112] recreateIfNeeded on old-k8s-version-740194: state=Stopped err=<nil>
	I0918 21:04:48.805744   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .DriverName
	W0918 21:04:48.805910   62061 fix.go:138] unexpected machine state, will restart: <nil>
	I0918 21:04:48.835402   62061 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-740194" ...
	I0918 21:04:48.836753   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .Start
	I0918 21:04:48.837090   62061 main.go:141] libmachine: (old-k8s-version-740194) Ensuring networks are active...
	I0918 21:04:48.838014   62061 main.go:141] libmachine: (old-k8s-version-740194) Ensuring network default is active
	I0918 21:04:48.838375   62061 main.go:141] libmachine: (old-k8s-version-740194) Ensuring network mk-old-k8s-version-740194 is active
	I0918 21:04:48.839035   62061 main.go:141] libmachine: (old-k8s-version-740194) Getting domain xml...
	I0918 21:04:48.839832   62061 main.go:141] libmachine: (old-k8s-version-740194) Creating domain...
	I0918 21:04:48.532928   61740 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0918 21:04:48.532952   61740 machine.go:96] duration metric: took 1.051769616s to provisionDockerMachine
	I0918 21:04:48.532962   61740 start.go:293] postStartSetup for "embed-certs-255556" (driver="kvm2")
	I0918 21:04:48.532973   61740 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0918 21:04:48.532991   61740 main.go:141] libmachine: (embed-certs-255556) Calling .DriverName
	I0918 21:04:48.533310   61740 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0918 21:04:48.533342   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHHostname
	I0918 21:04:48.536039   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:48.536529   61740 main.go:141] libmachine: (embed-certs-255556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:c2:b7", ip: ""} in network mk-embed-certs-255556: {Iface:virbr1 ExpiryTime:2024-09-18 22:04:39 +0000 UTC Type:0 Mac:52:54:00:e8:c2:b7 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:embed-certs-255556 Clientid:01:52:54:00:e8:c2:b7}
	I0918 21:04:48.536558   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined IP address 192.168.39.21 and MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:48.536631   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHPort
	I0918 21:04:48.536806   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHKeyPath
	I0918 21:04:48.536971   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHUsername
	I0918 21:04:48.537148   61740 sshutil.go:53] new ssh client: &{IP:192.168.39.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/embed-certs-255556/id_rsa Username:docker}
	I0918 21:04:48.623154   61740 ssh_runner.go:195] Run: cat /etc/os-release
	I0918 21:04:48.627520   61740 info.go:137] Remote host: Buildroot 2023.02.9
	I0918 21:04:48.627544   61740 filesync.go:126] Scanning /home/jenkins/minikube-integration/19667-7671/.minikube/addons for local assets ...
	I0918 21:04:48.627617   61740 filesync.go:126] Scanning /home/jenkins/minikube-integration/19667-7671/.minikube/files for local assets ...
	I0918 21:04:48.627711   61740 filesync.go:149] local asset: /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem -> 148782.pem in /etc/ssl/certs
	I0918 21:04:48.627827   61740 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0918 21:04:48.637145   61740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem --> /etc/ssl/certs/148782.pem (1708 bytes)
	I0918 21:04:48.661971   61740 start.go:296] duration metric: took 128.997987ms for postStartSetup
	I0918 21:04:48.662012   61740 fix.go:56] duration metric: took 20.352947161s for fixHost
	I0918 21:04:48.662034   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHHostname
	I0918 21:04:48.665153   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:48.665637   61740 main.go:141] libmachine: (embed-certs-255556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:c2:b7", ip: ""} in network mk-embed-certs-255556: {Iface:virbr1 ExpiryTime:2024-09-18 22:04:39 +0000 UTC Type:0 Mac:52:54:00:e8:c2:b7 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:embed-certs-255556 Clientid:01:52:54:00:e8:c2:b7}
	I0918 21:04:48.665668   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined IP address 192.168.39.21 and MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:48.665853   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHPort
	I0918 21:04:48.666090   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHKeyPath
	I0918 21:04:48.666289   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHKeyPath
	I0918 21:04:48.666607   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHUsername
	I0918 21:04:48.666784   61740 main.go:141] libmachine: Using SSH client type: native
	I0918 21:04:48.667024   61740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.21 22 <nil> <nil>}
	I0918 21:04:48.667040   61740 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0918 21:04:48.782245   61740 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726693488.758182538
	
	I0918 21:04:48.782286   61740 fix.go:216] guest clock: 1726693488.758182538
	I0918 21:04:48.782297   61740 fix.go:229] Guest: 2024-09-18 21:04:48.758182538 +0000 UTC Remote: 2024-09-18 21:04:48.662016609 +0000 UTC m=+270.354724953 (delta=96.165929ms)
	I0918 21:04:48.782322   61740 fix.go:200] guest clock delta is within tolerance: 96.165929ms
	I0918 21:04:48.782329   61740 start.go:83] releasing machines lock for "embed-certs-255556", held for 20.47331123s
	I0918 21:04:48.782358   61740 main.go:141] libmachine: (embed-certs-255556) Calling .DriverName
	I0918 21:04:48.782655   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetIP
	I0918 21:04:48.785572   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:48.785959   61740 main.go:141] libmachine: (embed-certs-255556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:c2:b7", ip: ""} in network mk-embed-certs-255556: {Iface:virbr1 ExpiryTime:2024-09-18 22:04:39 +0000 UTC Type:0 Mac:52:54:00:e8:c2:b7 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:embed-certs-255556 Clientid:01:52:54:00:e8:c2:b7}
	I0918 21:04:48.785988   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined IP address 192.168.39.21 and MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:48.786181   61740 main.go:141] libmachine: (embed-certs-255556) Calling .DriverName
	I0918 21:04:48.786653   61740 main.go:141] libmachine: (embed-certs-255556) Calling .DriverName
	I0918 21:04:48.786859   61740 main.go:141] libmachine: (embed-certs-255556) Calling .DriverName
	I0918 21:04:48.787019   61740 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0918 21:04:48.787083   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHHostname
	I0918 21:04:48.787118   61740 ssh_runner.go:195] Run: cat /version.json
	I0918 21:04:48.787142   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHHostname
	I0918 21:04:48.789834   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:48.790239   61740 main.go:141] libmachine: (embed-certs-255556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:c2:b7", ip: ""} in network mk-embed-certs-255556: {Iface:virbr1 ExpiryTime:2024-09-18 22:04:39 +0000 UTC Type:0 Mac:52:54:00:e8:c2:b7 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:embed-certs-255556 Clientid:01:52:54:00:e8:c2:b7}
	I0918 21:04:48.790290   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined IP address 192.168.39.21 and MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:48.790326   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:48.790625   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHPort
	I0918 21:04:48.790805   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHKeyPath
	I0918 21:04:48.790828   61740 main.go:141] libmachine: (embed-certs-255556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:c2:b7", ip: ""} in network mk-embed-certs-255556: {Iface:virbr1 ExpiryTime:2024-09-18 22:04:39 +0000 UTC Type:0 Mac:52:54:00:e8:c2:b7 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:embed-certs-255556 Clientid:01:52:54:00:e8:c2:b7}
	I0918 21:04:48.790860   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined IP address 192.168.39.21 and MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:48.791012   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHUsername
	I0918 21:04:48.791035   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHPort
	I0918 21:04:48.791172   61740 sshutil.go:53] new ssh client: &{IP:192.168.39.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/embed-certs-255556/id_rsa Username:docker}
	I0918 21:04:48.791251   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHKeyPath
	I0918 21:04:48.791406   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHUsername
	I0918 21:04:48.791537   61740 sshutil.go:53] new ssh client: &{IP:192.168.39.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/embed-certs-255556/id_rsa Username:docker}
	I0918 21:04:48.911282   61740 ssh_runner.go:195] Run: systemctl --version
	I0918 21:04:48.917459   61740 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0918 21:04:49.062272   61740 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0918 21:04:49.068629   61740 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0918 21:04:49.068709   61740 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0918 21:04:49.085575   61740 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
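	The find invocation above sidelines any pre-existing bridge or podman CNI configs (here 87-podman-bridge.conflist) by renaming them with a .mk_disabled suffix, so they do not conflict with the CNI that minikube configures later. Roughly the same command written out for readability (a sketch, not a verbatim copy of the log's -printf/-exec form):

	    # move every bridge/podman CNI config aside unless it is already disabled
	    sudo find /etc/cni/net.d -maxdepth 1 -type f \
	        \( \( -name '*bridge*' -o -name '*podman*' \) -a ! -name '*.mk_disabled' \) \
	        -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;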
	I0918 21:04:49.085607   61740 start.go:495] detecting cgroup driver to use...
	I0918 21:04:49.085677   61740 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0918 21:04:49.102455   61740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0918 21:04:49.117869   61740 docker.go:217] disabling cri-docker service (if available) ...
	I0918 21:04:49.117958   61740 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0918 21:04:49.135361   61740 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0918 21:04:49.150861   61740 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0918 21:04:49.285901   61740 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0918 21:04:49.438312   61740 docker.go:233] disabling docker service ...
	I0918 21:04:49.438390   61740 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0918 21:04:49.454560   61740 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0918 21:04:49.471109   61740 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0918 21:04:49.631711   61740 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0918 21:04:49.760860   61740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0918 21:04:49.778574   61740 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0918 21:04:49.797293   61740 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0918 21:04:49.797365   61740 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:04:49.808796   61740 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0918 21:04:49.808872   61740 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:04:49.821451   61740 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:04:49.834678   61740 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:04:49.847521   61740 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0918 21:04:49.860918   61740 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:04:49.873942   61740 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:04:49.892983   61740 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:04:49.904925   61740 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0918 21:04:49.916195   61740 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0918 21:04:49.916310   61740 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0918 21:04:49.931084   61740 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0918 21:04:49.942692   61740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 21:04:50.065013   61740 ssh_runner.go:195] Run: sudo systemctl restart crio
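	Taken together, the commands above are the CRI-O preparation for this run: point crictl at the CRI-O socket, pin the pause image expected by Kubernetes v1.31.1, switch the cgroup manager to cgroupfs with conmon in the pod cgroup, open unprivileged low ports, load br_netfilter, enable IPv4 forwarding, and restart the service. A condensed sketch of the same edits against the 02-crio.conf drop-in shown in the log (the CONF variable is illustrative; commands and paths are taken from the log, not from minikube's implementation):

	    CONF=/etc/crio/crio.conf.d/02-crio.conf
	    # point crictl at the CRI-O socket
	    printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
	    # pause image and cgroup settings expected by this Kubernetes version
	    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' "$CONF"
	    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
	    sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
	    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
	    # let pods bind ports below 1024 without extra privileges
	    sudo grep -q '^ *default_sysctls' "$CONF" || \
	        sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' "$CONF"
	    sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' "$CONF"
	    # kernel prerequisites, then restart CRI-O
	    sudo modprobe br_netfilter
	    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
	    sudo systemctl daemon-reload && sudo systemctl restart crio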
	I0918 21:04:50.168347   61740 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0918 21:04:50.168440   61740 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0918 21:04:50.174948   61740 start.go:563] Will wait 60s for crictl version
	I0918 21:04:50.175017   61740 ssh_runner.go:195] Run: which crictl
	I0918 21:04:50.180139   61740 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0918 21:04:50.221578   61740 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0918 21:04:50.221687   61740 ssh_runner.go:195] Run: crio --version
	I0918 21:04:50.251587   61740 ssh_runner.go:195] Run: crio --version
	I0918 21:04:50.282931   61740 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0918 21:04:48.112865   61659 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-828868" in "kube-system" namespace has status "Ready":"True"
	I0918 21:04:48.112895   61659 pod_ready.go:82] duration metric: took 6.007950768s for pod "kube-controller-manager-default-k8s-diff-port-828868" in "kube-system" namespace to be "Ready" ...
	I0918 21:04:48.112909   61659 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-jz7ls" in "kube-system" namespace to be "Ready" ...
	I0918 21:04:48.118606   61659 pod_ready.go:93] pod "kube-proxy-jz7ls" in "kube-system" namespace has status "Ready":"True"
	I0918 21:04:48.118628   61659 pod_ready.go:82] duration metric: took 5.710918ms for pod "kube-proxy-jz7ls" in "kube-system" namespace to be "Ready" ...
	I0918 21:04:48.118647   61659 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-828868" in "kube-system" namespace to be "Ready" ...
	I0918 21:04:49.626081   61659 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-828868" in "kube-system" namespace has status "Ready":"True"
	I0918 21:04:49.626116   61659 pod_ready.go:82] duration metric: took 1.507459822s for pod "kube-scheduler-default-k8s-diff-port-828868" in "kube-system" namespace to be "Ready" ...
	I0918 21:04:49.626130   61659 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace to be "Ready" ...
	I0918 21:04:51.635306   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:04:50.284258   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetIP
	I0918 21:04:50.287321   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:50.287754   61740 main.go:141] libmachine: (embed-certs-255556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:c2:b7", ip: ""} in network mk-embed-certs-255556: {Iface:virbr1 ExpiryTime:2024-09-18 22:04:39 +0000 UTC Type:0 Mac:52:54:00:e8:c2:b7 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:embed-certs-255556 Clientid:01:52:54:00:e8:c2:b7}
	I0918 21:04:50.287782   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined IP address 192.168.39.21 and MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:50.288116   61740 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0918 21:04:50.292221   61740 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0918 21:04:50.304472   61740 kubeadm.go:883] updating cluster {Name:embed-certs-255556 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.1 ClusterName:embed-certs-255556 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.21 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0918 21:04:50.304604   61740 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0918 21:04:50.304675   61740 ssh_runner.go:195] Run: sudo crictl images --output json
	I0918 21:04:50.343445   61740 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0918 21:04:50.343527   61740 ssh_runner.go:195] Run: which lz4
	I0918 21:04:50.347600   61740 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0918 21:04:50.351647   61740 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0918 21:04:50.351679   61740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0918 21:04:51.665892   61740 crio.go:462] duration metric: took 1.318339658s to copy over tarball
	I0918 21:04:51.665970   61740 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0918 21:04:50.157515   62061 main.go:141] libmachine: (old-k8s-version-740194) Waiting to get IP...
	I0918 21:04:50.158369   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:04:50.158796   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | unable to find current IP address of domain old-k8s-version-740194 in network mk-old-k8s-version-740194
	I0918 21:04:50.158931   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | I0918 21:04:50.158765   63026 retry.go:31] will retry after 214.617426ms: waiting for machine to come up
	I0918 21:04:50.375418   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:04:50.375882   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | unable to find current IP address of domain old-k8s-version-740194 in network mk-old-k8s-version-740194
	I0918 21:04:50.375910   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | I0918 21:04:50.375834   63026 retry.go:31] will retry after 311.080996ms: waiting for machine to come up
	I0918 21:04:50.688569   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:04:50.689151   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | unable to find current IP address of domain old-k8s-version-740194 in network mk-old-k8s-version-740194
	I0918 21:04:50.689182   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | I0918 21:04:50.689112   63026 retry.go:31] will retry after 386.384815ms: waiting for machine to come up
	I0918 21:04:51.076846   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:04:51.077336   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | unable to find current IP address of domain old-k8s-version-740194 in network mk-old-k8s-version-740194
	I0918 21:04:51.077359   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | I0918 21:04:51.077280   63026 retry.go:31] will retry after 556.210887ms: waiting for machine to come up
	I0918 21:04:51.634919   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:04:51.635423   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | unable to find current IP address of domain old-k8s-version-740194 in network mk-old-k8s-version-740194
	I0918 21:04:51.635462   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | I0918 21:04:51.635325   63026 retry.go:31] will retry after 607.960565ms: waiting for machine to come up
	I0918 21:04:52.245179   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:04:52.245741   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | unable to find current IP address of domain old-k8s-version-740194 in network mk-old-k8s-version-740194
	I0918 21:04:52.245774   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | I0918 21:04:52.245692   63026 retry.go:31] will retry after 620.243825ms: waiting for machine to come up
	I0918 21:04:52.867067   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:04:52.867658   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | unable to find current IP address of domain old-k8s-version-740194 in network mk-old-k8s-version-740194
	I0918 21:04:52.867680   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | I0918 21:04:52.867608   63026 retry.go:31] will retry after 1.091814923s: waiting for machine to come up
	I0918 21:04:53.961395   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:04:53.961819   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | unable to find current IP address of domain old-k8s-version-740194 in network mk-old-k8s-version-740194
	I0918 21:04:53.961842   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | I0918 21:04:53.961784   63026 retry.go:31] will retry after 1.130552716s: waiting for machine to come up
	I0918 21:04:54.133598   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:04:56.134938   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:04:53.837558   61740 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.171557505s)
	I0918 21:04:53.837589   61740 crio.go:469] duration metric: took 2.171667234s to extract the tarball
	I0918 21:04:53.837610   61740 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0918 21:04:53.876381   61740 ssh_runner.go:195] Run: sudo crictl images --output json
	I0918 21:04:53.924938   61740 crio.go:514] all images are preloaded for cri-o runtime.
	I0918 21:04:53.924968   61740 cache_images.go:84] Images are preloaded, skipping loading
	I0918 21:04:53.924979   61740 kubeadm.go:934] updating node { 192.168.39.21 8443 v1.31.1 crio true true} ...
	I0918 21:04:53.925115   61740 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-255556 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.21
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-255556 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0918 21:04:53.925203   61740 ssh_runner.go:195] Run: crio config
	I0918 21:04:53.969048   61740 cni.go:84] Creating CNI manager for ""
	I0918 21:04:53.969076   61740 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0918 21:04:53.969086   61740 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0918 21:04:53.969105   61740 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.21 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-255556 NodeName:embed-certs-255556 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.21"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.21 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0918 21:04:53.969240   61740 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.21
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-255556"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.21
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.21"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0918 21:04:53.969298   61740 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0918 21:04:53.978636   61740 binaries.go:44] Found k8s binaries, skipping transfer
	I0918 21:04:53.978702   61740 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0918 21:04:53.988580   61740 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0918 21:04:54.005819   61740 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0918 21:04:54.021564   61740 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0918 21:04:54.038702   61740 ssh_runner.go:195] Run: grep 192.168.39.21	control-plane.minikube.internal$ /etc/hosts
	I0918 21:04:54.042536   61740 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.21	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
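	The bash one-liner above pins control-plane.minikube.internal to the node IP in /etc/hosts (the same pattern was used earlier for host.minikube.internal). Written out step by step, with an arbitrary temporary file name, it amounts to:

	    # drop any stale entry, append the current one, then swap the file into place
	    grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts > /tmp/hosts.new
	    echo $'192.168.39.21\tcontrol-plane.minikube.internal' >> /tmp/hosts.new
	    sudo cp /tmp/hosts.new /etc/hosts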
	I0918 21:04:54.053896   61740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 21:04:54.180842   61740 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0918 21:04:54.197701   61740 certs.go:68] Setting up /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/embed-certs-255556 for IP: 192.168.39.21
	I0918 21:04:54.197731   61740 certs.go:194] generating shared ca certs ...
	I0918 21:04:54.197754   61740 certs.go:226] acquiring lock for ca certs: {Name:mkf54e0e71ed884c4372f9d3d4cb308ea4600185 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 21:04:54.197953   61740 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.key
	I0918 21:04:54.198020   61740 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.key
	I0918 21:04:54.198034   61740 certs.go:256] generating profile certs ...
	I0918 21:04:54.198129   61740 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/embed-certs-255556/client.key
	I0918 21:04:54.198191   61740 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/embed-certs-255556/apiserver.key.4704fd19
	I0918 21:04:54.198225   61740 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/embed-certs-255556/proxy-client.key
	I0918 21:04:54.198326   61740 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878.pem (1338 bytes)
	W0918 21:04:54.198358   61740 certs.go:480] ignoring /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878_empty.pem, impossibly tiny 0 bytes
	I0918 21:04:54.198370   61740 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem (1675 bytes)
	I0918 21:04:54.198420   61740 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem (1082 bytes)
	I0918 21:04:54.198463   61740 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem (1123 bytes)
	I0918 21:04:54.198498   61740 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem (1679 bytes)
	I0918 21:04:54.198566   61740 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem (1708 bytes)
	I0918 21:04:54.199258   61740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0918 21:04:54.231688   61740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0918 21:04:54.276366   61740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0918 21:04:54.320929   61740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0918 21:04:54.348698   61740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/embed-certs-255556/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0918 21:04:54.375168   61740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/embed-certs-255556/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0918 21:04:54.399159   61740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/embed-certs-255556/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0918 21:04:54.427975   61740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/embed-certs-255556/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0918 21:04:54.454648   61740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878.pem --> /usr/share/ca-certificates/14878.pem (1338 bytes)
	I0918 21:04:54.477518   61740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem --> /usr/share/ca-certificates/148782.pem (1708 bytes)
	I0918 21:04:54.500703   61740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0918 21:04:54.523380   61740 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0918 21:04:54.540053   61740 ssh_runner.go:195] Run: openssl version
	I0918 21:04:54.545818   61740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14878.pem && ln -fs /usr/share/ca-certificates/14878.pem /etc/ssl/certs/14878.pem"
	I0918 21:04:54.557138   61740 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14878.pem
	I0918 21:04:54.561973   61740 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 18 19:57 /usr/share/ca-certificates/14878.pem
	I0918 21:04:54.562030   61740 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14878.pem
	I0918 21:04:54.568133   61740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14878.pem /etc/ssl/certs/51391683.0"
	I0918 21:04:54.578964   61740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148782.pem && ln -fs /usr/share/ca-certificates/148782.pem /etc/ssl/certs/148782.pem"
	I0918 21:04:54.590254   61740 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148782.pem
	I0918 21:04:54.594944   61740 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 18 19:57 /usr/share/ca-certificates/148782.pem
	I0918 21:04:54.595022   61740 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148782.pem
	I0918 21:04:54.600797   61740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148782.pem /etc/ssl/certs/3ec20f2e.0"
	I0918 21:04:54.612078   61740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0918 21:04:54.623280   61740 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0918 21:04:54.628636   61740 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 18 19:39 /usr/share/ca-certificates/minikubeCA.pem
	I0918 21:04:54.628711   61740 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0918 21:04:54.634847   61740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0918 21:04:54.645647   61740 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0918 21:04:54.650004   61740 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0918 21:04:54.656906   61740 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0918 21:04:54.662778   61740 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0918 21:04:54.668744   61740 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0918 21:04:54.674676   61740 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0918 21:04:54.680431   61740 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
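	The openssl runs above serve two purposes: install each CA under /etc/ssl/certs using OpenSSL's subject-hash link names (the 51391683.0, 3ec20f2e.0 and b5213941.0 symlinks), and confirm the existing control-plane certificates remain valid for at least the next 24 hours (-checkend takes seconds). A minimal sketch of both checks for a single certificate, using paths from this run:

	    # the trust-store symlink is named after the CA's subject hash
	    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
	    # non-zero exit if the cert expires within the next 86400 seconds (24h)
	    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400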
	I0918 21:04:54.686242   61740 kubeadm.go:392] StartCluster: {Name:embed-certs-255556 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-255556 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.21 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 21:04:54.686364   61740 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0918 21:04:54.686439   61740 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0918 21:04:54.724228   61740 cri.go:89] found id: ""
	I0918 21:04:54.724319   61740 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0918 21:04:54.734427   61740 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0918 21:04:54.734458   61740 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0918 21:04:54.734511   61740 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0918 21:04:54.747453   61740 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0918 21:04:54.748449   61740 kubeconfig.go:125] found "embed-certs-255556" server: "https://192.168.39.21:8443"
	I0918 21:04:54.750481   61740 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0918 21:04:54.760549   61740 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.21
	I0918 21:04:54.760585   61740 kubeadm.go:1160] stopping kube-system containers ...
	I0918 21:04:54.760599   61740 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0918 21:04:54.760659   61740 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0918 21:04:54.796334   61740 cri.go:89] found id: ""
	I0918 21:04:54.796426   61740 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0918 21:04:54.820854   61740 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0918 21:04:54.831959   61740 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0918 21:04:54.831982   61740 kubeadm.go:157] found existing configuration files:
	
	I0918 21:04:54.832075   61740 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0918 21:04:54.841872   61740 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0918 21:04:54.841952   61740 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0918 21:04:54.852032   61740 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0918 21:04:54.862101   61740 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0918 21:04:54.862176   61740 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0918 21:04:54.872575   61740 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0918 21:04:54.882283   61740 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0918 21:04:54.882386   61740 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0918 21:04:54.895907   61740 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0918 21:04:54.905410   61740 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0918 21:04:54.905484   61740 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
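
The four grep/rm pairs above are the stale-config cleanup: any kubeconfig under /etc/kubernetes that does not reference https://control-plane.minikube.internal:8443 (here none of the files even exist) is removed so the following kubeadm phases can regenerate it. A compact sketch of that loop, with the endpoint and file list taken from the log:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // cleanStaleKubeconfigs removes any kubeconfig that does not reference the
    // expected control-plane endpoint, matching the grep/rm pairs in the log.
    func cleanStaleKubeconfigs(endpoint string, files []string) {
        for _, f := range files {
            // grep exits non-zero when the endpoint (or the file itself) is missing.
            if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
                fmt.Printf("%s may not reference %s - removing\n", f, endpoint)
                exec.Command("sudo", "rm", "-f", f).Run()
            }
        }
    }

    func main() {
        cleanStaleKubeconfigs("https://control-plane.minikube.internal:8443", []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        })
    }
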
	I0918 21:04:54.914938   61740 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0918 21:04:54.924536   61740 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:04:55.035830   61740 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:04:55.975305   61740 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:04:56.227988   61740 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:04:56.304760   61740 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
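
Rather than a full kubeadm init, the restart replays individual init phases against the generated kubeadm.yaml: certs, kubeconfig, kubelet-start, control-plane and etcd, with the addon phase following once the apiserver is healthy. A hedged sketch of that driver loop (binary and config paths copied from the log, error handling simplified):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        kubeadm := "/var/lib/minikube/binaries/v1.31.1/kubeadm"
        cfg := "/var/tmp/minikube/kubeadm.yaml"
        phases := [][]string{
            {"init", "phase", "certs", "all"},
            {"init", "phase", "kubeconfig", "all"},
            {"init", "phase", "kubelet-start"},
            {"init", "phase", "control-plane", "all"},
            {"init", "phase", "etcd", "local"},
        }
        for _, p := range phases {
            args := append(append([]string{kubeadm}, p...), "--config", cfg)
            if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
                fmt.Printf("phase %v failed: %v\n%s\n", p, err, out)
                return
            }
        }
    }
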
	I0918 21:04:56.375088   61740 api_server.go:52] waiting for apiserver process to appear ...
	I0918 21:04:56.375185   61740 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:04:56.875319   61740 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:04:57.375240   61740 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:04:57.875532   61740 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:04:55.093491   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:04:55.093956   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | unable to find current IP address of domain old-k8s-version-740194 in network mk-old-k8s-version-740194
	I0918 21:04:55.093982   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | I0918 21:04:55.093923   63026 retry.go:31] will retry after 1.824664154s: waiting for machine to come up
	I0918 21:04:56.920959   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:04:56.921371   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | unable to find current IP address of domain old-k8s-version-740194 in network mk-old-k8s-version-740194
	I0918 21:04:56.921422   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | I0918 21:04:56.921354   63026 retry.go:31] will retry after 1.591260677s: waiting for machine to come up
	I0918 21:04:58.514832   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:04:58.515294   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | unable to find current IP address of domain old-k8s-version-740194 in network mk-old-k8s-version-740194
	I0918 21:04:58.515322   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | I0918 21:04:58.515262   63026 retry.go:31] will retry after 1.868763497s: waiting for machine to come up
	I0918 21:04:58.135056   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:00.633540   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:04:58.375400   61740 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:04:58.392935   61740 api_server.go:72] duration metric: took 2.017847705s to wait for apiserver process to appear ...
	I0918 21:04:58.393110   61740 api_server.go:88] waiting for apiserver healthz status ...
	I0918 21:04:58.393152   61740 api_server.go:253] Checking apiserver healthz at https://192.168.39.21:8443/healthz ...
	I0918 21:04:58.393699   61740 api_server.go:269] stopped: https://192.168.39.21:8443/healthz: Get "https://192.168.39.21:8443/healthz": dial tcp 192.168.39.21:8443: connect: connection refused
	I0918 21:04:58.893291   61740 api_server.go:253] Checking apiserver healthz at https://192.168.39.21:8443/healthz ...
	I0918 21:05:01.124915   61740 api_server.go:279] https://192.168.39.21:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0918 21:05:01.124954   61740 api_server.go:103] status: https://192.168.39.21:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0918 21:05:01.124991   61740 api_server.go:253] Checking apiserver healthz at https://192.168.39.21:8443/healthz ...
	I0918 21:05:01.179199   61740 api_server.go:279] https://192.168.39.21:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0918 21:05:01.179225   61740 api_server.go:103] status: https://192.168.39.21:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0918 21:05:01.393537   61740 api_server.go:253] Checking apiserver healthz at https://192.168.39.21:8443/healthz ...
	I0918 21:05:01.399577   61740 api_server.go:279] https://192.168.39.21:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0918 21:05:01.399610   61740 api_server.go:103] status: https://192.168.39.21:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0918 21:05:01.894174   61740 api_server.go:253] Checking apiserver healthz at https://192.168.39.21:8443/healthz ...
	I0918 21:05:01.899086   61740 api_server.go:279] https://192.168.39.21:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0918 21:05:01.899110   61740 api_server.go:103] status: https://192.168.39.21:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0918 21:05:02.393672   61740 api_server.go:253] Checking apiserver healthz at https://192.168.39.21:8443/healthz ...
	I0918 21:05:02.401942   61740 api_server.go:279] https://192.168.39.21:8443/healthz returned 200:
	ok
	I0918 21:05:02.408523   61740 api_server.go:141] control plane version: v1.31.1
	I0918 21:05:02.408553   61740 api_server.go:131] duration metric: took 4.015427901s to wait for apiserver health ...
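
The healthz exchange above is a plain polling loop: first connection refused while the apiserver binds, then 403 because the probe is anonymous, then 500 while the rbac and priority-class post-start hooks finish, and finally 200 with body "ok". A minimal sketch of such a poller, assuming certificate verification is skipped for the unauthenticated probe:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // waitForHealthz polls the apiserver /healthz endpoint until it returns
    // 200 "ok" or the deadline expires; 403 and 500 responses are treated as
    // "not ready yet", exactly like the retries in the log.
    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
                fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver never became healthy at %s", url)
    }

    func main() {
        fmt.Println(waitForHealthz("https://192.168.39.21:8443/healthz", 2*time.Minute))
    }
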
	I0918 21:05:02.408562   61740 cni.go:84] Creating CNI manager for ""
	I0918 21:05:02.408568   61740 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0918 21:05:02.410199   61740 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0918 21:05:02.411470   61740 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0918 21:05:02.424617   61740 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0918 21:05:02.443819   61740 system_pods.go:43] waiting for kube-system pods to appear ...
	I0918 21:05:02.458892   61740 system_pods.go:59] 8 kube-system pods found
	I0918 21:05:02.458939   61740 system_pods.go:61] "coredns-7c65d6cfc9-xwn8w" [773b9a83-bb43-40d3-b3a3-40603c3b22b2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0918 21:05:02.458949   61740 system_pods.go:61] "etcd-embed-certs-255556" [ee3e7dc9-fb5a-4faa-a0b5-b84b7cd506b6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0918 21:05:02.458961   61740 system_pods.go:61] "kube-apiserver-embed-certs-255556" [c60ce069-c7a0-42d7-a7de-ce3cf91a3d43] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0918 21:05:02.458970   61740 system_pods.go:61] "kube-controller-manager-embed-certs-255556" [ac8f6b42-caa3-4815-9a90-3f7bb1f0060b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0918 21:05:02.458980   61740 system_pods.go:61] "kube-proxy-v8szm" [367f743a-399b-4d04-8604-dcd441999581] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0918 21:05:02.458993   61740 system_pods.go:61] "kube-scheduler-embed-certs-255556" [b5dd211b-7963-41ac-8b43-0a5451e3e848] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0918 21:05:02.459001   61740 system_pods.go:61] "metrics-server-6867b74b74-z8rm7" [d1b6823e-4ac5-4ac6-88ae-7f8eac622fdf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0918 21:05:02.459009   61740 system_pods.go:61] "storage-provisioner" [1575f899-35a7-4eb2-ad5f-660183f75aa6] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0918 21:05:02.459015   61740 system_pods.go:74] duration metric: took 15.172393ms to wait for pod list to return data ...
	I0918 21:05:02.459025   61740 node_conditions.go:102] verifying NodePressure condition ...
	I0918 21:05:02.463140   61740 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0918 21:05:02.463177   61740 node_conditions.go:123] node cpu capacity is 2
	I0918 21:05:02.463192   61740 node_conditions.go:105] duration metric: took 4.162401ms to run NodePressure ...
	I0918 21:05:02.463214   61740 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:05:02.757153   61740 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0918 21:05:02.761949   61740 kubeadm.go:739] kubelet initialised
	I0918 21:05:02.761977   61740 kubeadm.go:740] duration metric: took 4.79396ms waiting for restarted kubelet to initialise ...
	I0918 21:05:02.761985   61740 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0918 21:05:02.767197   61740 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-xwn8w" in "kube-system" namespace to be "Ready" ...
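
From here the restart waits up to 4m0s for each system-critical pod to report Ready. A hedged client-go sketch of that per-pod readiness check, assuming the admin kubeconfig at /etc/kubernetes/admin.conf is used to build the client (minikube's own wait code differs):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the named pod has a Ready condition of True.
    func podReady(cs *kubernetes.Clientset, ns, name string) (bool, error) {
        pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        ok, err := podReady(cs, "kube-system", "coredns-7c65d6cfc9-xwn8w")
        fmt.Println(ok, err)
    }
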
	I0918 21:05:00.385891   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:00.386451   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | unable to find current IP address of domain old-k8s-version-740194 in network mk-old-k8s-version-740194
	I0918 21:05:00.386482   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | I0918 21:05:00.386384   63026 retry.go:31] will retry after 3.274467583s: waiting for machine to come up
	I0918 21:05:03.664788   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:03.665255   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | unable to find current IP address of domain old-k8s-version-740194 in network mk-old-k8s-version-740194
	I0918 21:05:03.665286   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | I0918 21:05:03.665210   63026 retry.go:31] will retry after 4.112908346s: waiting for machine to come up
	I0918 21:05:02.634177   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:05.133431   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:07.133941   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:04.774196   61740 pod_ready.go:103] pod "coredns-7c65d6cfc9-xwn8w" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:07.273045   61740 pod_ready.go:103] pod "coredns-7c65d6cfc9-xwn8w" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:09.245246   61273 start.go:364] duration metric: took 55.195169549s to acquireMachinesLock for "no-preload-331658"
	I0918 21:05:09.245300   61273 start.go:96] Skipping create...Using existing machine configuration
	I0918 21:05:09.245311   61273 fix.go:54] fixHost starting: 
	I0918 21:05:09.245741   61273 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:05:09.245778   61273 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:05:09.263998   61273 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37335
	I0918 21:05:09.264565   61273 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:05:09.265118   61273 main.go:141] libmachine: Using API Version  1
	I0918 21:05:09.265142   61273 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:05:09.265505   61273 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:05:09.265732   61273 main.go:141] libmachine: (no-preload-331658) Calling .DriverName
	I0918 21:05:09.265901   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetState
	I0918 21:05:09.269500   61273 fix.go:112] recreateIfNeeded on no-preload-331658: state=Stopped err=<nil>
	I0918 21:05:09.269525   61273 main.go:141] libmachine: (no-preload-331658) Calling .DriverName
	W0918 21:05:09.269730   61273 fix.go:138] unexpected machine state, will restart: <nil>
	I0918 21:05:09.271448   61273 out.go:177] * Restarting existing kvm2 VM for "no-preload-331658" ...
	I0918 21:05:07.782616   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:07.783108   62061 main.go:141] libmachine: (old-k8s-version-740194) Found IP for machine: 192.168.72.53
	I0918 21:05:07.783125   62061 main.go:141] libmachine: (old-k8s-version-740194) Reserving static IP address...
	I0918 21:05:07.783135   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has current primary IP address 192.168.72.53 and MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:07.783509   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | found host DHCP lease matching {name: "old-k8s-version-740194", mac: "52:54:00:3f:a7:11", ip: "192.168.72.53"} in network mk-old-k8s-version-740194: {Iface:virbr3 ExpiryTime:2024-09-18 22:05:00 +0000 UTC Type:0 Mac:52:54:00:3f:a7:11 Iaid: IPaddr:192.168.72.53 Prefix:24 Hostname:old-k8s-version-740194 Clientid:01:52:54:00:3f:a7:11}
	I0918 21:05:07.783542   62061 main.go:141] libmachine: (old-k8s-version-740194) Reserved static IP address: 192.168.72.53
	I0918 21:05:07.783565   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | skip adding static IP to network mk-old-k8s-version-740194 - found existing host DHCP lease matching {name: "old-k8s-version-740194", mac: "52:54:00:3f:a7:11", ip: "192.168.72.53"}
	I0918 21:05:07.783583   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | Getting to WaitForSSH function...
	I0918 21:05:07.783614   62061 main.go:141] libmachine: (old-k8s-version-740194) Waiting for SSH to be available...
	I0918 21:05:07.785492   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:07.785856   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a7:11", ip: ""} in network mk-old-k8s-version-740194: {Iface:virbr3 ExpiryTime:2024-09-18 22:05:00 +0000 UTC Type:0 Mac:52:54:00:3f:a7:11 Iaid: IPaddr:192.168.72.53 Prefix:24 Hostname:old-k8s-version-740194 Clientid:01:52:54:00:3f:a7:11}
	I0918 21:05:07.785885   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined IP address 192.168.72.53 and MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:07.786000   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | Using SSH client type: external
	I0918 21:05:07.786021   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | Using SSH private key: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/old-k8s-version-740194/id_rsa (-rw-------)
	I0918 21:05:07.786044   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.53 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19667-7671/.minikube/machines/old-k8s-version-740194/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0918 21:05:07.786055   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | About to run SSH command:
	I0918 21:05:07.786065   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | exit 0
	I0918 21:05:07.915953   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | SSH cmd err, output: <nil>: 
	I0918 21:05:07.916454   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetConfigRaw
	I0918 21:05:07.917059   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetIP
	I0918 21:05:07.919749   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:07.920244   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a7:11", ip: ""} in network mk-old-k8s-version-740194: {Iface:virbr3 ExpiryTime:2024-09-18 22:05:00 +0000 UTC Type:0 Mac:52:54:00:3f:a7:11 Iaid: IPaddr:192.168.72.53 Prefix:24 Hostname:old-k8s-version-740194 Clientid:01:52:54:00:3f:a7:11}
	I0918 21:05:07.920280   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined IP address 192.168.72.53 and MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:07.920639   62061 profile.go:143] Saving config to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/old-k8s-version-740194/config.json ...
	I0918 21:05:07.920871   62061 machine.go:93] provisionDockerMachine start ...
	I0918 21:05:07.920893   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .DriverName
	I0918 21:05:07.921100   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHHostname
	I0918 21:05:07.923129   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:07.923434   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a7:11", ip: ""} in network mk-old-k8s-version-740194: {Iface:virbr3 ExpiryTime:2024-09-18 22:05:00 +0000 UTC Type:0 Mac:52:54:00:3f:a7:11 Iaid: IPaddr:192.168.72.53 Prefix:24 Hostname:old-k8s-version-740194 Clientid:01:52:54:00:3f:a7:11}
	I0918 21:05:07.923458   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined IP address 192.168.72.53 and MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:07.923573   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHPort
	I0918 21:05:07.923727   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHKeyPath
	I0918 21:05:07.923889   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHKeyPath
	I0918 21:05:07.924036   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHUsername
	I0918 21:05:07.924185   62061 main.go:141] libmachine: Using SSH client type: native
	I0918 21:05:07.924368   62061 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.53 22 <nil> <nil>}
	I0918 21:05:07.924378   62061 main.go:141] libmachine: About to run SSH command:
	hostname
	I0918 21:05:08.035941   62061 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0918 21:05:08.035972   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetMachineName
	I0918 21:05:08.036242   62061 buildroot.go:166] provisioning hostname "old-k8s-version-740194"
	I0918 21:05:08.036273   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetMachineName
	I0918 21:05:08.036512   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHHostname
	I0918 21:05:08.038902   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:08.039239   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a7:11", ip: ""} in network mk-old-k8s-version-740194: {Iface:virbr3 ExpiryTime:2024-09-18 22:05:00 +0000 UTC Type:0 Mac:52:54:00:3f:a7:11 Iaid: IPaddr:192.168.72.53 Prefix:24 Hostname:old-k8s-version-740194 Clientid:01:52:54:00:3f:a7:11}
	I0918 21:05:08.039266   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined IP address 192.168.72.53 and MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:08.039438   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHPort
	I0918 21:05:08.039632   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHKeyPath
	I0918 21:05:08.039899   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHKeyPath
	I0918 21:05:08.040072   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHUsername
	I0918 21:05:08.040248   62061 main.go:141] libmachine: Using SSH client type: native
	I0918 21:05:08.040415   62061 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.53 22 <nil> <nil>}
	I0918 21:05:08.040428   62061 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-740194 && echo "old-k8s-version-740194" | sudo tee /etc/hostname
	I0918 21:05:08.165391   62061 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-740194
	
	I0918 21:05:08.165424   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHHostname
	I0918 21:05:08.168059   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:08.168396   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a7:11", ip: ""} in network mk-old-k8s-version-740194: {Iface:virbr3 ExpiryTime:2024-09-18 22:05:00 +0000 UTC Type:0 Mac:52:54:00:3f:a7:11 Iaid: IPaddr:192.168.72.53 Prefix:24 Hostname:old-k8s-version-740194 Clientid:01:52:54:00:3f:a7:11}
	I0918 21:05:08.168428   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined IP address 192.168.72.53 and MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:08.168621   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHPort
	I0918 21:05:08.168837   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHKeyPath
	I0918 21:05:08.168988   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHKeyPath
	I0918 21:05:08.169231   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHUsername
	I0918 21:05:08.169413   62061 main.go:141] libmachine: Using SSH client type: native
	I0918 21:05:08.169579   62061 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.53 22 <nil> <nil>}
	I0918 21:05:08.169594   62061 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-740194' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-740194/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-740194' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0918 21:05:08.288591   62061 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0918 21:05:08.288644   62061 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19667-7671/.minikube CaCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19667-7671/.minikube}
	I0918 21:05:08.288671   62061 buildroot.go:174] setting up certificates
	I0918 21:05:08.288682   62061 provision.go:84] configureAuth start
	I0918 21:05:08.288694   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetMachineName
	I0918 21:05:08.289017   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetIP
	I0918 21:05:08.291949   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:08.292358   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a7:11", ip: ""} in network mk-old-k8s-version-740194: {Iface:virbr3 ExpiryTime:2024-09-18 22:05:00 +0000 UTC Type:0 Mac:52:54:00:3f:a7:11 Iaid: IPaddr:192.168.72.53 Prefix:24 Hostname:old-k8s-version-740194 Clientid:01:52:54:00:3f:a7:11}
	I0918 21:05:08.292405   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined IP address 192.168.72.53 and MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:08.292526   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHHostname
	I0918 21:05:08.295013   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:08.295378   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a7:11", ip: ""} in network mk-old-k8s-version-740194: {Iface:virbr3 ExpiryTime:2024-09-18 22:05:00 +0000 UTC Type:0 Mac:52:54:00:3f:a7:11 Iaid: IPaddr:192.168.72.53 Prefix:24 Hostname:old-k8s-version-740194 Clientid:01:52:54:00:3f:a7:11}
	I0918 21:05:08.295399   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined IP address 192.168.72.53 and MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:08.295567   62061 provision.go:143] copyHostCerts
	I0918 21:05:08.295630   62061 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem, removing ...
	I0918 21:05:08.295640   62061 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem
	I0918 21:05:08.295692   62061 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem (1082 bytes)
	I0918 21:05:08.295783   62061 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem, removing ...
	I0918 21:05:08.295790   62061 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem
	I0918 21:05:08.295810   62061 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem (1123 bytes)
	I0918 21:05:08.295917   62061 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem, removing ...
	I0918 21:05:08.295926   62061 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem
	I0918 21:05:08.295949   62061 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem (1679 bytes)
	I0918 21:05:08.296009   62061 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-740194 san=[127.0.0.1 192.168.72.53 localhost minikube old-k8s-version-740194]
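
The server certificate generated above is signed by the shared minikube CA and carries the SANs listed in the log (127.0.0.1, 192.168.72.53, localhost, minikube, old-k8s-version-740194). A rough standard-library sketch of producing such a cert, using a throwaway self-signed CA for illustration and eliding error handling:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Throwaway CA for illustration; minikube loads ca.pem/ca-key.pem from disk.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Server certificate with the SANs from the log line above.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "old-k8s-version-740194"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"localhost", "minikube", "old-k8s-version-740194"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.53")},
        }
        srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
    }
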
	I0918 21:05:08.560726   62061 provision.go:177] copyRemoteCerts
	I0918 21:05:08.560786   62061 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0918 21:05:08.560816   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHHostname
	I0918 21:05:08.563798   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:08.564231   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a7:11", ip: ""} in network mk-old-k8s-version-740194: {Iface:virbr3 ExpiryTime:2024-09-18 22:05:00 +0000 UTC Type:0 Mac:52:54:00:3f:a7:11 Iaid: IPaddr:192.168.72.53 Prefix:24 Hostname:old-k8s-version-740194 Clientid:01:52:54:00:3f:a7:11}
	I0918 21:05:08.564266   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined IP address 192.168.72.53 and MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:08.564473   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHPort
	I0918 21:05:08.564704   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHKeyPath
	I0918 21:05:08.564876   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHUsername
	I0918 21:05:08.565016   62061 sshutil.go:53] new ssh client: &{IP:192.168.72.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/old-k8s-version-740194/id_rsa Username:docker}
	I0918 21:05:08.654357   62061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0918 21:05:08.680551   62061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0918 21:05:08.704324   62061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0918 21:05:08.727966   62061 provision.go:87] duration metric: took 439.269312ms to configureAuth
	I0918 21:05:08.728003   62061 buildroot.go:189] setting minikube options for container-runtime
	I0918 21:05:08.728235   62061 config.go:182] Loaded profile config "old-k8s-version-740194": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0918 21:05:08.728305   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHHostname
	I0918 21:05:08.730895   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:08.731318   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a7:11", ip: ""} in network mk-old-k8s-version-740194: {Iface:virbr3 ExpiryTime:2024-09-18 22:05:00 +0000 UTC Type:0 Mac:52:54:00:3f:a7:11 Iaid: IPaddr:192.168.72.53 Prefix:24 Hostname:old-k8s-version-740194 Clientid:01:52:54:00:3f:a7:11}
	I0918 21:05:08.731348   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined IP address 192.168.72.53 and MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:08.731448   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHPort
	I0918 21:05:08.731668   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHKeyPath
	I0918 21:05:08.731884   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHKeyPath
	I0918 21:05:08.732057   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHUsername
	I0918 21:05:08.732223   62061 main.go:141] libmachine: Using SSH client type: native
	I0918 21:05:08.732396   62061 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.53 22 <nil> <nil>}
	I0918 21:05:08.732411   62061 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0918 21:05:08.978962   62061 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0918 21:05:08.978995   62061 machine.go:96] duration metric: took 1.05811075s to provisionDockerMachine
	I0918 21:05:08.979008   62061 start.go:293] postStartSetup for "old-k8s-version-740194" (driver="kvm2")
	I0918 21:05:08.979020   62061 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0918 21:05:08.979050   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .DriverName
	I0918 21:05:08.979409   62061 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0918 21:05:08.979436   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHHostname
	I0918 21:05:08.982472   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:08.982791   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a7:11", ip: ""} in network mk-old-k8s-version-740194: {Iface:virbr3 ExpiryTime:2024-09-18 22:05:00 +0000 UTC Type:0 Mac:52:54:00:3f:a7:11 Iaid: IPaddr:192.168.72.53 Prefix:24 Hostname:old-k8s-version-740194 Clientid:01:52:54:00:3f:a7:11}
	I0918 21:05:08.982830   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined IP address 192.168.72.53 and MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:08.982996   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHPort
	I0918 21:05:08.983192   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHKeyPath
	I0918 21:05:08.983341   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHUsername
	I0918 21:05:08.983510   62061 sshutil.go:53] new ssh client: &{IP:192.168.72.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/old-k8s-version-740194/id_rsa Username:docker}
	I0918 21:05:09.074995   62061 ssh_runner.go:195] Run: cat /etc/os-release
	I0918 21:05:09.080207   62061 info.go:137] Remote host: Buildroot 2023.02.9
	I0918 21:05:09.080240   62061 filesync.go:126] Scanning /home/jenkins/minikube-integration/19667-7671/.minikube/addons for local assets ...
	I0918 21:05:09.080327   62061 filesync.go:126] Scanning /home/jenkins/minikube-integration/19667-7671/.minikube/files for local assets ...
	I0918 21:05:09.080451   62061 filesync.go:149] local asset: /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem -> 148782.pem in /etc/ssl/certs
	I0918 21:05:09.080583   62061 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0918 21:05:09.091374   62061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem --> /etc/ssl/certs/148782.pem (1708 bytes)
	I0918 21:05:09.119546   62061 start.go:296] duration metric: took 140.521158ms for postStartSetup
	I0918 21:05:09.119593   62061 fix.go:56] duration metric: took 20.337078765s for fixHost
	I0918 21:05:09.119620   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHHostname
	I0918 21:05:09.122534   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:09.123157   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a7:11", ip: ""} in network mk-old-k8s-version-740194: {Iface:virbr3 ExpiryTime:2024-09-18 22:05:00 +0000 UTC Type:0 Mac:52:54:00:3f:a7:11 Iaid: IPaddr:192.168.72.53 Prefix:24 Hostname:old-k8s-version-740194 Clientid:01:52:54:00:3f:a7:11}
	I0918 21:05:09.123187   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined IP address 192.168.72.53 and MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:09.123447   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHPort
	I0918 21:05:09.123695   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHKeyPath
	I0918 21:05:09.123891   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHKeyPath
	I0918 21:05:09.124095   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHUsername
	I0918 21:05:09.124373   62061 main.go:141] libmachine: Using SSH client type: native
	I0918 21:05:09.124582   62061 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.53 22 <nil> <nil>}
	I0918 21:05:09.124595   62061 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0918 21:05:09.245082   62061 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726693509.219779478
	
	I0918 21:05:09.245109   62061 fix.go:216] guest clock: 1726693509.219779478
	I0918 21:05:09.245119   62061 fix.go:229] Guest: 2024-09-18 21:05:09.219779478 +0000 UTC Remote: 2024-09-18 21:05:09.11959842 +0000 UTC m=+249.666759777 (delta=100.181058ms)
	I0918 21:05:09.245139   62061 fix.go:200] guest clock delta is within tolerance: 100.181058ms
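
The clock check runs date +%s.%N on the guest and compares the result with the host's capture time; here the 100.181058ms offset is inside the tolerance, so no resync is needed. A small sketch of that comparison, with the two timestamps taken from the log and a hypothetical one-second threshold:

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // clockDelta parses the guest's `date +%s.%N` output and returns how far
    // the guest clock is ahead of the host's capture time.
    func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
        parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return 0, err
        }
        var nsec int64
        if len(parts) == 2 {
            nsec, _ = strconv.ParseInt(parts[1], 10, 64)
        }
        return time.Unix(sec, nsec).Sub(host), nil
    }

    func main() {
        // Both values copied from the log lines above.
        host := time.Date(2024, 9, 18, 21, 5, 9, 119598420, time.UTC)
        delta, err := clockDelta("1726693509.219779478", host)
        const tolerance = time.Second // hypothetical threshold, for illustration only
        fmt.Println(delta, err, delta < tolerance && delta > -tolerance)
    }
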
	I0918 21:05:09.245146   62061 start.go:83] releasing machines lock for "old-k8s-version-740194", held for 20.462669229s
	I0918 21:05:09.245176   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .DriverName
	I0918 21:05:09.245602   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetIP
	I0918 21:05:09.248653   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:09.249110   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a7:11", ip: ""} in network mk-old-k8s-version-740194: {Iface:virbr3 ExpiryTime:2024-09-18 22:05:00 +0000 UTC Type:0 Mac:52:54:00:3f:a7:11 Iaid: IPaddr:192.168.72.53 Prefix:24 Hostname:old-k8s-version-740194 Clientid:01:52:54:00:3f:a7:11}
	I0918 21:05:09.249156   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined IP address 192.168.72.53 and MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:09.249345   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .DriverName
	I0918 21:05:09.249838   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .DriverName
	I0918 21:05:09.250047   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .DriverName
	I0918 21:05:09.250167   62061 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0918 21:05:09.250230   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHHostname
	I0918 21:05:09.250286   62061 ssh_runner.go:195] Run: cat /version.json
	I0918 21:05:09.250312   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHHostname
	I0918 21:05:09.253242   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:09.253408   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:09.253687   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a7:11", ip: ""} in network mk-old-k8s-version-740194: {Iface:virbr3 ExpiryTime:2024-09-18 22:05:00 +0000 UTC Type:0 Mac:52:54:00:3f:a7:11 Iaid: IPaddr:192.168.72.53 Prefix:24 Hostname:old-k8s-version-740194 Clientid:01:52:54:00:3f:a7:11}
	I0918 21:05:09.253749   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined IP address 192.168.72.53 and MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:09.253882   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a7:11", ip: ""} in network mk-old-k8s-version-740194: {Iface:virbr3 ExpiryTime:2024-09-18 22:05:00 +0000 UTC Type:0 Mac:52:54:00:3f:a7:11 Iaid: IPaddr:192.168.72.53 Prefix:24 Hostname:old-k8s-version-740194 Clientid:01:52:54:00:3f:a7:11}
	I0918 21:05:09.253901   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined IP address 192.168.72.53 and MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:09.253944   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHPort
	I0918 21:05:09.254026   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHPort
	I0918 21:05:09.254221   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHKeyPath
	I0918 21:05:09.254243   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHKeyPath
	I0918 21:05:09.254372   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHUsername
	I0918 21:05:09.254426   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHUsername
	I0918 21:05:09.254533   62061 sshutil.go:53] new ssh client: &{IP:192.168.72.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/old-k8s-version-740194/id_rsa Username:docker}
	I0918 21:05:09.254695   62061 sshutil.go:53] new ssh client: &{IP:192.168.72.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/old-k8s-version-740194/id_rsa Username:docker}
	I0918 21:05:09.374728   62061 ssh_runner.go:195] Run: systemctl --version
	I0918 21:05:09.381117   62061 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0918 21:05:09.532059   62061 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0918 21:05:09.538041   62061 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0918 21:05:09.538124   62061 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0918 21:05:09.553871   62061 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
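[Note] The find/mv above is minikube moving any pre-existing bridge/podman CNI configs out of the way (renaming them to *.mk_disabled) so it can later lay down its own bridge config. A quick way to see the result on the node, assuming the same paths as in the log:

    ls /etc/cni/net.d/
    # expected to show 87-podman-bridge.conflist.mk_disabled instead of the original .conflist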
	I0918 21:05:09.553907   62061 start.go:495] detecting cgroup driver to use...
	I0918 21:05:09.553982   62061 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0918 21:05:09.573554   62061 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0918 21:05:09.591963   62061 docker.go:217] disabling cri-docker service (if available) ...
	I0918 21:05:09.592057   62061 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0918 21:05:09.610813   62061 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0918 21:05:09.627153   62061 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0918 21:05:09.763978   62061 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0918 21:05:09.945185   62061 docker.go:233] disabling docker service ...
	I0918 21:05:09.945257   62061 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0918 21:05:09.961076   62061 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0918 21:05:09.974660   62061 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0918 21:05:10.111191   62061 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0918 21:05:10.235058   62061 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0918 21:05:10.251389   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0918 21:05:10.270949   62061 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0918 21:05:10.271047   62061 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:05:10.284743   62061 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0918 21:05:10.284811   62061 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:05:10.295221   62061 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:05:10.305897   62061 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
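[Note] Taken together, the three sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with the following values (a sketch of the expected result; the file itself is not echoed in this log):

    pause_image = "registry.k8s.io/pause:3.2"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"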
	I0918 21:05:10.317726   62061 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0918 21:05:10.330136   62061 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0918 21:05:10.339480   62061 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0918 21:05:10.339543   62061 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0918 21:05:10.352954   62061 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
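[Note] The sysctl probe fails only because br_netfilter is not loaded yet; after the modprobe and the ip_forward write above, the usual kubeadm networking prerequisites can be re-checked with:

    sudo sysctl net.bridge.bridge-nf-call-iptables   # should now resolve (kubeadm wants 1)
    cat /proc/sys/net/ipv4/ip_forward                # should print 1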
	I0918 21:05:10.370764   62061 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 21:05:10.524315   62061 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0918 21:05:10.617374   62061 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0918 21:05:10.617466   62061 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0918 21:05:10.624164   62061 start.go:563] Will wait 60s for crictl version
	I0918 21:05:10.624222   62061 ssh_runner.go:195] Run: which crictl
	I0918 21:05:10.629583   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0918 21:05:10.673613   62061 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0918 21:05:10.673702   62061 ssh_runner.go:195] Run: crio --version
	I0918 21:05:10.703948   62061 ssh_runner.go:195] Run: crio --version
	I0918 21:05:10.733924   62061 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0918 21:05:09.272840   61273 main.go:141] libmachine: (no-preload-331658) Calling .Start
	I0918 21:05:09.273067   61273 main.go:141] libmachine: (no-preload-331658) Ensuring networks are active...
	I0918 21:05:09.274115   61273 main.go:141] libmachine: (no-preload-331658) Ensuring network default is active
	I0918 21:05:09.274576   61273 main.go:141] libmachine: (no-preload-331658) Ensuring network mk-no-preload-331658 is active
	I0918 21:05:09.275108   61273 main.go:141] libmachine: (no-preload-331658) Getting domain xml...
	I0918 21:05:09.276003   61273 main.go:141] libmachine: (no-preload-331658) Creating domain...
	I0918 21:05:10.665647   61273 main.go:141] libmachine: (no-preload-331658) Waiting to get IP...
	I0918 21:05:10.666710   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:10.667187   61273 main.go:141] libmachine: (no-preload-331658) DBG | unable to find current IP address of domain no-preload-331658 in network mk-no-preload-331658
	I0918 21:05:10.667261   61273 main.go:141] libmachine: (no-preload-331658) DBG | I0918 21:05:10.667162   63200 retry.go:31] will retry after 215.232953ms: waiting for machine to come up
	I0918 21:05:10.883691   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:10.884249   61273 main.go:141] libmachine: (no-preload-331658) DBG | unable to find current IP address of domain no-preload-331658 in network mk-no-preload-331658
	I0918 21:05:10.884283   61273 main.go:141] libmachine: (no-preload-331658) DBG | I0918 21:05:10.884185   63200 retry.go:31] will retry after 289.698979ms: waiting for machine to come up
	I0918 21:05:11.175936   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:11.176656   61273 main.go:141] libmachine: (no-preload-331658) DBG | unable to find current IP address of domain no-preload-331658 in network mk-no-preload-331658
	I0918 21:05:11.176680   61273 main.go:141] libmachine: (no-preload-331658) DBG | I0918 21:05:11.176553   63200 retry.go:31] will retry after 424.473311ms: waiting for machine to come up
	I0918 21:05:09.633671   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:11.634755   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:09.274214   61740 pod_ready.go:103] pod "coredns-7c65d6cfc9-xwn8w" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:11.275099   61740 pod_ready.go:103] pod "coredns-7c65d6cfc9-xwn8w" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:10.735500   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetIP
	I0918 21:05:10.738130   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:10.738488   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a7:11", ip: ""} in network mk-old-k8s-version-740194: {Iface:virbr3 ExpiryTime:2024-09-18 22:05:00 +0000 UTC Type:0 Mac:52:54:00:3f:a7:11 Iaid: IPaddr:192.168.72.53 Prefix:24 Hostname:old-k8s-version-740194 Clientid:01:52:54:00:3f:a7:11}
	I0918 21:05:10.738516   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined IP address 192.168.72.53 and MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:10.738831   62061 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0918 21:05:10.742780   62061 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0918 21:05:10.754785   62061 kubeadm.go:883] updating cluster {Name:old-k8s-version-740194 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-740194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.53 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0918 21:05:10.754939   62061 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0918 21:05:10.755002   62061 ssh_runner.go:195] Run: sudo crictl images --output json
	I0918 21:05:10.800452   62061 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0918 21:05:10.800526   62061 ssh_runner.go:195] Run: which lz4
	I0918 21:05:10.804522   62061 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0918 21:05:10.809179   62061 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0918 21:05:10.809214   62061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0918 21:05:12.306958   62061 crio.go:462] duration metric: took 1.502481615s to copy over tarball
	I0918 21:05:12.307049   62061 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0918 21:05:11.603153   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:11.603791   61273 main.go:141] libmachine: (no-preload-331658) DBG | unable to find current IP address of domain no-preload-331658 in network mk-no-preload-331658
	I0918 21:05:11.603817   61273 main.go:141] libmachine: (no-preload-331658) DBG | I0918 21:05:11.603742   63200 retry.go:31] will retry after 425.818515ms: waiting for machine to come up
	I0918 21:05:12.031622   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:12.032425   61273 main.go:141] libmachine: (no-preload-331658) DBG | unable to find current IP address of domain no-preload-331658 in network mk-no-preload-331658
	I0918 21:05:12.032458   61273 main.go:141] libmachine: (no-preload-331658) DBG | I0918 21:05:12.032357   63200 retry.go:31] will retry after 701.564015ms: waiting for machine to come up
	I0918 21:05:12.735295   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:12.735852   61273 main.go:141] libmachine: (no-preload-331658) DBG | unable to find current IP address of domain no-preload-331658 in network mk-no-preload-331658
	I0918 21:05:12.735882   61273 main.go:141] libmachine: (no-preload-331658) DBG | I0918 21:05:12.735814   63200 retry.go:31] will retry after 904.737419ms: waiting for machine to come up
	I0918 21:05:13.642383   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:13.642913   61273 main.go:141] libmachine: (no-preload-331658) DBG | unable to find current IP address of domain no-preload-331658 in network mk-no-preload-331658
	I0918 21:05:13.642935   61273 main.go:141] libmachine: (no-preload-331658) DBG | I0918 21:05:13.642872   63200 retry.go:31] will retry after 891.091353ms: waiting for machine to come up
	I0918 21:05:14.536200   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:14.536797   61273 main.go:141] libmachine: (no-preload-331658) DBG | unable to find current IP address of domain no-preload-331658 in network mk-no-preload-331658
	I0918 21:05:14.536849   61273 main.go:141] libmachine: (no-preload-331658) DBG | I0918 21:05:14.536761   63200 retry.go:31] will retry after 1.01795417s: waiting for machine to come up
	I0918 21:05:15.555787   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:15.556287   61273 main.go:141] libmachine: (no-preload-331658) DBG | unable to find current IP address of domain no-preload-331658 in network mk-no-preload-331658
	I0918 21:05:15.556315   61273 main.go:141] libmachine: (no-preload-331658) DBG | I0918 21:05:15.556243   63200 retry.go:31] will retry after 1.598926126s: waiting for machine to come up
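[Note] The retry loop above is minikube polling libvirt for a DHCP lease on the freshly started no-preload-331658 domain. With the kvm2 driver the same information can be inspected directly on the host (sketch, assuming the qemu:///system URI shown elsewhere in this log):

    virsh -c qemu:///system net-dhcp-leases mk-no-preload-331658
    virsh -c qemu:///system domifaddr no-preload-331658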
	I0918 21:05:14.132957   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:16.133323   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:13.778274   61740 pod_ready.go:93] pod "coredns-7c65d6cfc9-xwn8w" in "kube-system" namespace has status "Ready":"True"
	I0918 21:05:13.778310   61740 pod_ready.go:82] duration metric: took 11.011085965s for pod "coredns-7c65d6cfc9-xwn8w" in "kube-system" namespace to be "Ready" ...
	I0918 21:05:13.778325   61740 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-255556" in "kube-system" namespace to be "Ready" ...
	I0918 21:05:13.785089   61740 pod_ready.go:93] pod "etcd-embed-certs-255556" in "kube-system" namespace has status "Ready":"True"
	I0918 21:05:13.785121   61740 pod_ready.go:82] duration metric: took 6.787649ms for pod "etcd-embed-certs-255556" in "kube-system" namespace to be "Ready" ...
	I0918 21:05:13.785135   61740 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-255556" in "kube-system" namespace to be "Ready" ...
	I0918 21:05:15.793479   61740 pod_ready.go:103] pod "kube-apiserver-embed-certs-255556" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:15.357114   62061 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.050037005s)
	I0918 21:05:15.357141   62061 crio.go:469] duration metric: took 3.050151648s to extract the tarball
	I0918 21:05:15.357148   62061 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0918 21:05:15.399373   62061 ssh_runner.go:195] Run: sudo crictl images --output json
	I0918 21:05:15.434204   62061 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0918 21:05:15.434238   62061 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0918 21:05:15.434332   62061 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 21:05:15.434369   62061 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0918 21:05:15.434385   62061 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0918 21:05:15.434398   62061 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0918 21:05:15.434339   62061 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0918 21:05:15.434438   62061 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0918 21:05:15.434443   62061 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0918 21:05:15.434491   62061 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0918 21:05:15.436820   62061 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0918 21:05:15.436824   62061 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0918 21:05:15.436829   62061 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 21:05:15.436856   62061 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0918 21:05:15.436867   62061 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0918 21:05:15.436904   62061 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0918 21:05:15.436952   62061 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0918 21:05:15.437278   62061 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0918 21:05:15.747423   62061 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0918 21:05:15.747705   62061 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0918 21:05:15.748375   62061 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0918 21:05:15.750816   62061 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0918 21:05:15.752244   62061 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0918 21:05:15.754038   62061 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0918 21:05:15.770881   62061 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0918 21:05:15.911654   62061 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0918 21:05:15.911725   62061 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0918 21:05:15.911795   62061 ssh_runner.go:195] Run: which crictl
	I0918 21:05:15.938412   62061 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0918 21:05:15.938464   62061 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0918 21:05:15.938534   62061 ssh_runner.go:195] Run: which crictl
	I0918 21:05:15.944615   62061 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0918 21:05:15.944706   62061 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0918 21:05:15.944736   62061 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0918 21:05:15.944749   62061 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0918 21:05:15.944786   62061 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0918 21:05:15.944809   62061 ssh_runner.go:195] Run: which crictl
	I0918 21:05:15.944820   62061 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0918 21:05:15.944844   62061 ssh_runner.go:195] Run: which crictl
	I0918 21:05:15.944857   62061 ssh_runner.go:195] Run: which crictl
	I0918 21:05:15.944661   62061 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0918 21:05:15.944914   62061 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0918 21:05:15.944950   62061 ssh_runner.go:195] Run: which crictl
	I0918 21:05:15.965316   62061 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0918 21:05:15.965366   62061 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0918 21:05:15.965418   62061 ssh_runner.go:195] Run: which crictl
	I0918 21:05:15.965461   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0918 21:05:15.965419   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0918 21:05:15.965544   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0918 21:05:15.965558   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0918 21:05:15.965613   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0918 21:05:15.965621   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0918 21:05:16.101533   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0918 21:05:16.101536   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0918 21:05:16.105174   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0918 21:05:16.105198   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0918 21:05:16.109044   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0918 21:05:16.109048   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0918 21:05:16.109152   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0918 21:05:16.220653   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0918 21:05:16.255418   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0918 21:05:16.259931   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0918 21:05:16.259986   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0918 21:05:16.260039   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0918 21:05:16.260121   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0918 21:05:16.260337   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0918 21:05:16.352969   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0918 21:05:16.399180   62061 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0918 21:05:16.445003   62061 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0918 21:05:16.445078   62061 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0918 21:05:16.445163   62061 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0918 21:05:16.445266   62061 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0918 21:05:16.445387   62061 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0918 21:05:16.445398   62061 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0918 21:05:16.633221   62061 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 21:05:16.782331   62061 cache_images.go:92] duration metric: took 1.348045359s to LoadCachedImages
	W0918 21:05:16.782445   62061 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
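[Note] The warning above is minikube giving up on the per-image host cache and falling back to whatever the runtime can pull; the path it failed to stat is spelled out in the message, so the corresponding check on the Jenkins host would simply be:

    ls -l /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/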
	I0918 21:05:16.782464   62061 kubeadm.go:934] updating node { 192.168.72.53 8443 v1.20.0 crio true true} ...
	I0918 21:05:16.782608   62061 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-740194 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.53
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-740194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0918 21:05:16.782679   62061 ssh_runner.go:195] Run: crio config
	I0918 21:05:16.838946   62061 cni.go:84] Creating CNI manager for ""
	I0918 21:05:16.838978   62061 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0918 21:05:16.838990   62061 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0918 21:05:16.839008   62061 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.53 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-740194 NodeName:old-k8s-version-740194 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.53"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.53 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0918 21:05:16.839163   62061 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.53
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-740194"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.53
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.53"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
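[Note] The generated kubeadm config is copied to /var/tmp/minikube/kubeadm.yaml.new a few lines below. The settings that matter for this CRI-O/cgroupfs setup can be spot-checked on the node with:

    sudo grep -E 'criSocket|cgroupDriver|podSubnet|controlPlaneEndpoint' /var/tmp/minikube/kubeadm.yaml.new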
	I0918 21:05:16.839232   62061 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0918 21:05:16.849868   62061 binaries.go:44] Found k8s binaries, skipping transfer
	I0918 21:05:16.849955   62061 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0918 21:05:16.859716   62061 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0918 21:05:16.877857   62061 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0918 21:05:16.895573   62061 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0918 21:05:16.913545   62061 ssh_runner.go:195] Run: grep 192.168.72.53	control-plane.minikube.internal$ /etc/hosts
	I0918 21:05:16.917476   62061 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.53	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0918 21:05:16.931318   62061 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 21:05:17.057636   62061 ssh_runner.go:195] Run: sudo systemctl start kubelet
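[Note] If kubelet failed to come up at this point, the usual first checks on the node would be:

    sudo systemctl status kubelet --no-pager
    sudo journalctl -u kubelet --no-pager -n 50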
	I0918 21:05:17.076534   62061 certs.go:68] Setting up /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/old-k8s-version-740194 for IP: 192.168.72.53
	I0918 21:05:17.076571   62061 certs.go:194] generating shared ca certs ...
	I0918 21:05:17.076594   62061 certs.go:226] acquiring lock for ca certs: {Name:mkf54e0e71ed884c4372f9d3d4cb308ea4600185 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 21:05:17.076796   62061 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.key
	I0918 21:05:17.076855   62061 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.key
	I0918 21:05:17.076871   62061 certs.go:256] generating profile certs ...
	I0918 21:05:17.076999   62061 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/old-k8s-version-740194/client.key
	I0918 21:05:17.077083   62061 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/old-k8s-version-740194/apiserver.key.424b07d9
	I0918 21:05:17.077149   62061 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/old-k8s-version-740194/proxy-client.key
	I0918 21:05:17.077321   62061 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878.pem (1338 bytes)
	W0918 21:05:17.077371   62061 certs.go:480] ignoring /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878_empty.pem, impossibly tiny 0 bytes
	I0918 21:05:17.077386   62061 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem (1675 bytes)
	I0918 21:05:17.077421   62061 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem (1082 bytes)
	I0918 21:05:17.077465   62061 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem (1123 bytes)
	I0918 21:05:17.077499   62061 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem (1679 bytes)
	I0918 21:05:17.077585   62061 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem (1708 bytes)
	I0918 21:05:17.078553   62061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0918 21:05:17.116000   62061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0918 21:05:17.150848   62061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0918 21:05:17.190616   62061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0918 21:05:17.222411   62061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/old-k8s-version-740194/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0918 21:05:17.266923   62061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/old-k8s-version-740194/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0918 21:05:17.303973   62061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/old-k8s-version-740194/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0918 21:05:17.334390   62061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/old-k8s-version-740194/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0918 21:05:17.358732   62061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0918 21:05:17.385693   62061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878.pem --> /usr/share/ca-certificates/14878.pem (1338 bytes)
	I0918 21:05:17.411396   62061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem --> /usr/share/ca-certificates/148782.pem (1708 bytes)
	I0918 21:05:17.440205   62061 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0918 21:05:17.459639   62061 ssh_runner.go:195] Run: openssl version
	I0918 21:05:17.465846   62061 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0918 21:05:17.477482   62061 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0918 21:05:17.482579   62061 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 18 19:39 /usr/share/ca-certificates/minikubeCA.pem
	I0918 21:05:17.482650   62061 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0918 21:05:17.488822   62061 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0918 21:05:17.499729   62061 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14878.pem && ln -fs /usr/share/ca-certificates/14878.pem /etc/ssl/certs/14878.pem"
	I0918 21:05:17.511718   62061 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14878.pem
	I0918 21:05:17.516361   62061 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 18 19:57 /usr/share/ca-certificates/14878.pem
	I0918 21:05:17.516438   62061 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14878.pem
	I0918 21:05:17.522542   62061 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14878.pem /etc/ssl/certs/51391683.0"
	I0918 21:05:17.534450   62061 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148782.pem && ln -fs /usr/share/ca-certificates/148782.pem /etc/ssl/certs/148782.pem"
	I0918 21:05:17.545916   62061 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148782.pem
	I0918 21:05:17.550672   62061 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 18 19:57 /usr/share/ca-certificates/148782.pem
	I0918 21:05:17.550737   62061 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148782.pem
	I0918 21:05:17.556802   62061 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148782.pem /etc/ssl/certs/3ec20f2e.0"
	I0918 21:05:17.568991   62061 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0918 21:05:17.574098   62061 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0918 21:05:17.581458   62061 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0918 21:05:17.588242   62061 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0918 21:05:17.594889   62061 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0918 21:05:17.601499   62061 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0918 21:05:17.608193   62061 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
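[Note] Each openssl run above uses -checkend 86400, i.e. it asks whether the certificate expires within the next 24 hours (exit status 0 means it does not). A sketch of the same check over the whole certs directory:

    sudo sh -c 'for c in /var/lib/minikube/certs/*.crt; do openssl x509 -noout -in "$c" -checkend 86400 || echo "expires within 24h: $c"; done'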
	I0918 21:05:17.614196   62061 kubeadm.go:392] StartCluster: {Name:old-k8s-version-740194 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-740194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.53 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 21:05:17.614298   62061 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0918 21:05:17.614363   62061 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0918 21:05:17.661551   62061 cri.go:89] found id: ""
	I0918 21:05:17.661627   62061 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0918 21:05:17.673087   62061 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0918 21:05:17.673110   62061 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0918 21:05:17.673153   62061 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0918 21:05:17.683213   62061 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0918 21:05:17.684537   62061 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-740194" does not appear in /home/jenkins/minikube-integration/19667-7671/kubeconfig
	I0918 21:05:17.685136   62061 kubeconfig.go:62] /home/jenkins/minikube-integration/19667-7671/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-740194" cluster setting kubeconfig missing "old-k8s-version-740194" context setting]
	I0918 21:05:17.686023   62061 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/kubeconfig: {Name:mk3a3eecadb0164dfdd5d3c4392b5b473a2a9bb0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 21:05:17.711337   62061 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0918 21:05:17.723894   62061 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.53
	I0918 21:05:17.723936   62061 kubeadm.go:1160] stopping kube-system containers ...
	I0918 21:05:17.723949   62061 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0918 21:05:17.724005   62061 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0918 21:05:17.761087   62061 cri.go:89] found id: ""
	I0918 21:05:17.761166   62061 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0918 21:05:17.779689   62061 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0918 21:05:17.791699   62061 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0918 21:05:17.791723   62061 kubeadm.go:157] found existing configuration files:
	
	I0918 21:05:17.791773   62061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0918 21:05:17.801804   62061 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0918 21:05:17.801879   62061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0918 21:05:17.812806   62061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0918 21:05:17.822813   62061 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0918 21:05:17.822902   62061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0918 21:05:17.833267   62061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0918 21:05:17.845148   62061 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0918 21:05:17.845230   62061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0918 21:05:17.855974   62061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0918 21:05:17.866490   62061 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0918 21:05:17.866593   62061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0918 21:05:17.877339   62061 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0918 21:05:17.887825   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:05:18.021691   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:05:18.726750   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:05:18.970635   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:05:19.081869   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:05:19.203071   62061 api_server.go:52] waiting for apiserver process to appear ...
	I0918 21:05:19.203166   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
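[Note] From here the log repeats the same probe roughly every 500ms until an apiserver process appears; the check minikube runs is just:

    sudo pgrep -xnf 'kube-apiserver.*minikube.*'   # exits 0 (and prints a PID) once kube-apiserver is running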
	I0918 21:05:17.156934   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:17.157481   61273 main.go:141] libmachine: (no-preload-331658) DBG | unable to find current IP address of domain no-preload-331658 in network mk-no-preload-331658
	I0918 21:05:17.157509   61273 main.go:141] libmachine: (no-preload-331658) DBG | I0918 21:05:17.157429   63200 retry.go:31] will retry after 1.586399944s: waiting for machine to come up
	I0918 21:05:18.746155   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:18.746620   61273 main.go:141] libmachine: (no-preload-331658) DBG | unable to find current IP address of domain no-preload-331658 in network mk-no-preload-331658
	I0918 21:05:18.746650   61273 main.go:141] libmachine: (no-preload-331658) DBG | I0918 21:05:18.746571   63200 retry.go:31] will retry after 2.204220189s: waiting for machine to come up
	I0918 21:05:20.953669   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:20.954223   61273 main.go:141] libmachine: (no-preload-331658) DBG | unable to find current IP address of domain no-preload-331658 in network mk-no-preload-331658
	I0918 21:05:20.954287   61273 main.go:141] libmachine: (no-preload-331658) DBG | I0918 21:05:20.954209   63200 retry.go:31] will retry after 2.418479665s: waiting for machine to come up
	I0918 21:05:18.634113   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:21.133516   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:18.365915   61740 pod_ready.go:93] pod "kube-apiserver-embed-certs-255556" in "kube-system" namespace has status "Ready":"True"
	I0918 21:05:18.365943   61740 pod_ready.go:82] duration metric: took 4.580799395s for pod "kube-apiserver-embed-certs-255556" in "kube-system" namespace to be "Ready" ...
	I0918 21:05:18.365956   61740 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-255556" in "kube-system" namespace to be "Ready" ...
	I0918 21:05:18.371010   61740 pod_ready.go:93] pod "kube-controller-manager-embed-certs-255556" in "kube-system" namespace has status "Ready":"True"
	I0918 21:05:18.371035   61740 pod_ready.go:82] duration metric: took 5.070331ms for pod "kube-controller-manager-embed-certs-255556" in "kube-system" namespace to be "Ready" ...
	I0918 21:05:18.371046   61740 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-v8szm" in "kube-system" namespace to be "Ready" ...
	I0918 21:05:18.375632   61740 pod_ready.go:93] pod "kube-proxy-v8szm" in "kube-system" namespace has status "Ready":"True"
	I0918 21:05:18.375658   61740 pod_ready.go:82] duration metric: took 4.603787ms for pod "kube-proxy-v8szm" in "kube-system" namespace to be "Ready" ...
	I0918 21:05:18.375671   61740 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-255556" in "kube-system" namespace to be "Ready" ...
	I0918 21:05:18.380527   61740 pod_ready.go:93] pod "kube-scheduler-embed-certs-255556" in "kube-system" namespace has status "Ready":"True"
	I0918 21:05:18.380551   61740 pod_ready.go:82] duration metric: took 4.872699ms for pod "kube-scheduler-embed-certs-255556" in "kube-system" namespace to be "Ready" ...
	I0918 21:05:18.380563   61740 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace to be "Ready" ...
	I0918 21:05:20.388600   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:22.887122   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
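[Note] The pod_ready lines above are minikube polling the Ready condition of the named pods. The equivalent manual check against the embed-certs cluster would be something like the following (sketch; assumes the profile name doubles as the kubeconfig context, as elsewhere in this report):

    kubectl --context embed-certs-255556 -n kube-system get pod metrics-server-6867b74b74-z8rm7 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'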
	I0918 21:05:19.704128   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:20.203595   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:20.703244   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:21.204288   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:21.703469   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:22.203224   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:22.703968   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:23.204097   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:23.703933   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:24.204272   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:23.375904   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:23.376450   61273 main.go:141] libmachine: (no-preload-331658) DBG | unable to find current IP address of domain no-preload-331658 in network mk-no-preload-331658
	I0918 21:05:23.376476   61273 main.go:141] libmachine: (no-preload-331658) DBG | I0918 21:05:23.376397   63200 retry.go:31] will retry after 4.431211335s: waiting for machine to come up
	I0918 21:05:23.633093   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:25.633913   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:24.887771   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:27.386891   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:24.704240   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:25.203880   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:25.703983   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:26.204273   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:26.703861   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:27.204064   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:27.703276   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:28.204289   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:28.703701   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:29.203604   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:27.811234   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:27.811698   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has current primary IP address 192.168.61.31 and MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:27.811719   61273 main.go:141] libmachine: (no-preload-331658) Found IP for machine: 192.168.61.31
	I0918 21:05:27.811729   61273 main.go:141] libmachine: (no-preload-331658) Reserving static IP address...
	I0918 21:05:27.812131   61273 main.go:141] libmachine: (no-preload-331658) DBG | found host DHCP lease matching {name: "no-preload-331658", mac: "52:54:00:2a:47:d0", ip: "192.168.61.31"} in network mk-no-preload-331658: {Iface:virbr2 ExpiryTime:2024-09-18 21:55:52 +0000 UTC Type:0 Mac:52:54:00:2a:47:d0 Iaid: IPaddr:192.168.61.31 Prefix:24 Hostname:no-preload-331658 Clientid:01:52:54:00:2a:47:d0}
	I0918 21:05:27.812150   61273 main.go:141] libmachine: (no-preload-331658) Reserved static IP address: 192.168.61.31
	I0918 21:05:27.812163   61273 main.go:141] libmachine: (no-preload-331658) DBG | skip adding static IP to network mk-no-preload-331658 - found existing host DHCP lease matching {name: "no-preload-331658", mac: "52:54:00:2a:47:d0", ip: "192.168.61.31"}
	I0918 21:05:27.812170   61273 main.go:141] libmachine: (no-preload-331658) Waiting for SSH to be available...
	I0918 21:05:27.812178   61273 main.go:141] libmachine: (no-preload-331658) DBG | Getting to WaitForSSH function...
	I0918 21:05:27.814300   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:27.814735   61273 main.go:141] libmachine: (no-preload-331658) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:47:d0", ip: ""} in network mk-no-preload-331658: {Iface:virbr2 ExpiryTime:2024-09-18 21:55:52 +0000 UTC Type:0 Mac:52:54:00:2a:47:d0 Iaid: IPaddr:192.168.61.31 Prefix:24 Hostname:no-preload-331658 Clientid:01:52:54:00:2a:47:d0}
	I0918 21:05:27.814767   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined IP address 192.168.61.31 and MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:27.814891   61273 main.go:141] libmachine: (no-preload-331658) DBG | Using SSH client type: external
	I0918 21:05:27.814922   61273 main.go:141] libmachine: (no-preload-331658) DBG | Using SSH private key: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/no-preload-331658/id_rsa (-rw-------)
	I0918 21:05:27.814945   61273 main.go:141] libmachine: (no-preload-331658) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.31 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19667-7671/.minikube/machines/no-preload-331658/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0918 21:05:27.814972   61273 main.go:141] libmachine: (no-preload-331658) DBG | About to run SSH command:
	I0918 21:05:27.814985   61273 main.go:141] libmachine: (no-preload-331658) DBG | exit 0
	I0918 21:05:27.939949   61273 main.go:141] libmachine: (no-preload-331658) DBG | SSH cmd err, output: <nil>: 
	I0918 21:05:27.940365   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetConfigRaw
	I0918 21:05:27.941187   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetIP
	I0918 21:05:27.943976   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:27.944375   61273 main.go:141] libmachine: (no-preload-331658) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:47:d0", ip: ""} in network mk-no-preload-331658: {Iface:virbr2 ExpiryTime:2024-09-18 21:55:52 +0000 UTC Type:0 Mac:52:54:00:2a:47:d0 Iaid: IPaddr:192.168.61.31 Prefix:24 Hostname:no-preload-331658 Clientid:01:52:54:00:2a:47:d0}
	I0918 21:05:27.944399   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined IP address 192.168.61.31 and MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:27.944670   61273 profile.go:143] Saving config to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/no-preload-331658/config.json ...
	I0918 21:05:27.944942   61273 machine.go:93] provisionDockerMachine start ...
	I0918 21:05:27.944963   61273 main.go:141] libmachine: (no-preload-331658) Calling .DriverName
	I0918 21:05:27.945228   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHHostname
	I0918 21:05:27.947444   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:27.947810   61273 main.go:141] libmachine: (no-preload-331658) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:47:d0", ip: ""} in network mk-no-preload-331658: {Iface:virbr2 ExpiryTime:2024-09-18 21:55:52 +0000 UTC Type:0 Mac:52:54:00:2a:47:d0 Iaid: IPaddr:192.168.61.31 Prefix:24 Hostname:no-preload-331658 Clientid:01:52:54:00:2a:47:d0}
	I0918 21:05:27.947843   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined IP address 192.168.61.31 and MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:27.947974   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHPort
	I0918 21:05:27.948196   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHKeyPath
	I0918 21:05:27.948404   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHKeyPath
	I0918 21:05:27.948664   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHUsername
	I0918 21:05:27.948845   61273 main.go:141] libmachine: Using SSH client type: native
	I0918 21:05:27.949078   61273 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.31 22 <nil> <nil>}
	I0918 21:05:27.949099   61273 main.go:141] libmachine: About to run SSH command:
	hostname
	I0918 21:05:28.052352   61273 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0918 21:05:28.052378   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetMachineName
	I0918 21:05:28.052638   61273 buildroot.go:166] provisioning hostname "no-preload-331658"
	I0918 21:05:28.052668   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetMachineName
	I0918 21:05:28.052923   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHHostname
	I0918 21:05:28.056168   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:28.056599   61273 main.go:141] libmachine: (no-preload-331658) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:47:d0", ip: ""} in network mk-no-preload-331658: {Iface:virbr2 ExpiryTime:2024-09-18 21:55:52 +0000 UTC Type:0 Mac:52:54:00:2a:47:d0 Iaid: IPaddr:192.168.61.31 Prefix:24 Hostname:no-preload-331658 Clientid:01:52:54:00:2a:47:d0}
	I0918 21:05:28.056631   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined IP address 192.168.61.31 and MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:28.056805   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHPort
	I0918 21:05:28.057009   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHKeyPath
	I0918 21:05:28.057168   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHKeyPath
	I0918 21:05:28.057305   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHUsername
	I0918 21:05:28.057478   61273 main.go:141] libmachine: Using SSH client type: native
	I0918 21:05:28.057652   61273 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.31 22 <nil> <nil>}
	I0918 21:05:28.057665   61273 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-331658 && echo "no-preload-331658" | sudo tee /etc/hostname
	I0918 21:05:28.174245   61273 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-331658
	
	I0918 21:05:28.174282   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHHostname
	I0918 21:05:28.177373   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:28.177753   61273 main.go:141] libmachine: (no-preload-331658) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:47:d0", ip: ""} in network mk-no-preload-331658: {Iface:virbr2 ExpiryTime:2024-09-18 21:55:52 +0000 UTC Type:0 Mac:52:54:00:2a:47:d0 Iaid: IPaddr:192.168.61.31 Prefix:24 Hostname:no-preload-331658 Clientid:01:52:54:00:2a:47:d0}
	I0918 21:05:28.177781   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined IP address 192.168.61.31 and MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:28.177981   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHPort
	I0918 21:05:28.178202   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHKeyPath
	I0918 21:05:28.178380   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHKeyPath
	I0918 21:05:28.178523   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHUsername
	I0918 21:05:28.178752   61273 main.go:141] libmachine: Using SSH client type: native
	I0918 21:05:28.178948   61273 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.31 22 <nil> <nil>}
	I0918 21:05:28.178965   61273 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-331658' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-331658/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-331658' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0918 21:05:28.292659   61273 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0918 21:05:28.292691   61273 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19667-7671/.minikube CaCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19667-7671/.minikube}
	I0918 21:05:28.292714   61273 buildroot.go:174] setting up certificates
	I0918 21:05:28.292725   61273 provision.go:84] configureAuth start
	I0918 21:05:28.292734   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetMachineName
	I0918 21:05:28.293091   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetIP
	I0918 21:05:28.295792   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:28.296192   61273 main.go:141] libmachine: (no-preload-331658) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:47:d0", ip: ""} in network mk-no-preload-331658: {Iface:virbr2 ExpiryTime:2024-09-18 21:55:52 +0000 UTC Type:0 Mac:52:54:00:2a:47:d0 Iaid: IPaddr:192.168.61.31 Prefix:24 Hostname:no-preload-331658 Clientid:01:52:54:00:2a:47:d0}
	I0918 21:05:28.296219   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined IP address 192.168.61.31 and MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:28.296405   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHHostname
	I0918 21:05:28.298446   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:28.298788   61273 main.go:141] libmachine: (no-preload-331658) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:47:d0", ip: ""} in network mk-no-preload-331658: {Iface:virbr2 ExpiryTime:2024-09-18 21:55:52 +0000 UTC Type:0 Mac:52:54:00:2a:47:d0 Iaid: IPaddr:192.168.61.31 Prefix:24 Hostname:no-preload-331658 Clientid:01:52:54:00:2a:47:d0}
	I0918 21:05:28.298815   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined IP address 192.168.61.31 and MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:28.298938   61273 provision.go:143] copyHostCerts
	I0918 21:05:28.299013   61273 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem, removing ...
	I0918 21:05:28.299026   61273 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem
	I0918 21:05:28.299078   61273 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem (1082 bytes)
	I0918 21:05:28.299170   61273 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem, removing ...
	I0918 21:05:28.299178   61273 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem
	I0918 21:05:28.299199   61273 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem (1123 bytes)
	I0918 21:05:28.299252   61273 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem, removing ...
	I0918 21:05:28.299258   61273 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem
	I0918 21:05:28.299278   61273 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem (1679 bytes)
	I0918 21:05:28.299325   61273 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem org=jenkins.no-preload-331658 san=[127.0.0.1 192.168.61.31 localhost minikube no-preload-331658]
	I0918 21:05:28.606565   61273 provision.go:177] copyRemoteCerts
	I0918 21:05:28.606629   61273 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0918 21:05:28.606653   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHHostname
	I0918 21:05:28.609156   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:28.609533   61273 main.go:141] libmachine: (no-preload-331658) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:47:d0", ip: ""} in network mk-no-preload-331658: {Iface:virbr2 ExpiryTime:2024-09-18 21:55:52 +0000 UTC Type:0 Mac:52:54:00:2a:47:d0 Iaid: IPaddr:192.168.61.31 Prefix:24 Hostname:no-preload-331658 Clientid:01:52:54:00:2a:47:d0}
	I0918 21:05:28.609564   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined IP address 192.168.61.31 and MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:28.609690   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHPort
	I0918 21:05:28.609891   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHKeyPath
	I0918 21:05:28.610102   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHUsername
	I0918 21:05:28.610332   61273 sshutil.go:53] new ssh client: &{IP:192.168.61.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/no-preload-331658/id_rsa Username:docker}
	I0918 21:05:28.690571   61273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0918 21:05:28.719257   61273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0918 21:05:28.744119   61273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0918 21:05:28.768692   61273 provision.go:87] duration metric: took 475.955066ms to configureAuth
	I0918 21:05:28.768720   61273 buildroot.go:189] setting minikube options for container-runtime
	I0918 21:05:28.768941   61273 config.go:182] Loaded profile config "no-preload-331658": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0918 21:05:28.769031   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHHostname
	I0918 21:05:28.771437   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:28.771747   61273 main.go:141] libmachine: (no-preload-331658) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:47:d0", ip: ""} in network mk-no-preload-331658: {Iface:virbr2 ExpiryTime:2024-09-18 21:55:52 +0000 UTC Type:0 Mac:52:54:00:2a:47:d0 Iaid: IPaddr:192.168.61.31 Prefix:24 Hostname:no-preload-331658 Clientid:01:52:54:00:2a:47:d0}
	I0918 21:05:28.771786   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined IP address 192.168.61.31 and MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:28.771906   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHPort
	I0918 21:05:28.772127   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHKeyPath
	I0918 21:05:28.772330   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHKeyPath
	I0918 21:05:28.772496   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHUsername
	I0918 21:05:28.772717   61273 main.go:141] libmachine: Using SSH client type: native
	I0918 21:05:28.772886   61273 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.31 22 <nil> <nil>}
	I0918 21:05:28.772902   61273 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0918 21:05:29.001137   61273 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0918 21:05:29.001160   61273 machine.go:96] duration metric: took 1.056205004s to provisionDockerMachine
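The provisioning steps above are ordinary shell commands pushed to the guest over SSH (see the "Using SSH client type" lines). A hedged sketch of that pattern with golang.org/x/crypto/ssh follows, reusing the address, user, and key path from the log; this is not minikube's actual ssh_runner/provisioner code.

// runRemote runs one command on the guest over SSH with key auth,
// roughly what the "native" SSH client lines above refer to.
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func runRemote(addr, user, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	client, err := ssh.Dial("tcp", addr, &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // mirrors StrictHostKeyChecking=no in the logged ssh args
	})
	if err != nil {
		return "", err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer sess.Close()
	out, err := sess.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	out, err := runRemote("192.168.61.31:22", "docker",
		"/home/jenkins/minikube-integration/19667-7671/.minikube/machines/no-preload-331658/id_rsa",
		"hostname")
	fmt.Println(out, err)
}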
	I0918 21:05:29.001171   61273 start.go:293] postStartSetup for "no-preload-331658" (driver="kvm2")
	I0918 21:05:29.001181   61273 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0918 21:05:29.001194   61273 main.go:141] libmachine: (no-preload-331658) Calling .DriverName
	I0918 21:05:29.001531   61273 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0918 21:05:29.001556   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHHostname
	I0918 21:05:29.004307   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:29.004656   61273 main.go:141] libmachine: (no-preload-331658) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:47:d0", ip: ""} in network mk-no-preload-331658: {Iface:virbr2 ExpiryTime:2024-09-18 21:55:52 +0000 UTC Type:0 Mac:52:54:00:2a:47:d0 Iaid: IPaddr:192.168.61.31 Prefix:24 Hostname:no-preload-331658 Clientid:01:52:54:00:2a:47:d0}
	I0918 21:05:29.004686   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined IP address 192.168.61.31 and MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:29.004877   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHPort
	I0918 21:05:29.005128   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHKeyPath
	I0918 21:05:29.005379   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHUsername
	I0918 21:05:29.005556   61273 sshutil.go:53] new ssh client: &{IP:192.168.61.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/no-preload-331658/id_rsa Username:docker}
	I0918 21:05:29.087453   61273 ssh_runner.go:195] Run: cat /etc/os-release
	I0918 21:05:29.091329   61273 info.go:137] Remote host: Buildroot 2023.02.9
	I0918 21:05:29.091356   61273 filesync.go:126] Scanning /home/jenkins/minikube-integration/19667-7671/.minikube/addons for local assets ...
	I0918 21:05:29.091422   61273 filesync.go:126] Scanning /home/jenkins/minikube-integration/19667-7671/.minikube/files for local assets ...
	I0918 21:05:29.091493   61273 filesync.go:149] local asset: /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem -> 148782.pem in /etc/ssl/certs
	I0918 21:05:29.091578   61273 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0918 21:05:29.101039   61273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem --> /etc/ssl/certs/148782.pem (1708 bytes)
	I0918 21:05:29.125451   61273 start.go:296] duration metric: took 124.264463ms for postStartSetup
	I0918 21:05:29.125492   61273 fix.go:56] duration metric: took 19.880181743s for fixHost
	I0918 21:05:29.125514   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHHostname
	I0918 21:05:29.128543   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:29.128968   61273 main.go:141] libmachine: (no-preload-331658) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:47:d0", ip: ""} in network mk-no-preload-331658: {Iface:virbr2 ExpiryTime:2024-09-18 21:55:52 +0000 UTC Type:0 Mac:52:54:00:2a:47:d0 Iaid: IPaddr:192.168.61.31 Prefix:24 Hostname:no-preload-331658 Clientid:01:52:54:00:2a:47:d0}
	I0918 21:05:29.129022   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined IP address 192.168.61.31 and MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:29.129185   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHPort
	I0918 21:05:29.129385   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHKeyPath
	I0918 21:05:29.129580   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHKeyPath
	I0918 21:05:29.129739   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHUsername
	I0918 21:05:29.129919   61273 main.go:141] libmachine: Using SSH client type: native
	I0918 21:05:29.130155   61273 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.31 22 <nil> <nil>}
	I0918 21:05:29.130172   61273 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0918 21:05:29.240857   61273 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726693529.214864261
	
	I0918 21:05:29.240886   61273 fix.go:216] guest clock: 1726693529.214864261
	I0918 21:05:29.240897   61273 fix.go:229] Guest: 2024-09-18 21:05:29.214864261 +0000 UTC Remote: 2024-09-18 21:05:29.125495769 +0000 UTC m=+357.666326175 (delta=89.368492ms)
	I0918 21:05:29.240943   61273 fix.go:200] guest clock delta is within tolerance: 89.368492ms
	I0918 21:05:29.240949   61273 start.go:83] releasing machines lock for "no-preload-331658", held for 19.99567651s
	I0918 21:05:29.240969   61273 main.go:141] libmachine: (no-preload-331658) Calling .DriverName
	I0918 21:05:29.241256   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetIP
	I0918 21:05:29.243922   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:29.244347   61273 main.go:141] libmachine: (no-preload-331658) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:47:d0", ip: ""} in network mk-no-preload-331658: {Iface:virbr2 ExpiryTime:2024-09-18 21:55:52 +0000 UTC Type:0 Mac:52:54:00:2a:47:d0 Iaid: IPaddr:192.168.61.31 Prefix:24 Hostname:no-preload-331658 Clientid:01:52:54:00:2a:47:d0}
	I0918 21:05:29.244376   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined IP address 192.168.61.31 and MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:29.244575   61273 main.go:141] libmachine: (no-preload-331658) Calling .DriverName
	I0918 21:05:29.245157   61273 main.go:141] libmachine: (no-preload-331658) Calling .DriverName
	I0918 21:05:29.245380   61273 main.go:141] libmachine: (no-preload-331658) Calling .DriverName
	I0918 21:05:29.245492   61273 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0918 21:05:29.245548   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHHostname
	I0918 21:05:29.245640   61273 ssh_runner.go:195] Run: cat /version.json
	I0918 21:05:29.245665   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHHostname
	I0918 21:05:29.248511   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:29.248927   61273 main.go:141] libmachine: (no-preload-331658) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:47:d0", ip: ""} in network mk-no-preload-331658: {Iface:virbr2 ExpiryTime:2024-09-18 21:55:52 +0000 UTC Type:0 Mac:52:54:00:2a:47:d0 Iaid: IPaddr:192.168.61.31 Prefix:24 Hostname:no-preload-331658 Clientid:01:52:54:00:2a:47:d0}
	I0918 21:05:29.248954   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:29.248984   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined IP address 192.168.61.31 and MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:29.249198   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHPort
	I0918 21:05:29.249423   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHKeyPath
	I0918 21:05:29.249506   61273 main.go:141] libmachine: (no-preload-331658) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:47:d0", ip: ""} in network mk-no-preload-331658: {Iface:virbr2 ExpiryTime:2024-09-18 21:55:52 +0000 UTC Type:0 Mac:52:54:00:2a:47:d0 Iaid: IPaddr:192.168.61.31 Prefix:24 Hostname:no-preload-331658 Clientid:01:52:54:00:2a:47:d0}
	I0918 21:05:29.249538   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined IP address 192.168.61.31 and MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:29.249608   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHUsername
	I0918 21:05:29.249692   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHPort
	I0918 21:05:29.249791   61273 sshutil.go:53] new ssh client: &{IP:192.168.61.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/no-preload-331658/id_rsa Username:docker}
	I0918 21:05:29.249899   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHKeyPath
	I0918 21:05:29.250076   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHUsername
	I0918 21:05:29.250228   61273 sshutil.go:53] new ssh client: &{IP:192.168.61.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/no-preload-331658/id_rsa Username:docker}
	I0918 21:05:29.365104   61273 ssh_runner.go:195] Run: systemctl --version
	I0918 21:05:29.371202   61273 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0918 21:05:29.518067   61273 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0918 21:05:29.524126   61273 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0918 21:05:29.524207   61273 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0918 21:05:29.540977   61273 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0918 21:05:29.541007   61273 start.go:495] detecting cgroup driver to use...
	I0918 21:05:29.541072   61273 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0918 21:05:29.558893   61273 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0918 21:05:29.576084   61273 docker.go:217] disabling cri-docker service (if available) ...
	I0918 21:05:29.576161   61273 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0918 21:05:29.591212   61273 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0918 21:05:29.605765   61273 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0918 21:05:29.734291   61273 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0918 21:05:29.892707   61273 docker.go:233] disabling docker service ...
	I0918 21:05:29.892771   61273 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0918 21:05:29.907575   61273 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0918 21:05:29.920545   61273 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0918 21:05:30.058604   61273 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0918 21:05:30.196896   61273 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0918 21:05:30.211398   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0918 21:05:30.231791   61273 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0918 21:05:30.231917   61273 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:05:30.243369   61273 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0918 21:05:30.243465   61273 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:05:30.254911   61273 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:05:30.266839   61273 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:05:30.278532   61273 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0918 21:05:30.290173   61273 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:05:30.301068   61273 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:05:30.318589   61273 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:05:30.329022   61273 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0918 21:05:30.338645   61273 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0918 21:05:30.338720   61273 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0918 21:05:30.351797   61273 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0918 21:05:30.363412   61273 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 21:05:30.504035   61273 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0918 21:05:30.606470   61273 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0918 21:05:30.606547   61273 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0918 21:05:30.611499   61273 start.go:563] Will wait 60s for crictl version
	I0918 21:05:30.611559   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:05:30.615485   61273 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0918 21:05:30.659735   61273 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0918 21:05:30.659835   61273 ssh_runner.go:195] Run: crio --version
	I0918 21:05:30.690573   61273 ssh_runner.go:195] Run: crio --version
	I0918 21:05:30.723342   61273 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
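The runtime setup above rewrites /etc/crio/crio.conf.d/02-crio.conf in place via sed before restarting crio. An illustrative Go equivalent of just the pause-image and cgroup-driver edits is sketched below; the file path and values mirror the logged commands, but this is not minikube's actual crio.go implementation.

// Rewrites the pause image and cgroup manager lines of the CRI-O drop-in,
// the same substitutions the logged sed commands perform (requires root).
package main

import (
	"os"
	"regexp"
)

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(conf)
	if err != nil {
		panic(err)
	}
	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(conf, data, 0o644); err != nil {
		panic(err)
	}
}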
	I0918 21:05:30.724604   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetIP
	I0918 21:05:30.727445   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:30.727885   61273 main.go:141] libmachine: (no-preload-331658) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:47:d0", ip: ""} in network mk-no-preload-331658: {Iface:virbr2 ExpiryTime:2024-09-18 21:55:52 +0000 UTC Type:0 Mac:52:54:00:2a:47:d0 Iaid: IPaddr:192.168.61.31 Prefix:24 Hostname:no-preload-331658 Clientid:01:52:54:00:2a:47:d0}
	I0918 21:05:30.727919   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined IP address 192.168.61.31 and MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:30.728132   61273 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0918 21:05:30.732134   61273 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0918 21:05:30.745695   61273 kubeadm.go:883] updating cluster {Name:no-preload-331658 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-331658 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.31 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0918 21:05:30.745813   61273 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0918 21:05:30.745849   61273 ssh_runner.go:195] Run: sudo crictl images --output json
	I0918 21:05:30.788504   61273 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0918 21:05:30.788537   61273 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.1 registry.k8s.io/kube-controller-manager:v1.31.1 registry.k8s.io/kube-scheduler:v1.31.1 registry.k8s.io/kube-proxy:v1.31.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0918 21:05:30.788634   61273 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 21:05:30.788655   61273 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0918 21:05:30.788673   61273 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I0918 21:05:30.788685   61273 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0918 21:05:30.788655   61273 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I0918 21:05:30.788796   61273 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I0918 21:05:30.788804   61273 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0918 21:05:30.788655   61273 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0918 21:05:30.790173   61273 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 21:05:30.790181   61273 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I0918 21:05:30.790199   61273 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0918 21:05:30.790170   61273 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I0918 21:05:30.790222   61273 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0918 21:05:30.790237   61273 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0918 21:05:30.790268   61273 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0918 21:05:30.790542   61273 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I0918 21:05:31.049150   61273 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0918 21:05:31.052046   61273 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.1
	I0918 21:05:31.099660   61273 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I0918 21:05:31.099861   61273 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.1
	I0918 21:05:31.111308   61273 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.1
	I0918 21:05:31.111439   61273 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0918 21:05:31.112293   61273 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.1
	I0918 21:05:31.203873   61273 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.1" needs transfer: "registry.k8s.io/kube-proxy:v1.31.1" does not exist at hash "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561" in container runtime
	I0918 21:05:31.203934   61273 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.1
	I0918 21:05:31.204042   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:05:31.208912   61273 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I0918 21:05:31.208937   61273 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.1" does not exist at hash "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b" in container runtime
	I0918 21:05:31.208968   61273 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.1
	I0918 21:05:31.208960   61273 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I0918 21:05:31.209020   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:05:31.209029   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:05:31.249355   61273 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0918 21:05:31.249408   61273 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0918 21:05:31.249459   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:05:31.253214   61273 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.1" does not exist at hash "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1" in container runtime
	I0918 21:05:31.253244   61273 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.1" does not exist at hash "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee" in container runtime
	I0918 21:05:31.253286   61273 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.1
	I0918 21:05:31.253274   61273 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0918 21:05:31.253335   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:05:31.253339   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:05:31.253351   61273 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0918 21:05:31.253405   61273 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0918 21:05:31.253419   61273 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0918 21:05:31.255163   61273 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0918 21:05:31.330929   61273 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0918 21:05:31.330999   61273 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0918 21:05:31.349540   61273 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0918 21:05:31.349558   61273 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0918 21:05:31.350088   61273 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0918 21:05:31.353763   61273 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0918 21:05:31.447057   61273 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0918 21:05:31.457171   61273 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0918 21:05:31.457239   61273 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0918 21:05:31.483087   61273 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0918 21:05:31.483097   61273 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0918 21:05:31.483210   61273 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0918 21:05:28.131874   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:30.133067   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:32.134557   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:29.389052   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:31.887032   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:29.704274   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:30.203372   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:30.703751   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:31.203670   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:31.704097   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:32.203611   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:32.703968   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:33.203356   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:33.704260   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:34.204082   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:31.573784   61273 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I0918 21:05:31.573906   61273 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1
	I0918 21:05:31.573927   61273 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0918 21:05:31.573951   61273 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I0918 21:05:31.574038   61273 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0918 21:05:31.605972   61273 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I0918 21:05:31.606077   61273 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0918 21:05:31.606086   61273 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I0918 21:05:31.613640   61273 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0918 21:05:31.613769   61273 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0918 21:05:31.641105   61273 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I0918 21:05:31.641109   61273 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.1 (exists)
	I0918 21:05:31.641199   61273 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.1
	I0918 21:05:31.641223   61273 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0918 21:05:31.641244   61273 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1
	I0918 21:05:31.641175   61273 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.1 (exists)
	I0918 21:05:31.666586   61273 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I0918 21:05:31.666661   61273 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I0918 21:05:31.666792   61273 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0918 21:05:31.666821   61273 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.1 (exists)
	I0918 21:05:31.666795   61273 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0918 21:05:32.009797   61273 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 21:05:33.610028   61273 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1: (1.968756977s)
	I0918 21:05:33.610065   61273 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 from cache
	I0918 21:05:33.610080   61273 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1: (1.943261692s)
	I0918 21:05:33.610111   61273 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.1 (exists)
	I0918 21:05:33.610090   61273 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0918 21:05:33.610122   61273 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.600294362s)
	I0918 21:05:33.610161   61273 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0918 21:05:33.610174   61273 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0918 21:05:33.610193   61273 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 21:05:33.610242   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:05:35.571685   61273 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1: (1.96147024s)
	I0918 21:05:35.571722   61273 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 from cache
	I0918 21:05:35.571748   61273 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I0918 21:05:35.571802   61273 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I0918 21:05:35.571802   61273 ssh_runner.go:235] Completed: which crictl: (1.961540517s)
	I0918 21:05:35.571882   61273 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
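The cache_images flow here is essentially: inspect each image in the runtime, crictl rmi any stale copy, then podman-load the cached tarball. A small hypothetical local stand-in for the load step (run over SSH in the real flow) using only os/exec:

// loadCachedImage sketches the "sudo podman load -i <tarball>" step seen in
// the log above, executed locally for simplicity; error handling is minimal.
package main

import (
	"fmt"
	"os/exec"
)

func loadCachedImage(tarball string) error {
	// Equivalent to the logged command: sudo podman load -i /var/lib/minikube/images/<name>
	out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput()
	if err != nil {
		return fmt.Errorf("podman load failed: %v\n%s", err, out)
	}
	return nil
}

func main() {
	if err := loadCachedImage("/var/lib/minikube/images/kube-proxy_v1.31.1"); err != nil {
		fmt.Println(err)
	}
}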
	I0918 21:05:34.632853   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:36.633341   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:33.887591   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:36.387534   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:34.703445   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:35.203660   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:35.703374   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:36.203304   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:36.704191   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:37.204129   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:37.703912   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:38.203310   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:38.703616   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:39.203258   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:37.536622   61273 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.96470192s)
	I0918 21:05:37.536666   61273 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (1.96484474s)
	I0918 21:05:37.536690   61273 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I0918 21:05:37.536713   61273 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 21:05:37.536721   61273 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0918 21:05:37.536766   61273 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0918 21:05:39.615751   61273 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1: (2.078954836s)
	I0918 21:05:39.615791   61273 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 from cache
	I0918 21:05:39.615823   61273 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.079084749s)
	I0918 21:05:39.615902   61273 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 21:05:39.615829   61273 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0918 21:05:39.615972   61273 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0918 21:05:39.676258   61273 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0918 21:05:39.676355   61273 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0918 21:05:38.634158   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:40.634292   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:38.888255   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:41.387766   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:39.704105   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:40.204102   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:40.704073   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:41.203654   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:41.703947   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:42.203722   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:42.703303   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:43.203847   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:43.704163   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:44.203216   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:42.909577   61273 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (3.233201912s)
	I0918 21:05:42.909617   61273 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0918 21:05:42.909722   61273 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.293701319s)
	I0918 21:05:42.909748   61273 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0918 21:05:42.909781   61273 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0918 21:05:42.909859   61273 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0918 21:05:44.767646   61273 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1: (1.857764218s)
	I0918 21:05:44.767673   61273 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 from cache
	I0918 21:05:44.767705   61273 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0918 21:05:44.767787   61273 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0918 21:05:45.419210   61273 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0918 21:05:45.419257   61273 cache_images.go:123] Successfully loaded all cached images
	I0918 21:05:45.419265   61273 cache_images.go:92] duration metric: took 14.630712818s to LoadCachedImages
	I0918 21:05:45.419278   61273 kubeadm.go:934] updating node { 192.168.61.31 8443 v1.31.1 crio true true} ...
	I0918 21:05:45.419399   61273 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-331658 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.31
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-331658 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0918 21:05:45.419479   61273 ssh_runner.go:195] Run: crio config
	I0918 21:05:45.468525   61273 cni.go:84] Creating CNI manager for ""
	I0918 21:05:45.468549   61273 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0918 21:05:45.468558   61273 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0918 21:05:45.468579   61273 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.31 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-331658 NodeName:no-preload-331658 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.31"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.31 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0918 21:05:45.468706   61273 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.31
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-331658"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.31
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.31"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0918 21:05:45.468781   61273 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0918 21:05:45.479592   61273 binaries.go:44] Found k8s binaries, skipping transfer
	I0918 21:05:45.479662   61273 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0918 21:05:45.488586   61273 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0918 21:05:45.507027   61273 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0918 21:05:45.525430   61273 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0918 21:05:45.543854   61273 ssh_runner.go:195] Run: grep 192.168.61.31	control-plane.minikube.internal$ /etc/hosts
	I0918 21:05:45.547792   61273 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.31	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0918 21:05:45.559968   61273 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 21:05:45.686602   61273 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0918 21:05:45.702793   61273 certs.go:68] Setting up /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/no-preload-331658 for IP: 192.168.61.31
	I0918 21:05:45.702814   61273 certs.go:194] generating shared ca certs ...
	I0918 21:05:45.702829   61273 certs.go:226] acquiring lock for ca certs: {Name:mkf54e0e71ed884c4372f9d3d4cb308ea4600185 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 21:05:45.703005   61273 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.key
	I0918 21:05:45.703071   61273 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.key
	I0918 21:05:45.703085   61273 certs.go:256] generating profile certs ...
	I0918 21:05:45.703159   61273 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/no-preload-331658/client.key
	I0918 21:05:45.703228   61273 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/no-preload-331658/apiserver.key.1a336b78
	I0918 21:05:45.703263   61273 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/no-preload-331658/proxy-client.key
	I0918 21:05:45.703384   61273 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878.pem (1338 bytes)
	W0918 21:05:45.703417   61273 certs.go:480] ignoring /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878_empty.pem, impossibly tiny 0 bytes
	I0918 21:05:45.703430   61273 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem (1675 bytes)
	I0918 21:05:45.703463   61273 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem (1082 bytes)
	I0918 21:05:45.703493   61273 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem (1123 bytes)
	I0918 21:05:45.703521   61273 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem (1679 bytes)
	I0918 21:05:45.703582   61273 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem (1708 bytes)
	I0918 21:05:45.704338   61273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0918 21:05:45.757217   61273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0918 21:05:45.791588   61273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0918 21:05:45.825543   61273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0918 21:05:45.859322   61273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/no-preload-331658/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0918 21:05:45.892890   61273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/no-preload-331658/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0918 21:05:45.922841   61273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/no-preload-331658/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0918 21:05:45.947670   61273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/no-preload-331658/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0918 21:05:45.973315   61273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem --> /usr/share/ca-certificates/148782.pem (1708 bytes)
	I0918 21:05:45.997699   61273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0918 21:05:46.022802   61273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878.pem --> /usr/share/ca-certificates/14878.pem (1338 bytes)
	I0918 21:05:46.046646   61273 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0918 21:05:46.063329   61273 ssh_runner.go:195] Run: openssl version
	I0918 21:05:46.069432   61273 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148782.pem && ln -fs /usr/share/ca-certificates/148782.pem /etc/ssl/certs/148782.pem"
	I0918 21:05:46.081104   61273 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148782.pem
	I0918 21:05:46.086180   61273 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 18 19:57 /usr/share/ca-certificates/148782.pem
	I0918 21:05:46.086241   61273 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148782.pem
	I0918 21:05:46.092527   61273 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148782.pem /etc/ssl/certs/3ec20f2e.0"
	I0918 21:05:46.103601   61273 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0918 21:05:46.114656   61273 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0918 21:05:46.118788   61273 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 18 19:39 /usr/share/ca-certificates/minikubeCA.pem
	I0918 21:05:46.118855   61273 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0918 21:05:46.124094   61273 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0918 21:05:46.135442   61273 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14878.pem && ln -fs /usr/share/ca-certificates/14878.pem /etc/ssl/certs/14878.pem"
	I0918 21:05:46.146105   61273 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14878.pem
	I0918 21:05:46.150661   61273 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 18 19:57 /usr/share/ca-certificates/14878.pem
	I0918 21:05:46.150714   61273 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14878.pem
	I0918 21:05:46.156247   61273 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14878.pem /etc/ssl/certs/51391683.0"
	I0918 21:05:46.167475   61273 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0918 21:05:46.172172   61273 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0918 21:05:46.178638   61273 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0918 21:05:46.184644   61273 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0918 21:05:46.190704   61273 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0918 21:05:46.196414   61273 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0918 21:05:46.202467   61273 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0918 21:05:46.208306   61273 kubeadm.go:392] StartCluster: {Name:no-preload-331658 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-331658 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.31 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 21:05:46.208405   61273 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0918 21:05:46.208472   61273 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0918 21:05:46.247189   61273 cri.go:89] found id: ""
	I0918 21:05:46.247267   61273 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0918 21:05:46.258228   61273 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0918 21:05:46.258253   61273 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0918 21:05:46.258309   61273 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0918 21:05:46.268703   61273 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0918 21:05:46.269728   61273 kubeconfig.go:125] found "no-preload-331658" server: "https://192.168.61.31:8443"
	I0918 21:05:46.271749   61273 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0918 21:05:46.282051   61273 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.31
	I0918 21:05:46.282105   61273 kubeadm.go:1160] stopping kube-system containers ...
	I0918 21:05:46.282122   61273 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0918 21:05:46.282191   61273 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0918 21:05:46.319805   61273 cri.go:89] found id: ""
	I0918 21:05:46.319880   61273 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0918 21:05:46.336130   61273 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0918 21:05:46.345940   61273 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0918 21:05:46.345962   61273 kubeadm.go:157] found existing configuration files:
	
	I0918 21:05:46.346008   61273 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0918 21:05:46.355577   61273 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0918 21:05:46.355658   61273 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0918 21:05:46.367154   61273 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0918 21:05:46.377062   61273 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0918 21:05:46.377126   61273 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0918 21:05:46.387180   61273 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0918 21:05:46.396578   61273 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0918 21:05:46.396642   61273 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0918 21:05:46.406687   61273 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0918 21:05:46.416545   61273 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0918 21:05:46.416617   61273 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0918 21:05:46.426405   61273 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0918 21:05:46.436343   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:05:43.132484   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:45.132905   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:47.132942   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:43.890245   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:46.386955   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:44.703267   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:45.203924   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:45.703945   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:46.203386   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:46.703674   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:47.203387   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:47.704034   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:48.203348   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:48.703715   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:49.203984   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:46.563094   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:05:47.663823   61273 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.100694645s)
	I0918 21:05:47.663857   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:05:47.895962   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:05:47.978862   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:05:48.095438   61273 api_server.go:52] waiting for apiserver process to appear ...
	I0918 21:05:48.095530   61273 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:48.595581   61273 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:49.095761   61273 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:49.122304   61273 api_server.go:72] duration metric: took 1.026867171s to wait for apiserver process to appear ...
	I0918 21:05:49.122343   61273 api_server.go:88] waiting for apiserver healthz status ...
	I0918 21:05:49.122361   61273 api_server.go:253] Checking apiserver healthz at https://192.168.61.31:8443/healthz ...
	I0918 21:05:49.133503   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:51.133761   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:48.386996   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:50.387697   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:52.886989   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:52.253818   61273 api_server.go:279] https://192.168.61.31:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0918 21:05:52.253850   61273 api_server.go:103] status: https://192.168.61.31:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0918 21:05:52.253864   61273 api_server.go:253] Checking apiserver healthz at https://192.168.61.31:8443/healthz ...
	I0918 21:05:52.290586   61273 api_server.go:279] https://192.168.61.31:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0918 21:05:52.290617   61273 api_server.go:103] status: https://192.168.61.31:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0918 21:05:52.623078   61273 api_server.go:253] Checking apiserver healthz at https://192.168.61.31:8443/healthz ...
	I0918 21:05:52.631774   61273 api_server.go:279] https://192.168.61.31:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0918 21:05:52.631811   61273 api_server.go:103] status: https://192.168.61.31:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0918 21:05:53.123498   61273 api_server.go:253] Checking apiserver healthz at https://192.168.61.31:8443/healthz ...
	I0918 21:05:53.132091   61273 api_server.go:279] https://192.168.61.31:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0918 21:05:53.132120   61273 api_server.go:103] status: https://192.168.61.31:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0918 21:05:53.622597   61273 api_server.go:253] Checking apiserver healthz at https://192.168.61.31:8443/healthz ...
	I0918 21:05:53.628896   61273 api_server.go:279] https://192.168.61.31:8443/healthz returned 200:
	ok
	I0918 21:05:53.638315   61273 api_server.go:141] control plane version: v1.31.1
	I0918 21:05:53.638354   61273 api_server.go:131] duration metric: took 4.516002991s to wait for apiserver health ...
	I0918 21:05:53.638367   61273 cni.go:84] Creating CNI manager for ""
	I0918 21:05:53.638376   61273 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0918 21:05:53.639948   61273 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0918 21:05:49.703565   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:50.204192   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:50.704248   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:51.203335   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:51.703761   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:52.203474   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:52.703901   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:53.203856   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:53.704192   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:54.204243   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:53.641376   61273 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0918 21:05:53.667828   61273 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0918 21:05:53.701667   61273 system_pods.go:43] waiting for kube-system pods to appear ...
	I0918 21:05:53.714053   61273 system_pods.go:59] 8 kube-system pods found
	I0918 21:05:53.714101   61273 system_pods.go:61] "coredns-7c65d6cfc9-dgnw2" [085d5a98-0a61-4678-830f-384780a0d7ef] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0918 21:05:53.714113   61273 system_pods.go:61] "etcd-no-preload-331658" [d6a53ff4-fcf0-46c0-9e77-9af6d0363e08] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0918 21:05:53.714126   61273 system_pods.go:61] "kube-apiserver-no-preload-331658" [1953598f-4c3f-4a98-ab75-b8ca1b8093ad] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0918 21:05:53.714135   61273 system_pods.go:61] "kube-controller-manager-no-preload-331658" [8fc655be-a192-4250-a31d-2e533a2b5e41] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0918 21:05:53.714145   61273 system_pods.go:61] "kube-proxy-hx25w" [a26512ff-f695-4452-8974-577479257160] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0918 21:05:53.714157   61273 system_pods.go:61] "kube-scheduler-no-preload-331658" [64e5347e-e7fd-4a0d-ae41-539c3989c29e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0918 21:05:53.714169   61273 system_pods.go:61] "metrics-server-6867b74b74-n27vc" [b1de76ec-8987-49ce-ae66-eedda2705cde] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0918 21:05:53.714181   61273 system_pods.go:61] "storage-provisioner" [e110aeb3-e9bc-4bb9-9b49-5579558bdda2] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0918 21:05:53.714191   61273 system_pods.go:74] duration metric: took 12.499195ms to wait for pod list to return data ...
	I0918 21:05:53.714206   61273 node_conditions.go:102] verifying NodePressure condition ...
	I0918 21:05:53.720251   61273 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0918 21:05:53.720283   61273 node_conditions.go:123] node cpu capacity is 2
	I0918 21:05:53.720296   61273 node_conditions.go:105] duration metric: took 6.082637ms to run NodePressure ...
	I0918 21:05:53.720317   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:05:54.056981   61273 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0918 21:05:54.062413   61273 kubeadm.go:739] kubelet initialised
	I0918 21:05:54.062436   61273 kubeadm.go:740] duration metric: took 5.424693ms waiting for restarted kubelet to initialise ...
	I0918 21:05:54.062443   61273 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0918 21:05:54.069721   61273 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-dgnw2" in "kube-system" namespace to be "Ready" ...
	I0918 21:05:54.089970   61273 pod_ready.go:98] node "no-preload-331658" hosting pod "coredns-7c65d6cfc9-dgnw2" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-331658" has status "Ready":"False"
	I0918 21:05:54.090005   61273 pod_ready.go:82] duration metric: took 20.250586ms for pod "coredns-7c65d6cfc9-dgnw2" in "kube-system" namespace to be "Ready" ...
	E0918 21:05:54.090017   61273 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-331658" hosting pod "coredns-7c65d6cfc9-dgnw2" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-331658" has status "Ready":"False"
	I0918 21:05:54.090046   61273 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-331658" in "kube-system" namespace to be "Ready" ...
	I0918 21:05:54.105121   61273 pod_ready.go:98] node "no-preload-331658" hosting pod "etcd-no-preload-331658" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-331658" has status "Ready":"False"
	I0918 21:05:54.105156   61273 pod_ready.go:82] duration metric: took 15.097714ms for pod "etcd-no-preload-331658" in "kube-system" namespace to be "Ready" ...
	E0918 21:05:54.105170   61273 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-331658" hosting pod "etcd-no-preload-331658" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-331658" has status "Ready":"False"
	I0918 21:05:54.105180   61273 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-331658" in "kube-system" namespace to be "Ready" ...
	I0918 21:05:54.112687   61273 pod_ready.go:98] node "no-preload-331658" hosting pod "kube-apiserver-no-preload-331658" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-331658" has status "Ready":"False"
	I0918 21:05:54.112711   61273 pod_ready.go:82] duration metric: took 7.523191ms for pod "kube-apiserver-no-preload-331658" in "kube-system" namespace to be "Ready" ...
	E0918 21:05:54.112722   61273 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-331658" hosting pod "kube-apiserver-no-preload-331658" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-331658" has status "Ready":"False"
	I0918 21:05:54.112730   61273 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-331658" in "kube-system" namespace to be "Ready" ...
	I0918 21:05:54.119681   61273 pod_ready.go:98] node "no-preload-331658" hosting pod "kube-controller-manager-no-preload-331658" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-331658" has status "Ready":"False"
	I0918 21:05:54.119707   61273 pod_ready.go:82] duration metric: took 6.967275ms for pod "kube-controller-manager-no-preload-331658" in "kube-system" namespace to be "Ready" ...
	E0918 21:05:54.119716   61273 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-331658" hosting pod "kube-controller-manager-no-preload-331658" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-331658" has status "Ready":"False"
	I0918 21:05:54.119723   61273 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-hx25w" in "kube-system" namespace to be "Ready" ...
	I0918 21:05:54.505099   61273 pod_ready.go:98] node "no-preload-331658" hosting pod "kube-proxy-hx25w" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-331658" has status "Ready":"False"
	I0918 21:05:54.505127   61273 pod_ready.go:82] duration metric: took 385.395528ms for pod "kube-proxy-hx25w" in "kube-system" namespace to be "Ready" ...
	E0918 21:05:54.505140   61273 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-331658" hosting pod "kube-proxy-hx25w" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-331658" has status "Ready":"False"
	I0918 21:05:54.505147   61273 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-331658" in "kube-system" namespace to be "Ready" ...
	I0918 21:05:54.905748   61273 pod_ready.go:98] node "no-preload-331658" hosting pod "kube-scheduler-no-preload-331658" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-331658" has status "Ready":"False"
	I0918 21:05:54.905774   61273 pod_ready.go:82] duration metric: took 400.618175ms for pod "kube-scheduler-no-preload-331658" in "kube-system" namespace to be "Ready" ...
	E0918 21:05:54.905785   61273 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-331658" hosting pod "kube-scheduler-no-preload-331658" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-331658" has status "Ready":"False"
	I0918 21:05:54.905794   61273 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace to be "Ready" ...
	I0918 21:05:55.305077   61273 pod_ready.go:98] node "no-preload-331658" hosting pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-331658" has status "Ready":"False"
	I0918 21:05:55.305106   61273 pod_ready.go:82] duration metric: took 399.301293ms for pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace to be "Ready" ...
	E0918 21:05:55.305118   61273 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-331658" hosting pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-331658" has status "Ready":"False"
	I0918 21:05:55.305126   61273 pod_ready.go:39] duration metric: took 1.242662699s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0918 21:05:55.305150   61273 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0918 21:05:55.317568   61273 ops.go:34] apiserver oom_adj: -16
	I0918 21:05:55.317597   61273 kubeadm.go:597] duration metric: took 9.0593375s to restartPrimaryControlPlane
	I0918 21:05:55.317616   61273 kubeadm.go:394] duration metric: took 9.109322119s to StartCluster
	I0918 21:05:55.317643   61273 settings.go:142] acquiring lock: {Name:mk6ae95a69dcbe00c28846409e9f46945a88de2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 21:05:55.317720   61273 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19667-7671/kubeconfig
	I0918 21:05:55.320228   61273 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/kubeconfig: {Name:mk3a3eecadb0164dfdd5d3c4392b5b473a2a9bb0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 21:05:55.320552   61273 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.31 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0918 21:05:55.320609   61273 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0918 21:05:55.320716   61273 addons.go:69] Setting storage-provisioner=true in profile "no-preload-331658"
	I0918 21:05:55.320725   61273 addons.go:69] Setting default-storageclass=true in profile "no-preload-331658"
	I0918 21:05:55.320739   61273 addons.go:234] Setting addon storage-provisioner=true in "no-preload-331658"
	W0918 21:05:55.320747   61273 addons.go:243] addon storage-provisioner should already be in state true
	I0918 21:05:55.320765   61273 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-331658"
	I0918 21:05:55.320785   61273 host.go:66] Checking if "no-preload-331658" exists ...
	I0918 21:05:55.320769   61273 addons.go:69] Setting metrics-server=true in profile "no-preload-331658"
	I0918 21:05:55.320799   61273 config.go:182] Loaded profile config "no-preload-331658": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0918 21:05:55.320808   61273 addons.go:234] Setting addon metrics-server=true in "no-preload-331658"
	W0918 21:05:55.320863   61273 addons.go:243] addon metrics-server should already be in state true
	I0918 21:05:55.320889   61273 host.go:66] Checking if "no-preload-331658" exists ...
	I0918 21:05:55.321208   61273 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:05:55.321228   61273 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:05:55.321208   61273 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:05:55.321262   61273 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:05:55.321282   61273 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:05:55.321357   61273 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:05:55.323762   61273 out.go:177] * Verifying Kubernetes components...
	I0918 21:05:55.325718   61273 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 21:05:55.348485   61273 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43725
	I0918 21:05:55.349072   61273 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:05:55.349611   61273 main.go:141] libmachine: Using API Version  1
	I0918 21:05:55.349641   61273 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:05:55.349978   61273 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:05:55.350556   61273 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:05:55.350606   61273 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:05:55.368807   61273 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34285
	I0918 21:05:55.369340   61273 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:05:55.369826   61273 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35389
	I0918 21:05:55.369908   61273 main.go:141] libmachine: Using API Version  1
	I0918 21:05:55.369928   61273 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:05:55.369949   61273 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42891
	I0918 21:05:55.370195   61273 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:05:55.370303   61273 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:05:55.370408   61273 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:05:55.370494   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetState
	I0918 21:05:55.370772   61273 main.go:141] libmachine: Using API Version  1
	I0918 21:05:55.370797   61273 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:05:55.370908   61273 main.go:141] libmachine: Using API Version  1
	I0918 21:05:55.370929   61273 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:05:55.371790   61273 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:05:55.371833   61273 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:05:55.371996   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetState
	I0918 21:05:55.372415   61273 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:05:55.372470   61273 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:05:55.372532   61273 main.go:141] libmachine: (no-preload-331658) Calling .DriverName
	I0918 21:05:55.375524   61273 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 21:05:55.375574   61273 addons.go:234] Setting addon default-storageclass=true in "no-preload-331658"
	W0918 21:05:55.375593   61273 addons.go:243] addon default-storageclass should already be in state true
	I0918 21:05:55.375626   61273 host.go:66] Checking if "no-preload-331658" exists ...
	I0918 21:05:55.376008   61273 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:05:55.376097   61273 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:05:55.377828   61273 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0918 21:05:55.377848   61273 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0918 21:05:55.377864   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHHostname
	I0918 21:05:55.381877   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:55.382379   61273 main.go:141] libmachine: (no-preload-331658) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:47:d0", ip: ""} in network mk-no-preload-331658: {Iface:virbr2 ExpiryTime:2024-09-18 21:55:52 +0000 UTC Type:0 Mac:52:54:00:2a:47:d0 Iaid: IPaddr:192.168.61.31 Prefix:24 Hostname:no-preload-331658 Clientid:01:52:54:00:2a:47:d0}
	I0918 21:05:55.382438   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined IP address 192.168.61.31 and MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:55.382767   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHPort
	I0918 21:05:55.384470   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHKeyPath
	I0918 21:05:55.384700   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHUsername
	I0918 21:05:55.384863   61273 sshutil.go:53] new ssh client: &{IP:192.168.61.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/no-preload-331658/id_rsa Username:docker}
	I0918 21:05:55.399531   61273 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38229
	I0918 21:05:55.400009   61273 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:05:55.400532   61273 main.go:141] libmachine: Using API Version  1
	I0918 21:05:55.400552   61273 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:05:55.400918   61273 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:05:55.401097   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetState
	I0918 21:05:55.403124   61273 main.go:141] libmachine: (no-preload-331658) Calling .DriverName
	I0918 21:05:55.404237   61273 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45209
	I0918 21:05:55.404637   61273 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:05:55.405088   61273 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0918 21:05:55.405422   61273 main.go:141] libmachine: Using API Version  1
	I0918 21:05:55.405443   61273 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:05:55.405906   61273 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:05:55.406570   61273 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:05:55.406620   61273 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:05:55.406959   61273 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0918 21:05:55.406973   61273 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0918 21:05:55.407380   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHHostname
	I0918 21:05:55.411410   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:55.411430   61273 main.go:141] libmachine: (no-preload-331658) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:47:d0", ip: ""} in network mk-no-preload-331658: {Iface:virbr2 ExpiryTime:2024-09-18 21:55:52 +0000 UTC Type:0 Mac:52:54:00:2a:47:d0 Iaid: IPaddr:192.168.61.31 Prefix:24 Hostname:no-preload-331658 Clientid:01:52:54:00:2a:47:d0}
	I0918 21:05:55.411440   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined IP address 192.168.61.31 and MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:55.411727   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHPort
	I0918 21:05:55.411965   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHKeyPath
	I0918 21:05:55.412171   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHUsername
	I0918 21:05:55.412377   61273 sshutil.go:53] new ssh client: &{IP:192.168.61.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/no-preload-331658/id_rsa Username:docker}
	I0918 21:05:55.426166   61273 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38277
	I0918 21:05:55.426704   61273 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:05:55.427211   61273 main.go:141] libmachine: Using API Version  1
	I0918 21:05:55.427232   61273 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:05:55.427610   61273 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:05:55.427805   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetState
	I0918 21:05:55.429864   61273 main.go:141] libmachine: (no-preload-331658) Calling .DriverName
	I0918 21:05:55.430238   61273 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0918 21:05:55.430256   61273 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0918 21:05:55.430278   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHHostname
	I0918 21:05:55.433576   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:55.433894   61273 main.go:141] libmachine: (no-preload-331658) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:47:d0", ip: ""} in network mk-no-preload-331658: {Iface:virbr2 ExpiryTime:2024-09-18 21:55:52 +0000 UTC Type:0 Mac:52:54:00:2a:47:d0 Iaid: IPaddr:192.168.61.31 Prefix:24 Hostname:no-preload-331658 Clientid:01:52:54:00:2a:47:d0}
	I0918 21:05:55.433918   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined IP address 192.168.61.31 and MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:55.434411   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHPort
	I0918 21:05:55.434650   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHKeyPath
	I0918 21:05:55.434798   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHUsername
	I0918 21:05:55.434942   61273 sshutil.go:53] new ssh client: &{IP:192.168.61.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/no-preload-331658/id_rsa Username:docker}
	I0918 21:05:55.528033   61273 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0918 21:05:55.545524   61273 node_ready.go:35] waiting up to 6m0s for node "no-preload-331658" to be "Ready" ...
	I0918 21:05:55.606477   61273 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0918 21:05:55.606498   61273 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0918 21:05:55.628256   61273 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0918 21:05:55.636122   61273 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0918 21:05:55.636154   61273 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0918 21:05:55.663081   61273 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0918 21:05:55.663108   61273 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0918 21:05:55.715011   61273 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0918 21:05:55.738192   61273 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0918 21:05:56.247539   61273 main.go:141] libmachine: Making call to close driver server
	I0918 21:05:56.247568   61273 main.go:141] libmachine: (no-preload-331658) Calling .Close
	I0918 21:05:56.247900   61273 main.go:141] libmachine: (no-preload-331658) DBG | Closing plugin on server side
	I0918 21:05:56.247922   61273 main.go:141] libmachine: Successfully made call to close driver server
	I0918 21:05:56.247937   61273 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 21:05:56.247948   61273 main.go:141] libmachine: Making call to close driver server
	I0918 21:05:56.247960   61273 main.go:141] libmachine: (no-preload-331658) Calling .Close
	I0918 21:05:56.248225   61273 main.go:141] libmachine: Successfully made call to close driver server
	I0918 21:05:56.248240   61273 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 21:05:56.248273   61273 main.go:141] libmachine: (no-preload-331658) DBG | Closing plugin on server side
	I0918 21:05:56.261942   61273 main.go:141] libmachine: Making call to close driver server
	I0918 21:05:56.261972   61273 main.go:141] libmachine: (no-preload-331658) Calling .Close
	I0918 21:05:56.262269   61273 main.go:141] libmachine: (no-preload-331658) DBG | Closing plugin on server side
	I0918 21:05:56.262344   61273 main.go:141] libmachine: Successfully made call to close driver server
	I0918 21:05:56.262361   61273 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 21:05:56.944008   61273 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.22895695s)
	I0918 21:05:56.944084   61273 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.205856091s)
	I0918 21:05:56.944121   61273 main.go:141] libmachine: Making call to close driver server
	I0918 21:05:56.944138   61273 main.go:141] libmachine: (no-preload-331658) Calling .Close
	I0918 21:05:56.944087   61273 main.go:141] libmachine: Making call to close driver server
	I0918 21:05:56.944186   61273 main.go:141] libmachine: (no-preload-331658) Calling .Close
	I0918 21:05:56.944489   61273 main.go:141] libmachine: (no-preload-331658) DBG | Closing plugin on server side
	I0918 21:05:56.944539   61273 main.go:141] libmachine: Successfully made call to close driver server
	I0918 21:05:56.944553   61273 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 21:05:56.944561   61273 main.go:141] libmachine: Making call to close driver server
	I0918 21:05:56.944572   61273 main.go:141] libmachine: (no-preload-331658) Calling .Close
	I0918 21:05:56.944559   61273 main.go:141] libmachine: (no-preload-331658) DBG | Closing plugin on server side
	I0918 21:05:56.944570   61273 main.go:141] libmachine: Successfully made call to close driver server
	I0918 21:05:56.944654   61273 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 21:05:56.944669   61273 main.go:141] libmachine: Making call to close driver server
	I0918 21:05:56.944678   61273 main.go:141] libmachine: (no-preload-331658) Calling .Close
	I0918 21:05:56.944794   61273 main.go:141] libmachine: Successfully made call to close driver server
	I0918 21:05:56.944808   61273 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 21:05:56.944823   61273 main.go:141] libmachine: (no-preload-331658) DBG | Closing plugin on server side
	I0918 21:05:56.944965   61273 main.go:141] libmachine: (no-preload-331658) DBG | Closing plugin on server side
	I0918 21:05:56.944988   61273 main.go:141] libmachine: Successfully made call to close driver server
	I0918 21:05:56.944998   61273 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 21:05:56.945010   61273 addons.go:475] Verifying addon metrics-server=true in "no-preload-331658"
	I0918 21:05:56.946962   61273 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0918 21:05:53.135068   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:55.633160   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:55.393859   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:57.888366   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:54.703227   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:55.204057   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:55.704178   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:56.203443   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:56.703517   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:57.203499   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:57.703598   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:58.203660   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:58.703897   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:59.203256   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:56.948595   61273 addons.go:510] duration metric: took 1.627989207s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0918 21:05:57.549092   61273 node_ready.go:53] node "no-preload-331658" has status "Ready":"False"
	I0918 21:06:00.050199   61273 node_ready.go:53] node "no-preload-331658" has status "Ready":"False"
	I0918 21:05:58.134289   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:00.632302   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:00.386644   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:02.387972   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:59.704149   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:00.203356   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:00.703750   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:01.203765   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:01.704295   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:02.203759   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:02.703342   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:03.204083   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:03.703777   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:04.203340   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:02.549111   61273 node_ready.go:49] node "no-preload-331658" has status "Ready":"True"
	I0918 21:06:02.549153   61273 node_ready.go:38] duration metric: took 7.003597589s for node "no-preload-331658" to be "Ready" ...
	I0918 21:06:02.549162   61273 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0918 21:06:02.554487   61273 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-dgnw2" in "kube-system" namespace to be "Ready" ...
	I0918 21:06:02.560130   61273 pod_ready.go:93] pod "coredns-7c65d6cfc9-dgnw2" in "kube-system" namespace has status "Ready":"True"
	I0918 21:06:02.560160   61273 pod_ready.go:82] duration metric: took 5.643145ms for pod "coredns-7c65d6cfc9-dgnw2" in "kube-system" namespace to be "Ready" ...
	I0918 21:06:02.560173   61273 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-331658" in "kube-system" namespace to be "Ready" ...
	I0918 21:06:02.567971   61273 pod_ready.go:93] pod "etcd-no-preload-331658" in "kube-system" namespace has status "Ready":"True"
	I0918 21:06:02.567992   61273 pod_ready.go:82] duration metric: took 7.811385ms for pod "etcd-no-preload-331658" in "kube-system" namespace to be "Ready" ...
	I0918 21:06:02.568001   61273 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-331658" in "kube-system" namespace to be "Ready" ...
	I0918 21:06:02.572606   61273 pod_ready.go:93] pod "kube-apiserver-no-preload-331658" in "kube-system" namespace has status "Ready":"True"
	I0918 21:06:02.572633   61273 pod_ready.go:82] duration metric: took 4.625414ms for pod "kube-apiserver-no-preload-331658" in "kube-system" namespace to be "Ready" ...
	I0918 21:06:02.572644   61273 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-331658" in "kube-system" namespace to be "Ready" ...
	I0918 21:06:02.577222   61273 pod_ready.go:93] pod "kube-controller-manager-no-preload-331658" in "kube-system" namespace has status "Ready":"True"
	I0918 21:06:02.577243   61273 pod_ready.go:82] duration metric: took 4.591499ms for pod "kube-controller-manager-no-preload-331658" in "kube-system" namespace to be "Ready" ...
	I0918 21:06:02.577252   61273 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-hx25w" in "kube-system" namespace to be "Ready" ...
	I0918 21:06:02.949682   61273 pod_ready.go:93] pod "kube-proxy-hx25w" in "kube-system" namespace has status "Ready":"True"
	I0918 21:06:02.949707   61273 pod_ready.go:82] duration metric: took 372.449094ms for pod "kube-proxy-hx25w" in "kube-system" namespace to be "Ready" ...
	I0918 21:06:02.949716   61273 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-331658" in "kube-system" namespace to be "Ready" ...
	I0918 21:06:03.350071   61273 pod_ready.go:93] pod "kube-scheduler-no-preload-331658" in "kube-system" namespace has status "Ready":"True"
	I0918 21:06:03.350104   61273 pod_ready.go:82] duration metric: took 400.380059ms for pod "kube-scheduler-no-preload-331658" in "kube-system" namespace to be "Ready" ...
	I0918 21:06:03.350118   61273 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace to be "Ready" ...
	I0918 21:06:05.357041   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:02.634105   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:05.132860   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:04.887184   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:06.887596   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:04.703762   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:05.203639   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:05.703335   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:06.204156   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:06.703735   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:07.203278   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:07.704072   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:08.203299   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:08.703528   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:09.203725   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:07.857844   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:10.356822   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:07.633985   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:10.133861   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:08.887695   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:11.387735   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:09.703712   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:10.203930   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:10.704216   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:11.203706   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:11.703494   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:12.204098   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:12.703927   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:13.203379   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:13.703248   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:14.204000   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:12.356878   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:14.360285   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:12.631731   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:15.132229   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:17.132802   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:13.887296   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:16.386306   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:14.704159   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:15.203401   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:15.703942   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:16.204043   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:16.703691   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:17.203508   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:17.703445   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:18.203689   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:18.704249   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:19.203290   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:06:19.203401   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:06:19.239033   62061 cri.go:89] found id: ""
	I0918 21:06:19.239065   62061 logs.go:276] 0 containers: []
	W0918 21:06:19.239073   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:06:19.239079   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:06:19.239141   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:06:19.274781   62061 cri.go:89] found id: ""
	I0918 21:06:19.274809   62061 logs.go:276] 0 containers: []
	W0918 21:06:19.274819   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:06:19.274833   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:06:19.274895   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:06:19.307894   62061 cri.go:89] found id: ""
	I0918 21:06:19.307928   62061 logs.go:276] 0 containers: []
	W0918 21:06:19.307940   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:06:19.307948   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:06:19.308002   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:06:19.340572   62061 cri.go:89] found id: ""
	I0918 21:06:19.340602   62061 logs.go:276] 0 containers: []
	W0918 21:06:19.340610   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:06:19.340615   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:06:19.340672   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:06:19.375448   62061 cri.go:89] found id: ""
	I0918 21:06:19.375475   62061 logs.go:276] 0 containers: []
	W0918 21:06:19.375483   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:06:19.375489   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:06:19.375536   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:06:19.413102   62061 cri.go:89] found id: ""
	I0918 21:06:19.413133   62061 logs.go:276] 0 containers: []
	W0918 21:06:19.413158   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:06:19.413166   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:06:19.413238   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:06:19.447497   62061 cri.go:89] found id: ""
	I0918 21:06:19.447526   62061 logs.go:276] 0 containers: []
	W0918 21:06:19.447536   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:06:19.447544   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:06:19.447605   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:06:19.480848   62061 cri.go:89] found id: ""
	I0918 21:06:19.480880   62061 logs.go:276] 0 containers: []
	W0918 21:06:19.480892   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:06:19.480903   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:06:19.480916   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:06:16.856845   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:19.358010   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:19.632608   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:22.132792   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:18.387488   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:20.887832   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:19.533573   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:06:19.533625   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:06:19.546578   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:06:19.546604   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:06:19.667439   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:06:19.667459   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:06:19.667472   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:06:19.739525   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:06:19.739563   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:06:22.280913   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:22.296045   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:06:22.296132   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:06:22.332361   62061 cri.go:89] found id: ""
	I0918 21:06:22.332401   62061 logs.go:276] 0 containers: []
	W0918 21:06:22.332413   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:06:22.332421   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:06:22.332485   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:06:22.392138   62061 cri.go:89] found id: ""
	I0918 21:06:22.392170   62061 logs.go:276] 0 containers: []
	W0918 21:06:22.392178   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:06:22.392184   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:06:22.392237   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:06:22.431265   62061 cri.go:89] found id: ""
	I0918 21:06:22.431296   62061 logs.go:276] 0 containers: []
	W0918 21:06:22.431306   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:06:22.431313   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:06:22.431376   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:06:22.463685   62061 cri.go:89] found id: ""
	I0918 21:06:22.463718   62061 logs.go:276] 0 containers: []
	W0918 21:06:22.463730   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:06:22.463738   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:06:22.463808   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:06:22.498031   62061 cri.go:89] found id: ""
	I0918 21:06:22.498058   62061 logs.go:276] 0 containers: []
	W0918 21:06:22.498069   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:06:22.498076   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:06:22.498140   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:06:22.537701   62061 cri.go:89] found id: ""
	I0918 21:06:22.537732   62061 logs.go:276] 0 containers: []
	W0918 21:06:22.537740   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:06:22.537746   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:06:22.537803   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:06:22.574085   62061 cri.go:89] found id: ""
	I0918 21:06:22.574118   62061 logs.go:276] 0 containers: []
	W0918 21:06:22.574130   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:06:22.574138   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:06:22.574202   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:06:22.606313   62061 cri.go:89] found id: ""
	I0918 21:06:22.606341   62061 logs.go:276] 0 containers: []
	W0918 21:06:22.606349   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:06:22.606359   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:06:22.606372   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:06:22.678484   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:06:22.678511   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:06:22.678524   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:06:22.756730   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:06:22.756767   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:06:22.793537   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:06:22.793573   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:06:22.845987   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:06:22.846021   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:06:21.857010   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:23.857823   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:26.358268   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:24.133063   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:26.632474   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:23.387764   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:25.886548   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:27.887108   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:25.360980   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:25.374930   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:06:25.375010   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:06:25.413100   62061 cri.go:89] found id: ""
	I0918 21:06:25.413135   62061 logs.go:276] 0 containers: []
	W0918 21:06:25.413147   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:06:25.413155   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:06:25.413221   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:06:25.450999   62061 cri.go:89] found id: ""
	I0918 21:06:25.451024   62061 logs.go:276] 0 containers: []
	W0918 21:06:25.451032   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:06:25.451038   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:06:25.451087   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:06:25.496114   62061 cri.go:89] found id: ""
	I0918 21:06:25.496147   62061 logs.go:276] 0 containers: []
	W0918 21:06:25.496155   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:06:25.496161   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:06:25.496218   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:06:25.529186   62061 cri.go:89] found id: ""
	I0918 21:06:25.529211   62061 logs.go:276] 0 containers: []
	W0918 21:06:25.529218   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:06:25.529223   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:06:25.529294   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:06:25.569710   62061 cri.go:89] found id: ""
	I0918 21:06:25.569735   62061 logs.go:276] 0 containers: []
	W0918 21:06:25.569743   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:06:25.569749   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:06:25.569796   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:06:25.606915   62061 cri.go:89] found id: ""
	I0918 21:06:25.606951   62061 logs.go:276] 0 containers: []
	W0918 21:06:25.606964   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:06:25.606972   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:06:25.607034   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:06:25.642705   62061 cri.go:89] found id: ""
	I0918 21:06:25.642736   62061 logs.go:276] 0 containers: []
	W0918 21:06:25.642744   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:06:25.642750   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:06:25.642801   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:06:25.676418   62061 cri.go:89] found id: ""
	I0918 21:06:25.676448   62061 logs.go:276] 0 containers: []
	W0918 21:06:25.676457   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:06:25.676466   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:06:25.676477   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:06:25.689913   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:06:25.689945   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:06:25.771579   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:06:25.771607   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:06:25.771622   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:06:25.850904   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:06:25.850945   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:06:25.890820   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:06:25.890849   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:06:28.438958   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:28.451961   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:06:28.452043   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:06:28.485000   62061 cri.go:89] found id: ""
	I0918 21:06:28.485059   62061 logs.go:276] 0 containers: []
	W0918 21:06:28.485070   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:06:28.485077   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:06:28.485138   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:06:28.517852   62061 cri.go:89] found id: ""
	I0918 21:06:28.517885   62061 logs.go:276] 0 containers: []
	W0918 21:06:28.517895   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:06:28.517903   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:06:28.517957   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:06:28.549914   62061 cri.go:89] found id: ""
	I0918 21:06:28.549940   62061 logs.go:276] 0 containers: []
	W0918 21:06:28.549949   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:06:28.549955   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:06:28.550000   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:06:28.584133   62061 cri.go:89] found id: ""
	I0918 21:06:28.584156   62061 logs.go:276] 0 containers: []
	W0918 21:06:28.584163   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:06:28.584169   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:06:28.584213   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:06:28.620031   62061 cri.go:89] found id: ""
	I0918 21:06:28.620061   62061 logs.go:276] 0 containers: []
	W0918 21:06:28.620071   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:06:28.620078   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:06:28.620143   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:06:28.653393   62061 cri.go:89] found id: ""
	I0918 21:06:28.653424   62061 logs.go:276] 0 containers: []
	W0918 21:06:28.653435   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:06:28.653443   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:06:28.653505   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:06:28.687696   62061 cri.go:89] found id: ""
	I0918 21:06:28.687729   62061 logs.go:276] 0 containers: []
	W0918 21:06:28.687741   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:06:28.687752   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:06:28.687809   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:06:28.724824   62061 cri.go:89] found id: ""
	I0918 21:06:28.724854   62061 logs.go:276] 0 containers: []
	W0918 21:06:28.724864   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:06:28.724875   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:06:28.724890   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:06:28.785489   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:06:28.785529   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:06:28.804170   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:06:28.804211   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:06:28.895508   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:06:28.895538   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:06:28.895554   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:06:28.974643   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:06:28.974685   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:06:28.858259   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:31.356644   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:28.633851   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:31.133612   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:30.392038   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:32.886708   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:31.517305   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:31.530374   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:06:31.530435   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:06:31.568335   62061 cri.go:89] found id: ""
	I0918 21:06:31.568374   62061 logs.go:276] 0 containers: []
	W0918 21:06:31.568385   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:06:31.568393   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:06:31.568462   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:06:31.603597   62061 cri.go:89] found id: ""
	I0918 21:06:31.603624   62061 logs.go:276] 0 containers: []
	W0918 21:06:31.603633   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:06:31.603639   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:06:31.603694   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:06:31.639355   62061 cri.go:89] found id: ""
	I0918 21:06:31.639380   62061 logs.go:276] 0 containers: []
	W0918 21:06:31.639388   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:06:31.639393   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:06:31.639445   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:06:31.674197   62061 cri.go:89] found id: ""
	I0918 21:06:31.674223   62061 logs.go:276] 0 containers: []
	W0918 21:06:31.674231   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:06:31.674237   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:06:31.674286   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:06:31.710563   62061 cri.go:89] found id: ""
	I0918 21:06:31.710592   62061 logs.go:276] 0 containers: []
	W0918 21:06:31.710606   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:06:31.710612   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:06:31.710663   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:06:31.746923   62061 cri.go:89] found id: ""
	I0918 21:06:31.746958   62061 logs.go:276] 0 containers: []
	W0918 21:06:31.746969   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:06:31.746976   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:06:31.747039   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:06:31.781913   62061 cri.go:89] found id: ""
	I0918 21:06:31.781946   62061 logs.go:276] 0 containers: []
	W0918 21:06:31.781961   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:06:31.781969   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:06:31.782023   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:06:31.817293   62061 cri.go:89] found id: ""
	I0918 21:06:31.817318   62061 logs.go:276] 0 containers: []
	W0918 21:06:31.817327   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:06:31.817338   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:06:31.817375   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:06:31.869479   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:06:31.869518   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:06:31.884187   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:06:31.884219   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:06:31.967159   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:06:31.967189   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:06:31.967202   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:06:32.050404   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:06:32.050458   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:06:33.357380   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:35.856960   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:33.633434   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:36.133740   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:34.888738   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:37.386351   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:34.593135   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:34.613917   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:06:34.614001   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:06:34.649649   62061 cri.go:89] found id: ""
	I0918 21:06:34.649676   62061 logs.go:276] 0 containers: []
	W0918 21:06:34.649684   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:06:34.649690   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:06:34.649738   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:06:34.684898   62061 cri.go:89] found id: ""
	I0918 21:06:34.684932   62061 logs.go:276] 0 containers: []
	W0918 21:06:34.684940   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:06:34.684948   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:06:34.685018   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:06:34.717519   62061 cri.go:89] found id: ""
	I0918 21:06:34.717549   62061 logs.go:276] 0 containers: []
	W0918 21:06:34.717559   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:06:34.717573   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:06:34.717626   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:06:34.751242   62061 cri.go:89] found id: ""
	I0918 21:06:34.751275   62061 logs.go:276] 0 containers: []
	W0918 21:06:34.751289   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:06:34.751298   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:06:34.751368   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:06:34.786700   62061 cri.go:89] found id: ""
	I0918 21:06:34.786729   62061 logs.go:276] 0 containers: []
	W0918 21:06:34.786737   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:06:34.786742   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:06:34.786792   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:06:34.822400   62061 cri.go:89] found id: ""
	I0918 21:06:34.822446   62061 logs.go:276] 0 containers: []
	W0918 21:06:34.822459   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:06:34.822468   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:06:34.822537   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:06:34.860509   62061 cri.go:89] found id: ""
	I0918 21:06:34.860540   62061 logs.go:276] 0 containers: []
	W0918 21:06:34.860549   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:06:34.860554   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:06:34.860626   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:06:34.900682   62061 cri.go:89] found id: ""
	I0918 21:06:34.900712   62061 logs.go:276] 0 containers: []
	W0918 21:06:34.900719   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:06:34.900727   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:06:34.900739   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:06:34.975975   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:06:34.976009   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:06:34.976043   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:06:35.058972   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:06:35.059012   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:06:35.101690   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:06:35.101716   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:06:35.154486   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:06:35.154525   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:06:37.669814   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:37.683235   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:06:37.683294   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:06:37.720666   62061 cri.go:89] found id: ""
	I0918 21:06:37.720697   62061 logs.go:276] 0 containers: []
	W0918 21:06:37.720708   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:06:37.720715   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:06:37.720777   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:06:37.758288   62061 cri.go:89] found id: ""
	I0918 21:06:37.758327   62061 logs.go:276] 0 containers: []
	W0918 21:06:37.758335   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:06:37.758341   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:06:37.758436   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:06:37.793394   62061 cri.go:89] found id: ""
	I0918 21:06:37.793421   62061 logs.go:276] 0 containers: []
	W0918 21:06:37.793430   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:06:37.793437   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:06:37.793501   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:06:37.828258   62061 cri.go:89] found id: ""
	I0918 21:06:37.828291   62061 logs.go:276] 0 containers: []
	W0918 21:06:37.828300   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:06:37.828307   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:06:37.828361   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:06:37.863164   62061 cri.go:89] found id: ""
	I0918 21:06:37.863189   62061 logs.go:276] 0 containers: []
	W0918 21:06:37.863197   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:06:37.863203   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:06:37.863251   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:06:37.899585   62061 cri.go:89] found id: ""
	I0918 21:06:37.899614   62061 logs.go:276] 0 containers: []
	W0918 21:06:37.899622   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:06:37.899628   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:06:37.899675   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:06:37.936246   62061 cri.go:89] found id: ""
	I0918 21:06:37.936282   62061 logs.go:276] 0 containers: []
	W0918 21:06:37.936292   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:06:37.936299   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:06:37.936362   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:06:37.969913   62061 cri.go:89] found id: ""
	I0918 21:06:37.969942   62061 logs.go:276] 0 containers: []
	W0918 21:06:37.969950   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:06:37.969958   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:06:37.969968   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:06:38.023055   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:06:38.023094   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:06:38.036006   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:06:38.036047   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:06:38.116926   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:06:38.116957   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:06:38.116972   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:06:38.192632   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:06:38.192668   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:06:37.860654   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:40.357107   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:38.633432   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:41.131957   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:39.387927   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:41.886904   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:40.729602   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:40.743291   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:06:40.743368   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:06:40.778138   62061 cri.go:89] found id: ""
	I0918 21:06:40.778166   62061 logs.go:276] 0 containers: []
	W0918 21:06:40.778176   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:06:40.778184   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:06:40.778248   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:06:40.813026   62061 cri.go:89] found id: ""
	I0918 21:06:40.813062   62061 logs.go:276] 0 containers: []
	W0918 21:06:40.813073   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:06:40.813081   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:06:40.813146   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:06:40.847609   62061 cri.go:89] found id: ""
	I0918 21:06:40.847641   62061 logs.go:276] 0 containers: []
	W0918 21:06:40.847651   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:06:40.847658   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:06:40.847727   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:06:40.885403   62061 cri.go:89] found id: ""
	I0918 21:06:40.885432   62061 logs.go:276] 0 containers: []
	W0918 21:06:40.885441   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:06:40.885448   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:06:40.885515   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:06:40.922679   62061 cri.go:89] found id: ""
	I0918 21:06:40.922705   62061 logs.go:276] 0 containers: []
	W0918 21:06:40.922714   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:06:40.922719   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:06:40.922776   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:06:40.957187   62061 cri.go:89] found id: ""
	I0918 21:06:40.957215   62061 logs.go:276] 0 containers: []
	W0918 21:06:40.957222   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:06:40.957228   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:06:40.957291   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:06:40.991722   62061 cri.go:89] found id: ""
	I0918 21:06:40.991751   62061 logs.go:276] 0 containers: []
	W0918 21:06:40.991762   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:06:40.991769   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:06:40.991835   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:06:41.027210   62061 cri.go:89] found id: ""
	I0918 21:06:41.027234   62061 logs.go:276] 0 containers: []
	W0918 21:06:41.027242   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:06:41.027250   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:06:41.027275   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:06:41.077183   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:06:41.077221   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:06:41.090707   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:06:41.090734   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:06:41.166206   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:06:41.166228   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:06:41.166241   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:06:41.240308   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:06:41.240346   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:06:43.777517   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:43.790901   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:06:43.790970   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:06:43.827127   62061 cri.go:89] found id: ""
	I0918 21:06:43.827156   62061 logs.go:276] 0 containers: []
	W0918 21:06:43.827164   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:06:43.827170   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:06:43.827218   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:06:43.861218   62061 cri.go:89] found id: ""
	I0918 21:06:43.861244   62061 logs.go:276] 0 containers: []
	W0918 21:06:43.861252   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:06:43.861257   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:06:43.861308   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:06:43.899669   62061 cri.go:89] found id: ""
	I0918 21:06:43.899694   62061 logs.go:276] 0 containers: []
	W0918 21:06:43.899701   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:06:43.899707   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:06:43.899755   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:06:43.934698   62061 cri.go:89] found id: ""
	I0918 21:06:43.934731   62061 logs.go:276] 0 containers: []
	W0918 21:06:43.934741   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:06:43.934748   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:06:43.934819   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:06:43.971715   62061 cri.go:89] found id: ""
	I0918 21:06:43.971742   62061 logs.go:276] 0 containers: []
	W0918 21:06:43.971755   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:06:43.971760   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:06:43.971817   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:06:44.005880   62061 cri.go:89] found id: ""
	I0918 21:06:44.005915   62061 logs.go:276] 0 containers: []
	W0918 21:06:44.005927   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:06:44.005935   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:06:44.006003   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:06:44.038144   62061 cri.go:89] found id: ""
	I0918 21:06:44.038171   62061 logs.go:276] 0 containers: []
	W0918 21:06:44.038180   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:06:44.038186   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:06:44.038239   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:06:44.073920   62061 cri.go:89] found id: ""
	I0918 21:06:44.073953   62061 logs.go:276] 0 containers: []
	W0918 21:06:44.073966   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:06:44.073978   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:06:44.073992   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:06:44.142881   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:06:44.142910   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:06:44.142926   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:06:44.222302   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:06:44.222341   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:06:44.265914   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:06:44.265952   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:06:44.316229   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:06:44.316266   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:06:42.856192   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:44.857673   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:43.132992   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:45.134509   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:43.888282   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:45.889414   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:46.830793   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:46.843829   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:06:46.843886   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:06:46.878566   62061 cri.go:89] found id: ""
	I0918 21:06:46.878607   62061 logs.go:276] 0 containers: []
	W0918 21:06:46.878621   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:06:46.878630   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:06:46.878710   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:06:46.915576   62061 cri.go:89] found id: ""
	I0918 21:06:46.915603   62061 logs.go:276] 0 containers: []
	W0918 21:06:46.915611   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:06:46.915616   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:06:46.915663   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:06:46.952982   62061 cri.go:89] found id: ""
	I0918 21:06:46.953014   62061 logs.go:276] 0 containers: []
	W0918 21:06:46.953032   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:06:46.953039   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:06:46.953104   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:06:46.987718   62061 cri.go:89] found id: ""
	I0918 21:06:46.987749   62061 logs.go:276] 0 containers: []
	W0918 21:06:46.987757   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:06:46.987763   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:06:46.987815   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:06:47.022695   62061 cri.go:89] found id: ""
	I0918 21:06:47.022726   62061 logs.go:276] 0 containers: []
	W0918 21:06:47.022735   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:06:47.022741   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:06:47.022799   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:06:47.058058   62061 cri.go:89] found id: ""
	I0918 21:06:47.058086   62061 logs.go:276] 0 containers: []
	W0918 21:06:47.058094   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:06:47.058101   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:06:47.058159   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:06:47.093314   62061 cri.go:89] found id: ""
	I0918 21:06:47.093352   62061 logs.go:276] 0 containers: []
	W0918 21:06:47.093361   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:06:47.093367   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:06:47.093424   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:06:47.126997   62061 cri.go:89] found id: ""
	I0918 21:06:47.127023   62061 logs.go:276] 0 containers: []
	W0918 21:06:47.127033   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:06:47.127041   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:06:47.127053   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:06:47.203239   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:06:47.203282   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:06:47.240982   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:06:47.241013   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:06:47.293553   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:06:47.293591   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:06:47.307462   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:06:47.307493   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:06:47.379500   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:06:47.357136   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:49.359981   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:47.633023   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:49.633350   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:52.134627   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:48.387568   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:50.886679   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:52.887065   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:49.880441   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:49.895816   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:06:49.895903   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:06:49.931916   62061 cri.go:89] found id: ""
	I0918 21:06:49.931941   62061 logs.go:276] 0 containers: []
	W0918 21:06:49.931950   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:06:49.931955   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:06:49.932007   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:06:49.967053   62061 cri.go:89] found id: ""
	I0918 21:06:49.967083   62061 logs.go:276] 0 containers: []
	W0918 21:06:49.967093   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:06:49.967101   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:06:49.967194   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:06:50.002943   62061 cri.go:89] found id: ""
	I0918 21:06:50.003004   62061 logs.go:276] 0 containers: []
	W0918 21:06:50.003015   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:06:50.003023   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:06:50.003085   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:06:50.039445   62061 cri.go:89] found id: ""
	I0918 21:06:50.039478   62061 logs.go:276] 0 containers: []
	W0918 21:06:50.039489   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:06:50.039497   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:06:50.039561   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:06:50.073529   62061 cri.go:89] found id: ""
	I0918 21:06:50.073562   62061 logs.go:276] 0 containers: []
	W0918 21:06:50.073572   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:06:50.073578   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:06:50.073645   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:06:50.109363   62061 cri.go:89] found id: ""
	I0918 21:06:50.109394   62061 logs.go:276] 0 containers: []
	W0918 21:06:50.109406   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:06:50.109413   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:06:50.109485   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:06:50.144387   62061 cri.go:89] found id: ""
	I0918 21:06:50.144418   62061 logs.go:276] 0 containers: []
	W0918 21:06:50.144429   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:06:50.144450   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:06:50.144524   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:06:50.178079   62061 cri.go:89] found id: ""
	I0918 21:06:50.178110   62061 logs.go:276] 0 containers: []
	W0918 21:06:50.178119   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:06:50.178128   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:06:50.178140   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:06:50.228857   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:06:50.228893   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:06:50.242077   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:06:50.242108   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:06:50.312868   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:06:50.312903   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:06:50.312920   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:06:50.392880   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:06:50.392924   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:06:52.931623   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:52.945298   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:06:52.945394   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:06:52.980129   62061 cri.go:89] found id: ""
	I0918 21:06:52.980162   62061 logs.go:276] 0 containers: []
	W0918 21:06:52.980171   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:06:52.980176   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:06:52.980224   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:06:53.017899   62061 cri.go:89] found id: ""
	I0918 21:06:53.017932   62061 logs.go:276] 0 containers: []
	W0918 21:06:53.017941   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:06:53.017947   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:06:53.018015   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:06:53.056594   62061 cri.go:89] found id: ""
	I0918 21:06:53.056619   62061 logs.go:276] 0 containers: []
	W0918 21:06:53.056627   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:06:53.056635   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:06:53.056684   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:06:53.089876   62061 cri.go:89] found id: ""
	I0918 21:06:53.089907   62061 logs.go:276] 0 containers: []
	W0918 21:06:53.089915   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:06:53.089920   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:06:53.089967   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:06:53.122845   62061 cri.go:89] found id: ""
	I0918 21:06:53.122881   62061 logs.go:276] 0 containers: []
	W0918 21:06:53.122890   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:06:53.122904   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:06:53.122956   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:06:53.162103   62061 cri.go:89] found id: ""
	I0918 21:06:53.162137   62061 logs.go:276] 0 containers: []
	W0918 21:06:53.162148   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:06:53.162156   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:06:53.162218   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:06:53.195034   62061 cri.go:89] found id: ""
	I0918 21:06:53.195067   62061 logs.go:276] 0 containers: []
	W0918 21:06:53.195078   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:06:53.195085   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:06:53.195144   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:06:53.229473   62061 cri.go:89] found id: ""
	I0918 21:06:53.229518   62061 logs.go:276] 0 containers: []
	W0918 21:06:53.229542   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:06:53.229556   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:06:53.229577   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:06:53.283929   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:06:53.283973   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:06:53.296932   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:06:53.296965   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:06:53.379339   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:06:53.379369   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:06:53.379410   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:06:53.469608   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:06:53.469652   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:06:51.855788   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:53.857284   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:55.860982   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:54.633423   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:56.633695   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:54.888052   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:57.387393   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:56.009227   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:56.023451   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:06:56.023510   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:06:56.056839   62061 cri.go:89] found id: ""
	I0918 21:06:56.056866   62061 logs.go:276] 0 containers: []
	W0918 21:06:56.056875   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:06:56.056881   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:06:56.056931   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:06:56.091893   62061 cri.go:89] found id: ""
	I0918 21:06:56.091919   62061 logs.go:276] 0 containers: []
	W0918 21:06:56.091928   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:06:56.091933   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:06:56.091980   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:06:56.127432   62061 cri.go:89] found id: ""
	I0918 21:06:56.127456   62061 logs.go:276] 0 containers: []
	W0918 21:06:56.127464   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:06:56.127470   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:06:56.127518   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:06:56.162085   62061 cri.go:89] found id: ""
	I0918 21:06:56.162109   62061 logs.go:276] 0 containers: []
	W0918 21:06:56.162118   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:06:56.162123   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:06:56.162176   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:06:56.195711   62061 cri.go:89] found id: ""
	I0918 21:06:56.195743   62061 logs.go:276] 0 containers: []
	W0918 21:06:56.195753   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:06:56.195759   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:06:56.195809   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:06:56.234481   62061 cri.go:89] found id: ""
	I0918 21:06:56.234507   62061 logs.go:276] 0 containers: []
	W0918 21:06:56.234515   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:06:56.234522   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:06:56.234567   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:06:56.268566   62061 cri.go:89] found id: ""
	I0918 21:06:56.268596   62061 logs.go:276] 0 containers: []
	W0918 21:06:56.268608   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:06:56.268617   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:06:56.268681   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:06:56.309785   62061 cri.go:89] found id: ""
	I0918 21:06:56.309813   62061 logs.go:276] 0 containers: []
	W0918 21:06:56.309824   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:06:56.309835   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:06:56.309874   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:06:56.364834   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:06:56.364880   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:06:56.378424   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:06:56.378451   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:06:56.450482   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:06:56.450511   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:06:56.450522   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:06:56.536261   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:06:56.536305   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:06:59.074494   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:59.087553   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:06:59.087622   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:06:59.128323   62061 cri.go:89] found id: ""
	I0918 21:06:59.128357   62061 logs.go:276] 0 containers: []
	W0918 21:06:59.128368   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:06:59.128375   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:06:59.128436   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:06:59.161135   62061 cri.go:89] found id: ""
	I0918 21:06:59.161161   62061 logs.go:276] 0 containers: []
	W0918 21:06:59.161170   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:06:59.161178   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:06:59.161240   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:06:59.201558   62061 cri.go:89] found id: ""
	I0918 21:06:59.201595   62061 logs.go:276] 0 containers: []
	W0918 21:06:59.201607   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:06:59.201614   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:06:59.201678   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:06:59.235330   62061 cri.go:89] found id: ""
	I0918 21:06:59.235356   62061 logs.go:276] 0 containers: []
	W0918 21:06:59.235378   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:06:59.235385   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:06:59.235450   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:06:59.270957   62061 cri.go:89] found id: ""
	I0918 21:06:59.270994   62061 logs.go:276] 0 containers: []
	W0918 21:06:59.271007   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:06:59.271016   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:06:59.271088   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:06:59.305068   62061 cri.go:89] found id: ""
	I0918 21:06:59.305103   62061 logs.go:276] 0 containers: []
	W0918 21:06:59.305115   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:06:59.305123   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:06:59.305177   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:06:59.338763   62061 cri.go:89] found id: ""
	I0918 21:06:59.338796   62061 logs.go:276] 0 containers: []
	W0918 21:06:59.338809   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:06:59.338818   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:06:59.338887   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:06:59.376557   62061 cri.go:89] found id: ""
	I0918 21:06:59.376585   62061 logs.go:276] 0 containers: []
	W0918 21:06:59.376593   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:06:59.376602   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:06:59.376615   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:06:59.455228   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:06:59.455264   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:06:58.356648   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:00.357253   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:59.133274   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:01.632548   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:59.388183   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:01.886834   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:59.494461   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:06:59.494488   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:06:59.543143   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:06:59.543177   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:06:59.556031   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:06:59.556062   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:06:59.623888   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:02.124743   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:07:02.139392   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:07:02.139451   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:07:02.172176   62061 cri.go:89] found id: ""
	I0918 21:07:02.172201   62061 logs.go:276] 0 containers: []
	W0918 21:07:02.172209   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:07:02.172215   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:07:02.172263   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:07:02.206480   62061 cri.go:89] found id: ""
	I0918 21:07:02.206507   62061 logs.go:276] 0 containers: []
	W0918 21:07:02.206518   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:07:02.206525   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:07:02.206586   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:07:02.240256   62061 cri.go:89] found id: ""
	I0918 21:07:02.240281   62061 logs.go:276] 0 containers: []
	W0918 21:07:02.240289   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:07:02.240295   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:07:02.240353   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:07:02.274001   62061 cri.go:89] found id: ""
	I0918 21:07:02.274034   62061 logs.go:276] 0 containers: []
	W0918 21:07:02.274046   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:07:02.274056   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:07:02.274115   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:07:02.307491   62061 cri.go:89] found id: ""
	I0918 21:07:02.307520   62061 logs.go:276] 0 containers: []
	W0918 21:07:02.307528   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:07:02.307534   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:07:02.307597   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:07:02.344688   62061 cri.go:89] found id: ""
	I0918 21:07:02.344720   62061 logs.go:276] 0 containers: []
	W0918 21:07:02.344731   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:07:02.344739   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:07:02.344805   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:07:02.384046   62061 cri.go:89] found id: ""
	I0918 21:07:02.384077   62061 logs.go:276] 0 containers: []
	W0918 21:07:02.384088   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:07:02.384095   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:07:02.384154   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:07:02.422530   62061 cri.go:89] found id: ""
	I0918 21:07:02.422563   62061 logs.go:276] 0 containers: []
	W0918 21:07:02.422575   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:07:02.422586   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:07:02.422604   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:07:02.461236   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:07:02.461266   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:07:02.512280   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:07:02.512320   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:07:02.525404   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:07:02.525435   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:07:02.599746   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:02.599773   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:07:02.599785   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:07:02.856077   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:04.858098   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:04.133240   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:06.135937   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:03.887306   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:05.888675   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:05.183459   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:07:05.197615   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:07:05.197681   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:07:05.234313   62061 cri.go:89] found id: ""
	I0918 21:07:05.234354   62061 logs.go:276] 0 containers: []
	W0918 21:07:05.234365   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:07:05.234371   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:07:05.234429   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:07:05.271436   62061 cri.go:89] found id: ""
	I0918 21:07:05.271466   62061 logs.go:276] 0 containers: []
	W0918 21:07:05.271477   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:07:05.271484   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:07:05.271554   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:07:05.307007   62061 cri.go:89] found id: ""
	I0918 21:07:05.307038   62061 logs.go:276] 0 containers: []
	W0918 21:07:05.307050   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:07:05.307056   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:07:05.307120   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:07:05.341764   62061 cri.go:89] found id: ""
	I0918 21:07:05.341810   62061 logs.go:276] 0 containers: []
	W0918 21:07:05.341831   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:07:05.341840   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:07:05.341908   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:07:05.381611   62061 cri.go:89] found id: ""
	I0918 21:07:05.381646   62061 logs.go:276] 0 containers: []
	W0918 21:07:05.381658   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:07:05.381666   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:07:05.381747   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:07:05.421468   62061 cri.go:89] found id: ""
	I0918 21:07:05.421499   62061 logs.go:276] 0 containers: []
	W0918 21:07:05.421520   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:07:05.421528   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:07:05.421590   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:07:05.457320   62061 cri.go:89] found id: ""
	I0918 21:07:05.457348   62061 logs.go:276] 0 containers: []
	W0918 21:07:05.457359   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:07:05.457367   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:07:05.457425   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:07:05.493879   62061 cri.go:89] found id: ""
	I0918 21:07:05.493915   62061 logs.go:276] 0 containers: []
	W0918 21:07:05.493928   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:07:05.493943   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:07:05.493955   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:07:05.543825   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:07:05.543867   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:07:05.558254   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:07:05.558285   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:07:05.627065   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:05.627091   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:07:05.627103   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:07:05.703838   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:07:05.703876   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:07:08.244087   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:07:08.257807   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:07:08.257897   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:07:08.292034   62061 cri.go:89] found id: ""
	I0918 21:07:08.292064   62061 logs.go:276] 0 containers: []
	W0918 21:07:08.292076   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:07:08.292084   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:07:08.292154   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:07:08.325858   62061 cri.go:89] found id: ""
	I0918 21:07:08.325897   62061 logs.go:276] 0 containers: []
	W0918 21:07:08.325910   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:07:08.325918   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:07:08.325987   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:07:08.369427   62061 cri.go:89] found id: ""
	I0918 21:07:08.369457   62061 logs.go:276] 0 containers: []
	W0918 21:07:08.369468   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:07:08.369475   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:07:08.369536   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:07:08.409402   62061 cri.go:89] found id: ""
	I0918 21:07:08.409434   62061 logs.go:276] 0 containers: []
	W0918 21:07:08.409444   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:07:08.409451   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:07:08.409515   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:07:08.445530   62061 cri.go:89] found id: ""
	I0918 21:07:08.445565   62061 logs.go:276] 0 containers: []
	W0918 21:07:08.445575   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:07:08.445584   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:07:08.445646   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:07:08.481905   62061 cri.go:89] found id: ""
	I0918 21:07:08.481935   62061 logs.go:276] 0 containers: []
	W0918 21:07:08.481945   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:07:08.481952   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:07:08.482020   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:07:08.516710   62061 cri.go:89] found id: ""
	I0918 21:07:08.516738   62061 logs.go:276] 0 containers: []
	W0918 21:07:08.516746   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:07:08.516752   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:07:08.516802   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:07:08.550707   62061 cri.go:89] found id: ""
	I0918 21:07:08.550740   62061 logs.go:276] 0 containers: []
	W0918 21:07:08.550760   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:07:08.550768   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:07:08.550782   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:07:08.624821   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:08.624843   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:07:08.624854   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:07:08.705347   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:07:08.705383   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:07:08.744394   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:07:08.744425   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:07:08.799336   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:07:08.799375   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:07:07.358154   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:09.857118   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:08.633211   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:11.132676   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:08.388884   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:10.887356   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:11.312920   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:07:11.327524   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:07:11.327598   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:07:11.366177   62061 cri.go:89] found id: ""
	I0918 21:07:11.366209   62061 logs.go:276] 0 containers: []
	W0918 21:07:11.366221   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:07:11.366227   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:07:11.366274   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:07:11.405296   62061 cri.go:89] found id: ""
	I0918 21:07:11.405326   62061 logs.go:276] 0 containers: []
	W0918 21:07:11.405344   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:07:11.405351   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:07:11.405413   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:07:11.441786   62061 cri.go:89] found id: ""
	I0918 21:07:11.441818   62061 logs.go:276] 0 containers: []
	W0918 21:07:11.441829   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:07:11.441837   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:07:11.441891   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:07:11.476144   62061 cri.go:89] found id: ""
	I0918 21:07:11.476167   62061 logs.go:276] 0 containers: []
	W0918 21:07:11.476175   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:07:11.476181   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:07:11.476224   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:07:11.512115   62061 cri.go:89] found id: ""
	I0918 21:07:11.512143   62061 logs.go:276] 0 containers: []
	W0918 21:07:11.512154   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:07:11.512160   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:07:11.512221   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:07:11.545466   62061 cri.go:89] found id: ""
	I0918 21:07:11.545494   62061 logs.go:276] 0 containers: []
	W0918 21:07:11.545504   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:07:11.545512   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:07:11.545575   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:07:11.579940   62061 cri.go:89] found id: ""
	I0918 21:07:11.579965   62061 logs.go:276] 0 containers: []
	W0918 21:07:11.579975   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:07:11.579994   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:07:11.580080   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:07:11.613454   62061 cri.go:89] found id: ""
	I0918 21:07:11.613480   62061 logs.go:276] 0 containers: []
	W0918 21:07:11.613491   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:07:11.613512   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:07:11.613535   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:07:11.669079   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:07:11.669140   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:07:11.682614   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:07:11.682639   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:07:11.756876   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:11.756901   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:07:11.756914   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:07:11.835046   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:07:11.835086   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:07:14.377541   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:07:14.392083   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:07:14.392161   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:07:14.431752   62061 cri.go:89] found id: ""
	I0918 21:07:14.431786   62061 logs.go:276] 0 containers: []
	W0918 21:07:14.431795   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:07:14.431800   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:07:14.431856   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:07:14.468530   62061 cri.go:89] found id: ""
	I0918 21:07:14.468562   62061 logs.go:276] 0 containers: []
	W0918 21:07:14.468573   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:07:14.468580   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:07:14.468643   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:07:11.857763   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:14.357253   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:13.132895   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:15.133426   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:13.386537   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:15.387844   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:17.888743   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:14.506510   62061 cri.go:89] found id: ""
	I0918 21:07:14.506540   62061 logs.go:276] 0 containers: []
	W0918 21:07:14.506550   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:07:14.506557   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:07:14.506625   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:07:14.538986   62061 cri.go:89] found id: ""
	I0918 21:07:14.539021   62061 logs.go:276] 0 containers: []
	W0918 21:07:14.539032   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:07:14.539039   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:07:14.539103   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:07:14.572390   62061 cri.go:89] found id: ""
	I0918 21:07:14.572421   62061 logs.go:276] 0 containers: []
	W0918 21:07:14.572432   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:07:14.572440   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:07:14.572499   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:07:14.607875   62061 cri.go:89] found id: ""
	I0918 21:07:14.607905   62061 logs.go:276] 0 containers: []
	W0918 21:07:14.607917   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:07:14.607924   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:07:14.607988   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:07:14.642584   62061 cri.go:89] found id: ""
	I0918 21:07:14.642616   62061 logs.go:276] 0 containers: []
	W0918 21:07:14.642625   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:07:14.642630   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:07:14.642685   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:07:14.682117   62061 cri.go:89] found id: ""
	I0918 21:07:14.682144   62061 logs.go:276] 0 containers: []
	W0918 21:07:14.682152   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:07:14.682163   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:07:14.682177   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:07:14.694780   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:07:14.694808   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:07:14.767988   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:14.768036   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:07:14.768055   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:07:14.860620   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:07:14.860655   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:07:14.905071   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:07:14.905102   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:07:17.455582   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:07:17.468713   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:07:17.468774   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:07:17.503682   62061 cri.go:89] found id: ""
	I0918 21:07:17.503709   62061 logs.go:276] 0 containers: []
	W0918 21:07:17.503721   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:07:17.503729   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:07:17.503794   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:07:17.536581   62061 cri.go:89] found id: ""
	I0918 21:07:17.536611   62061 logs.go:276] 0 containers: []
	W0918 21:07:17.536619   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:07:17.536625   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:07:17.536673   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:07:17.570483   62061 cri.go:89] found id: ""
	I0918 21:07:17.570510   62061 logs.go:276] 0 containers: []
	W0918 21:07:17.570518   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:07:17.570523   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:07:17.570591   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:07:17.604102   62061 cri.go:89] found id: ""
	I0918 21:07:17.604140   62061 logs.go:276] 0 containers: []
	W0918 21:07:17.604152   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:07:17.604160   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:07:17.604229   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:07:17.638633   62061 cri.go:89] found id: ""
	I0918 21:07:17.638661   62061 logs.go:276] 0 containers: []
	W0918 21:07:17.638672   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:07:17.638678   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:07:17.638725   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:07:17.673016   62061 cri.go:89] found id: ""
	I0918 21:07:17.673048   62061 logs.go:276] 0 containers: []
	W0918 21:07:17.673057   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:07:17.673063   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:07:17.673116   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:07:17.708576   62061 cri.go:89] found id: ""
	I0918 21:07:17.708601   62061 logs.go:276] 0 containers: []
	W0918 21:07:17.708609   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:07:17.708620   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:07:17.708672   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:07:17.742820   62061 cri.go:89] found id: ""
	I0918 21:07:17.742843   62061 logs.go:276] 0 containers: []
	W0918 21:07:17.742851   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:07:17.742859   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:07:17.742870   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:07:17.757091   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:07:17.757120   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:07:17.824116   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:17.824137   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:07:17.824151   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:07:17.911013   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:07:17.911065   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:07:17.950517   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:07:17.950553   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:07:16.857284   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:19.357336   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:17.635033   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:20.134331   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:20.388498   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:22.887115   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:20.502423   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:07:20.529214   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:07:20.529297   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:07:20.583618   62061 cri.go:89] found id: ""
	I0918 21:07:20.583660   62061 logs.go:276] 0 containers: []
	W0918 21:07:20.583672   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:07:20.583682   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:07:20.583751   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:07:20.633051   62061 cri.go:89] found id: ""
	I0918 21:07:20.633079   62061 logs.go:276] 0 containers: []
	W0918 21:07:20.633091   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:07:20.633099   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:07:20.633158   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:07:20.668513   62061 cri.go:89] found id: ""
	I0918 21:07:20.668543   62061 logs.go:276] 0 containers: []
	W0918 21:07:20.668552   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:07:20.668558   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:07:20.668611   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:07:20.702648   62061 cri.go:89] found id: ""
	I0918 21:07:20.702683   62061 logs.go:276] 0 containers: []
	W0918 21:07:20.702698   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:07:20.702706   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:07:20.702767   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:07:20.740639   62061 cri.go:89] found id: ""
	I0918 21:07:20.740666   62061 logs.go:276] 0 containers: []
	W0918 21:07:20.740674   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:07:20.740680   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:07:20.740731   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:07:20.771819   62061 cri.go:89] found id: ""
	I0918 21:07:20.771847   62061 logs.go:276] 0 containers: []
	W0918 21:07:20.771855   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:07:20.771863   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:07:20.771913   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:07:20.803613   62061 cri.go:89] found id: ""
	I0918 21:07:20.803639   62061 logs.go:276] 0 containers: []
	W0918 21:07:20.803647   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:07:20.803653   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:07:20.803708   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:07:20.835671   62061 cri.go:89] found id: ""
	I0918 21:07:20.835695   62061 logs.go:276] 0 containers: []
	W0918 21:07:20.835702   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:07:20.835710   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:07:20.835720   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:07:20.886159   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:07:20.886191   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:07:20.901412   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:07:20.901443   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:07:20.979009   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:20.979034   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:07:20.979045   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:07:21.059895   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:07:21.059930   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:07:23.598133   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:07:23.611038   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:07:23.611102   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:07:23.648037   62061 cri.go:89] found id: ""
	I0918 21:07:23.648067   62061 logs.go:276] 0 containers: []
	W0918 21:07:23.648075   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:07:23.648081   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:07:23.648135   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:07:23.683956   62061 cri.go:89] found id: ""
	I0918 21:07:23.683985   62061 logs.go:276] 0 containers: []
	W0918 21:07:23.683995   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:07:23.684003   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:07:23.684083   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:07:23.719274   62061 cri.go:89] found id: ""
	I0918 21:07:23.719301   62061 logs.go:276] 0 containers: []
	W0918 21:07:23.719309   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:07:23.719315   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:07:23.719378   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:07:23.757101   62061 cri.go:89] found id: ""
	I0918 21:07:23.757133   62061 logs.go:276] 0 containers: []
	W0918 21:07:23.757145   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:07:23.757152   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:07:23.757202   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:07:23.790115   62061 cri.go:89] found id: ""
	I0918 21:07:23.790149   62061 logs.go:276] 0 containers: []
	W0918 21:07:23.790160   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:07:23.790168   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:07:23.790231   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:07:23.823089   62061 cri.go:89] found id: ""
	I0918 21:07:23.823120   62061 logs.go:276] 0 containers: []
	W0918 21:07:23.823131   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:07:23.823140   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:07:23.823192   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:07:23.858566   62061 cri.go:89] found id: ""
	I0918 21:07:23.858591   62061 logs.go:276] 0 containers: []
	W0918 21:07:23.858601   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:07:23.858613   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:07:23.858674   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:07:23.892650   62061 cri.go:89] found id: ""
	I0918 21:07:23.892679   62061 logs.go:276] 0 containers: []
	W0918 21:07:23.892690   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:07:23.892699   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:07:23.892713   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:07:23.941087   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:07:23.941123   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:07:23.955674   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:07:23.955712   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:07:24.026087   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:24.026115   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:07:24.026132   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:07:24.105237   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:07:24.105272   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:07:21.857391   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:23.857954   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:26.356553   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:22.633058   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:25.133773   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:25.387123   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:27.886688   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:26.646822   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:07:26.659978   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:07:26.660087   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:07:26.695776   62061 cri.go:89] found id: ""
	I0918 21:07:26.695804   62061 logs.go:276] 0 containers: []
	W0918 21:07:26.695817   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:07:26.695824   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:07:26.695875   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:07:26.729717   62061 cri.go:89] found id: ""
	I0918 21:07:26.729747   62061 logs.go:276] 0 containers: []
	W0918 21:07:26.729756   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:07:26.729762   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:07:26.729830   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:07:26.763848   62061 cri.go:89] found id: ""
	I0918 21:07:26.763886   62061 logs.go:276] 0 containers: []
	W0918 21:07:26.763907   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:07:26.763915   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:07:26.763974   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:07:26.798670   62061 cri.go:89] found id: ""
	I0918 21:07:26.798703   62061 logs.go:276] 0 containers: []
	W0918 21:07:26.798713   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:07:26.798720   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:07:26.798789   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:07:26.833778   62061 cri.go:89] found id: ""
	I0918 21:07:26.833804   62061 logs.go:276] 0 containers: []
	W0918 21:07:26.833815   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:07:26.833822   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:07:26.833905   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:07:26.869657   62061 cri.go:89] found id: ""
	I0918 21:07:26.869688   62061 logs.go:276] 0 containers: []
	W0918 21:07:26.869699   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:07:26.869707   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:07:26.869772   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:07:26.908163   62061 cri.go:89] found id: ""
	I0918 21:07:26.908194   62061 logs.go:276] 0 containers: []
	W0918 21:07:26.908205   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:07:26.908212   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:07:26.908269   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:07:26.943416   62061 cri.go:89] found id: ""
	I0918 21:07:26.943442   62061 logs.go:276] 0 containers: []
	W0918 21:07:26.943451   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:07:26.943459   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:07:26.943471   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:07:26.993796   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:07:26.993833   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:07:27.007619   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:07:27.007661   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:07:27.072880   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:27.072904   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:07:27.072919   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:07:27.148984   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:07:27.149031   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:07:28.357006   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:30.857527   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:27.632697   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:30.133718   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:29.887981   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:32.387478   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:29.690106   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:07:29.702853   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:07:29.702932   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:07:29.736427   62061 cri.go:89] found id: ""
	I0918 21:07:29.736461   62061 logs.go:276] 0 containers: []
	W0918 21:07:29.736473   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:07:29.736480   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:07:29.736537   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:07:29.771287   62061 cri.go:89] found id: ""
	I0918 21:07:29.771317   62061 logs.go:276] 0 containers: []
	W0918 21:07:29.771328   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:07:29.771334   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:07:29.771398   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:07:29.804826   62061 cri.go:89] found id: ""
	I0918 21:07:29.804861   62061 logs.go:276] 0 containers: []
	W0918 21:07:29.804875   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:07:29.804882   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:07:29.804934   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:07:29.838570   62061 cri.go:89] found id: ""
	I0918 21:07:29.838598   62061 logs.go:276] 0 containers: []
	W0918 21:07:29.838608   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:07:29.838614   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:07:29.838659   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:07:29.871591   62061 cri.go:89] found id: ""
	I0918 21:07:29.871621   62061 logs.go:276] 0 containers: []
	W0918 21:07:29.871631   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:07:29.871638   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:07:29.871699   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:07:29.905789   62061 cri.go:89] found id: ""
	I0918 21:07:29.905824   62061 logs.go:276] 0 containers: []
	W0918 21:07:29.905835   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:07:29.905846   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:07:29.905910   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:07:29.945914   62061 cri.go:89] found id: ""
	I0918 21:07:29.945941   62061 logs.go:276] 0 containers: []
	W0918 21:07:29.945950   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:07:29.945955   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:07:29.946004   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:07:29.979365   62061 cri.go:89] found id: ""
	I0918 21:07:29.979396   62061 logs.go:276] 0 containers: []
	W0918 21:07:29.979405   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:07:29.979413   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:07:29.979425   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:07:30.026925   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:07:30.026956   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:07:30.040589   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:07:30.040623   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:07:30.112900   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:30.112928   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:07:30.112948   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:07:30.195208   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:07:30.195256   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:07:32.735787   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:07:32.748969   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:07:32.749049   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:07:32.782148   62061 cri.go:89] found id: ""
	I0918 21:07:32.782179   62061 logs.go:276] 0 containers: []
	W0918 21:07:32.782189   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:07:32.782196   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:07:32.782262   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:07:32.816119   62061 cri.go:89] found id: ""
	I0918 21:07:32.816144   62061 logs.go:276] 0 containers: []
	W0918 21:07:32.816152   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:07:32.816158   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:07:32.816203   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:07:32.849817   62061 cri.go:89] found id: ""
	I0918 21:07:32.849844   62061 logs.go:276] 0 containers: []
	W0918 21:07:32.849853   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:07:32.849859   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:07:32.849911   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:07:32.884464   62061 cri.go:89] found id: ""
	I0918 21:07:32.884494   62061 logs.go:276] 0 containers: []
	W0918 21:07:32.884506   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:07:32.884513   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:07:32.884576   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:07:32.917942   62061 cri.go:89] found id: ""
	I0918 21:07:32.917973   62061 logs.go:276] 0 containers: []
	W0918 21:07:32.917983   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:07:32.917990   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:07:32.918051   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:07:32.950748   62061 cri.go:89] found id: ""
	I0918 21:07:32.950780   62061 logs.go:276] 0 containers: []
	W0918 21:07:32.950791   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:07:32.950800   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:07:32.950864   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:07:32.985059   62061 cri.go:89] found id: ""
	I0918 21:07:32.985092   62061 logs.go:276] 0 containers: []
	W0918 21:07:32.985100   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:07:32.985106   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:07:32.985167   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:07:33.021496   62061 cri.go:89] found id: ""
	I0918 21:07:33.021526   62061 logs.go:276] 0 containers: []
	W0918 21:07:33.021536   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:07:33.021546   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:07:33.021560   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:07:33.071744   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:07:33.071793   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:07:33.086533   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:07:33.086565   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:07:33.155274   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:33.155307   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:07:33.155326   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:07:33.238301   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:07:33.238342   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:07:33.356874   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:35.357445   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:32.631814   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:34.631954   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:36.633057   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:34.387725   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:36.887031   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:35.777905   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:07:35.792442   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:07:35.792510   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:07:35.827310   62061 cri.go:89] found id: ""
	I0918 21:07:35.827339   62061 logs.go:276] 0 containers: []
	W0918 21:07:35.827350   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:07:35.827357   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:07:35.827422   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:07:35.861873   62061 cri.go:89] found id: ""
	I0918 21:07:35.861897   62061 logs.go:276] 0 containers: []
	W0918 21:07:35.861905   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:07:35.861916   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:07:35.861966   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:07:35.895295   62061 cri.go:89] found id: ""
	I0918 21:07:35.895324   62061 logs.go:276] 0 containers: []
	W0918 21:07:35.895345   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:07:35.895353   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:07:35.895410   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:07:35.927690   62061 cri.go:89] found id: ""
	I0918 21:07:35.927717   62061 logs.go:276] 0 containers: []
	W0918 21:07:35.927728   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:07:35.927736   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:07:35.927788   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:07:35.963954   62061 cri.go:89] found id: ""
	I0918 21:07:35.963979   62061 logs.go:276] 0 containers: []
	W0918 21:07:35.963986   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:07:35.963992   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:07:35.964059   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:07:36.002287   62061 cri.go:89] found id: ""
	I0918 21:07:36.002313   62061 logs.go:276] 0 containers: []
	W0918 21:07:36.002322   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:07:36.002328   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:07:36.002380   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:07:36.036748   62061 cri.go:89] found id: ""
	I0918 21:07:36.036778   62061 logs.go:276] 0 containers: []
	W0918 21:07:36.036790   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:07:36.036797   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:07:36.036861   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:07:36.072616   62061 cri.go:89] found id: ""
	I0918 21:07:36.072647   62061 logs.go:276] 0 containers: []
	W0918 21:07:36.072658   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:07:36.072668   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:07:36.072683   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:07:36.121723   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:07:36.121762   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:07:36.136528   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:07:36.136560   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:07:36.202878   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:36.202911   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:07:36.202926   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:07:36.287882   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:07:36.287924   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:07:38.825888   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:07:38.838437   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:07:38.838510   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:07:38.872312   62061 cri.go:89] found id: ""
	I0918 21:07:38.872348   62061 logs.go:276] 0 containers: []
	W0918 21:07:38.872358   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:07:38.872366   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:07:38.872417   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:07:38.905927   62061 cri.go:89] found id: ""
	I0918 21:07:38.905962   62061 logs.go:276] 0 containers: []
	W0918 21:07:38.905975   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:07:38.905982   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:07:38.906042   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:07:38.938245   62061 cri.go:89] found id: ""
	I0918 21:07:38.938274   62061 logs.go:276] 0 containers: []
	W0918 21:07:38.938286   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:07:38.938293   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:07:38.938358   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:07:38.971497   62061 cri.go:89] found id: ""
	I0918 21:07:38.971536   62061 logs.go:276] 0 containers: []
	W0918 21:07:38.971548   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:07:38.971555   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:07:38.971625   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:07:39.004755   62061 cri.go:89] found id: ""
	I0918 21:07:39.004784   62061 logs.go:276] 0 containers: []
	W0918 21:07:39.004793   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:07:39.004799   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:07:39.004854   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:07:39.036237   62061 cri.go:89] found id: ""
	I0918 21:07:39.036265   62061 logs.go:276] 0 containers: []
	W0918 21:07:39.036273   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:07:39.036279   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:07:39.036328   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:07:39.071504   62061 cri.go:89] found id: ""
	I0918 21:07:39.071537   62061 logs.go:276] 0 containers: []
	W0918 21:07:39.071549   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:07:39.071557   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:07:39.071623   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:07:39.107035   62061 cri.go:89] found id: ""
	I0918 21:07:39.107063   62061 logs.go:276] 0 containers: []
	W0918 21:07:39.107071   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:07:39.107080   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:07:39.107090   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:07:39.158078   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:07:39.158113   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:07:39.172846   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:07:39.172875   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:07:39.240577   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:39.240602   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:07:39.240618   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:07:39.319762   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:07:39.319797   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:07:37.857371   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:40.356710   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:39.133586   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:41.632538   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:38.887485   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:41.386252   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:41.856586   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:07:41.870308   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:07:41.870388   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:07:41.905657   62061 cri.go:89] found id: ""
	I0918 21:07:41.905688   62061 logs.go:276] 0 containers: []
	W0918 21:07:41.905699   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:07:41.905706   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:07:41.905766   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:07:41.944507   62061 cri.go:89] found id: ""
	I0918 21:07:41.944544   62061 logs.go:276] 0 containers: []
	W0918 21:07:41.944555   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:07:41.944566   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:07:41.944634   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:07:41.979217   62061 cri.go:89] found id: ""
	I0918 21:07:41.979252   62061 logs.go:276] 0 containers: []
	W0918 21:07:41.979271   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:07:41.979279   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:07:41.979346   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:07:42.013613   62061 cri.go:89] found id: ""
	I0918 21:07:42.013641   62061 logs.go:276] 0 containers: []
	W0918 21:07:42.013652   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:07:42.013659   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:07:42.013725   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:07:42.049225   62061 cri.go:89] found id: ""
	I0918 21:07:42.049259   62061 logs.go:276] 0 containers: []
	W0918 21:07:42.049271   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:07:42.049279   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:07:42.049364   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:07:42.085737   62061 cri.go:89] found id: ""
	I0918 21:07:42.085763   62061 logs.go:276] 0 containers: []
	W0918 21:07:42.085775   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:07:42.085782   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:07:42.085843   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:07:42.121326   62061 cri.go:89] found id: ""
	I0918 21:07:42.121356   62061 logs.go:276] 0 containers: []
	W0918 21:07:42.121365   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:07:42.121371   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:07:42.121428   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:07:42.157070   62061 cri.go:89] found id: ""
	I0918 21:07:42.157097   62061 logs.go:276] 0 containers: []
	W0918 21:07:42.157107   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:07:42.157118   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:07:42.157133   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:07:42.232110   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:07:42.232162   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:07:42.270478   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:07:42.270517   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:07:42.324545   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:07:42.324586   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:07:42.339928   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:07:42.339962   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:07:42.415124   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:42.356847   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:44.856845   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:43.633029   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:46.134786   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:43.387596   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:45.887071   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:44.915316   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:07:44.928261   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:07:44.928331   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:07:44.959641   62061 cri.go:89] found id: ""
	I0918 21:07:44.959673   62061 logs.go:276] 0 containers: []
	W0918 21:07:44.959682   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:07:44.959689   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:07:44.959791   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:07:44.991744   62061 cri.go:89] found id: ""
	I0918 21:07:44.991778   62061 logs.go:276] 0 containers: []
	W0918 21:07:44.991787   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:07:44.991803   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:07:44.991877   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:07:45.024228   62061 cri.go:89] found id: ""
	I0918 21:07:45.024261   62061 logs.go:276] 0 containers: []
	W0918 21:07:45.024272   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:07:45.024280   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:07:45.024355   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:07:45.061905   62061 cri.go:89] found id: ""
	I0918 21:07:45.061931   62061 logs.go:276] 0 containers: []
	W0918 21:07:45.061940   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:07:45.061946   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:07:45.061994   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:07:45.099224   62061 cri.go:89] found id: ""
	I0918 21:07:45.099251   62061 logs.go:276] 0 containers: []
	W0918 21:07:45.099259   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:07:45.099266   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:07:45.099327   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:07:45.137235   62061 cri.go:89] found id: ""
	I0918 21:07:45.137262   62061 logs.go:276] 0 containers: []
	W0918 21:07:45.137270   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:07:45.137278   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:07:45.137333   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:07:45.174343   62061 cri.go:89] found id: ""
	I0918 21:07:45.174370   62061 logs.go:276] 0 containers: []
	W0918 21:07:45.174379   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:07:45.174385   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:07:45.174432   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:07:45.209278   62061 cri.go:89] found id: ""
	I0918 21:07:45.209306   62061 logs.go:276] 0 containers: []
	W0918 21:07:45.209316   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:07:45.209326   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:07:45.209341   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:07:45.222376   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:07:45.222409   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:07:45.294562   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:45.294595   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:07:45.294607   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:07:45.382582   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:07:45.382631   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:07:45.424743   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:07:45.424780   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:07:47.978171   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:07:47.990485   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:07:47.990554   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:07:48.024465   62061 cri.go:89] found id: ""
	I0918 21:07:48.024494   62061 logs.go:276] 0 containers: []
	W0918 21:07:48.024505   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:07:48.024512   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:07:48.024575   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:07:48.058838   62061 cri.go:89] found id: ""
	I0918 21:07:48.058871   62061 logs.go:276] 0 containers: []
	W0918 21:07:48.058899   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:07:48.058905   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:07:48.058959   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:07:48.094307   62061 cri.go:89] found id: ""
	I0918 21:07:48.094335   62061 logs.go:276] 0 containers: []
	W0918 21:07:48.094343   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:07:48.094349   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:07:48.094398   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:07:48.127497   62061 cri.go:89] found id: ""
	I0918 21:07:48.127529   62061 logs.go:276] 0 containers: []
	W0918 21:07:48.127538   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:07:48.127544   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:07:48.127590   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:07:48.160440   62061 cri.go:89] found id: ""
	I0918 21:07:48.160465   62061 logs.go:276] 0 containers: []
	W0918 21:07:48.160473   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:07:48.160478   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:07:48.160527   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:07:48.195581   62061 cri.go:89] found id: ""
	I0918 21:07:48.195615   62061 logs.go:276] 0 containers: []
	W0918 21:07:48.195624   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:07:48.195631   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:07:48.195675   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:07:48.226478   62061 cri.go:89] found id: ""
	I0918 21:07:48.226506   62061 logs.go:276] 0 containers: []
	W0918 21:07:48.226514   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:07:48.226519   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:07:48.226566   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:07:48.259775   62061 cri.go:89] found id: ""
	I0918 21:07:48.259804   62061 logs.go:276] 0 containers: []
	W0918 21:07:48.259815   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:07:48.259825   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:07:48.259839   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:07:48.274364   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:07:48.274391   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:07:48.347131   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:48.347151   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:07:48.347163   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:07:48.430569   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:07:48.430607   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:07:48.472972   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:07:48.473007   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:07:47.356907   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:49.857984   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:48.633550   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:51.133639   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:48.388136   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:50.888317   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:51.024429   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:07:51.037249   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:07:51.037328   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:07:51.070137   62061 cri.go:89] found id: ""
	I0918 21:07:51.070173   62061 logs.go:276] 0 containers: []
	W0918 21:07:51.070195   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:07:51.070203   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:07:51.070276   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:07:51.105014   62061 cri.go:89] found id: ""
	I0918 21:07:51.105039   62061 logs.go:276] 0 containers: []
	W0918 21:07:51.105048   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:07:51.105054   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:07:51.105101   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:07:51.143287   62061 cri.go:89] found id: ""
	I0918 21:07:51.143310   62061 logs.go:276] 0 containers: []
	W0918 21:07:51.143318   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:07:51.143325   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:07:51.143372   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:07:51.176811   62061 cri.go:89] found id: ""
	I0918 21:07:51.176838   62061 logs.go:276] 0 containers: []
	W0918 21:07:51.176846   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:07:51.176852   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:07:51.176898   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:07:51.210817   62061 cri.go:89] found id: ""
	I0918 21:07:51.210842   62061 logs.go:276] 0 containers: []
	W0918 21:07:51.210850   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:07:51.210856   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:07:51.210916   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:07:51.243968   62061 cri.go:89] found id: ""
	I0918 21:07:51.244002   62061 logs.go:276] 0 containers: []
	W0918 21:07:51.244035   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:07:51.244043   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:07:51.244104   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:07:51.279078   62061 cri.go:89] found id: ""
	I0918 21:07:51.279108   62061 logs.go:276] 0 containers: []
	W0918 21:07:51.279117   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:07:51.279122   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:07:51.279173   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:07:51.315033   62061 cri.go:89] found id: ""
	I0918 21:07:51.315082   62061 logs.go:276] 0 containers: []
	W0918 21:07:51.315091   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:07:51.315100   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:07:51.315111   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:07:51.391927   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:07:51.391976   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:07:51.435515   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:07:51.435542   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:07:51.488404   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:07:51.488442   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:07:51.502019   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:07:51.502065   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:07:51.567893   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:54.068142   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:07:54.082544   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:07:54.082619   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:07:54.118600   62061 cri.go:89] found id: ""
	I0918 21:07:54.118629   62061 logs.go:276] 0 containers: []
	W0918 21:07:54.118638   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:07:54.118644   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:07:54.118695   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:07:54.154086   62061 cri.go:89] found id: ""
	I0918 21:07:54.154115   62061 logs.go:276] 0 containers: []
	W0918 21:07:54.154127   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:07:54.154135   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:07:54.154187   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:07:54.188213   62061 cri.go:89] found id: ""
	I0918 21:07:54.188246   62061 logs.go:276] 0 containers: []
	W0918 21:07:54.188257   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:07:54.188266   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:07:54.188329   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:07:54.221682   62061 cri.go:89] found id: ""
	I0918 21:07:54.221712   62061 logs.go:276] 0 containers: []
	W0918 21:07:54.221721   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:07:54.221728   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:07:54.221791   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:07:54.254746   62061 cri.go:89] found id: ""
	I0918 21:07:54.254770   62061 logs.go:276] 0 containers: []
	W0918 21:07:54.254778   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:07:54.254783   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:07:54.254844   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:07:54.288249   62061 cri.go:89] found id: ""
	I0918 21:07:54.288273   62061 logs.go:276] 0 containers: []
	W0918 21:07:54.288281   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:07:54.288288   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:07:54.288349   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:07:54.323390   62061 cri.go:89] found id: ""
	I0918 21:07:54.323422   62061 logs.go:276] 0 containers: []
	W0918 21:07:54.323433   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:07:54.323440   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:07:54.323507   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:07:54.357680   62061 cri.go:89] found id: ""
	I0918 21:07:54.357709   62061 logs.go:276] 0 containers: []
	W0918 21:07:54.357719   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:07:54.357734   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:07:54.357748   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:07:54.396644   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:07:54.396683   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:07:54.469136   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:54.469175   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:07:54.469191   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:07:52.357187   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:54.857437   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:53.633161   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:56.132554   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:53.386646   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:55.387377   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:57.387524   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:54.547848   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:07:54.547884   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:07:54.585758   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:07:54.585788   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:07:57.139198   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:07:57.152342   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:07:57.152429   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:07:57.186747   62061 cri.go:89] found id: ""
	I0918 21:07:57.186778   62061 logs.go:276] 0 containers: []
	W0918 21:07:57.186789   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:07:57.186796   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:07:57.186855   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:07:57.219211   62061 cri.go:89] found id: ""
	I0918 21:07:57.219239   62061 logs.go:276] 0 containers: []
	W0918 21:07:57.219248   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:07:57.219256   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:07:57.219315   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:07:57.251361   62061 cri.go:89] found id: ""
	I0918 21:07:57.251388   62061 logs.go:276] 0 containers: []
	W0918 21:07:57.251396   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:07:57.251401   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:07:57.251450   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:07:57.285467   62061 cri.go:89] found id: ""
	I0918 21:07:57.285501   62061 logs.go:276] 0 containers: []
	W0918 21:07:57.285512   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:07:57.285519   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:07:57.285580   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:07:57.317973   62061 cri.go:89] found id: ""
	I0918 21:07:57.317999   62061 logs.go:276] 0 containers: []
	W0918 21:07:57.318006   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:07:57.318012   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:07:57.318071   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:07:57.352172   62061 cri.go:89] found id: ""
	I0918 21:07:57.352202   62061 logs.go:276] 0 containers: []
	W0918 21:07:57.352215   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:07:57.352223   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:07:57.352277   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:07:57.388117   62061 cri.go:89] found id: ""
	I0918 21:07:57.388137   62061 logs.go:276] 0 containers: []
	W0918 21:07:57.388144   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:07:57.388150   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:07:57.388205   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:07:57.424814   62061 cri.go:89] found id: ""
	I0918 21:07:57.424846   62061 logs.go:276] 0 containers: []
	W0918 21:07:57.424857   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:07:57.424868   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:07:57.424882   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:07:57.437317   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:07:57.437351   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:07:57.503393   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:57.503417   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:07:57.503429   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:07:57.584167   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:07:57.584203   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:07:57.620761   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:07:57.620792   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:07:57.357989   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:59.856413   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:58.133077   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:00.633233   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:59.886455   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:01.887882   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:00.174933   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:00.188098   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:00.188177   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:00.221852   62061 cri.go:89] found id: ""
	I0918 21:08:00.221877   62061 logs.go:276] 0 containers: []
	W0918 21:08:00.221890   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:00.221895   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:00.221947   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:00.255947   62061 cri.go:89] found id: ""
	I0918 21:08:00.255974   62061 logs.go:276] 0 containers: []
	W0918 21:08:00.255982   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:00.255987   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:00.256056   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:00.290919   62061 cri.go:89] found id: ""
	I0918 21:08:00.290953   62061 logs.go:276] 0 containers: []
	W0918 21:08:00.290961   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:00.290966   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:00.291017   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:00.328175   62061 cri.go:89] found id: ""
	I0918 21:08:00.328200   62061 logs.go:276] 0 containers: []
	W0918 21:08:00.328208   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:00.328214   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:00.328261   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:00.364863   62061 cri.go:89] found id: ""
	I0918 21:08:00.364892   62061 logs.go:276] 0 containers: []
	W0918 21:08:00.364903   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:00.364912   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:00.364967   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:00.400368   62061 cri.go:89] found id: ""
	I0918 21:08:00.400397   62061 logs.go:276] 0 containers: []
	W0918 21:08:00.400408   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:00.400415   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:00.400480   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:00.435341   62061 cri.go:89] found id: ""
	I0918 21:08:00.435375   62061 logs.go:276] 0 containers: []
	W0918 21:08:00.435386   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:00.435394   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:00.435459   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:00.469981   62061 cri.go:89] found id: ""
	I0918 21:08:00.470010   62061 logs.go:276] 0 containers: []
	W0918 21:08:00.470019   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:00.470028   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:00.470041   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:00.538006   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:00.538037   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:00.538053   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:00.618497   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:00.618543   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:00.656884   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:00.656912   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:00.708836   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:00.708870   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:03.222489   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:03.236904   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:03.236972   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:03.270559   62061 cri.go:89] found id: ""
	I0918 21:08:03.270588   62061 logs.go:276] 0 containers: []
	W0918 21:08:03.270596   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:03.270602   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:03.270649   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:03.305908   62061 cri.go:89] found id: ""
	I0918 21:08:03.305933   62061 logs.go:276] 0 containers: []
	W0918 21:08:03.305940   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:03.305946   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:03.306004   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:03.339442   62061 cri.go:89] found id: ""
	I0918 21:08:03.339468   62061 logs.go:276] 0 containers: []
	W0918 21:08:03.339476   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:03.339482   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:03.339550   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:03.377460   62061 cri.go:89] found id: ""
	I0918 21:08:03.377486   62061 logs.go:276] 0 containers: []
	W0918 21:08:03.377495   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:03.377501   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:03.377552   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:03.414815   62061 cri.go:89] found id: ""
	I0918 21:08:03.414850   62061 logs.go:276] 0 containers: []
	W0918 21:08:03.414861   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:03.414869   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:03.414930   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:03.448654   62061 cri.go:89] found id: ""
	I0918 21:08:03.448680   62061 logs.go:276] 0 containers: []
	W0918 21:08:03.448690   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:03.448698   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:03.448759   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:03.483598   62061 cri.go:89] found id: ""
	I0918 21:08:03.483628   62061 logs.go:276] 0 containers: []
	W0918 21:08:03.483639   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:03.483646   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:03.483717   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:03.518557   62061 cri.go:89] found id: ""
	I0918 21:08:03.518585   62061 logs.go:276] 0 containers: []
	W0918 21:08:03.518601   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:03.518612   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:03.518627   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:03.555922   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:03.555958   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:03.608173   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:03.608208   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:03.621251   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:03.621278   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:03.688773   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:03.688796   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:03.688812   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:01.857289   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:03.857768   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:06.356504   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:03.132376   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:05.134169   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:04.386905   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:06.891459   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:06.272727   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:06.286033   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:06.286115   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:06.319058   62061 cri.go:89] found id: ""
	I0918 21:08:06.319084   62061 logs.go:276] 0 containers: []
	W0918 21:08:06.319092   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:06.319099   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:06.319167   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:06.352572   62061 cri.go:89] found id: ""
	I0918 21:08:06.352606   62061 logs.go:276] 0 containers: []
	W0918 21:08:06.352627   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:06.352638   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:06.352709   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:06.389897   62061 cri.go:89] found id: ""
	I0918 21:08:06.389922   62061 logs.go:276] 0 containers: []
	W0918 21:08:06.389929   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:06.389935   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:06.389993   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:06.427265   62061 cri.go:89] found id: ""
	I0918 21:08:06.427294   62061 logs.go:276] 0 containers: []
	W0918 21:08:06.427306   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:06.427314   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:06.427375   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:06.462706   62061 cri.go:89] found id: ""
	I0918 21:08:06.462738   62061 logs.go:276] 0 containers: []
	W0918 21:08:06.462746   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:06.462753   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:06.462816   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:06.499672   62061 cri.go:89] found id: ""
	I0918 21:08:06.499707   62061 logs.go:276] 0 containers: []
	W0918 21:08:06.499719   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:06.499727   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:06.499781   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:06.540386   62061 cri.go:89] found id: ""
	I0918 21:08:06.540415   62061 logs.go:276] 0 containers: []
	W0918 21:08:06.540426   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:06.540433   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:06.540492   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:06.582919   62061 cri.go:89] found id: ""
	I0918 21:08:06.582946   62061 logs.go:276] 0 containers: []
	W0918 21:08:06.582957   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:06.582966   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:06.582982   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:06.637315   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:06.637355   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:06.650626   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:06.650662   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:06.725605   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:06.725641   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:06.725656   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:06.805431   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:06.805471   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:09.343498   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:09.357396   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:09.357472   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:09.391561   62061 cri.go:89] found id: ""
	I0918 21:08:09.391590   62061 logs.go:276] 0 containers: []
	W0918 21:08:09.391600   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:09.391605   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:09.391655   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:09.429154   62061 cri.go:89] found id: ""
	I0918 21:08:09.429181   62061 logs.go:276] 0 containers: []
	W0918 21:08:09.429190   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:09.429196   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:09.429259   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:09.464807   62061 cri.go:89] found id: ""
	I0918 21:08:09.464848   62061 logs.go:276] 0 containers: []
	W0918 21:08:09.464859   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:09.464866   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:09.464927   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:08.856578   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:10.856650   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:07.633438   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:10.132651   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:12.132903   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:09.387482   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:11.886885   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:09.499512   62061 cri.go:89] found id: ""
	I0918 21:08:09.499540   62061 logs.go:276] 0 containers: []
	W0918 21:08:09.499549   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:09.499556   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:09.499619   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:09.534569   62061 cri.go:89] found id: ""
	I0918 21:08:09.534593   62061 logs.go:276] 0 containers: []
	W0918 21:08:09.534601   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:09.534607   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:09.534660   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:09.570387   62061 cri.go:89] found id: ""
	I0918 21:08:09.570414   62061 logs.go:276] 0 containers: []
	W0918 21:08:09.570422   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:09.570428   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:09.570489   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:09.603832   62061 cri.go:89] found id: ""
	I0918 21:08:09.603863   62061 logs.go:276] 0 containers: []
	W0918 21:08:09.603871   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:09.603877   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:09.603923   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:09.638887   62061 cri.go:89] found id: ""
	I0918 21:08:09.638918   62061 logs.go:276] 0 containers: []
	W0918 21:08:09.638930   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:09.638940   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:09.638953   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:09.691197   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:09.691237   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:09.705470   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:09.705500   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:09.776826   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:09.776858   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:09.776871   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:09.851287   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:09.851321   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:12.390895   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:12.406734   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:12.406809   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:12.459302   62061 cri.go:89] found id: ""
	I0918 21:08:12.459331   62061 logs.go:276] 0 containers: []
	W0918 21:08:12.459349   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:12.459360   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:12.459420   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:12.517527   62061 cri.go:89] found id: ""
	I0918 21:08:12.517557   62061 logs.go:276] 0 containers: []
	W0918 21:08:12.517567   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:12.517573   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:12.517628   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:12.549661   62061 cri.go:89] found id: ""
	I0918 21:08:12.549689   62061 logs.go:276] 0 containers: []
	W0918 21:08:12.549696   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:12.549702   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:12.549747   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:12.582970   62061 cri.go:89] found id: ""
	I0918 21:08:12.582996   62061 logs.go:276] 0 containers: []
	W0918 21:08:12.583004   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:12.583009   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:12.583054   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:12.617059   62061 cri.go:89] found id: ""
	I0918 21:08:12.617089   62061 logs.go:276] 0 containers: []
	W0918 21:08:12.617098   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:12.617103   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:12.617153   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:12.651115   62061 cri.go:89] found id: ""
	I0918 21:08:12.651143   62061 logs.go:276] 0 containers: []
	W0918 21:08:12.651156   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:12.651164   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:12.651217   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:12.684727   62061 cri.go:89] found id: ""
	I0918 21:08:12.684758   62061 logs.go:276] 0 containers: []
	W0918 21:08:12.684765   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:12.684771   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:12.684829   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:12.718181   62061 cri.go:89] found id: ""
	I0918 21:08:12.718215   62061 logs.go:276] 0 containers: []
	W0918 21:08:12.718226   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:12.718237   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:12.718252   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:12.760350   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:12.760379   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:12.810028   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:12.810067   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:12.823785   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:12.823815   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:12.898511   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:12.898532   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:12.898545   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:12.856697   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:15.356381   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:14.632694   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:17.131888   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:13.887157   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:15.887190   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:17.890618   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:15.476840   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:15.489386   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:15.489470   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:15.524549   62061 cri.go:89] found id: ""
	I0918 21:08:15.524578   62061 logs.go:276] 0 containers: []
	W0918 21:08:15.524585   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:15.524591   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:15.524642   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:15.557834   62061 cri.go:89] found id: ""
	I0918 21:08:15.557867   62061 logs.go:276] 0 containers: []
	W0918 21:08:15.557876   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:15.557883   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:15.557933   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:15.598774   62061 cri.go:89] found id: ""
	I0918 21:08:15.598805   62061 logs.go:276] 0 containers: []
	W0918 21:08:15.598818   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:15.598828   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:15.598882   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:15.633123   62061 cri.go:89] found id: ""
	I0918 21:08:15.633147   62061 logs.go:276] 0 containers: []
	W0918 21:08:15.633155   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:15.633161   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:15.633208   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:15.667281   62061 cri.go:89] found id: ""
	I0918 21:08:15.667307   62061 logs.go:276] 0 containers: []
	W0918 21:08:15.667317   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:15.667323   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:15.667381   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:15.705945   62061 cri.go:89] found id: ""
	I0918 21:08:15.705977   62061 logs.go:276] 0 containers: []
	W0918 21:08:15.705990   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:15.706015   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:15.706093   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:15.739795   62061 cri.go:89] found id: ""
	I0918 21:08:15.739826   62061 logs.go:276] 0 containers: []
	W0918 21:08:15.739836   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:15.739843   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:15.739904   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:15.778520   62061 cri.go:89] found id: ""
	I0918 21:08:15.778556   62061 logs.go:276] 0 containers: []
	W0918 21:08:15.778567   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:15.778579   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:15.778592   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:15.829357   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:15.829394   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:15.842852   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:15.842882   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:15.922438   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:15.922471   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:15.922483   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:16.001687   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:16.001726   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:18.542067   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:18.554783   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:18.554870   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:18.589555   62061 cri.go:89] found id: ""
	I0918 21:08:18.589581   62061 logs.go:276] 0 containers: []
	W0918 21:08:18.589592   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:18.589604   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:18.589667   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:18.623035   62061 cri.go:89] found id: ""
	I0918 21:08:18.623059   62061 logs.go:276] 0 containers: []
	W0918 21:08:18.623067   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:18.623073   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:18.623127   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:18.655875   62061 cri.go:89] found id: ""
	I0918 21:08:18.655901   62061 logs.go:276] 0 containers: []
	W0918 21:08:18.655909   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:18.655915   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:18.655973   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:18.688964   62061 cri.go:89] found id: ""
	I0918 21:08:18.688997   62061 logs.go:276] 0 containers: []
	W0918 21:08:18.689008   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:18.689016   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:18.689080   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:18.723164   62061 cri.go:89] found id: ""
	I0918 21:08:18.723186   62061 logs.go:276] 0 containers: []
	W0918 21:08:18.723196   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:18.723201   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:18.723246   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:18.755022   62061 cri.go:89] found id: ""
	I0918 21:08:18.755048   62061 logs.go:276] 0 containers: []
	W0918 21:08:18.755057   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:18.755063   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:18.755113   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:18.791614   62061 cri.go:89] found id: ""
	I0918 21:08:18.791645   62061 logs.go:276] 0 containers: []
	W0918 21:08:18.791655   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:18.791663   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:18.791731   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:18.823553   62061 cri.go:89] found id: ""
	I0918 21:08:18.823583   62061 logs.go:276] 0 containers: []
	W0918 21:08:18.823590   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:18.823597   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:18.823609   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:18.864528   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:18.864567   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:18.916590   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:18.916628   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:18.930077   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:18.930107   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:18.999958   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:18.999982   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:18.999997   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:17.358190   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:19.856605   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:19.132382   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:21.634433   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:20.387223   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:22.387374   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:21.573718   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:21.588164   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:21.588252   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:21.630044   62061 cri.go:89] found id: ""
	I0918 21:08:21.630075   62061 logs.go:276] 0 containers: []
	W0918 21:08:21.630086   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:21.630094   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:21.630154   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:21.666992   62061 cri.go:89] found id: ""
	I0918 21:08:21.667021   62061 logs.go:276] 0 containers: []
	W0918 21:08:21.667029   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:21.667035   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:21.667083   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:21.702379   62061 cri.go:89] found id: ""
	I0918 21:08:21.702403   62061 logs.go:276] 0 containers: []
	W0918 21:08:21.702411   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:21.702416   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:21.702463   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:21.739877   62061 cri.go:89] found id: ""
	I0918 21:08:21.739908   62061 logs.go:276] 0 containers: []
	W0918 21:08:21.739918   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:21.739923   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:21.739974   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:21.777536   62061 cri.go:89] found id: ""
	I0918 21:08:21.777573   62061 logs.go:276] 0 containers: []
	W0918 21:08:21.777584   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:21.777592   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:21.777652   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:21.812284   62061 cri.go:89] found id: ""
	I0918 21:08:21.812316   62061 logs.go:276] 0 containers: []
	W0918 21:08:21.812325   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:21.812332   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:21.812401   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:21.848143   62061 cri.go:89] found id: ""
	I0918 21:08:21.848176   62061 logs.go:276] 0 containers: []
	W0918 21:08:21.848185   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:21.848191   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:21.848250   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:21.887151   62061 cri.go:89] found id: ""
	I0918 21:08:21.887177   62061 logs.go:276] 0 containers: []
	W0918 21:08:21.887188   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:21.887199   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:21.887213   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:21.939969   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:21.940008   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:21.954128   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:21.954164   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:22.022827   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:22.022853   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:22.022865   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:22.103131   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:22.103172   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:22.356641   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:24.358204   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:24.133101   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:26.633701   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:24.888715   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:27.386901   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:24.642045   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:24.655273   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:24.655343   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:24.687816   62061 cri.go:89] found id: ""
	I0918 21:08:24.687847   62061 logs.go:276] 0 containers: []
	W0918 21:08:24.687858   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:24.687865   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:24.687927   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:24.721276   62061 cri.go:89] found id: ""
	I0918 21:08:24.721303   62061 logs.go:276] 0 containers: []
	W0918 21:08:24.721311   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:24.721316   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:24.721366   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:24.753874   62061 cri.go:89] found id: ""
	I0918 21:08:24.753904   62061 logs.go:276] 0 containers: []
	W0918 21:08:24.753911   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:24.753917   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:24.753967   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:24.789107   62061 cri.go:89] found id: ""
	I0918 21:08:24.789148   62061 logs.go:276] 0 containers: []
	W0918 21:08:24.789163   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:24.789170   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:24.789219   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:24.826283   62061 cri.go:89] found id: ""
	I0918 21:08:24.826316   62061 logs.go:276] 0 containers: []
	W0918 21:08:24.826329   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:24.826337   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:24.826401   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:24.859878   62061 cri.go:89] found id: ""
	I0918 21:08:24.859907   62061 logs.go:276] 0 containers: []
	W0918 21:08:24.859917   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:24.859924   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:24.859982   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:24.895717   62061 cri.go:89] found id: ""
	I0918 21:08:24.895747   62061 logs.go:276] 0 containers: []
	W0918 21:08:24.895758   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:24.895766   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:24.895830   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:24.930207   62061 cri.go:89] found id: ""
	I0918 21:08:24.930239   62061 logs.go:276] 0 containers: []
	W0918 21:08:24.930250   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:24.930262   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:24.930279   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:24.967939   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:24.967981   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:25.022526   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:25.022569   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:25.036175   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:25.036214   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:25.104251   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:25.104277   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:25.104294   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:27.686943   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:27.700071   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:27.700147   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:27.735245   62061 cri.go:89] found id: ""
	I0918 21:08:27.735277   62061 logs.go:276] 0 containers: []
	W0918 21:08:27.735286   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:27.735291   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:27.735349   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:27.768945   62061 cri.go:89] found id: ""
	I0918 21:08:27.768974   62061 logs.go:276] 0 containers: []
	W0918 21:08:27.768985   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:27.768993   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:27.769055   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:27.805425   62061 cri.go:89] found id: ""
	I0918 21:08:27.805457   62061 logs.go:276] 0 containers: []
	W0918 21:08:27.805468   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:27.805474   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:27.805523   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:27.838049   62061 cri.go:89] found id: ""
	I0918 21:08:27.838081   62061 logs.go:276] 0 containers: []
	W0918 21:08:27.838091   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:27.838098   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:27.838163   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:27.873961   62061 cri.go:89] found id: ""
	I0918 21:08:27.873986   62061 logs.go:276] 0 containers: []
	W0918 21:08:27.873994   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:27.874001   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:27.874064   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:27.908822   62061 cri.go:89] found id: ""
	I0918 21:08:27.908846   62061 logs.go:276] 0 containers: []
	W0918 21:08:27.908854   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:27.908860   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:27.908915   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:27.943758   62061 cri.go:89] found id: ""
	I0918 21:08:27.943794   62061 logs.go:276] 0 containers: []
	W0918 21:08:27.943802   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:27.943808   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:27.943875   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:27.978136   62061 cri.go:89] found id: ""
	I0918 21:08:27.978168   62061 logs.go:276] 0 containers: []
	W0918 21:08:27.978179   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:27.978189   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:27.978202   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:27.992744   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:27.992773   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:28.070339   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:28.070362   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:28.070374   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:28.152405   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:28.152452   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:28.191190   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:28.191220   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:26.857256   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:29.356662   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:29.132577   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:31.133108   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:29.387068   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:31.886962   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:30.742507   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:30.756451   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:30.756553   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:30.790746   62061 cri.go:89] found id: ""
	I0918 21:08:30.790771   62061 logs.go:276] 0 containers: []
	W0918 21:08:30.790781   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:30.790787   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:30.790851   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:30.823633   62061 cri.go:89] found id: ""
	I0918 21:08:30.823670   62061 logs.go:276] 0 containers: []
	W0918 21:08:30.823682   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:30.823689   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:30.823754   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:30.861912   62061 cri.go:89] found id: ""
	I0918 21:08:30.861936   62061 logs.go:276] 0 containers: []
	W0918 21:08:30.861943   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:30.861949   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:30.862000   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:30.899452   62061 cri.go:89] found id: ""
	I0918 21:08:30.899481   62061 logs.go:276] 0 containers: []
	W0918 21:08:30.899489   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:30.899495   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:30.899562   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:30.935871   62061 cri.go:89] found id: ""
	I0918 21:08:30.935898   62061 logs.go:276] 0 containers: []
	W0918 21:08:30.935906   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:30.935912   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:30.935969   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:30.974608   62061 cri.go:89] found id: ""
	I0918 21:08:30.974643   62061 logs.go:276] 0 containers: []
	W0918 21:08:30.974655   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:30.974663   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:30.974722   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:31.008249   62061 cri.go:89] found id: ""
	I0918 21:08:31.008279   62061 logs.go:276] 0 containers: []
	W0918 21:08:31.008290   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:31.008297   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:31.008366   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:31.047042   62061 cri.go:89] found id: ""
	I0918 21:08:31.047075   62061 logs.go:276] 0 containers: []
	W0918 21:08:31.047083   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:31.047093   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:31.047107   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:31.098961   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:31.099001   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:31.113116   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:31.113147   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:31.179609   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:31.179650   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:31.179664   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:31.258299   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:31.258335   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:33.798360   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:33.812105   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:33.812172   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:33.848120   62061 cri.go:89] found id: ""
	I0918 21:08:33.848149   62061 logs.go:276] 0 containers: []
	W0918 21:08:33.848160   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:33.848169   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:33.848231   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:33.884974   62061 cri.go:89] found id: ""
	I0918 21:08:33.885007   62061 logs.go:276] 0 containers: []
	W0918 21:08:33.885019   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:33.885028   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:33.885104   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:33.919613   62061 cri.go:89] found id: ""
	I0918 21:08:33.919658   62061 logs.go:276] 0 containers: []
	W0918 21:08:33.919667   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:33.919673   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:33.919721   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:33.953074   62061 cri.go:89] found id: ""
	I0918 21:08:33.953112   62061 logs.go:276] 0 containers: []
	W0918 21:08:33.953120   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:33.953125   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:33.953185   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:33.985493   62061 cri.go:89] found id: ""
	I0918 21:08:33.985521   62061 logs.go:276] 0 containers: []
	W0918 21:08:33.985532   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:33.985539   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:33.985630   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:34.020938   62061 cri.go:89] found id: ""
	I0918 21:08:34.020962   62061 logs.go:276] 0 containers: []
	W0918 21:08:34.020972   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:34.020981   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:34.021047   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:34.053947   62061 cri.go:89] found id: ""
	I0918 21:08:34.053977   62061 logs.go:276] 0 containers: []
	W0918 21:08:34.053988   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:34.053996   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:34.054060   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:34.090072   62061 cri.go:89] found id: ""
	I0918 21:08:34.090110   62061 logs.go:276] 0 containers: []
	W0918 21:08:34.090123   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:34.090133   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:34.090145   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:34.142069   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:34.142107   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:34.156709   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:34.156740   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:34.232644   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:34.232672   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:34.232685   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:34.311833   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:34.311878   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:31.859360   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:34.357056   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:33.133212   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:35.632885   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:33.888487   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:36.386571   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:36.850811   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:36.864479   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:36.864558   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:36.902595   62061 cri.go:89] found id: ""
	I0918 21:08:36.902628   62061 logs.go:276] 0 containers: []
	W0918 21:08:36.902640   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:36.902649   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:36.902706   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:36.940336   62061 cri.go:89] found id: ""
	I0918 21:08:36.940385   62061 logs.go:276] 0 containers: []
	W0918 21:08:36.940394   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:36.940400   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:36.940465   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:36.973909   62061 cri.go:89] found id: ""
	I0918 21:08:36.973942   62061 logs.go:276] 0 containers: []
	W0918 21:08:36.973952   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:36.973958   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:36.974013   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:37.008766   62061 cri.go:89] found id: ""
	I0918 21:08:37.008791   62061 logs.go:276] 0 containers: []
	W0918 21:08:37.008799   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:37.008805   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:37.008852   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:37.041633   62061 cri.go:89] found id: ""
	I0918 21:08:37.041669   62061 logs.go:276] 0 containers: []
	W0918 21:08:37.041681   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:37.041688   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:37.041750   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:37.075154   62061 cri.go:89] found id: ""
	I0918 21:08:37.075188   62061 logs.go:276] 0 containers: []
	W0918 21:08:37.075197   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:37.075204   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:37.075268   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:37.111083   62061 cri.go:89] found id: ""
	I0918 21:08:37.111119   62061 logs.go:276] 0 containers: []
	W0918 21:08:37.111130   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:37.111138   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:37.111192   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:37.147894   62061 cri.go:89] found id: ""
	I0918 21:08:37.147925   62061 logs.go:276] 0 containers: []
	W0918 21:08:37.147936   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:37.147948   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:37.147962   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:37.200102   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:37.200141   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:37.213511   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:37.213537   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:37.286978   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:37.287012   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:37.287027   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:37.368153   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:37.368191   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:36.857508   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:39.357177   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:41.357329   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:38.134332   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:40.633274   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:38.387121   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:40.387310   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:42.887614   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:39.907600   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:39.922325   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:39.922395   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:39.958150   62061 cri.go:89] found id: ""
	I0918 21:08:39.958175   62061 logs.go:276] 0 containers: []
	W0918 21:08:39.958183   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:39.958189   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:39.958254   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:39.993916   62061 cri.go:89] found id: ""
	I0918 21:08:39.993945   62061 logs.go:276] 0 containers: []
	W0918 21:08:39.993956   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:39.993963   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:39.994026   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:40.031078   62061 cri.go:89] found id: ""
	I0918 21:08:40.031114   62061 logs.go:276] 0 containers: []
	W0918 21:08:40.031126   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:40.031133   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:40.031194   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:40.065028   62061 cri.go:89] found id: ""
	I0918 21:08:40.065054   62061 logs.go:276] 0 containers: []
	W0918 21:08:40.065065   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:40.065072   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:40.065129   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:40.099437   62061 cri.go:89] found id: ""
	I0918 21:08:40.099466   62061 logs.go:276] 0 containers: []
	W0918 21:08:40.099474   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:40.099480   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:40.099544   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:40.139841   62061 cri.go:89] found id: ""
	I0918 21:08:40.139866   62061 logs.go:276] 0 containers: []
	W0918 21:08:40.139874   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:40.139880   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:40.139936   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:40.175287   62061 cri.go:89] found id: ""
	I0918 21:08:40.175316   62061 logs.go:276] 0 containers: []
	W0918 21:08:40.175327   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:40.175334   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:40.175397   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:40.208646   62061 cri.go:89] found id: ""
	I0918 21:08:40.208677   62061 logs.go:276] 0 containers: []
	W0918 21:08:40.208690   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:40.208701   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:40.208712   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:40.262944   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:40.262982   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:40.276192   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:40.276222   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:40.346393   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:40.346414   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:40.346426   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:40.433797   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:40.433848   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:42.976574   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:42.990198   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:42.990263   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:43.031744   62061 cri.go:89] found id: ""
	I0918 21:08:43.031774   62061 logs.go:276] 0 containers: []
	W0918 21:08:43.031784   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:43.031791   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:43.031854   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:43.065198   62061 cri.go:89] found id: ""
	I0918 21:08:43.065240   62061 logs.go:276] 0 containers: []
	W0918 21:08:43.065248   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:43.065261   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:43.065319   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:43.099292   62061 cri.go:89] found id: ""
	I0918 21:08:43.099320   62061 logs.go:276] 0 containers: []
	W0918 21:08:43.099328   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:43.099333   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:43.099388   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:43.135085   62061 cri.go:89] found id: ""
	I0918 21:08:43.135110   62061 logs.go:276] 0 containers: []
	W0918 21:08:43.135119   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:43.135131   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:43.135190   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:43.173255   62061 cri.go:89] found id: ""
	I0918 21:08:43.173312   62061 logs.go:276] 0 containers: []
	W0918 21:08:43.173326   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:43.173335   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:43.173433   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:43.209249   62061 cri.go:89] found id: ""
	I0918 21:08:43.209282   62061 logs.go:276] 0 containers: []
	W0918 21:08:43.209294   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:43.209303   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:43.209377   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:43.242333   62061 cri.go:89] found id: ""
	I0918 21:08:43.242366   62061 logs.go:276] 0 containers: []
	W0918 21:08:43.242376   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:43.242383   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:43.242449   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:43.278099   62061 cri.go:89] found id: ""
	I0918 21:08:43.278128   62061 logs.go:276] 0 containers: []
	W0918 21:08:43.278136   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:43.278146   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:43.278164   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:43.329621   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:43.329661   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:43.343357   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:43.343402   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:43.415392   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:43.415419   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:43.415435   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:43.501634   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:43.501670   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:43.357675   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:45.857212   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:43.133389   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:45.134057   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:44.887763   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:47.387221   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:46.043925   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:46.057457   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:46.057540   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:46.091987   62061 cri.go:89] found id: ""
	I0918 21:08:46.092036   62061 logs.go:276] 0 containers: []
	W0918 21:08:46.092047   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:46.092055   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:46.092118   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:46.131171   62061 cri.go:89] found id: ""
	I0918 21:08:46.131195   62061 logs.go:276] 0 containers: []
	W0918 21:08:46.131203   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:46.131209   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:46.131266   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:46.167324   62061 cri.go:89] found id: ""
	I0918 21:08:46.167360   62061 logs.go:276] 0 containers: []
	W0918 21:08:46.167369   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:46.167375   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:46.167433   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:46.200940   62061 cri.go:89] found id: ""
	I0918 21:08:46.200969   62061 logs.go:276] 0 containers: []
	W0918 21:08:46.200978   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:46.200983   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:46.201042   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:46.233736   62061 cri.go:89] found id: ""
	I0918 21:08:46.233765   62061 logs.go:276] 0 containers: []
	W0918 21:08:46.233773   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:46.233779   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:46.233841   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:46.267223   62061 cri.go:89] found id: ""
	I0918 21:08:46.267250   62061 logs.go:276] 0 containers: []
	W0918 21:08:46.267261   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:46.267268   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:46.267331   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:46.302218   62061 cri.go:89] found id: ""
	I0918 21:08:46.302245   62061 logs.go:276] 0 containers: []
	W0918 21:08:46.302255   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:46.302262   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:46.302326   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:46.334979   62061 cri.go:89] found id: ""
	I0918 21:08:46.335005   62061 logs.go:276] 0 containers: []
	W0918 21:08:46.335015   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:46.335024   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:46.335038   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:46.422905   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:46.422929   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:46.422948   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:46.504831   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:46.504884   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:46.543954   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:46.543981   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:46.595817   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:46.595854   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:49.109984   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:49.124966   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:49.125052   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:49.163272   62061 cri.go:89] found id: ""
	I0918 21:08:49.163302   62061 logs.go:276] 0 containers: []
	W0918 21:08:49.163334   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:49.163342   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:49.163411   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:49.199973   62061 cri.go:89] found id: ""
	I0918 21:08:49.200003   62061 logs.go:276] 0 containers: []
	W0918 21:08:49.200037   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:49.200046   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:49.200111   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:49.235980   62061 cri.go:89] found id: ""
	I0918 21:08:49.236036   62061 logs.go:276] 0 containers: []
	W0918 21:08:49.236050   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:49.236061   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:49.236123   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:49.271162   62061 cri.go:89] found id: ""
	I0918 21:08:49.271386   62061 logs.go:276] 0 containers: []
	W0918 21:08:49.271404   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:49.271413   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:49.271483   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:49.305799   62061 cri.go:89] found id: ""
	I0918 21:08:49.305831   62061 logs.go:276] 0 containers: []
	W0918 21:08:49.305842   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:49.305848   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:49.305899   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:49.342170   62061 cri.go:89] found id: ""
	I0918 21:08:49.342195   62061 logs.go:276] 0 containers: []
	W0918 21:08:49.342202   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:49.342208   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:49.342265   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:49.374071   62061 cri.go:89] found id: ""
	I0918 21:08:49.374098   62061 logs.go:276] 0 containers: []
	W0918 21:08:49.374108   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:49.374116   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:49.374186   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:49.407614   62061 cri.go:89] found id: ""
	I0918 21:08:49.407645   62061 logs.go:276] 0 containers: []
	W0918 21:08:49.407657   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:49.407669   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:49.407685   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:49.458433   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:49.458473   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:49.471490   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:49.471519   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 21:08:47.857798   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:50.355748   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:47.634158   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:49.627085   61659 pod_ready.go:82] duration metric: took 4m0.000936582s for pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace to be "Ready" ...
	E0918 21:08:49.627133   61659 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace to be "Ready" (will not retry!)
	I0918 21:08:49.627156   61659 pod_ready.go:39] duration metric: took 4m7.542795536s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0918 21:08:49.627192   61659 kubeadm.go:597] duration metric: took 4m15.452827752s to restartPrimaryControlPlane
	W0918 21:08:49.627251   61659 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0918 21:08:49.627290   61659 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0918 21:08:49.387560   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:51.887591   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	W0918 21:08:49.533874   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:49.533901   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:49.533916   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:49.611711   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:49.611744   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:52.157839   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:52.170690   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:52.170770   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:52.208737   62061 cri.go:89] found id: ""
	I0918 21:08:52.208767   62061 logs.go:276] 0 containers: []
	W0918 21:08:52.208779   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:52.208787   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:52.208846   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:52.242624   62061 cri.go:89] found id: ""
	I0918 21:08:52.242658   62061 logs.go:276] 0 containers: []
	W0918 21:08:52.242669   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:52.242677   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:52.242742   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:52.280686   62061 cri.go:89] found id: ""
	I0918 21:08:52.280717   62061 logs.go:276] 0 containers: []
	W0918 21:08:52.280728   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:52.280736   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:52.280798   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:52.313748   62061 cri.go:89] found id: ""
	I0918 21:08:52.313777   62061 logs.go:276] 0 containers: []
	W0918 21:08:52.313785   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:52.313791   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:52.313840   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:52.353072   62061 cri.go:89] found id: ""
	I0918 21:08:52.353102   62061 logs.go:276] 0 containers: []
	W0918 21:08:52.353124   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:52.353132   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:52.353195   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:52.390358   62061 cri.go:89] found id: ""
	I0918 21:08:52.390384   62061 logs.go:276] 0 containers: []
	W0918 21:08:52.390392   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:52.390398   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:52.390448   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:52.430039   62061 cri.go:89] found id: ""
	I0918 21:08:52.430068   62061 logs.go:276] 0 containers: []
	W0918 21:08:52.430081   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:52.430088   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:52.430146   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:52.466096   62061 cri.go:89] found id: ""
	I0918 21:08:52.466125   62061 logs.go:276] 0 containers: []
	W0918 21:08:52.466137   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:52.466149   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:52.466162   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:52.518643   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:52.518678   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:52.531768   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:52.531797   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:52.605130   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:52.605163   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:52.605181   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:52.686510   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:52.686553   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:52.356535   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:54.356671   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:54.387306   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:56.887745   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:55.225867   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:55.240537   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:55.240618   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:55.276461   62061 cri.go:89] found id: ""
	I0918 21:08:55.276490   62061 logs.go:276] 0 containers: []
	W0918 21:08:55.276498   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:55.276504   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:55.276564   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:55.313449   62061 cri.go:89] found id: ""
	I0918 21:08:55.313482   62061 logs.go:276] 0 containers: []
	W0918 21:08:55.313493   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:55.313499   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:55.313551   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:55.352436   62061 cri.go:89] found id: ""
	I0918 21:08:55.352475   62061 logs.go:276] 0 containers: []
	W0918 21:08:55.352485   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:55.352492   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:55.352560   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:55.390433   62061 cri.go:89] found id: ""
	I0918 21:08:55.390458   62061 logs.go:276] 0 containers: []
	W0918 21:08:55.390466   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:55.390472   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:55.390529   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:55.428426   62061 cri.go:89] found id: ""
	I0918 21:08:55.428455   62061 logs.go:276] 0 containers: []
	W0918 21:08:55.428465   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:55.428473   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:55.428540   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:55.465587   62061 cri.go:89] found id: ""
	I0918 21:08:55.465622   62061 logs.go:276] 0 containers: []
	W0918 21:08:55.465633   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:55.465641   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:55.465710   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:55.502137   62061 cri.go:89] found id: ""
	I0918 21:08:55.502185   62061 logs.go:276] 0 containers: []
	W0918 21:08:55.502196   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:55.502203   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:55.502266   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:55.535992   62061 cri.go:89] found id: ""
	I0918 21:08:55.536037   62061 logs.go:276] 0 containers: []
	W0918 21:08:55.536050   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:55.536060   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:55.536078   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:55.549267   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:55.549296   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:55.616522   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:55.616556   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:55.616572   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:55.698822   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:55.698874   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:55.740234   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:55.740264   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:58.294876   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:58.307543   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:58.307630   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:58.340435   62061 cri.go:89] found id: ""
	I0918 21:08:58.340467   62061 logs.go:276] 0 containers: []
	W0918 21:08:58.340479   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:58.340486   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:58.340545   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:58.374759   62061 cri.go:89] found id: ""
	I0918 21:08:58.374792   62061 logs.go:276] 0 containers: []
	W0918 21:08:58.374804   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:58.374810   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:58.374863   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:58.411756   62061 cri.go:89] found id: ""
	I0918 21:08:58.411787   62061 logs.go:276] 0 containers: []
	W0918 21:08:58.411797   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:58.411804   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:58.411867   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:58.449787   62061 cri.go:89] found id: ""
	I0918 21:08:58.449820   62061 logs.go:276] 0 containers: []
	W0918 21:08:58.449832   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:58.449839   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:58.449903   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:58.485212   62061 cri.go:89] found id: ""
	I0918 21:08:58.485243   62061 logs.go:276] 0 containers: []
	W0918 21:08:58.485254   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:58.485261   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:58.485331   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:58.528667   62061 cri.go:89] found id: ""
	I0918 21:08:58.528696   62061 logs.go:276] 0 containers: []
	W0918 21:08:58.528706   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:58.528714   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:58.528775   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:58.568201   62061 cri.go:89] found id: ""
	I0918 21:08:58.568231   62061 logs.go:276] 0 containers: []
	W0918 21:08:58.568241   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:58.568247   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:58.568305   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:58.623952   62061 cri.go:89] found id: ""
	I0918 21:08:58.623982   62061 logs.go:276] 0 containers: []
	W0918 21:08:58.623991   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:58.624000   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:58.624011   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:58.665418   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:58.665457   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:58.713464   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:58.713504   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:58.727511   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:58.727552   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:58.799004   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:58.799035   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:58.799050   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:56.856428   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:58.856632   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:00.857301   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:59.386076   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:01.387016   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:01.389457   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:09:01.404658   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:09:01.404734   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:09:01.439139   62061 cri.go:89] found id: ""
	I0918 21:09:01.439170   62061 logs.go:276] 0 containers: []
	W0918 21:09:01.439180   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:09:01.439187   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:09:01.439251   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:09:01.473866   62061 cri.go:89] found id: ""
	I0918 21:09:01.473896   62061 logs.go:276] 0 containers: []
	W0918 21:09:01.473907   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:09:01.473915   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:09:01.473978   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:09:01.506734   62061 cri.go:89] found id: ""
	I0918 21:09:01.506767   62061 logs.go:276] 0 containers: []
	W0918 21:09:01.506777   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:09:01.506783   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:09:01.506836   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:09:01.540123   62061 cri.go:89] found id: ""
	I0918 21:09:01.540152   62061 logs.go:276] 0 containers: []
	W0918 21:09:01.540162   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:09:01.540169   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:09:01.540236   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:09:01.575002   62061 cri.go:89] found id: ""
	I0918 21:09:01.575037   62061 logs.go:276] 0 containers: []
	W0918 21:09:01.575048   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:09:01.575071   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:09:01.575159   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:09:01.612285   62061 cri.go:89] found id: ""
	I0918 21:09:01.612316   62061 logs.go:276] 0 containers: []
	W0918 21:09:01.612327   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:09:01.612335   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:09:01.612399   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:09:01.648276   62061 cri.go:89] found id: ""
	I0918 21:09:01.648304   62061 logs.go:276] 0 containers: []
	W0918 21:09:01.648318   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:09:01.648325   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:09:01.648397   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:09:01.686192   62061 cri.go:89] found id: ""
	I0918 21:09:01.686220   62061 logs.go:276] 0 containers: []
	W0918 21:09:01.686228   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:09:01.686236   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:09:01.686247   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:09:01.738366   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:09:01.738408   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:09:01.752650   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:09:01.752683   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:09:01.825010   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:09:01.825114   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:09:01.825156   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:09:01.907401   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:09:01.907448   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:09:04.448316   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:09:04.461242   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:09:04.461304   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:09:03.357089   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:05.856126   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:03.387563   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:05.389665   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:07.886523   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:04.495951   62061 cri.go:89] found id: ""
	I0918 21:09:04.495984   62061 logs.go:276] 0 containers: []
	W0918 21:09:04.495997   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:09:04.496006   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:09:04.496090   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:09:04.536874   62061 cri.go:89] found id: ""
	I0918 21:09:04.536906   62061 logs.go:276] 0 containers: []
	W0918 21:09:04.536917   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:09:04.536935   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:09:04.537010   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:09:04.572601   62061 cri.go:89] found id: ""
	I0918 21:09:04.572634   62061 logs.go:276] 0 containers: []
	W0918 21:09:04.572646   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:09:04.572653   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:09:04.572716   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:09:04.609783   62061 cri.go:89] found id: ""
	I0918 21:09:04.609817   62061 logs.go:276] 0 containers: []
	W0918 21:09:04.609826   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:09:04.609832   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:09:04.609891   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:09:04.645124   62061 cri.go:89] found id: ""
	I0918 21:09:04.645156   62061 logs.go:276] 0 containers: []
	W0918 21:09:04.645167   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:09:04.645175   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:09:04.645241   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:09:04.680927   62061 cri.go:89] found id: ""
	I0918 21:09:04.680959   62061 logs.go:276] 0 containers: []
	W0918 21:09:04.680971   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:09:04.680978   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:09:04.681038   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:09:04.718920   62061 cri.go:89] found id: ""
	I0918 21:09:04.718954   62061 logs.go:276] 0 containers: []
	W0918 21:09:04.718972   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:09:04.718979   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:09:04.719039   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:09:04.751450   62061 cri.go:89] found id: ""
	I0918 21:09:04.751488   62061 logs.go:276] 0 containers: []
	W0918 21:09:04.751500   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:09:04.751511   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:09:04.751529   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:09:04.788969   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:09:04.789001   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:09:04.837638   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:09:04.837673   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:09:04.853673   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:09:04.853706   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:09:04.923670   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:09:04.923703   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:09:04.923717   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:09:07.507979   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:09:07.521581   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:09:07.521656   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:09:07.556280   62061 cri.go:89] found id: ""
	I0918 21:09:07.556310   62061 logs.go:276] 0 containers: []
	W0918 21:09:07.556321   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:09:07.556329   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:09:07.556397   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:09:07.590775   62061 cri.go:89] found id: ""
	I0918 21:09:07.590802   62061 logs.go:276] 0 containers: []
	W0918 21:09:07.590810   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:09:07.590815   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:09:07.590862   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:09:07.625971   62061 cri.go:89] found id: ""
	I0918 21:09:07.626000   62061 logs.go:276] 0 containers: []
	W0918 21:09:07.626010   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:09:07.626018   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:09:07.626080   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:09:07.660083   62061 cri.go:89] found id: ""
	I0918 21:09:07.660116   62061 logs.go:276] 0 containers: []
	W0918 21:09:07.660128   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:09:07.660136   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:09:07.660201   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:09:07.694165   62061 cri.go:89] found id: ""
	I0918 21:09:07.694195   62061 logs.go:276] 0 containers: []
	W0918 21:09:07.694204   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:09:07.694211   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:09:07.694269   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:09:07.728298   62061 cri.go:89] found id: ""
	I0918 21:09:07.728328   62061 logs.go:276] 0 containers: []
	W0918 21:09:07.728338   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:09:07.728349   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:09:07.728409   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:09:07.762507   62061 cri.go:89] found id: ""
	I0918 21:09:07.762546   62061 logs.go:276] 0 containers: []
	W0918 21:09:07.762555   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:09:07.762565   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:09:07.762745   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:09:07.798006   62061 cri.go:89] found id: ""
	I0918 21:09:07.798038   62061 logs.go:276] 0 containers: []
	W0918 21:09:07.798049   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:09:07.798059   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:09:07.798074   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:09:07.849222   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:09:07.849267   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:09:07.865023   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:09:07.865055   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:09:07.938810   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:09:07.938830   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:09:07.938842   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:09:08.018885   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:09:08.018924   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:09:07.856987   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:10.356244   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:09.886563   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:12.386922   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:10.562565   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:09:10.575854   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:09:10.575941   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:09:10.612853   62061 cri.go:89] found id: ""
	I0918 21:09:10.612884   62061 logs.go:276] 0 containers: []
	W0918 21:09:10.612896   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:09:10.612906   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:09:10.612966   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:09:10.648672   62061 cri.go:89] found id: ""
	I0918 21:09:10.648703   62061 logs.go:276] 0 containers: []
	W0918 21:09:10.648713   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:09:10.648720   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:09:10.648780   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:09:10.682337   62061 cri.go:89] found id: ""
	I0918 21:09:10.682370   62061 logs.go:276] 0 containers: []
	W0918 21:09:10.682388   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:09:10.682394   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:09:10.682445   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:09:10.718221   62061 cri.go:89] found id: ""
	I0918 21:09:10.718257   62061 logs.go:276] 0 containers: []
	W0918 21:09:10.718269   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:09:10.718277   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:09:10.718345   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:09:10.751570   62061 cri.go:89] found id: ""
	I0918 21:09:10.751600   62061 logs.go:276] 0 containers: []
	W0918 21:09:10.751609   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:09:10.751615   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:09:10.751706   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:09:10.792125   62061 cri.go:89] found id: ""
	I0918 21:09:10.792158   62061 logs.go:276] 0 containers: []
	W0918 21:09:10.792170   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:09:10.792178   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:09:10.792245   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:09:10.830699   62061 cri.go:89] found id: ""
	I0918 21:09:10.830733   62061 logs.go:276] 0 containers: []
	W0918 21:09:10.830742   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:09:10.830748   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:09:10.830820   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:09:10.869625   62061 cri.go:89] found id: ""
	I0918 21:09:10.869655   62061 logs.go:276] 0 containers: []
	W0918 21:09:10.869663   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:09:10.869672   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:09:10.869684   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:09:10.921340   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:09:10.921378   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:09:10.937032   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:09:10.937071   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:09:11.006248   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:09:11.006276   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:09:11.006291   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:09:11.086458   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:09:11.086496   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:09:13.628824   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:09:13.642499   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:09:13.642578   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:09:13.677337   62061 cri.go:89] found id: ""
	I0918 21:09:13.677368   62061 logs.go:276] 0 containers: []
	W0918 21:09:13.677378   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:09:13.677385   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:09:13.677449   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:09:13.717317   62061 cri.go:89] found id: ""
	I0918 21:09:13.717341   62061 logs.go:276] 0 containers: []
	W0918 21:09:13.717353   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:09:13.717358   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:09:13.717419   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:09:13.752151   62061 cri.go:89] found id: ""
	I0918 21:09:13.752181   62061 logs.go:276] 0 containers: []
	W0918 21:09:13.752189   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:09:13.752195   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:09:13.752253   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:09:13.791946   62061 cri.go:89] found id: ""
	I0918 21:09:13.791975   62061 logs.go:276] 0 containers: []
	W0918 21:09:13.791983   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:09:13.791989   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:09:13.792069   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:09:13.825173   62061 cri.go:89] found id: ""
	I0918 21:09:13.825198   62061 logs.go:276] 0 containers: []
	W0918 21:09:13.825209   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:09:13.825216   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:09:13.825276   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:09:13.859801   62061 cri.go:89] found id: ""
	I0918 21:09:13.859834   62061 logs.go:276] 0 containers: []
	W0918 21:09:13.859846   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:09:13.859853   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:09:13.859907   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:09:13.895413   62061 cri.go:89] found id: ""
	I0918 21:09:13.895445   62061 logs.go:276] 0 containers: []
	W0918 21:09:13.895456   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:09:13.895463   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:09:13.895515   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:09:13.929048   62061 cri.go:89] found id: ""
	I0918 21:09:13.929075   62061 logs.go:276] 0 containers: []
	W0918 21:09:13.929083   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:09:13.929092   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:09:13.929104   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:09:13.981579   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:09:13.981613   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:09:13.995642   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:09:13.995679   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:09:14.061762   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:09:14.061782   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:09:14.061793   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:09:14.139623   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:09:14.139659   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:09:16.001617   61659 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.374302262s)
	I0918 21:09:16.001692   61659 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0918 21:09:16.019307   61659 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0918 21:09:16.029547   61659 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0918 21:09:16.039132   61659 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0918 21:09:16.039154   61659 kubeadm.go:157] found existing configuration files:
	
	I0918 21:09:16.039196   61659 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0918 21:09:16.048506   61659 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0918 21:09:16.048567   61659 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0918 21:09:16.058120   61659 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0918 21:09:16.067686   61659 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0918 21:09:16.067746   61659 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0918 21:09:16.077707   61659 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0918 21:09:16.087089   61659 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0918 21:09:16.087149   61659 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0918 21:09:16.097040   61659 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0918 21:09:16.106448   61659 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0918 21:09:16.106514   61659 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0918 21:09:16.116060   61659 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0918 21:09:16.159721   61659 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0918 21:09:16.159797   61659 kubeadm.go:310] [preflight] Running pre-flight checks
	I0918 21:09:16.266821   61659 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0918 21:09:16.266968   61659 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0918 21:09:16.267122   61659 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0918 21:09:16.275249   61659 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0918 21:09:12.855996   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:14.857296   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:16.277228   61659 out.go:235]   - Generating certificates and keys ...
	I0918 21:09:16.277333   61659 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0918 21:09:16.277419   61659 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0918 21:09:16.277534   61659 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0918 21:09:16.277617   61659 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0918 21:09:16.277709   61659 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0918 21:09:16.277790   61659 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0918 21:09:16.277904   61659 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0918 21:09:16.278013   61659 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0918 21:09:16.278131   61659 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0918 21:09:16.278265   61659 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0918 21:09:16.278331   61659 kubeadm.go:310] [certs] Using the existing "sa" key
	I0918 21:09:16.278401   61659 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0918 21:09:16.516263   61659 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0918 21:09:16.708220   61659 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0918 21:09:17.009820   61659 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0918 21:09:17.108871   61659 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0918 21:09:17.211014   61659 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0918 21:09:17.211658   61659 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0918 21:09:17.216626   61659 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0918 21:09:14.887068   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:16.888350   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:16.686071   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:09:16.699769   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:09:16.699844   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:09:16.735242   62061 cri.go:89] found id: ""
	I0918 21:09:16.735277   62061 logs.go:276] 0 containers: []
	W0918 21:09:16.735288   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:09:16.735298   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:09:16.735371   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:09:16.770027   62061 cri.go:89] found id: ""
	I0918 21:09:16.770052   62061 logs.go:276] 0 containers: []
	W0918 21:09:16.770060   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:09:16.770066   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:09:16.770114   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:09:16.806525   62061 cri.go:89] found id: ""
	I0918 21:09:16.806555   62061 logs.go:276] 0 containers: []
	W0918 21:09:16.806563   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:09:16.806569   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:09:16.806636   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:09:16.851146   62061 cri.go:89] found id: ""
	I0918 21:09:16.851183   62061 logs.go:276] 0 containers: []
	W0918 21:09:16.851194   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:09:16.851204   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:09:16.851271   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:09:16.890718   62061 cri.go:89] found id: ""
	I0918 21:09:16.890748   62061 logs.go:276] 0 containers: []
	W0918 21:09:16.890760   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:09:16.890767   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:09:16.890824   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:09:16.928971   62061 cri.go:89] found id: ""
	I0918 21:09:16.929002   62061 logs.go:276] 0 containers: []
	W0918 21:09:16.929012   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:09:16.929020   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:09:16.929079   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:09:16.970965   62061 cri.go:89] found id: ""
	I0918 21:09:16.970999   62061 logs.go:276] 0 containers: []
	W0918 21:09:16.971011   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:09:16.971019   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:09:16.971089   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:09:17.006427   62061 cri.go:89] found id: ""
	I0918 21:09:17.006453   62061 logs.go:276] 0 containers: []
	W0918 21:09:17.006461   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:09:17.006468   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:09:17.006480   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:09:17.058690   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:09:17.058733   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:09:17.072593   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:09:17.072623   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:09:17.143046   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:09:17.143071   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:09:17.143082   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:09:17.236943   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:09:17.236989   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:09:17.357978   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:19.858268   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:17.218406   61659 out.go:235]   - Booting up control plane ...
	I0918 21:09:17.218544   61659 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0918 21:09:17.218662   61659 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0918 21:09:17.218765   61659 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0918 21:09:17.238076   61659 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0918 21:09:17.248123   61659 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0918 21:09:17.248226   61659 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0918 21:09:17.379685   61659 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0918 21:09:17.379840   61659 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0918 21:09:18.380791   61659 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001279947s
	I0918 21:09:18.380906   61659 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0918 21:09:18.380783   61740 pod_ready.go:82] duration metric: took 4m0.000205104s for pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace to be "Ready" ...
	E0918 21:09:18.380812   61740 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace to be "Ready" (will not retry!)
	I0918 21:09:18.380832   61740 pod_ready.go:39] duration metric: took 4m15.618837854s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0918 21:09:18.380875   61740 kubeadm.go:597] duration metric: took 4m23.646410044s to restartPrimaryControlPlane
	W0918 21:09:18.380936   61740 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0918 21:09:18.380966   61740 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0918 21:09:23.386705   61659 kubeadm.go:310] [api-check] The API server is healthy after 5.005706581s
	I0918 21:09:23.402316   61659 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0918 21:09:23.422786   61659 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0918 21:09:23.462099   61659 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0918 21:09:23.462373   61659 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-828868 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0918 21:09:23.484276   61659 kubeadm.go:310] [bootstrap-token] Using token: 2vcil8.e13zhc1806da8knq
	I0918 21:09:19.782266   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:09:19.799433   62061 kubeadm.go:597] duration metric: took 4m2.126311085s to restartPrimaryControlPlane
	W0918 21:09:19.799513   62061 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0918 21:09:19.799543   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0918 21:09:20.910192   62061 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.110625463s)
	I0918 21:09:20.910273   62061 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0918 21:09:20.925992   62061 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0918 21:09:20.936876   62061 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0918 21:09:20.947170   62061 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0918 21:09:20.947199   62061 kubeadm.go:157] found existing configuration files:
	
	I0918 21:09:20.947255   62061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0918 21:09:20.958140   62061 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0918 21:09:20.958240   62061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0918 21:09:20.968351   62061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0918 21:09:20.978669   62061 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0918 21:09:20.978735   62061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0918 21:09:20.989765   62061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0918 21:09:20.999842   62061 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0918 21:09:20.999903   62061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0918 21:09:21.009945   62061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0918 21:09:21.020229   62061 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0918 21:09:21.020289   62061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0918 21:09:21.030583   62061 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0918 21:09:21.271399   62061 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0918 21:09:23.485978   61659 out.go:235]   - Configuring RBAC rules ...
	I0918 21:09:23.486112   61659 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0918 21:09:23.499163   61659 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0918 21:09:23.510754   61659 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0918 21:09:23.514794   61659 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0918 21:09:23.519247   61659 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0918 21:09:23.530424   61659 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0918 21:09:23.799778   61659 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0918 21:09:24.223469   61659 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0918 21:09:24.794852   61659 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0918 21:09:24.794886   61659 kubeadm.go:310] 
	I0918 21:09:24.794951   61659 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0918 21:09:24.794963   61659 kubeadm.go:310] 
	I0918 21:09:24.795058   61659 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0918 21:09:24.795073   61659 kubeadm.go:310] 
	I0918 21:09:24.795105   61659 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0918 21:09:24.795192   61659 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0918 21:09:24.795255   61659 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0918 21:09:24.795285   61659 kubeadm.go:310] 
	I0918 21:09:24.795366   61659 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0918 21:09:24.795376   61659 kubeadm.go:310] 
	I0918 21:09:24.795416   61659 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0918 21:09:24.795425   61659 kubeadm.go:310] 
	I0918 21:09:24.795497   61659 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0918 21:09:24.795580   61659 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0918 21:09:24.795678   61659 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0918 21:09:24.795692   61659 kubeadm.go:310] 
	I0918 21:09:24.795779   61659 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0918 21:09:24.795891   61659 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0918 21:09:24.795901   61659 kubeadm.go:310] 
	I0918 21:09:24.796174   61659 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token 2vcil8.e13zhc1806da8knq \
	I0918 21:09:24.796299   61659 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:8ad46bf288e8cc2226f5a929a768e6c95cddecc97ffc4016be27ef0987b1e36f \
	I0918 21:09:24.796350   61659 kubeadm.go:310] 	--control-plane 
	I0918 21:09:24.796367   61659 kubeadm.go:310] 
	I0918 21:09:24.796479   61659 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0918 21:09:24.796487   61659 kubeadm.go:310] 
	I0918 21:09:24.796594   61659 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token 2vcil8.e13zhc1806da8knq \
	I0918 21:09:24.796738   61659 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:8ad46bf288e8cc2226f5a929a768e6c95cddecc97ffc4016be27ef0987b1e36f 
	I0918 21:09:24.797359   61659 kubeadm.go:310] W0918 21:09:16.134048    2561 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0918 21:09:24.797679   61659 kubeadm.go:310] W0918 21:09:16.134873    2561 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0918 21:09:24.797832   61659 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0918 21:09:24.797858   61659 cni.go:84] Creating CNI manager for ""
	I0918 21:09:24.797872   61659 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0918 21:09:24.799953   61659 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0918 21:09:22.357582   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:24.857037   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:24.801259   61659 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0918 21:09:24.812277   61659 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0918 21:09:24.834749   61659 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0918 21:09:24.834855   61659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 21:09:24.834871   61659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-828868 minikube.k8s.io/updated_at=2024_09_18T21_09_24_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=85073601a832bd4bbda5d11fa91feafff6ec6b91 minikube.k8s.io/name=default-k8s-diff-port-828868 minikube.k8s.io/primary=true
	I0918 21:09:25.022861   61659 ops.go:34] apiserver oom_adj: -16
	I0918 21:09:25.022930   61659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 21:09:25.523400   61659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 21:09:26.023075   61659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 21:09:26.523330   61659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 21:09:27.023179   61659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 21:09:27.523363   61659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 21:09:28.023150   61659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 21:09:28.523941   61659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 21:09:29.023542   61659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 21:09:29.143581   61659 kubeadm.go:1113] duration metric: took 4.308796493s to wait for elevateKubeSystemPrivileges
	I0918 21:09:29.143614   61659 kubeadm.go:394] duration metric: took 4m55.024616229s to StartCluster
	I0918 21:09:29.143632   61659 settings.go:142] acquiring lock: {Name:mk6ae95a69dcbe00c28846409e9f46945a88de2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 21:09:29.143727   61659 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19667-7671/kubeconfig
	I0918 21:09:29.145397   61659 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/kubeconfig: {Name:mk3a3eecadb0164dfdd5d3c4392b5b473a2a9bb0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 21:09:29.145680   61659 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.109 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0918 21:09:29.145767   61659 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0918 21:09:29.145851   61659 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-828868"
	I0918 21:09:29.145869   61659 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-828868"
	I0918 21:09:29.145877   61659 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-828868"
	W0918 21:09:29.145885   61659 addons.go:243] addon storage-provisioner should already be in state true
	I0918 21:09:29.145896   61659 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-828868"
	I0918 21:09:29.145898   61659 config.go:182] Loaded profile config "default-k8s-diff-port-828868": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0918 21:09:29.145900   61659 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-828868"
	I0918 21:09:29.145920   61659 host.go:66] Checking if "default-k8s-diff-port-828868" exists ...
	I0918 21:09:29.145932   61659 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-828868"
	W0918 21:09:29.145946   61659 addons.go:243] addon metrics-server should already be in state true
	I0918 21:09:29.145980   61659 host.go:66] Checking if "default-k8s-diff-port-828868" exists ...
	I0918 21:09:29.146234   61659 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:09:29.146238   61659 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:09:29.146282   61659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:09:29.146297   61659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:09:29.146372   61659 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:09:29.146389   61659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:09:29.147645   61659 out.go:177] * Verifying Kubernetes components...
	I0918 21:09:29.149574   61659 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 21:09:29.164779   61659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43359
	I0918 21:09:29.165002   61659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37857
	I0918 21:09:29.165390   61659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44573
	I0918 21:09:29.165682   61659 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:09:29.165749   61659 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:09:29.166233   61659 main.go:141] libmachine: Using API Version  1
	I0918 21:09:29.166254   61659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:09:29.166270   61659 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:09:29.166388   61659 main.go:141] libmachine: Using API Version  1
	I0918 21:09:29.166414   61659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:09:29.166544   61659 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:09:29.166711   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetState
	I0918 21:09:29.166730   61659 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:09:29.166894   61659 main.go:141] libmachine: Using API Version  1
	I0918 21:09:29.166918   61659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:09:29.167381   61659 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:09:29.167425   61659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:09:29.168144   61659 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:09:29.168578   61659 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:09:29.168614   61659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:09:29.171072   61659 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-828868"
	W0918 21:09:29.171101   61659 addons.go:243] addon default-storageclass should already be in state true
	I0918 21:09:29.171133   61659 host.go:66] Checking if "default-k8s-diff-port-828868" exists ...
	I0918 21:09:29.171534   61659 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:09:29.171597   61659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:09:29.186305   61659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42211
	I0918 21:09:29.186318   61659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45153
	I0918 21:09:29.186838   61659 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:09:29.186847   61659 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:09:29.187353   61659 main.go:141] libmachine: Using API Version  1
	I0918 21:09:29.187367   61659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:09:29.187373   61659 main.go:141] libmachine: Using API Version  1
	I0918 21:09:29.187403   61659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:09:29.187840   61659 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:09:29.187855   61659 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:09:29.188085   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetState
	I0918 21:09:29.188106   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetState
	I0918 21:09:29.193453   61659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33471
	I0918 21:09:29.193905   61659 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:09:29.194477   61659 main.go:141] libmachine: Using API Version  1
	I0918 21:09:29.194513   61659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:09:29.194981   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .DriverName
	I0918 21:09:29.195155   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .DriverName
	I0918 21:09:29.195254   61659 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:09:29.195807   61659 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:09:29.195839   61659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:09:29.197102   61659 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0918 21:09:29.197111   61659 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 21:09:29.198425   61659 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0918 21:09:29.198458   61659 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0918 21:09:29.198486   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHHostname
	I0918 21:09:29.198589   61659 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0918 21:09:29.198605   61659 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0918 21:09:29.198622   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHHostname
	I0918 21:09:29.202110   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:09:29.202236   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:09:29.202634   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:39:06", ip: ""} in network mk-default-k8s-diff-port-828868: {Iface:virbr4 ExpiryTime:2024-09-18 22:04:19 +0000 UTC Type:0 Mac:52:54:00:c0:39:06 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:default-k8s-diff-port-828868 Clientid:01:52:54:00:c0:39:06}
	I0918 21:09:29.202656   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:39:06", ip: ""} in network mk-default-k8s-diff-port-828868: {Iface:virbr4 ExpiryTime:2024-09-18 22:04:19 +0000 UTC Type:0 Mac:52:54:00:c0:39:06 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:default-k8s-diff-port-828868 Clientid:01:52:54:00:c0:39:06}
	I0918 21:09:29.202661   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined IP address 192.168.50.109 and MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:09:29.202677   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined IP address 192.168.50.109 and MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:09:29.202895   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHPort
	I0918 21:09:29.202942   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHPort
	I0918 21:09:29.203084   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHKeyPath
	I0918 21:09:29.203129   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHKeyPath
	I0918 21:09:29.203268   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHUsername
	I0918 21:09:29.203275   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHUsername
	I0918 21:09:29.203393   61659 sshutil.go:53] new ssh client: &{IP:192.168.50.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/default-k8s-diff-port-828868/id_rsa Username:docker}
	I0918 21:09:29.203407   61659 sshutil.go:53] new ssh client: &{IP:192.168.50.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/default-k8s-diff-port-828868/id_rsa Username:docker}
	I0918 21:09:29.215178   61659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41565
	I0918 21:09:29.215727   61659 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:09:29.216301   61659 main.go:141] libmachine: Using API Version  1
	I0918 21:09:29.216325   61659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:09:29.216669   61659 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:09:29.216873   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetState
	I0918 21:09:29.218689   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .DriverName
	I0918 21:09:29.218980   61659 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0918 21:09:29.218994   61659 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0918 21:09:29.219009   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHHostname
	I0918 21:09:29.222542   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:09:29.222963   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:39:06", ip: ""} in network mk-default-k8s-diff-port-828868: {Iface:virbr4 ExpiryTime:2024-09-18 22:04:19 +0000 UTC Type:0 Mac:52:54:00:c0:39:06 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:default-k8s-diff-port-828868 Clientid:01:52:54:00:c0:39:06}
	I0918 21:09:29.222985   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined IP address 192.168.50.109 and MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:09:29.223398   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHPort
	I0918 21:09:29.223632   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHKeyPath
	I0918 21:09:29.223820   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHUsername
	I0918 21:09:29.224004   61659 sshutil.go:53] new ssh client: &{IP:192.168.50.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/default-k8s-diff-port-828868/id_rsa Username:docker}
	I0918 21:09:29.360595   61659 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0918 21:09:29.381254   61659 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-828868" to be "Ready" ...
	I0918 21:09:29.390526   61659 node_ready.go:49] node "default-k8s-diff-port-828868" has status "Ready":"True"
	I0918 21:09:29.390554   61659 node_ready.go:38] duration metric: took 9.264338ms for node "default-k8s-diff-port-828868" to be "Ready" ...
	I0918 21:09:29.390565   61659 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0918 21:09:29.395433   61659 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-828868" in "kube-system" namespace to be "Ready" ...
	I0918 21:09:29.468492   61659 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0918 21:09:29.526515   61659 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0918 21:09:29.527137   61659 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0918 21:09:29.527162   61659 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0918 21:09:29.570619   61659 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0918 21:09:29.570651   61659 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0918 21:09:29.631944   61659 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0918 21:09:29.631975   61659 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0918 21:09:29.653905   61659 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0918 21:09:30.402107   61659 main.go:141] libmachine: Making call to close driver server
	I0918 21:09:30.402145   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .Close
	I0918 21:09:30.402142   61659 main.go:141] libmachine: Making call to close driver server
	I0918 21:09:30.402167   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .Close
	I0918 21:09:30.402466   61659 main.go:141] libmachine: Successfully made call to close driver server
	I0918 21:09:30.402480   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | Closing plugin on server side
	I0918 21:09:30.402493   61659 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 21:09:30.402503   61659 main.go:141] libmachine: Making call to close driver server
	I0918 21:09:30.402512   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .Close
	I0918 21:09:30.402537   61659 main.go:141] libmachine: Successfully made call to close driver server
	I0918 21:09:30.402546   61659 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 21:09:30.402555   61659 main.go:141] libmachine: Making call to close driver server
	I0918 21:09:30.402571   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .Close
	I0918 21:09:30.402733   61659 main.go:141] libmachine: Successfully made call to close driver server
	I0918 21:09:30.402773   61659 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 21:09:30.402921   61659 main.go:141] libmachine: Successfully made call to close driver server
	I0918 21:09:30.402941   61659 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 21:09:30.435323   61659 main.go:141] libmachine: Making call to close driver server
	I0918 21:09:30.435366   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .Close
	I0918 21:09:30.435659   61659 main.go:141] libmachine: Successfully made call to close driver server
	I0918 21:09:30.435683   61659 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 21:09:30.975630   61659 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.321677798s)
	I0918 21:09:30.975716   61659 main.go:141] libmachine: Making call to close driver server
	I0918 21:09:30.975733   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .Close
	I0918 21:09:30.976074   61659 main.go:141] libmachine: Successfully made call to close driver server
	I0918 21:09:30.976094   61659 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 21:09:30.976105   61659 main.go:141] libmachine: Making call to close driver server
	I0918 21:09:30.976113   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .Close
	I0918 21:09:30.976369   61659 main.go:141] libmachine: Successfully made call to close driver server
	I0918 21:09:30.976395   61659 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 21:09:30.976406   61659 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-828868"
	I0918 21:09:30.978345   61659 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0918 21:09:26.857486   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:29.356533   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:31.358269   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:30.979731   61659 addons.go:510] duration metric: took 1.833970994s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0918 21:09:31.403620   61659 pod_ready.go:103] pod "etcd-default-k8s-diff-port-828868" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:33.857960   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:36.357454   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:33.902436   61659 pod_ready.go:103] pod "etcd-default-k8s-diff-port-828868" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:36.401889   61659 pod_ready.go:103] pod "etcd-default-k8s-diff-port-828868" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:36.902002   61659 pod_ready.go:93] pod "etcd-default-k8s-diff-port-828868" in "kube-system" namespace has status "Ready":"True"
	I0918 21:09:36.902026   61659 pod_ready.go:82] duration metric: took 7.506563242s for pod "etcd-default-k8s-diff-port-828868" in "kube-system" namespace to be "Ready" ...
	I0918 21:09:36.902035   61659 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-828868" in "kube-system" namespace to be "Ready" ...
	I0918 21:09:36.907689   61659 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-828868" in "kube-system" namespace has status "Ready":"True"
	I0918 21:09:36.907713   61659 pod_ready.go:82] duration metric: took 5.672631ms for pod "kube-apiserver-default-k8s-diff-port-828868" in "kube-system" namespace to be "Ready" ...
	I0918 21:09:36.907722   61659 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-828868" in "kube-system" namespace to be "Ready" ...
	I0918 21:09:38.914521   61659 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-828868" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:39.414168   61659 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-828868" in "kube-system" namespace has status "Ready":"True"
	I0918 21:09:39.414196   61659 pod_ready.go:82] duration metric: took 2.506467297s for pod "kube-controller-manager-default-k8s-diff-port-828868" in "kube-system" namespace to be "Ready" ...
	I0918 21:09:39.414207   61659 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-hf5mm" in "kube-system" namespace to be "Ready" ...
	I0918 21:09:39.419030   61659 pod_ready.go:93] pod "kube-proxy-hf5mm" in "kube-system" namespace has status "Ready":"True"
	I0918 21:09:39.419053   61659 pod_ready.go:82] duration metric: took 4.838856ms for pod "kube-proxy-hf5mm" in "kube-system" namespace to be "Ready" ...
	I0918 21:09:39.419061   61659 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-828868" in "kube-system" namespace to be "Ready" ...
	I0918 21:09:39.423321   61659 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-828868" in "kube-system" namespace has status "Ready":"True"
	I0918 21:09:39.423341   61659 pod_ready.go:82] duration metric: took 4.274601ms for pod "kube-scheduler-default-k8s-diff-port-828868" in "kube-system" namespace to be "Ready" ...
	I0918 21:09:39.423348   61659 pod_ready.go:39] duration metric: took 10.03277208s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0918 21:09:39.423360   61659 api_server.go:52] waiting for apiserver process to appear ...
	I0918 21:09:39.423407   61659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:09:39.438272   61659 api_server.go:72] duration metric: took 10.292559807s to wait for apiserver process to appear ...
	I0918 21:09:39.438297   61659 api_server.go:88] waiting for apiserver healthz status ...
	I0918 21:09:39.438315   61659 api_server.go:253] Checking apiserver healthz at https://192.168.50.109:8444/healthz ...
	I0918 21:09:39.443342   61659 api_server.go:279] https://192.168.50.109:8444/healthz returned 200:
	ok
	I0918 21:09:39.444238   61659 api_server.go:141] control plane version: v1.31.1
	I0918 21:09:39.444262   61659 api_server.go:131] duration metric: took 5.958748ms to wait for apiserver health ...
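(Editor's note: the healthz wait logged just above amounts to repeatedly issuing an HTTPS GET against the apiserver's /healthz endpoint until it returns 200 with body "ok". The sketch below is a minimal, hypothetical Go illustration of such a probe, not minikube's actual implementation; the URL is taken from the log line above, and the use of InsecureSkipVerify instead of the cluster CA is an illustrative shortcut.)

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// probeHealthz polls url until it returns HTTP 200 with body "ok",
	// or gives up after timeout. Illustrative only: a real client would
	// verify the apiserver certificate instead of skipping TLS checks.
	func probeHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK && string(body) == "ok" {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
	}

	func main() {
		// Endpoint copied from the log above; usage is hypothetical.
		if err := probeHealthz("https://192.168.50.109:8444/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}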
	I0918 21:09:39.444270   61659 system_pods.go:43] waiting for kube-system pods to appear ...
	I0918 21:09:39.449914   61659 system_pods.go:59] 9 kube-system pods found
	I0918 21:09:39.449938   61659 system_pods.go:61] "coredns-7c65d6cfc9-8gz5v" [bfcbe99d-9a3d-4da8-a976-58cb9d9bf7ac] Running
	I0918 21:09:39.449942   61659 system_pods.go:61] "coredns-7c65d6cfc9-shx5p" [2d6d25ab-9a90-490a-911b-bf396605fa88] Running
	I0918 21:09:39.449947   61659 system_pods.go:61] "etcd-default-k8s-diff-port-828868" [d9772e35-df33-4b90-af88-7479a81776a2] Running
	I0918 21:09:39.449950   61659 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-828868" [9ca05dc8-1e15-4f6e-9855-1e1b6376fbfc] Running
	I0918 21:09:39.449954   61659 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-828868" [b4196fd3-4882-4e31-b418-2f5f24d2610d] Running
	I0918 21:09:39.449957   61659 system_pods.go:61] "kube-proxy-hf5mm" [3fb0a166-a925-4486-9695-6db05ae704b8] Running
	I0918 21:09:39.449962   61659 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-828868" [161b8693-3400-43a4-b361-cce5749dd2eb] Running
	I0918 21:09:39.449969   61659 system_pods.go:61] "metrics-server-6867b74b74-hdt52" [bf3aa7d0-a121-4a2c-92dd-2c79c6cbaa4a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0918 21:09:39.449976   61659 system_pods.go:61] "storage-provisioner" [8b4e1077-e23f-4262-b83e-989506798531] Running
	I0918 21:09:39.449983   61659 system_pods.go:74] duration metric: took 5.708013ms to wait for pod list to return data ...
	I0918 21:09:39.449992   61659 default_sa.go:34] waiting for default service account to be created ...
	I0918 21:09:39.453256   61659 default_sa.go:45] found service account: "default"
	I0918 21:09:39.453278   61659 default_sa.go:55] duration metric: took 3.281012ms for default service account to be created ...
	I0918 21:09:39.453287   61659 system_pods.go:116] waiting for k8s-apps to be running ...
	I0918 21:09:39.502200   61659 system_pods.go:86] 9 kube-system pods found
	I0918 21:09:39.502231   61659 system_pods.go:89] "coredns-7c65d6cfc9-8gz5v" [bfcbe99d-9a3d-4da8-a976-58cb9d9bf7ac] Running
	I0918 21:09:39.502237   61659 system_pods.go:89] "coredns-7c65d6cfc9-shx5p" [2d6d25ab-9a90-490a-911b-bf396605fa88] Running
	I0918 21:09:39.502241   61659 system_pods.go:89] "etcd-default-k8s-diff-port-828868" [d9772e35-df33-4b90-af88-7479a81776a2] Running
	I0918 21:09:39.502246   61659 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-828868" [9ca05dc8-1e15-4f6e-9855-1e1b6376fbfc] Running
	I0918 21:09:39.502250   61659 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-828868" [b4196fd3-4882-4e31-b418-2f5f24d2610d] Running
	I0918 21:09:39.502253   61659 system_pods.go:89] "kube-proxy-hf5mm" [3fb0a166-a925-4486-9695-6db05ae704b8] Running
	I0918 21:09:39.502256   61659 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-828868" [161b8693-3400-43a4-b361-cce5749dd2eb] Running
	I0918 21:09:39.502262   61659 system_pods.go:89] "metrics-server-6867b74b74-hdt52" [bf3aa7d0-a121-4a2c-92dd-2c79c6cbaa4a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0918 21:09:39.502266   61659 system_pods.go:89] "storage-provisioner" [8b4e1077-e23f-4262-b83e-989506798531] Running
	I0918 21:09:39.502276   61659 system_pods.go:126] duration metric: took 48.981872ms to wait for k8s-apps to be running ...
	I0918 21:09:39.502291   61659 system_svc.go:44] waiting for kubelet service to be running ....
	I0918 21:09:39.502367   61659 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0918 21:09:39.517514   61659 system_svc.go:56] duration metric: took 15.213443ms WaitForService to wait for kubelet
	I0918 21:09:39.517549   61659 kubeadm.go:582] duration metric: took 10.37183977s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0918 21:09:39.517573   61659 node_conditions.go:102] verifying NodePressure condition ...
	I0918 21:09:39.700593   61659 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0918 21:09:39.700616   61659 node_conditions.go:123] node cpu capacity is 2
	I0918 21:09:39.700626   61659 node_conditions.go:105] duration metric: took 183.048537ms to run NodePressure ...
	I0918 21:09:39.700637   61659 start.go:241] waiting for startup goroutines ...
	I0918 21:09:39.700643   61659 start.go:246] waiting for cluster config update ...
	I0918 21:09:39.700653   61659 start.go:255] writing updated cluster config ...
	I0918 21:09:39.700899   61659 ssh_runner.go:195] Run: rm -f paused
	I0918 21:09:39.750890   61659 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0918 21:09:39.753015   61659 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-828868" cluster and "default" namespace by default
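(Editor's note: at this point the run reports that kubectl is pointed at the freshly started "default-k8s-diff-port-828868" cluster. The following is a hedged sketch of consuming that same kubeconfig programmatically with client-go; it is not part of minikube, and the kubeconfig path, mirrored from the path this run updates later in the log, is illustrative.)

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Path is an assumption based on the kubeconfig this run writes.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19667-7671/kubeconfig")
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// List kube-system pods in whatever cluster the current context points at.
		pods, err := clientset.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Printf("%d kube-system pods\n", len(pods.Items))
	}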
	I0918 21:09:38.857481   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:41.356307   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:44.581125   61740 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.200138695s)
	I0918 21:09:44.581198   61740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0918 21:09:44.597051   61740 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0918 21:09:44.607195   61740 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0918 21:09:44.617135   61740 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0918 21:09:44.617160   61740 kubeadm.go:157] found existing configuration files:
	
	I0918 21:09:44.617203   61740 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0918 21:09:44.626216   61740 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0918 21:09:44.626278   61740 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0918 21:09:44.635161   61740 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0918 21:09:44.643767   61740 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0918 21:09:44.643828   61740 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0918 21:09:44.652663   61740 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0918 21:09:44.662045   61740 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0918 21:09:44.662107   61740 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0918 21:09:44.671165   61740 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0918 21:09:44.680397   61740 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0918 21:09:44.680469   61740 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0918 21:09:44.689168   61740 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0918 21:09:44.733425   61740 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0918 21:09:44.733528   61740 kubeadm.go:310] [preflight] Running pre-flight checks
	I0918 21:09:44.846369   61740 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0918 21:09:44.846477   61740 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0918 21:09:44.846612   61740 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0918 21:09:44.855581   61740 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0918 21:09:44.857599   61740 out.go:235]   - Generating certificates and keys ...
	I0918 21:09:44.857709   61740 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0918 21:09:44.857777   61740 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0918 21:09:44.857851   61740 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0918 21:09:44.857942   61740 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0918 21:09:44.858061   61740 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0918 21:09:44.858137   61740 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0918 21:09:44.858243   61740 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0918 21:09:44.858339   61740 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0918 21:09:44.858409   61740 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0918 21:09:44.858509   61740 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0918 21:09:44.858547   61740 kubeadm.go:310] [certs] Using the existing "sa" key
	I0918 21:09:44.858615   61740 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0918 21:09:45.048967   61740 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0918 21:09:45.229640   61740 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0918 21:09:45.397078   61740 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0918 21:09:45.722116   61740 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0918 21:09:45.850285   61740 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0918 21:09:45.850902   61740 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0918 21:09:45.853909   61740 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0918 21:09:43.357136   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:45.858056   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:45.855803   61740 out.go:235]   - Booting up control plane ...
	I0918 21:09:45.855931   61740 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0918 21:09:45.857227   61740 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0918 21:09:45.858855   61740 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0918 21:09:45.877299   61740 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0918 21:09:45.883953   61740 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0918 21:09:45.884043   61740 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0918 21:09:46.015368   61740 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0918 21:09:46.015509   61740 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0918 21:09:47.016371   61740 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001062473s
	I0918 21:09:47.016465   61740 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0918 21:09:48.357057   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:50.856124   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:51.518808   61740 kubeadm.go:310] [api-check] The API server is healthy after 4.502250914s
	I0918 21:09:51.532148   61740 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0918 21:09:51.549560   61740 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0918 21:09:51.579801   61740 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0918 21:09:51.580053   61740 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-255556 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0918 21:09:51.598605   61740 kubeadm.go:310] [bootstrap-token] Using token: iilbxo.n0c6mbjmeqehlfso
	I0918 21:09:51.600035   61740 out.go:235]   - Configuring RBAC rules ...
	I0918 21:09:51.600200   61740 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0918 21:09:51.614672   61740 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0918 21:09:51.626186   61740 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0918 21:09:51.629722   61740 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0918 21:09:51.634757   61740 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0918 21:09:51.642778   61740 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0918 21:09:51.931051   61740 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0918 21:09:52.359085   61740 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0918 21:09:52.930191   61740 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0918 21:09:52.931033   61740 kubeadm.go:310] 
	I0918 21:09:52.931100   61740 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0918 21:09:52.931108   61740 kubeadm.go:310] 
	I0918 21:09:52.931178   61740 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0918 21:09:52.931186   61740 kubeadm.go:310] 
	I0918 21:09:52.931208   61740 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0918 21:09:52.931313   61740 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0918 21:09:52.931400   61740 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0918 21:09:52.931435   61740 kubeadm.go:310] 
	I0918 21:09:52.931524   61740 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0918 21:09:52.931537   61740 kubeadm.go:310] 
	I0918 21:09:52.931601   61740 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0918 21:09:52.931627   61740 kubeadm.go:310] 
	I0918 21:09:52.931721   61740 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0918 21:09:52.931825   61740 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0918 21:09:52.931896   61740 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0918 21:09:52.931903   61740 kubeadm.go:310] 
	I0918 21:09:52.931974   61740 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0918 21:09:52.932073   61740 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0918 21:09:52.932081   61740 kubeadm.go:310] 
	I0918 21:09:52.932154   61740 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token iilbxo.n0c6mbjmeqehlfso \
	I0918 21:09:52.932243   61740 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:8ad46bf288e8cc2226f5a929a768e6c95cddecc97ffc4016be27ef0987b1e36f \
	I0918 21:09:52.932289   61740 kubeadm.go:310] 	--control-plane 
	I0918 21:09:52.932296   61740 kubeadm.go:310] 
	I0918 21:09:52.932365   61740 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0918 21:09:52.932372   61740 kubeadm.go:310] 
	I0918 21:09:52.932438   61740 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token iilbxo.n0c6mbjmeqehlfso \
	I0918 21:09:52.932568   61740 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:8ad46bf288e8cc2226f5a929a768e6c95cddecc97ffc4016be27ef0987b1e36f 
	I0918 21:09:52.934280   61740 kubeadm.go:310] W0918 21:09:44.705437    2512 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0918 21:09:52.934656   61740 kubeadm.go:310] W0918 21:09:44.706219    2512 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0918 21:09:52.934841   61740 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0918 21:09:52.934861   61740 cni.go:84] Creating CNI manager for ""
	I0918 21:09:52.934871   61740 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0918 21:09:52.937656   61740 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0918 21:09:52.939150   61740 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0918 21:09:52.950774   61740 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0918 21:09:52.973081   61740 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0918 21:09:52.973161   61740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 21:09:52.973210   61740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-255556 minikube.k8s.io/updated_at=2024_09_18T21_09_52_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=85073601a832bd4bbda5d11fa91feafff6ec6b91 minikube.k8s.io/name=embed-certs-255556 minikube.k8s.io/primary=true
	I0918 21:09:53.012402   61740 ops.go:34] apiserver oom_adj: -16
	I0918 21:09:53.180983   61740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 21:09:52.857161   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:55.357515   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:53.681852   61740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 21:09:54.181892   61740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 21:09:54.681768   61740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 21:09:55.181353   61740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 21:09:55.681336   61740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 21:09:56.181389   61740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 21:09:56.681574   61740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 21:09:57.181050   61740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 21:09:57.258766   61740 kubeadm.go:1113] duration metric: took 4.285672952s to wait for elevateKubeSystemPrivileges
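(Editor's note: the repeated "kubectl get sa default" lines above show the generic pattern used here: re-run a command roughly every half second until it exits 0, then record the elapsed time, logged above as the time taken for elevateKubeSystemPrivileges. Below is a minimal Go sketch of that retry pattern under stated assumptions; the function name and parameters are hypothetical and this is not minikube's own code.)

	package main

	import (
		"context"
		"fmt"
		"os/exec"
		"time"
	)

	// waitForCommand re-runs the given command at the given interval until it
	// succeeds (exit code 0) or ctx is cancelled, returning the elapsed time.
	func waitForCommand(ctx context.Context, interval time.Duration, name string, args ...string) (time.Duration, error) {
		start := time.Now()
		for {
			if err := exec.CommandContext(ctx, name, args...).Run(); err == nil {
				return time.Since(start), nil
			}
			select {
			case <-ctx.Done():
				return time.Since(start), ctx.Err()
			case <-time.After(interval):
			}
		}
	}

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
		defer cancel()
		// Example: wait for the default service account, as in the log above.
		elapsed, err := waitForCommand(ctx, 500*time.Millisecond,
			"kubectl", "get", "sa", "default")
		fmt.Println(elapsed, err)
	}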
	I0918 21:09:57.258809   61740 kubeadm.go:394] duration metric: took 5m2.572577294s to StartCluster
	I0918 21:09:57.258831   61740 settings.go:142] acquiring lock: {Name:mk6ae95a69dcbe00c28846409e9f46945a88de2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 21:09:57.258925   61740 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19667-7671/kubeconfig
	I0918 21:09:57.260757   61740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/kubeconfig: {Name:mk3a3eecadb0164dfdd5d3c4392b5b473a2a9bb0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 21:09:57.261072   61740 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.21 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0918 21:09:57.261168   61740 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0918 21:09:57.261275   61740 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-255556"
	I0918 21:09:57.261302   61740 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-255556"
	W0918 21:09:57.261314   61740 addons.go:243] addon storage-provisioner should already be in state true
	I0918 21:09:57.261344   61740 host.go:66] Checking if "embed-certs-255556" exists ...
	I0918 21:09:57.261337   61740 addons.go:69] Setting default-storageclass=true in profile "embed-certs-255556"
	I0918 21:09:57.261366   61740 config.go:182] Loaded profile config "embed-certs-255556": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0918 21:09:57.261363   61740 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-255556"
	I0918 21:09:57.261354   61740 addons.go:69] Setting metrics-server=true in profile "embed-certs-255556"
	I0918 21:09:57.261413   61740 addons.go:234] Setting addon metrics-server=true in "embed-certs-255556"
	W0918 21:09:57.261423   61740 addons.go:243] addon metrics-server should already be in state true
	I0918 21:09:57.261450   61740 host.go:66] Checking if "embed-certs-255556" exists ...
	I0918 21:09:57.261751   61740 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:09:57.261773   61740 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:09:57.261797   61740 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:09:57.261805   61740 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:09:57.261827   61740 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:09:57.261913   61740 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:09:57.263016   61740 out.go:177] * Verifying Kubernetes components...
	I0918 21:09:57.264732   61740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 21:09:57.279143   61740 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41499
	I0918 21:09:57.279741   61740 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38525
	I0918 21:09:57.279948   61740 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:09:57.280150   61740 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:09:57.280518   61740 main.go:141] libmachine: Using API Version  1
	I0918 21:09:57.280536   61740 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:09:57.280662   61740 main.go:141] libmachine: Using API Version  1
	I0918 21:09:57.280699   61740 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:09:57.280899   61740 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:09:57.281014   61740 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:09:57.281224   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetState
	I0918 21:09:57.281401   61740 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45081
	I0918 21:09:57.281609   61740 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:09:57.281669   61740 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:09:57.281824   61740 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:09:57.282291   61740 main.go:141] libmachine: Using API Version  1
	I0918 21:09:57.282316   61740 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:09:57.282655   61740 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:09:57.283166   61740 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:09:57.283198   61740 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:09:57.284993   61740 addons.go:234] Setting addon default-storageclass=true in "embed-certs-255556"
	W0918 21:09:57.285013   61740 addons.go:243] addon default-storageclass should already be in state true
	I0918 21:09:57.285042   61740 host.go:66] Checking if "embed-certs-255556" exists ...
	I0918 21:09:57.285400   61740 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:09:57.285441   61740 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:09:57.298996   61740 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39709
	I0918 21:09:57.299572   61740 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:09:57.300427   61740 main.go:141] libmachine: Using API Version  1
	I0918 21:09:57.300453   61740 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:09:57.300865   61740 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:09:57.301062   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetState
	I0918 21:09:57.301827   61740 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40441
	I0918 21:09:57.302410   61740 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:09:57.302948   61740 main.go:141] libmachine: Using API Version  1
	I0918 21:09:57.302968   61740 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:09:57.303284   61740 main.go:141] libmachine: (embed-certs-255556) Calling .DriverName
	I0918 21:09:57.303333   61740 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:09:57.303512   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetState
	I0918 21:09:57.304409   61740 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43125
	I0918 21:09:57.304836   61740 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:09:57.305379   61740 main.go:141] libmachine: Using API Version  1
	I0918 21:09:57.305393   61740 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:09:57.305423   61740 main.go:141] libmachine: (embed-certs-255556) Calling .DriverName
	I0918 21:09:57.305449   61740 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 21:09:57.305705   61740 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:09:57.306221   61740 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:09:57.306270   61740 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:09:57.306972   61740 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0918 21:09:57.307226   61740 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0918 21:09:57.307247   61740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0918 21:09:57.307261   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHHostname
	I0918 21:09:57.308757   61740 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0918 21:09:57.308778   61740 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0918 21:09:57.308798   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHHostname
	I0918 21:09:57.311608   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:09:57.312311   61740 main.go:141] libmachine: (embed-certs-255556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:c2:b7", ip: ""} in network mk-embed-certs-255556: {Iface:virbr1 ExpiryTime:2024-09-18 22:04:39 +0000 UTC Type:0 Mac:52:54:00:e8:c2:b7 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:embed-certs-255556 Clientid:01:52:54:00:e8:c2:b7}
	I0918 21:09:57.312346   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined IP address 192.168.39.21 and MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:09:57.312529   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHPort
	I0918 21:09:57.313308   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:09:57.313344   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHKeyPath
	I0918 21:09:57.313533   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHUsername
	I0918 21:09:57.313707   61740 sshutil.go:53] new ssh client: &{IP:192.168.39.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/embed-certs-255556/id_rsa Username:docker}
	I0918 21:09:57.313964   61740 main.go:141] libmachine: (embed-certs-255556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:c2:b7", ip: ""} in network mk-embed-certs-255556: {Iface:virbr1 ExpiryTime:2024-09-18 22:04:39 +0000 UTC Type:0 Mac:52:54:00:e8:c2:b7 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:embed-certs-255556 Clientid:01:52:54:00:e8:c2:b7}
	I0918 21:09:57.313991   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined IP address 192.168.39.21 and MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:09:57.314181   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHPort
	I0918 21:09:57.314357   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHKeyPath
	I0918 21:09:57.314517   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHUsername
	I0918 21:09:57.314644   61740 sshutil.go:53] new ssh client: &{IP:192.168.39.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/embed-certs-255556/id_rsa Username:docker}
	I0918 21:09:57.325307   61740 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45641
	I0918 21:09:57.325800   61740 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:09:57.326390   61740 main.go:141] libmachine: Using API Version  1
	I0918 21:09:57.326416   61740 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:09:57.326850   61740 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:09:57.327116   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetState
	I0918 21:09:57.328954   61740 main.go:141] libmachine: (embed-certs-255556) Calling .DriverName
	I0918 21:09:57.329179   61740 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0918 21:09:57.329197   61740 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0918 21:09:57.329216   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHHostname
	I0918 21:09:57.332176   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:09:57.332608   61740 main.go:141] libmachine: (embed-certs-255556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:c2:b7", ip: ""} in network mk-embed-certs-255556: {Iface:virbr1 ExpiryTime:2024-09-18 22:04:39 +0000 UTC Type:0 Mac:52:54:00:e8:c2:b7 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:embed-certs-255556 Clientid:01:52:54:00:e8:c2:b7}
	I0918 21:09:57.332633   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined IP address 192.168.39.21 and MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:09:57.332803   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHPort
	I0918 21:09:57.332991   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHKeyPath
	I0918 21:09:57.333132   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHUsername
	I0918 21:09:57.333254   61740 sshutil.go:53] new ssh client: &{IP:192.168.39.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/embed-certs-255556/id_rsa Username:docker}
	I0918 21:09:57.463767   61740 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0918 21:09:57.480852   61740 node_ready.go:35] waiting up to 6m0s for node "embed-certs-255556" to be "Ready" ...
	I0918 21:09:57.492198   61740 node_ready.go:49] node "embed-certs-255556" has status "Ready":"True"
	I0918 21:09:57.492221   61740 node_ready.go:38] duration metric: took 11.335784ms for node "embed-certs-255556" to be "Ready" ...
	I0918 21:09:57.492229   61740 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0918 21:09:57.496607   61740 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-255556" in "kube-system" namespace to be "Ready" ...
	I0918 21:09:57.627581   61740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0918 21:09:57.631704   61740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0918 21:09:57.647778   61740 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0918 21:09:57.647799   61740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0918 21:09:57.686558   61740 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0918 21:09:57.686589   61740 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0918 21:09:57.726206   61740 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0918 21:09:57.726230   61740 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0918 21:09:57.831932   61740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0918 21:09:58.026530   61740 main.go:141] libmachine: Making call to close driver server
	I0918 21:09:58.026554   61740 main.go:141] libmachine: (embed-certs-255556) Calling .Close
	I0918 21:09:58.026862   61740 main.go:141] libmachine: Successfully made call to close driver server
	I0918 21:09:58.026885   61740 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 21:09:58.026895   61740 main.go:141] libmachine: Making call to close driver server
	I0918 21:09:58.026903   61740 main.go:141] libmachine: (embed-certs-255556) Calling .Close
	I0918 21:09:58.027205   61740 main.go:141] libmachine: Successfully made call to close driver server
	I0918 21:09:58.027260   61740 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 21:09:58.027269   61740 main.go:141] libmachine: (embed-certs-255556) DBG | Closing plugin on server side
	I0918 21:09:58.038140   61740 main.go:141] libmachine: Making call to close driver server
	I0918 21:09:58.038172   61740 main.go:141] libmachine: (embed-certs-255556) Calling .Close
	I0918 21:09:58.038506   61740 main.go:141] libmachine: Successfully made call to close driver server
	I0918 21:09:58.038555   61740 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 21:09:58.038512   61740 main.go:141] libmachine: (embed-certs-255556) DBG | Closing plugin on server side
	I0918 21:09:58.551479   61740 main.go:141] libmachine: Making call to close driver server
	I0918 21:09:58.551518   61740 main.go:141] libmachine: (embed-certs-255556) Calling .Close
	I0918 21:09:58.551851   61740 main.go:141] libmachine: Successfully made call to close driver server
	I0918 21:09:58.551870   61740 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 21:09:58.551885   61740 main.go:141] libmachine: Making call to close driver server
	I0918 21:09:58.551893   61740 main.go:141] libmachine: (embed-certs-255556) Calling .Close
	I0918 21:09:58.552242   61740 main.go:141] libmachine: Successfully made call to close driver server
	I0918 21:09:58.552307   61740 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 21:09:58.552326   61740 main.go:141] libmachine: (embed-certs-255556) DBG | Closing plugin on server side
	I0918 21:09:59.078469   61740 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.246485041s)
	I0918 21:09:59.078532   61740 main.go:141] libmachine: Making call to close driver server
	I0918 21:09:59.078550   61740 main.go:141] libmachine: (embed-certs-255556) Calling .Close
	I0918 21:09:59.078883   61740 main.go:141] libmachine: Successfully made call to close driver server
	I0918 21:09:59.078906   61740 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 21:09:59.078917   61740 main.go:141] libmachine: Making call to close driver server
	I0918 21:09:59.078924   61740 main.go:141] libmachine: (embed-certs-255556) Calling .Close
	I0918 21:09:59.079143   61740 main.go:141] libmachine: Successfully made call to close driver server
	I0918 21:09:59.079157   61740 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 21:09:59.079168   61740 addons.go:475] Verifying addon metrics-server=true in "embed-certs-255556"
	I0918 21:09:59.080861   61740 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0918 21:09:57.357619   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:59.357838   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:59.082145   61740 addons.go:510] duration metric: took 1.82098849s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0918 21:09:59.526424   61740 pod_ready.go:93] pod "etcd-embed-certs-255556" in "kube-system" namespace has status "Ready":"True"
	I0918 21:09:59.526445   61740 pod_ready.go:82] duration metric: took 2.02981732s for pod "etcd-embed-certs-255556" in "kube-system" namespace to be "Ready" ...
	I0918 21:09:59.526455   61740 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-255556" in "kube-system" namespace to be "Ready" ...
	I0918 21:10:00.033589   61740 pod_ready.go:93] pod "kube-apiserver-embed-certs-255556" in "kube-system" namespace has status "Ready":"True"
	I0918 21:10:00.033616   61740 pod_ready.go:82] duration metric: took 507.155125ms for pod "kube-apiserver-embed-certs-255556" in "kube-system" namespace to be "Ready" ...
	I0918 21:10:00.033630   61740 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-255556" in "kube-system" namespace to be "Ready" ...
	I0918 21:10:02.039884   61740 pod_ready.go:103] pod "kube-controller-manager-embed-certs-255556" in "kube-system" namespace has status "Ready":"False"
	I0918 21:10:04.040760   61740 pod_ready.go:103] pod "kube-controller-manager-embed-certs-255556" in "kube-system" namespace has status "Ready":"False"
	I0918 21:10:04.541799   61740 pod_ready.go:93] pod "kube-controller-manager-embed-certs-255556" in "kube-system" namespace has status "Ready":"True"
	I0918 21:10:04.541821   61740 pod_ready.go:82] duration metric: took 4.508184279s for pod "kube-controller-manager-embed-certs-255556" in "kube-system" namespace to be "Ready" ...
	I0918 21:10:04.541830   61740 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-255556" in "kube-system" namespace to be "Ready" ...
	I0918 21:10:04.550008   61740 pod_ready.go:93] pod "kube-scheduler-embed-certs-255556" in "kube-system" namespace has status "Ready":"True"
	I0918 21:10:04.550038   61740 pod_ready.go:82] duration metric: took 8.201765ms for pod "kube-scheduler-embed-certs-255556" in "kube-system" namespace to be "Ready" ...
	I0918 21:10:04.550046   61740 pod_ready.go:39] duration metric: took 7.057808243s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0918 21:10:04.550060   61740 api_server.go:52] waiting for apiserver process to appear ...
	I0918 21:10:04.550110   61740 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:10:04.566882   61740 api_server.go:72] duration metric: took 7.305767858s to wait for apiserver process to appear ...
	I0918 21:10:04.566914   61740 api_server.go:88] waiting for apiserver healthz status ...
	I0918 21:10:04.566937   61740 api_server.go:253] Checking apiserver healthz at https://192.168.39.21:8443/healthz ...
	I0918 21:10:04.571495   61740 api_server.go:279] https://192.168.39.21:8443/healthz returned 200:
	ok
	I0918 21:10:04.572590   61740 api_server.go:141] control plane version: v1.31.1
	I0918 21:10:04.572618   61740 api_server.go:131] duration metric: took 5.69747ms to wait for apiserver health ...
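	The healthz probe recorded just above can be reproduced by hand on the minikube node; a minimal sketch, assuming the kubectl binary and kubeconfig paths that appear elsewhere in this log are still in place on the VM:
	    # query the apiserver health endpoint with the same kubeconfig the test driver uses
	    sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get --raw /healthz
	    # a healthy control plane answers with the literal body: ok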
	I0918 21:10:04.572625   61740 system_pods.go:43] waiting for kube-system pods to appear ...
	I0918 21:10:04.578979   61740 system_pods.go:59] 9 kube-system pods found
	I0918 21:10:04.579019   61740 system_pods.go:61] "coredns-7c65d6cfc9-ptxbt" [798665e6-6f4a-4ba5-b4f9-3192d3f76f03] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0918 21:10:04.579030   61740 system_pods.go:61] "coredns-7c65d6cfc9-vgmtd" [1224ebf9-1b24-413a-b779-093acfcfb61e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0918 21:10:04.579039   61740 system_pods.go:61] "etcd-embed-certs-255556" [a796cd6d-5b0c-4f73-9dfe-23c552cb2a43] Running
	I0918 21:10:04.579046   61740 system_pods.go:61] "kube-apiserver-embed-certs-255556" [f1926542-940b-4ba9-94cf-23265d42874c] Running
	I0918 21:10:04.579051   61740 system_pods.go:61] "kube-controller-manager-embed-certs-255556" [cd0bc9bb-ca2c-435f-81d5-2d4ea9f32d85] Running
	I0918 21:10:04.579057   61740 system_pods.go:61] "kube-proxy-m7gxh" [47d72a32-7efc-4155-a890-0ddc620af6e0] Running
	I0918 21:10:04.579067   61740 system_pods.go:61] "kube-scheduler-embed-certs-255556" [091c79cd-bc76-42b3-9635-96fdbe3ecfff] Running
	I0918 21:10:04.579076   61740 system_pods.go:61] "metrics-server-6867b74b74-sr6hq" [8867f8fa-687b-4105-8ace-18af50195726] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0918 21:10:04.579085   61740 system_pods.go:61] "storage-provisioner" [dcc9b789-237c-4d92-96c6-2c23d2c401c0] Running
	I0918 21:10:04.579095   61740 system_pods.go:74] duration metric: took 6.462809ms to wait for pod list to return data ...
	I0918 21:10:04.579106   61740 default_sa.go:34] waiting for default service account to be created ...
	I0918 21:10:04.583020   61740 default_sa.go:45] found service account: "default"
	I0918 21:10:04.583059   61740 default_sa.go:55] duration metric: took 3.946388ms for default service account to be created ...
	I0918 21:10:04.583072   61740 system_pods.go:116] waiting for k8s-apps to be running ...
	I0918 21:10:04.589946   61740 system_pods.go:86] 9 kube-system pods found
	I0918 21:10:04.589991   61740 system_pods.go:89] "coredns-7c65d6cfc9-ptxbt" [798665e6-6f4a-4ba5-b4f9-3192d3f76f03] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0918 21:10:04.590004   61740 system_pods.go:89] "coredns-7c65d6cfc9-vgmtd" [1224ebf9-1b24-413a-b779-093acfcfb61e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0918 21:10:04.590012   61740 system_pods.go:89] "etcd-embed-certs-255556" [a796cd6d-5b0c-4f73-9dfe-23c552cb2a43] Running
	I0918 21:10:04.590019   61740 system_pods.go:89] "kube-apiserver-embed-certs-255556" [f1926542-940b-4ba9-94cf-23265d42874c] Running
	I0918 21:10:04.590025   61740 system_pods.go:89] "kube-controller-manager-embed-certs-255556" [cd0bc9bb-ca2c-435f-81d5-2d4ea9f32d85] Running
	I0918 21:10:04.590030   61740 system_pods.go:89] "kube-proxy-m7gxh" [47d72a32-7efc-4155-a890-0ddc620af6e0] Running
	I0918 21:10:04.590035   61740 system_pods.go:89] "kube-scheduler-embed-certs-255556" [091c79cd-bc76-42b3-9635-96fdbe3ecfff] Running
	I0918 21:10:04.590044   61740 system_pods.go:89] "metrics-server-6867b74b74-sr6hq" [8867f8fa-687b-4105-8ace-18af50195726] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0918 21:10:04.590051   61740 system_pods.go:89] "storage-provisioner" [dcc9b789-237c-4d92-96c6-2c23d2c401c0] Running
	I0918 21:10:04.590061   61740 system_pods.go:126] duration metric: took 6.981726ms to wait for k8s-apps to be running ...
	I0918 21:10:04.590070   61740 system_svc.go:44] waiting for kubelet service to be running ....
	I0918 21:10:04.590127   61740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0918 21:10:04.605893   61740 system_svc.go:56] duration metric: took 15.815591ms WaitForService to wait for kubelet
	I0918 21:10:04.605921   61740 kubeadm.go:582] duration metric: took 7.344815015s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0918 21:10:04.605939   61740 node_conditions.go:102] verifying NodePressure condition ...
	I0918 21:10:04.609551   61740 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0918 21:10:04.609577   61740 node_conditions.go:123] node cpu capacity is 2
	I0918 21:10:04.609588   61740 node_conditions.go:105] duration metric: took 3.645116ms to run NodePressure ...
	I0918 21:10:04.609598   61740 start.go:241] waiting for startup goroutines ...
	I0918 21:10:04.609605   61740 start.go:246] waiting for cluster config update ...
	I0918 21:10:04.609614   61740 start.go:255] writing updated cluster config ...
	I0918 21:10:04.609870   61740 ssh_runner.go:195] Run: rm -f paused
	I0918 21:10:04.664479   61740 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0918 21:10:04.666589   61740 out.go:177] * Done! kubectl is now configured to use "embed-certs-255556" cluster and "default" namespace by default
	I0918 21:10:01.858109   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:10:03.356912   61273 pod_ready.go:82] duration metric: took 4m0.006778464s for pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace to be "Ready" ...
	E0918 21:10:03.356944   61273 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0918 21:10:03.356952   61273 pod_ready.go:39] duration metric: took 4m0.807781101s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0918 21:10:03.356967   61273 api_server.go:52] waiting for apiserver process to appear ...
	I0918 21:10:03.356994   61273 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:10:03.357047   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:10:03.410066   61273 cri.go:89] found id: "a70652dce4d80c6fac020d63cff7054a835d183ac4b910157a5bf4eb8cabcaa1"
	I0918 21:10:03.410096   61273 cri.go:89] found id: ""
	I0918 21:10:03.410104   61273 logs.go:276] 1 containers: [a70652dce4d80c6fac020d63cff7054a835d183ac4b910157a5bf4eb8cabcaa1]
	I0918 21:10:03.410168   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:03.414236   61273 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:10:03.414309   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:10:03.449405   61273 cri.go:89] found id: "a913074a00723bc43f97ba625ebbdfded7dca1ce664ace9a13173adb96d2068f"
	I0918 21:10:03.449426   61273 cri.go:89] found id: ""
	I0918 21:10:03.449434   61273 logs.go:276] 1 containers: [a913074a00723bc43f97ba625ebbdfded7dca1ce664ace9a13173adb96d2068f]
	I0918 21:10:03.449492   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:03.453335   61273 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:10:03.453403   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:10:03.487057   61273 cri.go:89] found id: "76b9e08a213468abdd23d7b3998b55d6544e2fbae9205ec6076ef0cd7a80e7be"
	I0918 21:10:03.487081   61273 cri.go:89] found id: ""
	I0918 21:10:03.487089   61273 logs.go:276] 1 containers: [76b9e08a213468abdd23d7b3998b55d6544e2fbae9205ec6076ef0cd7a80e7be]
	I0918 21:10:03.487137   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:03.491027   61273 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:10:03.491101   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:10:03.529636   61273 cri.go:89] found id: "c372970fdf265433e1668377d0214de6608cb39ebfd706cfad9624cba2f9b481"
	I0918 21:10:03.529665   61273 cri.go:89] found id: ""
	I0918 21:10:03.529675   61273 logs.go:276] 1 containers: [c372970fdf265433e1668377d0214de6608cb39ebfd706cfad9624cba2f9b481]
	I0918 21:10:03.529738   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:03.535042   61273 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:10:03.535121   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:10:03.572913   61273 cri.go:89] found id: "0257280a0d21d52e2a996c2f717880bdb9d9f6fe90a4b82cc62222c941ba7d84"
	I0918 21:10:03.572942   61273 cri.go:89] found id: ""
	I0918 21:10:03.572952   61273 logs.go:276] 1 containers: [0257280a0d21d52e2a996c2f717880bdb9d9f6fe90a4b82cc62222c941ba7d84]
	I0918 21:10:03.573012   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:03.576945   61273 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:10:03.577021   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:10:03.612785   61273 cri.go:89] found id: "785dc83056153e5eeb22216ac7520f529b71f1b3ae42e9540a5c8930f1fe62c2"
	I0918 21:10:03.612805   61273 cri.go:89] found id: ""
	I0918 21:10:03.612812   61273 logs.go:276] 1 containers: [785dc83056153e5eeb22216ac7520f529b71f1b3ae42e9540a5c8930f1fe62c2]
	I0918 21:10:03.612868   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:03.616855   61273 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:10:03.616924   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:10:03.650330   61273 cri.go:89] found id: ""
	I0918 21:10:03.650359   61273 logs.go:276] 0 containers: []
	W0918 21:10:03.650370   61273 logs.go:278] No container was found matching "kindnet"
	I0918 21:10:03.650378   61273 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0918 21:10:03.650446   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0918 21:10:03.698078   61273 cri.go:89] found id: "b44d6f4b44928aa498c4375f783c165714b4a5959a1fa709712ad39a412d6d9f"
	I0918 21:10:03.698106   61273 cri.go:89] found id: "38c14df0554157969a97390590d6e9c46765fd6fc3e286f7d2e3094838e0aff5"
	I0918 21:10:03.698113   61273 cri.go:89] found id: ""
	I0918 21:10:03.698122   61273 logs.go:276] 2 containers: [b44d6f4b44928aa498c4375f783c165714b4a5959a1fa709712ad39a412d6d9f 38c14df0554157969a97390590d6e9c46765fd6fc3e286f7d2e3094838e0aff5]
	I0918 21:10:03.698184   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:03.702311   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:03.705974   61273 logs.go:123] Gathering logs for kubelet ...
	I0918 21:10:03.705996   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:10:03.771043   61273 logs.go:123] Gathering logs for kube-scheduler [c372970fdf265433e1668377d0214de6608cb39ebfd706cfad9624cba2f9b481] ...
	I0918 21:10:03.771097   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c372970fdf265433e1668377d0214de6608cb39ebfd706cfad9624cba2f9b481"
	I0918 21:10:03.813148   61273 logs.go:123] Gathering logs for storage-provisioner [b44d6f4b44928aa498c4375f783c165714b4a5959a1fa709712ad39a412d6d9f] ...
	I0918 21:10:03.813175   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b44d6f4b44928aa498c4375f783c165714b4a5959a1fa709712ad39a412d6d9f"
	I0918 21:10:03.864553   61273 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:10:03.864580   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:10:04.345484   61273 logs.go:123] Gathering logs for container status ...
	I0918 21:10:04.345531   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:10:04.390777   61273 logs.go:123] Gathering logs for dmesg ...
	I0918 21:10:04.390818   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:10:04.409877   61273 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:10:04.409918   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 21:10:04.536579   61273 logs.go:123] Gathering logs for kube-apiserver [a70652dce4d80c6fac020d63cff7054a835d183ac4b910157a5bf4eb8cabcaa1] ...
	I0918 21:10:04.536609   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a70652dce4d80c6fac020d63cff7054a835d183ac4b910157a5bf4eb8cabcaa1"
	I0918 21:10:04.595640   61273 logs.go:123] Gathering logs for etcd [a913074a00723bc43f97ba625ebbdfded7dca1ce664ace9a13173adb96d2068f] ...
	I0918 21:10:04.595680   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a913074a00723bc43f97ba625ebbdfded7dca1ce664ace9a13173adb96d2068f"
	I0918 21:10:04.642332   61273 logs.go:123] Gathering logs for coredns [76b9e08a213468abdd23d7b3998b55d6544e2fbae9205ec6076ef0cd7a80e7be] ...
	I0918 21:10:04.642377   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 76b9e08a213468abdd23d7b3998b55d6544e2fbae9205ec6076ef0cd7a80e7be"
	I0918 21:10:04.679525   61273 logs.go:123] Gathering logs for kube-proxy [0257280a0d21d52e2a996c2f717880bdb9d9f6fe90a4b82cc62222c941ba7d84] ...
	I0918 21:10:04.679551   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0257280a0d21d52e2a996c2f717880bdb9d9f6fe90a4b82cc62222c941ba7d84"
	I0918 21:10:04.721130   61273 logs.go:123] Gathering logs for kube-controller-manager [785dc83056153e5eeb22216ac7520f529b71f1b3ae42e9540a5c8930f1fe62c2] ...
	I0918 21:10:04.721164   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 785dc83056153e5eeb22216ac7520f529b71f1b3ae42e9540a5c8930f1fe62c2"
	I0918 21:10:04.789527   61273 logs.go:123] Gathering logs for storage-provisioner [38c14df0554157969a97390590d6e9c46765fd6fc3e286f7d2e3094838e0aff5] ...
	I0918 21:10:04.789558   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38c14df0554157969a97390590d6e9c46765fd6fc3e286f7d2e3094838e0aff5"
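	Each "Gathering logs for ..." pair above follows the same two-step pattern: resolve a container ID by name with crictl, then tail that container's log. A minimal sketch of the equivalent manual commands on the node (both commands are taken verbatim from the log lines above), using kube-apiserver as the example name:
	    # resolve the container ID for a component by name
	    ID=$(sudo crictl ps -a --quiet --name=kube-apiserver)
	    # tail the last 400 lines of that container's log, as the test driver does
	    sudo /usr/bin/crictl logs --tail 400 "$ID"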
	I0918 21:10:07.334989   61273 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:10:07.352382   61273 api_server.go:72] duration metric: took 4m12.031791528s to wait for apiserver process to appear ...
	I0918 21:10:07.352411   61273 api_server.go:88] waiting for apiserver healthz status ...
	I0918 21:10:07.352446   61273 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:10:07.352494   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:10:07.404709   61273 cri.go:89] found id: "a70652dce4d80c6fac020d63cff7054a835d183ac4b910157a5bf4eb8cabcaa1"
	I0918 21:10:07.404739   61273 cri.go:89] found id: ""
	I0918 21:10:07.404748   61273 logs.go:276] 1 containers: [a70652dce4d80c6fac020d63cff7054a835d183ac4b910157a5bf4eb8cabcaa1]
	I0918 21:10:07.404815   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:07.409205   61273 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:10:07.409273   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:10:07.450409   61273 cri.go:89] found id: "a913074a00723bc43f97ba625ebbdfded7dca1ce664ace9a13173adb96d2068f"
	I0918 21:10:07.450429   61273 cri.go:89] found id: ""
	I0918 21:10:07.450438   61273 logs.go:276] 1 containers: [a913074a00723bc43f97ba625ebbdfded7dca1ce664ace9a13173adb96d2068f]
	I0918 21:10:07.450498   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:07.454623   61273 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:10:07.454692   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:10:07.498344   61273 cri.go:89] found id: "76b9e08a213468abdd23d7b3998b55d6544e2fbae9205ec6076ef0cd7a80e7be"
	I0918 21:10:07.498370   61273 cri.go:89] found id: ""
	I0918 21:10:07.498379   61273 logs.go:276] 1 containers: [76b9e08a213468abdd23d7b3998b55d6544e2fbae9205ec6076ef0cd7a80e7be]
	I0918 21:10:07.498443   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:07.503900   61273 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:10:07.503986   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:10:07.543438   61273 cri.go:89] found id: "c372970fdf265433e1668377d0214de6608cb39ebfd706cfad9624cba2f9b481"
	I0918 21:10:07.543469   61273 cri.go:89] found id: ""
	I0918 21:10:07.543478   61273 logs.go:276] 1 containers: [c372970fdf265433e1668377d0214de6608cb39ebfd706cfad9624cba2f9b481]
	I0918 21:10:07.543538   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:07.548439   61273 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:10:07.548518   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:10:07.592109   61273 cri.go:89] found id: "0257280a0d21d52e2a996c2f717880bdb9d9f6fe90a4b82cc62222c941ba7d84"
	I0918 21:10:07.592140   61273 cri.go:89] found id: ""
	I0918 21:10:07.592150   61273 logs.go:276] 1 containers: [0257280a0d21d52e2a996c2f717880bdb9d9f6fe90a4b82cc62222c941ba7d84]
	I0918 21:10:07.592202   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:07.596127   61273 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:10:07.596200   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:10:07.630588   61273 cri.go:89] found id: "785dc83056153e5eeb22216ac7520f529b71f1b3ae42e9540a5c8930f1fe62c2"
	I0918 21:10:07.630623   61273 cri.go:89] found id: ""
	I0918 21:10:07.630633   61273 logs.go:276] 1 containers: [785dc83056153e5eeb22216ac7520f529b71f1b3ae42e9540a5c8930f1fe62c2]
	I0918 21:10:07.630699   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:07.635130   61273 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:10:07.635214   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:10:07.672446   61273 cri.go:89] found id: ""
	I0918 21:10:07.672475   61273 logs.go:276] 0 containers: []
	W0918 21:10:07.672487   61273 logs.go:278] No container was found matching "kindnet"
	I0918 21:10:07.672494   61273 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0918 21:10:07.672554   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0918 21:10:07.710660   61273 cri.go:89] found id: "b44d6f4b44928aa498c4375f783c165714b4a5959a1fa709712ad39a412d6d9f"
	I0918 21:10:07.710693   61273 cri.go:89] found id: "38c14df0554157969a97390590d6e9c46765fd6fc3e286f7d2e3094838e0aff5"
	I0918 21:10:07.710700   61273 cri.go:89] found id: ""
	I0918 21:10:07.710709   61273 logs.go:276] 2 containers: [b44d6f4b44928aa498c4375f783c165714b4a5959a1fa709712ad39a412d6d9f 38c14df0554157969a97390590d6e9c46765fd6fc3e286f7d2e3094838e0aff5]
	I0918 21:10:07.710761   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:07.714772   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:07.718402   61273 logs.go:123] Gathering logs for etcd [a913074a00723bc43f97ba625ebbdfded7dca1ce664ace9a13173adb96d2068f] ...
	I0918 21:10:07.718423   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a913074a00723bc43f97ba625ebbdfded7dca1ce664ace9a13173adb96d2068f"
	I0918 21:10:07.756682   61273 logs.go:123] Gathering logs for coredns [76b9e08a213468abdd23d7b3998b55d6544e2fbae9205ec6076ef0cd7a80e7be] ...
	I0918 21:10:07.756717   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 76b9e08a213468abdd23d7b3998b55d6544e2fbae9205ec6076ef0cd7a80e7be"
	I0918 21:10:07.792784   61273 logs.go:123] Gathering logs for kube-proxy [0257280a0d21d52e2a996c2f717880bdb9d9f6fe90a4b82cc62222c941ba7d84] ...
	I0918 21:10:07.792813   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0257280a0d21d52e2a996c2f717880bdb9d9f6fe90a4b82cc62222c941ba7d84"
	I0918 21:10:07.829746   61273 logs.go:123] Gathering logs for kube-controller-manager [785dc83056153e5eeb22216ac7520f529b71f1b3ae42e9540a5c8930f1fe62c2] ...
	I0918 21:10:07.829779   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 785dc83056153e5eeb22216ac7520f529b71f1b3ae42e9540a5c8930f1fe62c2"
	I0918 21:10:07.882151   61273 logs.go:123] Gathering logs for storage-provisioner [38c14df0554157969a97390590d6e9c46765fd6fc3e286f7d2e3094838e0aff5] ...
	I0918 21:10:07.882190   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38c14df0554157969a97390590d6e9c46765fd6fc3e286f7d2e3094838e0aff5"
	I0918 21:10:07.921948   61273 logs.go:123] Gathering logs for container status ...
	I0918 21:10:07.921973   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:10:07.969080   61273 logs.go:123] Gathering logs for kubelet ...
	I0918 21:10:07.969110   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:10:08.036341   61273 logs.go:123] Gathering logs for dmesg ...
	I0918 21:10:08.036376   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:10:08.050690   61273 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:10:08.050722   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 21:10:08.177111   61273 logs.go:123] Gathering logs for kube-apiserver [a70652dce4d80c6fac020d63cff7054a835d183ac4b910157a5bf4eb8cabcaa1] ...
	I0918 21:10:08.177154   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a70652dce4d80c6fac020d63cff7054a835d183ac4b910157a5bf4eb8cabcaa1"
	I0918 21:10:08.224169   61273 logs.go:123] Gathering logs for kube-scheduler [c372970fdf265433e1668377d0214de6608cb39ebfd706cfad9624cba2f9b481] ...
	I0918 21:10:08.224203   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c372970fdf265433e1668377d0214de6608cb39ebfd706cfad9624cba2f9b481"
	I0918 21:10:08.264412   61273 logs.go:123] Gathering logs for storage-provisioner [b44d6f4b44928aa498c4375f783c165714b4a5959a1fa709712ad39a412d6d9f] ...
	I0918 21:10:08.264437   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b44d6f4b44928aa498c4375f783c165714b4a5959a1fa709712ad39a412d6d9f"
	I0918 21:10:08.309190   61273 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:10:08.309215   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:10:11.209439   61273 api_server.go:253] Checking apiserver healthz at https://192.168.61.31:8443/healthz ...
	I0918 21:10:11.214345   61273 api_server.go:279] https://192.168.61.31:8443/healthz returned 200:
	ok
	I0918 21:10:11.215424   61273 api_server.go:141] control plane version: v1.31.1
	I0918 21:10:11.215446   61273 api_server.go:131] duration metric: took 3.863027585s to wait for apiserver health ...
	I0918 21:10:11.215456   61273 system_pods.go:43] waiting for kube-system pods to appear ...
	I0918 21:10:11.215485   61273 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:10:11.215545   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:10:11.251158   61273 cri.go:89] found id: "a70652dce4d80c6fac020d63cff7054a835d183ac4b910157a5bf4eb8cabcaa1"
	I0918 21:10:11.251182   61273 cri.go:89] found id: ""
	I0918 21:10:11.251190   61273 logs.go:276] 1 containers: [a70652dce4d80c6fac020d63cff7054a835d183ac4b910157a5bf4eb8cabcaa1]
	I0918 21:10:11.251246   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:11.255090   61273 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:10:11.255177   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:10:11.290504   61273 cri.go:89] found id: "a913074a00723bc43f97ba625ebbdfded7dca1ce664ace9a13173adb96d2068f"
	I0918 21:10:11.290526   61273 cri.go:89] found id: ""
	I0918 21:10:11.290534   61273 logs.go:276] 1 containers: [a913074a00723bc43f97ba625ebbdfded7dca1ce664ace9a13173adb96d2068f]
	I0918 21:10:11.290593   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:11.295141   61273 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:10:11.295224   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:10:11.340273   61273 cri.go:89] found id: "76b9e08a213468abdd23d7b3998b55d6544e2fbae9205ec6076ef0cd7a80e7be"
	I0918 21:10:11.340300   61273 cri.go:89] found id: ""
	I0918 21:10:11.340310   61273 logs.go:276] 1 containers: [76b9e08a213468abdd23d7b3998b55d6544e2fbae9205ec6076ef0cd7a80e7be]
	I0918 21:10:11.340362   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:11.344823   61273 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:10:11.344903   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:10:11.384145   61273 cri.go:89] found id: "c372970fdf265433e1668377d0214de6608cb39ebfd706cfad9624cba2f9b481"
	I0918 21:10:11.384172   61273 cri.go:89] found id: ""
	I0918 21:10:11.384187   61273 logs.go:276] 1 containers: [c372970fdf265433e1668377d0214de6608cb39ebfd706cfad9624cba2f9b481]
	I0918 21:10:11.384251   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:11.388594   61273 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:10:11.388673   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:10:11.434881   61273 cri.go:89] found id: "0257280a0d21d52e2a996c2f717880bdb9d9f6fe90a4b82cc62222c941ba7d84"
	I0918 21:10:11.434915   61273 cri.go:89] found id: ""
	I0918 21:10:11.434925   61273 logs.go:276] 1 containers: [0257280a0d21d52e2a996c2f717880bdb9d9f6fe90a4b82cc62222c941ba7d84]
	I0918 21:10:11.434984   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:11.439048   61273 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:10:11.439124   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:10:11.474786   61273 cri.go:89] found id: "785dc83056153e5eeb22216ac7520f529b71f1b3ae42e9540a5c8930f1fe62c2"
	I0918 21:10:11.474812   61273 cri.go:89] found id: ""
	I0918 21:10:11.474820   61273 logs.go:276] 1 containers: [785dc83056153e5eeb22216ac7520f529b71f1b3ae42e9540a5c8930f1fe62c2]
	I0918 21:10:11.474871   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:11.478907   61273 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:10:11.478961   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:10:11.521522   61273 cri.go:89] found id: ""
	I0918 21:10:11.521550   61273 logs.go:276] 0 containers: []
	W0918 21:10:11.521561   61273 logs.go:278] No container was found matching "kindnet"
	I0918 21:10:11.521568   61273 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0918 21:10:11.521642   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0918 21:10:11.560406   61273 cri.go:89] found id: "b44d6f4b44928aa498c4375f783c165714b4a5959a1fa709712ad39a412d6d9f"
	I0918 21:10:11.560428   61273 cri.go:89] found id: "38c14df0554157969a97390590d6e9c46765fd6fc3e286f7d2e3094838e0aff5"
	I0918 21:10:11.560432   61273 cri.go:89] found id: ""
	I0918 21:10:11.560439   61273 logs.go:276] 2 containers: [b44d6f4b44928aa498c4375f783c165714b4a5959a1fa709712ad39a412d6d9f 38c14df0554157969a97390590d6e9c46765fd6fc3e286f7d2e3094838e0aff5]
	I0918 21:10:11.560489   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:11.564559   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:11.568380   61273 logs.go:123] Gathering logs for kube-proxy [0257280a0d21d52e2a996c2f717880bdb9d9f6fe90a4b82cc62222c941ba7d84] ...
	I0918 21:10:11.568405   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0257280a0d21d52e2a996c2f717880bdb9d9f6fe90a4b82cc62222c941ba7d84"
	I0918 21:10:11.614927   61273 logs.go:123] Gathering logs for kube-controller-manager [785dc83056153e5eeb22216ac7520f529b71f1b3ae42e9540a5c8930f1fe62c2] ...
	I0918 21:10:11.614959   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 785dc83056153e5eeb22216ac7520f529b71f1b3ae42e9540a5c8930f1fe62c2"
	I0918 21:10:11.668337   61273 logs.go:123] Gathering logs for storage-provisioner [b44d6f4b44928aa498c4375f783c165714b4a5959a1fa709712ad39a412d6d9f] ...
	I0918 21:10:11.668372   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b44d6f4b44928aa498c4375f783c165714b4a5959a1fa709712ad39a412d6d9f"
	I0918 21:10:11.705574   61273 logs.go:123] Gathering logs for kubelet ...
	I0918 21:10:11.705604   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:10:11.772691   61273 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:10:11.772731   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 21:10:11.885001   61273 logs.go:123] Gathering logs for etcd [a913074a00723bc43f97ba625ebbdfded7dca1ce664ace9a13173adb96d2068f] ...
	I0918 21:10:11.885043   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a913074a00723bc43f97ba625ebbdfded7dca1ce664ace9a13173adb96d2068f"
	I0918 21:10:11.929585   61273 logs.go:123] Gathering logs for coredns [76b9e08a213468abdd23d7b3998b55d6544e2fbae9205ec6076ef0cd7a80e7be] ...
	I0918 21:10:11.929623   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 76b9e08a213468abdd23d7b3998b55d6544e2fbae9205ec6076ef0cd7a80e7be"
	I0918 21:10:11.967540   61273 logs.go:123] Gathering logs for kube-scheduler [c372970fdf265433e1668377d0214de6608cb39ebfd706cfad9624cba2f9b481] ...
	I0918 21:10:11.967566   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c372970fdf265433e1668377d0214de6608cb39ebfd706cfad9624cba2f9b481"
	I0918 21:10:12.007037   61273 logs.go:123] Gathering logs for storage-provisioner [38c14df0554157969a97390590d6e9c46765fd6fc3e286f7d2e3094838e0aff5] ...
	I0918 21:10:12.007076   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38c14df0554157969a97390590d6e9c46765fd6fc3e286f7d2e3094838e0aff5"
	I0918 21:10:12.045764   61273 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:10:12.045805   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:10:12.434993   61273 logs.go:123] Gathering logs for dmesg ...
	I0918 21:10:12.435042   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:10:12.449422   61273 logs.go:123] Gathering logs for kube-apiserver [a70652dce4d80c6fac020d63cff7054a835d183ac4b910157a5bf4eb8cabcaa1] ...
	I0918 21:10:12.449453   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a70652dce4d80c6fac020d63cff7054a835d183ac4b910157a5bf4eb8cabcaa1"
	I0918 21:10:12.500491   61273 logs.go:123] Gathering logs for container status ...
	I0918 21:10:12.500522   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:10:15.053164   61273 system_pods.go:59] 8 kube-system pods found
	I0918 21:10:15.053203   61273 system_pods.go:61] "coredns-7c65d6cfc9-dgnw2" [085d5a98-0a61-4678-830f-384780a0d7ef] Running
	I0918 21:10:15.053211   61273 system_pods.go:61] "etcd-no-preload-331658" [d6a53ff4-fcf0-46c0-9e77-9af6d0363e08] Running
	I0918 21:10:15.053218   61273 system_pods.go:61] "kube-apiserver-no-preload-331658" [1953598f-4c3f-4a98-ab75-b8ca1b8093ad] Running
	I0918 21:10:15.053223   61273 system_pods.go:61] "kube-controller-manager-no-preload-331658" [8fc655be-a192-4250-a31d-2e533a2b5e41] Running
	I0918 21:10:15.053228   61273 system_pods.go:61] "kube-proxy-hx25w" [a26512ff-f695-4452-8974-577479257160] Running
	I0918 21:10:15.053232   61273 system_pods.go:61] "kube-scheduler-no-preload-331658" [64e5347e-e7fd-4a0d-ae41-539c3989c29e] Running
	I0918 21:10:15.053243   61273 system_pods.go:61] "metrics-server-6867b74b74-n27vc" [b1de76ec-8987-49ce-ae66-eedda2705cde] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0918 21:10:15.053254   61273 system_pods.go:61] "storage-provisioner" [e110aeb3-e9bc-4bb9-9b49-5579558bdda2] Running
	I0918 21:10:15.053264   61273 system_pods.go:74] duration metric: took 3.837800115s to wait for pod list to return data ...
	I0918 21:10:15.053273   61273 default_sa.go:34] waiting for default service account to be created ...
	I0918 21:10:15.056865   61273 default_sa.go:45] found service account: "default"
	I0918 21:10:15.056900   61273 default_sa.go:55] duration metric: took 3.619144ms for default service account to be created ...
	I0918 21:10:15.056912   61273 system_pods.go:116] waiting for k8s-apps to be running ...
	I0918 21:10:15.061835   61273 system_pods.go:86] 8 kube-system pods found
	I0918 21:10:15.061864   61273 system_pods.go:89] "coredns-7c65d6cfc9-dgnw2" [085d5a98-0a61-4678-830f-384780a0d7ef] Running
	I0918 21:10:15.061870   61273 system_pods.go:89] "etcd-no-preload-331658" [d6a53ff4-fcf0-46c0-9e77-9af6d0363e08] Running
	I0918 21:10:15.061875   61273 system_pods.go:89] "kube-apiserver-no-preload-331658" [1953598f-4c3f-4a98-ab75-b8ca1b8093ad] Running
	I0918 21:10:15.061880   61273 system_pods.go:89] "kube-controller-manager-no-preload-331658" [8fc655be-a192-4250-a31d-2e533a2b5e41] Running
	I0918 21:10:15.061884   61273 system_pods.go:89] "kube-proxy-hx25w" [a26512ff-f695-4452-8974-577479257160] Running
	I0918 21:10:15.061888   61273 system_pods.go:89] "kube-scheduler-no-preload-331658" [64e5347e-e7fd-4a0d-ae41-539c3989c29e] Running
	I0918 21:10:15.061894   61273 system_pods.go:89] "metrics-server-6867b74b74-n27vc" [b1de76ec-8987-49ce-ae66-eedda2705cde] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0918 21:10:15.061898   61273 system_pods.go:89] "storage-provisioner" [e110aeb3-e9bc-4bb9-9b49-5579558bdda2] Running
	I0918 21:10:15.061906   61273 system_pods.go:126] duration metric: took 4.987508ms to wait for k8s-apps to be running ...
	I0918 21:10:15.061912   61273 system_svc.go:44] waiting for kubelet service to be running ....
	I0918 21:10:15.061966   61273 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0918 21:10:15.079834   61273 system_svc.go:56] duration metric: took 17.908997ms WaitForService to wait for kubelet
	I0918 21:10:15.079875   61273 kubeadm.go:582] duration metric: took 4m19.759287892s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0918 21:10:15.079897   61273 node_conditions.go:102] verifying NodePressure condition ...
	I0918 21:10:15.083307   61273 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0918 21:10:15.083390   61273 node_conditions.go:123] node cpu capacity is 2
	I0918 21:10:15.083407   61273 node_conditions.go:105] duration metric: took 3.503352ms to run NodePressure ...
	I0918 21:10:15.083421   61273 start.go:241] waiting for startup goroutines ...
	I0918 21:10:15.083431   61273 start.go:246] waiting for cluster config update ...
	I0918 21:10:15.083444   61273 start.go:255] writing updated cluster config ...
	I0918 21:10:15.083788   61273 ssh_runner.go:195] Run: rm -f paused
	I0918 21:10:15.139144   61273 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0918 21:10:15.141198   61273 out.go:177] * Done! kubectl is now configured to use "no-preload-331658" cluster and "default" namespace by default
	I0918 21:11:17.441368   62061 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0918 21:11:17.441607   62061 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0918 21:11:17.442921   62061 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0918 21:11:17.443036   62061 kubeadm.go:310] [preflight] Running pre-flight checks
	I0918 21:11:17.443221   62061 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0918 21:11:17.443500   62061 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0918 21:11:17.443818   62061 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0918 21:11:17.444099   62061 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0918 21:11:17.446858   62061 out.go:235]   - Generating certificates and keys ...
	I0918 21:11:17.446965   62061 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0918 21:11:17.447048   62061 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0918 21:11:17.447160   62061 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0918 21:11:17.447248   62061 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0918 21:11:17.447349   62061 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0918 21:11:17.447425   62061 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0918 21:11:17.447507   62061 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0918 21:11:17.447587   62061 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0918 21:11:17.447742   62061 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0918 21:11:17.447847   62061 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0918 21:11:17.447911   62061 kubeadm.go:310] [certs] Using the existing "sa" key
	I0918 21:11:17.447984   62061 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0918 21:11:17.448085   62061 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0918 21:11:17.448163   62061 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0918 21:11:17.448255   62061 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0918 21:11:17.448339   62061 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0918 21:11:17.448486   62061 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0918 21:11:17.448590   62061 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0918 21:11:17.448645   62061 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0918 21:11:17.448733   62061 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0918 21:11:17.450203   62061 out.go:235]   - Booting up control plane ...
	I0918 21:11:17.450299   62061 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0918 21:11:17.450434   62061 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0918 21:11:17.450506   62061 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0918 21:11:17.450578   62061 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0918 21:11:17.450752   62061 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0918 21:11:17.450805   62061 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0918 21:11:17.450863   62061 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0918 21:11:17.451034   62061 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0918 21:11:17.451090   62061 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0918 21:11:17.451259   62061 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0918 21:11:17.451325   62061 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0918 21:11:17.451498   62061 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0918 21:11:17.451562   62061 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0918 21:11:17.451765   62061 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0918 21:11:17.451845   62061 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0918 21:11:17.452027   62061 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0918 21:11:17.452041   62061 kubeadm.go:310] 
	I0918 21:11:17.452083   62061 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0918 21:11:17.452118   62061 kubeadm.go:310] 		timed out waiting for the condition
	I0918 21:11:17.452127   62061 kubeadm.go:310] 
	I0918 21:11:17.452160   62061 kubeadm.go:310] 	This error is likely caused by:
	I0918 21:11:17.452189   62061 kubeadm.go:310] 		- The kubelet is not running
	I0918 21:11:17.452282   62061 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0918 21:11:17.452289   62061 kubeadm.go:310] 
	I0918 21:11:17.452385   62061 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0918 21:11:17.452425   62061 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0918 21:11:17.452473   62061 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0918 21:11:17.452482   62061 kubeadm.go:310] 
	I0918 21:11:17.452578   62061 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0918 21:11:17.452649   62061 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0918 21:11:17.452656   62061 kubeadm.go:310] 
	I0918 21:11:17.452770   62061 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0918 21:11:17.452849   62061 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0918 21:11:17.452920   62061 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0918 21:11:17.452983   62061 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0918 21:11:17.453029   62061 kubeadm.go:310] 
	W0918 21:11:17.453100   62061 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0918 21:11:17.453139   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0918 21:11:17.917211   62061 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0918 21:11:17.933265   62061 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0918 21:11:17.943909   62061 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0918 21:11:17.943937   62061 kubeadm.go:157] found existing configuration files:
	
	I0918 21:11:17.943980   62061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0918 21:11:17.953745   62061 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0918 21:11:17.953817   62061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0918 21:11:17.964411   62061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0918 21:11:17.974601   62061 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0918 21:11:17.974661   62061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0918 21:11:17.984631   62061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0918 21:11:17.994280   62061 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0918 21:11:17.994341   62061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0918 21:11:18.004341   62061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0918 21:11:18.014066   62061 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0918 21:11:18.014127   62061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
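	The four grep/rm pairs above are one stale-config check per kubeconfig file: if the expected control-plane URL is not found (here the files are simply absent, so grep exits with status 2), the file is removed before kubeadm init is retried. A minimal sketch of that loop, assuming the same paths shown in the log:
	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	      sudo grep -q https://control-plane.minikube.internal:8443 /etc/kubernetes/$f \
	        || sudo rm -f /etc/kubernetes/$f
	    done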
	I0918 21:11:18.023592   62061 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0918 21:11:18.098831   62061 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0918 21:11:18.098908   62061 kubeadm.go:310] [preflight] Running pre-flight checks
	I0918 21:11:18.254904   62061 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0918 21:11:18.255012   62061 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0918 21:11:18.255102   62061 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0918 21:11:18.444152   62061 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0918 21:11:18.445967   62061 out.go:235]   - Generating certificates and keys ...
	I0918 21:11:18.446082   62061 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0918 21:11:18.446185   62061 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0918 21:11:18.446316   62061 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0918 21:11:18.446403   62061 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0918 21:11:18.446508   62061 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0918 21:11:18.446600   62061 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0918 21:11:18.446706   62061 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0918 21:11:18.446767   62061 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0918 21:11:18.446852   62061 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0918 21:11:18.446936   62061 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0918 21:11:18.446971   62061 kubeadm.go:310] [certs] Using the existing "sa" key
	I0918 21:11:18.447025   62061 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0918 21:11:18.621414   62061 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0918 21:11:18.973691   62061 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0918 21:11:19.177522   62061 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0918 21:11:19.351312   62061 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0918 21:11:19.375169   62061 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0918 21:11:19.376274   62061 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0918 21:11:19.376351   62061 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0918 21:11:19.505030   62061 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0918 21:11:19.507065   62061 out.go:235]   - Booting up control plane ...
	I0918 21:11:19.507197   62061 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0918 21:11:19.519523   62061 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0918 21:11:19.520747   62061 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0918 21:11:19.522723   62061 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0918 21:11:19.524851   62061 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0918 21:11:59.527084   62061 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0918 21:11:59.527421   62061 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0918 21:11:59.527674   62061 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0918 21:12:04.528288   62061 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0918 21:12:04.528530   62061 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0918 21:12:14.529180   62061 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0918 21:12:14.529367   62061 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0918 21:12:34.530066   62061 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0918 21:12:34.530335   62061 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0918 21:13:14.529030   62061 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0918 21:13:14.529305   62061 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0918 21:13:14.529326   62061 kubeadm.go:310] 
	I0918 21:13:14.529374   62061 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0918 21:13:14.529447   62061 kubeadm.go:310] 		timed out waiting for the condition
	I0918 21:13:14.529466   62061 kubeadm.go:310] 
	I0918 21:13:14.529521   62061 kubeadm.go:310] 	This error is likely caused by:
	I0918 21:13:14.529569   62061 kubeadm.go:310] 		- The kubelet is not running
	I0918 21:13:14.529716   62061 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0918 21:13:14.529726   62061 kubeadm.go:310] 
	I0918 21:13:14.529866   62061 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0918 21:13:14.529917   62061 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0918 21:13:14.529967   62061 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0918 21:13:14.529976   62061 kubeadm.go:310] 
	I0918 21:13:14.530120   62061 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0918 21:13:14.530220   62061 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0918 21:13:14.530230   62061 kubeadm.go:310] 
	I0918 21:13:14.530352   62061 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0918 21:13:14.530480   62061 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0918 21:13:14.530579   62061 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0918 21:13:14.530678   62061 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0918 21:13:14.530688   62061 kubeadm.go:310] 
	I0918 21:13:14.531356   62061 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0918 21:13:14.531463   62061 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0918 21:13:14.531520   62061 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0918 21:13:14.531592   62061 kubeadm.go:394] duration metric: took 7m56.917405378s to StartCluster
	I0918 21:13:14.531633   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:13:14.531689   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:13:14.578851   62061 cri.go:89] found id: ""
	I0918 21:13:14.578883   62061 logs.go:276] 0 containers: []
	W0918 21:13:14.578893   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:13:14.578901   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:13:14.578960   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:13:14.627137   62061 cri.go:89] found id: ""
	I0918 21:13:14.627168   62061 logs.go:276] 0 containers: []
	W0918 21:13:14.627179   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:13:14.627187   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:13:14.627245   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:13:14.670678   62061 cri.go:89] found id: ""
	I0918 21:13:14.670707   62061 logs.go:276] 0 containers: []
	W0918 21:13:14.670717   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:13:14.670724   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:13:14.670788   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:13:14.709610   62061 cri.go:89] found id: ""
	I0918 21:13:14.709641   62061 logs.go:276] 0 containers: []
	W0918 21:13:14.709651   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:13:14.709658   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:13:14.709722   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:13:14.743492   62061 cri.go:89] found id: ""
	I0918 21:13:14.743522   62061 logs.go:276] 0 containers: []
	W0918 21:13:14.743534   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:13:14.743540   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:13:14.743601   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:13:14.777577   62061 cri.go:89] found id: ""
	I0918 21:13:14.777602   62061 logs.go:276] 0 containers: []
	W0918 21:13:14.777610   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:13:14.777616   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:13:14.777679   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:13:14.815884   62061 cri.go:89] found id: ""
	I0918 21:13:14.815913   62061 logs.go:276] 0 containers: []
	W0918 21:13:14.815922   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:13:14.815937   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:13:14.815997   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:13:14.855426   62061 cri.go:89] found id: ""
	I0918 21:13:14.855456   62061 logs.go:276] 0 containers: []
	W0918 21:13:14.855464   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
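The enumeration above follows one pattern per component: ask crictl for all container IDs whose name matches the component, and warn when the list comes back empty, which is what produces the "No container was found matching ..." lines. A minimal, hypothetical Go sketch of that loop (not the minikube source; it assumes crictl is on PATH and the default CRI endpoint):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	components := []string{
    		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
    	}
    	for _, name := range components {
    		// List all (including exited) containers whose name matches the component.
    		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    		if err != nil {
    			fmt.Println("crictl failed:", err)
    			continue
    		}
    		ids := strings.Fields(string(out))
    		if len(ids) == 0 {
    			fmt.Printf("No container was found matching %q\n", name)
    			continue
    		}
    		fmt.Printf("%s: %d container(s): %v\n", name, len(ids), ids)
    	}
    }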
	I0918 21:13:14.855473   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:13:14.855484   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:13:14.868812   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:13:14.868843   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:13:14.955093   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:13:14.955120   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:13:14.955151   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:13:15.064722   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:13:15.064760   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:13:15.105430   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:13:15.105466   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0918 21:13:15.157878   62061 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0918 21:13:15.157956   62061 out.go:270] * 
	W0918 21:13:15.158036   62061 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0918 21:13:15.158052   62061 out.go:270] * 
	W0918 21:13:15.158934   62061 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0918 21:13:15.161604   62061 out.go:201] 
	W0918 21:13:15.162606   62061 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0918 21:13:15.162664   62061 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0918 21:13:15.162685   62061 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0918 21:13:15.163831   62061 out.go:201] 
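Before the CRI-O journal that follows, note what the repeated [kubelet-check] failures above actually are: kubeadm polling the kubelet's healthz endpoint on localhost:10248 and getting connection refused because the kubelet never came up. A minimal, hypothetical Go sketch of such a probe loop (not kubeadm's implementation; the retry count and timeouts are assumptions):

    package main

    import (
    	"fmt"
    	"net/http"
    	"time"
    )

    // kubeletHealthy probes the kubelet healthz endpoint once, the same URL the
    // [kubelet-check] lines above report as refusing connections.
    func kubeletHealthy() bool {
    	client := &http.Client{Timeout: 2 * time.Second}
    	resp, err := client.Get("http://localhost:10248/healthz")
    	if err != nil {
    		// e.g. "dial tcp 127.0.0.1:10248: connect: connection refused"
    		fmt.Println("kubelet not healthy:", err)
    		return false
    	}
    	defer resp.Body.Close()
    	return resp.StatusCode == http.StatusOK
    }

    func main() {
    	for i := 0; i < 5; i++ {
    		if kubeletHealthy() {
    			fmt.Println("kubelet is healthy")
    			return
    		}
    		time.Sleep(5 * time.Second)
    	}
    	fmt.Println("timed out waiting for the kubelet healthz endpoint")
    }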
	
	
	==> CRI-O <==
	Sep 18 21:18:41 default-k8s-diff-port-828868 crio[712]: time="2024-09-18 21:18:41.825913541Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726694321825887713,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0c559d0f-995e-4dae-bfbe-ad0198b7d317 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 21:18:41 default-k8s-diff-port-828868 crio[712]: time="2024-09-18 21:18:41.826477991Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6d750cf1-62e8-4b79-a8e4-620d50eead77 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 21:18:41 default-k8s-diff-port-828868 crio[712]: time="2024-09-18 21:18:41.826532614Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6d750cf1-62e8-4b79-a8e4-620d50eead77 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 21:18:41 default-k8s-diff-port-828868 crio[712]: time="2024-09-18 21:18:41.826770137Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0d0b97e9f72af5a0bd052549df0f2c3ea4d140c9ef1e8b1ccc76c58a461ee37c,PodSandboxId:72eef0ad95d538297fb3c4789da4a985ab3d8807bf345d5e2fd68178031ef278,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726693771060281006,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b4e1077-e23f-4262-b83e-989506798531,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2ac9232270efdf1abd0fec72f0128462216dc1b45d80b9b4c8a4f56a4261da5,PodSandboxId:972edd043ce5eb380ceea6fbe077b618e479c6f3231ef4aab9475bd38bb924c6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726693770591622237,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8gz5v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfcbe99d-9a3d-4da8-a976-58cb9d9bf7ac,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25cde4223682100325c2aeb2b6938730f12051a03dcf6e0748c2a6c85a43436a,PodSandboxId:7023205c094dab04e074a6d2623a1004e56c54d2e075ae154796ac3ae3987b87,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726693770596500921,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-shx5p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 2d6d25ab-9a90-490a-911b-bf396605fa88,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d92ded6c9bd3df5db2c65359fa2fe855e949921242337824b7bdb69266eeefcd,PodSandboxId:b4fa714267183c158aee6efcab88236b7c0eb21f45034638f59bc4a6463c4aca,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING
,CreatedAt:1726693769792140827,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hf5mm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3fb0a166-a925-4486-9695-6db05ae704b8,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7acfe06c0ec767f7534be911cae0c5ac2ad444bc6e8fd03889aed6d2739d0fc8,PodSandboxId:54cd361ac0647b81bc37e551cbfef36f826f2868d111e67f6c82fe039925e9d2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:172669375897931982
7,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-828868,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04e4144c885b49a7b621930470176aaa,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74b07ca92709a7cfcc8dc0aa3398a6cf6f8e91026dbe1f565981b6fe699b75f0,PodSandboxId:f2e650fe9f8b28118c175e9708adadc8e8933663f2617a1ea8ba42b4693dd884,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:17266937589
84442846,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-828868,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a43ac7bdaae7ab7a4bac05d31bb8c2d3,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f727b8fd80c86645115bb17a6c04d85dd1ea62b00858983c116acf36270ea847,PodSandboxId:653b77cd43b7fe8a6d8c314791a5374edcbc889446d46788069772c965b7b4ca,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,Creat
edAt:1726693758899147724,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-828868,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40d24393f8fb0592258907503c39ea41,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0b240f30eafde9982d8ce7201787f2c29c207a40d9ce60bf72ff9d7eb9afb55,PodSandboxId:b349872526b54819d3acc551db5561db1345f1208b9cdd7544e8ab7b29e8bc55,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726693
758906888361,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-828868,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a793cbd41f245f3396e69fa0dc64ce00,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63709198b2a1bec977af0e16d4f231ed7e5caf103e797daf118541e61d8b03f3,PodSandboxId:6835a506addfd99bea4ffb2e34f01bf1a0bb7009e61282906162de7cb128ade5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726693477357010195,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-828868,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40d24393f8fb0592258907503c39ea41,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6d750cf1-62e8-4b79-a8e4-620d50eead77 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 21:18:41 default-k8s-diff-port-828868 crio[712]: time="2024-09-18 21:18:41.862073067Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9e24df91-cba7-46ab-bffd-944e106c5155 name=/runtime.v1.RuntimeService/Version
	Sep 18 21:18:41 default-k8s-diff-port-828868 crio[712]: time="2024-09-18 21:18:41.862148102Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9e24df91-cba7-46ab-bffd-944e106c5155 name=/runtime.v1.RuntimeService/Version
	Sep 18 21:18:41 default-k8s-diff-port-828868 crio[712]: time="2024-09-18 21:18:41.863395683Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3b8957b5-7e22-400a-b359-8ad90423c968 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 21:18:41 default-k8s-diff-port-828868 crio[712]: time="2024-09-18 21:18:41.863802397Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726694321863780069,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3b8957b5-7e22-400a-b359-8ad90423c968 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 21:18:41 default-k8s-diff-port-828868 crio[712]: time="2024-09-18 21:18:41.864379537Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0f943138-128f-4c1f-9bcd-953e21e41f38 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 21:18:41 default-k8s-diff-port-828868 crio[712]: time="2024-09-18 21:18:41.864451436Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0f943138-128f-4c1f-9bcd-953e21e41f38 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 21:18:41 default-k8s-diff-port-828868 crio[712]: time="2024-09-18 21:18:41.864681289Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0d0b97e9f72af5a0bd052549df0f2c3ea4d140c9ef1e8b1ccc76c58a461ee37c,PodSandboxId:72eef0ad95d538297fb3c4789da4a985ab3d8807bf345d5e2fd68178031ef278,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726693771060281006,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b4e1077-e23f-4262-b83e-989506798531,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2ac9232270efdf1abd0fec72f0128462216dc1b45d80b9b4c8a4f56a4261da5,PodSandboxId:972edd043ce5eb380ceea6fbe077b618e479c6f3231ef4aab9475bd38bb924c6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726693770591622237,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8gz5v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfcbe99d-9a3d-4da8-a976-58cb9d9bf7ac,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25cde4223682100325c2aeb2b6938730f12051a03dcf6e0748c2a6c85a43436a,PodSandboxId:7023205c094dab04e074a6d2623a1004e56c54d2e075ae154796ac3ae3987b87,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726693770596500921,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-shx5p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 2d6d25ab-9a90-490a-911b-bf396605fa88,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d92ded6c9bd3df5db2c65359fa2fe855e949921242337824b7bdb69266eeefcd,PodSandboxId:b4fa714267183c158aee6efcab88236b7c0eb21f45034638f59bc4a6463c4aca,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING
,CreatedAt:1726693769792140827,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hf5mm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3fb0a166-a925-4486-9695-6db05ae704b8,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7acfe06c0ec767f7534be911cae0c5ac2ad444bc6e8fd03889aed6d2739d0fc8,PodSandboxId:54cd361ac0647b81bc37e551cbfef36f826f2868d111e67f6c82fe039925e9d2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:172669375897931982
7,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-828868,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04e4144c885b49a7b621930470176aaa,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74b07ca92709a7cfcc8dc0aa3398a6cf6f8e91026dbe1f565981b6fe699b75f0,PodSandboxId:f2e650fe9f8b28118c175e9708adadc8e8933663f2617a1ea8ba42b4693dd884,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:17266937589
84442846,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-828868,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a43ac7bdaae7ab7a4bac05d31bb8c2d3,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f727b8fd80c86645115bb17a6c04d85dd1ea62b00858983c116acf36270ea847,PodSandboxId:653b77cd43b7fe8a6d8c314791a5374edcbc889446d46788069772c965b7b4ca,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,Creat
edAt:1726693758899147724,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-828868,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40d24393f8fb0592258907503c39ea41,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0b240f30eafde9982d8ce7201787f2c29c207a40d9ce60bf72ff9d7eb9afb55,PodSandboxId:b349872526b54819d3acc551db5561db1345f1208b9cdd7544e8ab7b29e8bc55,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726693
758906888361,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-828868,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a793cbd41f245f3396e69fa0dc64ce00,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63709198b2a1bec977af0e16d4f231ed7e5caf103e797daf118541e61d8b03f3,PodSandboxId:6835a506addfd99bea4ffb2e34f01bf1a0bb7009e61282906162de7cb128ade5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726693477357010195,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-828868,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40d24393f8fb0592258907503c39ea41,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0f943138-128f-4c1f-9bcd-953e21e41f38 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 21:18:41 default-k8s-diff-port-828868 crio[712]: time="2024-09-18 21:18:41.899122371Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=da178390-aa4f-4ece-a469-b1dcadae3618 name=/runtime.v1.RuntimeService/Version
	Sep 18 21:18:41 default-k8s-diff-port-828868 crio[712]: time="2024-09-18 21:18:41.899265038Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=da178390-aa4f-4ece-a469-b1dcadae3618 name=/runtime.v1.RuntimeService/Version
	Sep 18 21:18:41 default-k8s-diff-port-828868 crio[712]: time="2024-09-18 21:18:41.900497551Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=20b5330b-ec61-43e2-b10f-7139abd76344 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 21:18:41 default-k8s-diff-port-828868 crio[712]: time="2024-09-18 21:18:41.900972322Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726694321900944001,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=20b5330b-ec61-43e2-b10f-7139abd76344 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 21:18:41 default-k8s-diff-port-828868 crio[712]: time="2024-09-18 21:18:41.901645709Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9c102872-8e7d-4a6c-b6ab-e57de734bca4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 21:18:41 default-k8s-diff-port-828868 crio[712]: time="2024-09-18 21:18:41.901724323Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9c102872-8e7d-4a6c-b6ab-e57de734bca4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 21:18:41 default-k8s-diff-port-828868 crio[712]: time="2024-09-18 21:18:41.901940569Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0d0b97e9f72af5a0bd052549df0f2c3ea4d140c9ef1e8b1ccc76c58a461ee37c,PodSandboxId:72eef0ad95d538297fb3c4789da4a985ab3d8807bf345d5e2fd68178031ef278,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726693771060281006,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b4e1077-e23f-4262-b83e-989506798531,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2ac9232270efdf1abd0fec72f0128462216dc1b45d80b9b4c8a4f56a4261da5,PodSandboxId:972edd043ce5eb380ceea6fbe077b618e479c6f3231ef4aab9475bd38bb924c6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726693770591622237,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8gz5v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfcbe99d-9a3d-4da8-a976-58cb9d9bf7ac,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25cde4223682100325c2aeb2b6938730f12051a03dcf6e0748c2a6c85a43436a,PodSandboxId:7023205c094dab04e074a6d2623a1004e56c54d2e075ae154796ac3ae3987b87,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726693770596500921,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-shx5p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 2d6d25ab-9a90-490a-911b-bf396605fa88,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d92ded6c9bd3df5db2c65359fa2fe855e949921242337824b7bdb69266eeefcd,PodSandboxId:b4fa714267183c158aee6efcab88236b7c0eb21f45034638f59bc4a6463c4aca,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING
,CreatedAt:1726693769792140827,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hf5mm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3fb0a166-a925-4486-9695-6db05ae704b8,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7acfe06c0ec767f7534be911cae0c5ac2ad444bc6e8fd03889aed6d2739d0fc8,PodSandboxId:54cd361ac0647b81bc37e551cbfef36f826f2868d111e67f6c82fe039925e9d2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:172669375897931982
7,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-828868,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04e4144c885b49a7b621930470176aaa,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74b07ca92709a7cfcc8dc0aa3398a6cf6f8e91026dbe1f565981b6fe699b75f0,PodSandboxId:f2e650fe9f8b28118c175e9708adadc8e8933663f2617a1ea8ba42b4693dd884,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:17266937589
84442846,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-828868,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a43ac7bdaae7ab7a4bac05d31bb8c2d3,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f727b8fd80c86645115bb17a6c04d85dd1ea62b00858983c116acf36270ea847,PodSandboxId:653b77cd43b7fe8a6d8c314791a5374edcbc889446d46788069772c965b7b4ca,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,Creat
edAt:1726693758899147724,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-828868,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40d24393f8fb0592258907503c39ea41,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0b240f30eafde9982d8ce7201787f2c29c207a40d9ce60bf72ff9d7eb9afb55,PodSandboxId:b349872526b54819d3acc551db5561db1345f1208b9cdd7544e8ab7b29e8bc55,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726693
758906888361,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-828868,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a793cbd41f245f3396e69fa0dc64ce00,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63709198b2a1bec977af0e16d4f231ed7e5caf103e797daf118541e61d8b03f3,PodSandboxId:6835a506addfd99bea4ffb2e34f01bf1a0bb7009e61282906162de7cb128ade5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726693477357010195,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-828868,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40d24393f8fb0592258907503c39ea41,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9c102872-8e7d-4a6c-b6ab-e57de734bca4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 21:18:41 default-k8s-diff-port-828868 crio[712]: time="2024-09-18 21:18:41.935282764Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=303899b4-1479-4285-81ba-c07d8ea87ae6 name=/runtime.v1.RuntimeService/Version
	Sep 18 21:18:41 default-k8s-diff-port-828868 crio[712]: time="2024-09-18 21:18:41.935363239Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=303899b4-1479-4285-81ba-c07d8ea87ae6 name=/runtime.v1.RuntimeService/Version
	Sep 18 21:18:41 default-k8s-diff-port-828868 crio[712]: time="2024-09-18 21:18:41.936605484Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cf95be93-a6d3-40a4-902d-18c2f243cdc1 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 21:18:41 default-k8s-diff-port-828868 crio[712]: time="2024-09-18 21:18:41.937040923Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726694321937016523,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cf95be93-a6d3-40a4-902d-18c2f243cdc1 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 21:18:41 default-k8s-diff-port-828868 crio[712]: time="2024-09-18 21:18:41.937639739Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0b583d6e-41ac-4670-9b42-a96a8adc40c6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 21:18:41 default-k8s-diff-port-828868 crio[712]: time="2024-09-18 21:18:41.937692197Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0b583d6e-41ac-4670-9b42-a96a8adc40c6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 21:18:41 default-k8s-diff-port-828868 crio[712]: time="2024-09-18 21:18:41.937892249Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0d0b97e9f72af5a0bd052549df0f2c3ea4d140c9ef1e8b1ccc76c58a461ee37c,PodSandboxId:72eef0ad95d538297fb3c4789da4a985ab3d8807bf345d5e2fd68178031ef278,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726693771060281006,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b4e1077-e23f-4262-b83e-989506798531,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2ac9232270efdf1abd0fec72f0128462216dc1b45d80b9b4c8a4f56a4261da5,PodSandboxId:972edd043ce5eb380ceea6fbe077b618e479c6f3231ef4aab9475bd38bb924c6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726693770591622237,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8gz5v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfcbe99d-9a3d-4da8-a976-58cb9d9bf7ac,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25cde4223682100325c2aeb2b6938730f12051a03dcf6e0748c2a6c85a43436a,PodSandboxId:7023205c094dab04e074a6d2623a1004e56c54d2e075ae154796ac3ae3987b87,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726693770596500921,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-shx5p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 2d6d25ab-9a90-490a-911b-bf396605fa88,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d92ded6c9bd3df5db2c65359fa2fe855e949921242337824b7bdb69266eeefcd,PodSandboxId:b4fa714267183c158aee6efcab88236b7c0eb21f45034638f59bc4a6463c4aca,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING
,CreatedAt:1726693769792140827,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hf5mm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3fb0a166-a925-4486-9695-6db05ae704b8,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7acfe06c0ec767f7534be911cae0c5ac2ad444bc6e8fd03889aed6d2739d0fc8,PodSandboxId:54cd361ac0647b81bc37e551cbfef36f826f2868d111e67f6c82fe039925e9d2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:172669375897931982
7,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-828868,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04e4144c885b49a7b621930470176aaa,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74b07ca92709a7cfcc8dc0aa3398a6cf6f8e91026dbe1f565981b6fe699b75f0,PodSandboxId:f2e650fe9f8b28118c175e9708adadc8e8933663f2617a1ea8ba42b4693dd884,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:17266937589
84442846,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-828868,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a43ac7bdaae7ab7a4bac05d31bb8c2d3,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f727b8fd80c86645115bb17a6c04d85dd1ea62b00858983c116acf36270ea847,PodSandboxId:653b77cd43b7fe8a6d8c314791a5374edcbc889446d46788069772c965b7b4ca,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,Creat
edAt:1726693758899147724,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-828868,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40d24393f8fb0592258907503c39ea41,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0b240f30eafde9982d8ce7201787f2c29c207a40d9ce60bf72ff9d7eb9afb55,PodSandboxId:b349872526b54819d3acc551db5561db1345f1208b9cdd7544e8ab7b29e8bc55,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726693
758906888361,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-828868,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a793cbd41f245f3396e69fa0dc64ce00,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63709198b2a1bec977af0e16d4f231ed7e5caf103e797daf118541e61d8b03f3,PodSandboxId:6835a506addfd99bea4ffb2e34f01bf1a0bb7009e61282906162de7cb128ade5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726693477357010195,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-828868,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40d24393f8fb0592258907503c39ea41,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0b583d6e-41ac-4670-9b42-a96a8adc40c6 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	0d0b97e9f72af       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   72eef0ad95d53       storage-provisioner
	25cde42236821       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   7023205c094da       coredns-7c65d6cfc9-shx5p
	e2ac9232270ef       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   972edd043ce5e       coredns-7c65d6cfc9-8gz5v
	d92ded6c9bd3d       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   9 minutes ago       Running             kube-proxy                0                   b4fa714267183       kube-proxy-hf5mm
	74b07ca92709a       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   9 minutes ago       Running             kube-controller-manager   2                   f2e650fe9f8b2       kube-controller-manager-default-k8s-diff-port-828868
	7acfe06c0ec76       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   9 minutes ago       Running             kube-scheduler            2                   54cd361ac0647       kube-scheduler-default-k8s-diff-port-828868
	c0b240f30eafd       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   9 minutes ago       Running             etcd                      2                   b349872526b54       etcd-default-k8s-diff-port-828868
	f727b8fd80c86       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   9 minutes ago       Running             kube-apiserver            2                   653b77cd43b7f       kube-apiserver-default-k8s-diff-port-828868
	63709198b2a1b       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   14 minutes ago      Exited              kube-apiserver            1                   6835a506addfd       kube-apiserver-default-k8s-diff-port-828868
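	For reference, an equivalent container listing can be taken directly on the node; a minimal check, assuming SSH access to the minikube VM for this profile, is:
	  $ minikube ssh -p default-k8s-diff-port-828868 "sudo crictl ps -a"
	which should report the same container IDs, states and attempt counts as the table above.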
	
	
	==> coredns [25cde4223682100325c2aeb2b6938730f12051a03dcf6e0748c2a6c85a43436a] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [e2ac9232270efdf1abd0fec72f0128462216dc1b45d80b9b4c8a4f56a4261da5] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-828868
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-828868
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=85073601a832bd4bbda5d11fa91feafff6ec6b91
	                    minikube.k8s.io/name=default-k8s-diff-port-828868
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_18T21_09_24_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 18 Sep 2024 21:09:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-828868
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 18 Sep 2024 21:18:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 18 Sep 2024 21:14:40 +0000   Wed, 18 Sep 2024 21:09:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 18 Sep 2024 21:14:40 +0000   Wed, 18 Sep 2024 21:09:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 18 Sep 2024 21:14:40 +0000   Wed, 18 Sep 2024 21:09:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 18 Sep 2024 21:14:40 +0000   Wed, 18 Sep 2024 21:09:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.109
	  Hostname:    default-k8s-diff-port-828868
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 fecab61126ae4306b71ea4ef8286345b
	  System UUID:                fecab611-26ae-4306-b71e-a4ef8286345b
	  Boot ID:                    183f01c1-2271-4ea7-bca0-7c5ddeafec3c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-8gz5v                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m13s
	  kube-system                 coredns-7c65d6cfc9-shx5p                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m13s
	  kube-system                 etcd-default-k8s-diff-port-828868                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m18s
	  kube-system                 kube-apiserver-default-k8s-diff-port-828868             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m19s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-828868    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m18s
	  kube-system                 kube-proxy-hf5mm                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m13s
	  kube-system                 kube-scheduler-default-k8s-diff-port-828868             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m18s
	  kube-system                 metrics-server-6867b74b74-hdt52                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m12s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m12s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m11s                  kube-proxy       
	  Normal  Starting                 9m24s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m24s (x8 over 9m24s)  kubelet          Node default-k8s-diff-port-828868 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m24s (x8 over 9m24s)  kubelet          Node default-k8s-diff-port-828868 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m24s (x7 over 9m24s)  kubelet          Node default-k8s-diff-port-828868 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m24s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 9m18s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m18s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m18s                  kubelet          Node default-k8s-diff-port-828868 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m18s                  kubelet          Node default-k8s-diff-port-828868 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m18s                  kubelet          Node default-k8s-diff-port-828868 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m14s                  node-controller  Node default-k8s-diff-port-828868 event: Registered Node default-k8s-diff-port-828868 in Controller
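	The node summary above corresponds to kubectl's describe output; assuming the kubectl context name matches the profile name, it can be regenerated with:
	  $ kubectl --context default-k8s-diff-port-828868 describe node default-k8s-diff-port-828868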
	
	
	==> dmesg <==
	[  +0.051364] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037672] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.769921] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.843305] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.528209] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.094296] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.059733] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.057010] systemd-fstab-generator[647]: Ignoring "noauto" option for root device
	[  +0.171312] systemd-fstab-generator[661]: Ignoring "noauto" option for root device
	[  +0.149503] systemd-fstab-generator[673]: Ignoring "noauto" option for root device
	[  +0.295214] systemd-fstab-generator[702]: Ignoring "noauto" option for root device
	[  +4.085393] systemd-fstab-generator[795]: Ignoring "noauto" option for root device
	[  +1.889464] systemd-fstab-generator[915]: Ignoring "noauto" option for root device
	[  +0.071826] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.600889] kauditd_printk_skb: 69 callbacks suppressed
	[  +7.850084] kauditd_printk_skb: 85 callbacks suppressed
	[Sep18 21:09] kauditd_printk_skb: 7 callbacks suppressed
	[  +1.309185] systemd-fstab-generator[2588]: Ignoring "noauto" option for root device
	[  +5.033554] kauditd_printk_skb: 54 callbacks suppressed
	[  +1.523447] systemd-fstab-generator[2912]: Ignoring "noauto" option for root device
	[  +5.384123] systemd-fstab-generator[3042]: Ignoring "noauto" option for root device
	[  +0.114623] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.273292] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [c0b240f30eafde9982d8ce7201787f2c29c207a40d9ce60bf72ff9d7eb9afb55] <==
	{"level":"info","ts":"2024-09-18T21:09:19.230239Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-18T21:09:19.230646Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"46a65bd61cd538c0","initial-advertise-peer-urls":["https://192.168.50.109:2380"],"listen-peer-urls":["https://192.168.50.109:2380"],"advertise-client-urls":["https://192.168.50.109:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.109:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-18T21:09:19.230406Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.50.109:2380"}
	{"level":"info","ts":"2024-09-18T21:09:19.231940Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.50.109:2380"}
	{"level":"info","ts":"2024-09-18T21:09:19.231477Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-18T21:09:19.759241Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"46a65bd61cd538c0 is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-18T21:09:19.759403Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"46a65bd61cd538c0 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-18T21:09:19.759479Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"46a65bd61cd538c0 received MsgPreVoteResp from 46a65bd61cd538c0 at term 1"}
	{"level":"info","ts":"2024-09-18T21:09:19.759531Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"46a65bd61cd538c0 became candidate at term 2"}
	{"level":"info","ts":"2024-09-18T21:09:19.761245Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"46a65bd61cd538c0 received MsgVoteResp from 46a65bd61cd538c0 at term 2"}
	{"level":"info","ts":"2024-09-18T21:09:19.761429Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"46a65bd61cd538c0 became leader at term 2"}
	{"level":"info","ts":"2024-09-18T21:09:19.761558Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 46a65bd61cd538c0 elected leader 46a65bd61cd538c0 at term 2"}
	{"level":"info","ts":"2024-09-18T21:09:19.769692Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"46a65bd61cd538c0","local-member-attributes":"{Name:default-k8s-diff-port-828868 ClientURLs:[https://192.168.50.109:2379]}","request-path":"/0/members/46a65bd61cd538c0/attributes","cluster-id":"d0e6cadbc325cfac","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-18T21:09:19.772237Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-18T21:09:19.772874Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-18T21:09:19.781324Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-18T21:09:19.782575Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-18T21:09:19.783422Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-18T21:09:19.786056Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-18T21:09:19.790931Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.109:2379"}
	{"level":"info","ts":"2024-09-18T21:09:19.796243Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-18T21:09:19.796349Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-18T21:09:19.798329Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"d0e6cadbc325cfac","local-member-id":"46a65bd61cd538c0","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-18T21:09:19.798452Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-18T21:09:19.798506Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> kernel <==
	 21:18:42 up 14 min,  0 users,  load average: 0.20, 0.24, 0.18
	Linux default-k8s-diff-port-828868 5.10.207 #1 SMP Mon Sep 16 15:00:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [63709198b2a1bec977af0e16d4f231ed7e5caf103e797daf118541e61d8b03f3] <==
	W0918 21:09:14.758386       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0918 21:09:14.769873       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0918 21:09:14.787455       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0918 21:09:14.815093       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0918 21:09:14.832344       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0918 21:09:14.851696       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0918 21:09:14.887491       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0918 21:09:14.888925       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0918 21:09:14.904028       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0918 21:09:14.905114       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0918 21:09:15.019554       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0918 21:09:15.031999       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0918 21:09:15.144876       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0918 21:09:15.170847       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0918 21:09:15.235722       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0918 21:09:15.328707       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0918 21:09:15.446491       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0918 21:09:15.464481       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0918 21:09:15.480365       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0918 21:09:15.531666       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0918 21:09:15.557053       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0918 21:09:15.577907       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0918 21:09:15.633495       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0918 21:09:15.728705       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0918 21:09:15.748949       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [f727b8fd80c86645115bb17a6c04d85dd1ea62b00858983c116acf36270ea847] <==
	W0918 21:14:22.535391       1 handler_proxy.go:99] no RequestInfo found in the context
	E0918 21:14:22.535446       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0918 21:14:22.536571       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0918 21:14:22.536619       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0918 21:15:22.537200       1 handler_proxy.go:99] no RequestInfo found in the context
	E0918 21:15:22.537376       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0918 21:15:22.537200       1 handler_proxy.go:99] no RequestInfo found in the context
	E0918 21:15:22.537436       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0918 21:15:22.538544       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0918 21:15:22.538594       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0918 21:17:22.539128       1 handler_proxy.go:99] no RequestInfo found in the context
	E0918 21:17:22.539794       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0918 21:17:22.539505       1 handler_proxy.go:99] no RequestInfo found in the context
	E0918 21:17:22.539906       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0918 21:17:22.541006       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0918 21:17:22.541065       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
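	The repeated 503 responses above indicate the v1beta1.metrics.k8s.io APIService never became available during this window. A minimal follow-up, assuming the kubectl context matches the profile name and the addon uses the standard k8s-app=metrics-server label, would be:
	  $ kubectl --context default-k8s-diff-port-828868 get apiservice v1beta1.metrics.k8s.io
	  $ kubectl --context default-k8s-diff-port-828868 -n kube-system get pods -l k8s-app=metrics-server
	An Available=False APIService together with a non-Ready metrics-server pod would point at the pod rather than the aggregation layer.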
	
	
	==> kube-controller-manager [74b07ca92709a7cfcc8dc0aa3398a6cf6f8e91026dbe1f565981b6fe699b75f0] <==
	E0918 21:13:28.557901       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0918 21:13:28.975750       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0918 21:13:58.564830       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0918 21:13:58.983444       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0918 21:14:28.572326       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0918 21:14:28.991623       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0918 21:14:40.268312       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-828868"
	E0918 21:14:58.579013       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0918 21:14:59.001318       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0918 21:15:28.585574       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0918 21:15:29.010281       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0918 21:15:31.123199       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="238.278µs"
	I0918 21:15:43.124116       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="396.022µs"
	E0918 21:15:58.591798       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0918 21:15:59.018095       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0918 21:16:28.600118       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0918 21:16:29.025920       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0918 21:16:58.606215       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0918 21:16:59.033970       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0918 21:17:28.612703       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0918 21:17:29.042580       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0918 21:17:58.619867       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0918 21:17:59.058094       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0918 21:18:28.626536       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0918 21:18:29.066309       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
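	The "stale GroupVersion discovery: metrics.k8s.io/v1beta1" errors above share the same root cause as the apiserver 503s: the aggregated metrics API is registered but not serving. A quick symptom check, assuming the same context, is:
	  $ kubectl --context default-k8s-diff-port-828868 top nodes
	which should fail while the APIService remains unavailable.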
	
	
	==> kube-proxy [d92ded6c9bd3df5db2c65359fa2fe855e949921242337824b7bdb69266eeefcd] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0918 21:09:30.248981       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0918 21:09:30.258900       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.109"]
	E0918 21:09:30.259002       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0918 21:09:30.360943       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0918 21:09:30.361036       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0918 21:09:30.361077       1 server_linux.go:169] "Using iptables Proxier"
	I0918 21:09:30.364320       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0918 21:09:30.364730       1 server.go:483] "Version info" version="v1.31.1"
	I0918 21:09:30.364755       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0918 21:09:30.368663       1 config.go:199] "Starting service config controller"
	I0918 21:09:30.368802       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0918 21:09:30.368832       1 config.go:105] "Starting endpoint slice config controller"
	I0918 21:09:30.368848       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0918 21:09:30.371322       1 config.go:328] "Starting node config controller"
	I0918 21:09:30.371336       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0918 21:09:30.472561       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0918 21:09:30.475752       1 shared_informer.go:320] Caches are synced for node config
	I0918 21:09:30.475823       1 shared_informer.go:320] Caches are synced for service config
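	The nftables errors at the top of this section appear to come from kube-proxy's best-effort cleanup of nftables rules on a kernel without nf_tables support; the subsequent lines confirm it fell back to the iptables proxier in IPv4 single-stack mode. A quick sanity check on the node, assuming SSH access, is:
	  $ minikube ssh -p default-k8s-diff-port-828868 "sudo iptables -t nat -L KUBE-SERVICES -n | head"
	which should list per-service KUBE-SVC-* jump rules once the proxier has synced.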
	
	
	==> kube-scheduler [7acfe06c0ec767f7534be911cae0c5ac2ad444bc6e8fd03889aed6d2739d0fc8] <==
	W0918 21:09:21.606798       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0918 21:09:21.606830       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0918 21:09:21.606893       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0918 21:09:21.606917       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0918 21:09:21.606973       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0918 21:09:21.606996       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0918 21:09:21.607748       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0918 21:09:21.607864       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0918 21:09:22.554481       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0918 21:09:22.554559       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0918 21:09:22.598071       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0918 21:09:22.598223       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0918 21:09:22.653296       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0918 21:09:22.653424       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0918 21:09:22.754523       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0918 21:09:22.754630       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0918 21:09:22.811701       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0918 21:09:22.811775       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0918 21:09:22.812841       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0918 21:09:22.812968       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0918 21:09:22.908487       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0918 21:09:22.908663       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0918 21:09:23.008646       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0918 21:09:23.009074       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0918 21:09:24.797417       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 18 21:17:28 default-k8s-diff-port-828868 kubelet[2919]: E0918 21:17:28.108479    2919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-hdt52" podUID="bf3aa7d0-a121-4a2c-92dd-2c79c6cbaa4a"
	Sep 18 21:17:34 default-k8s-diff-port-828868 kubelet[2919]: E0918 21:17:34.271389    2919 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726694254270828514,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 21:17:34 default-k8s-diff-port-828868 kubelet[2919]: E0918 21:17:34.271705    2919 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726694254270828514,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 21:17:39 default-k8s-diff-port-828868 kubelet[2919]: E0918 21:17:39.107558    2919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-hdt52" podUID="bf3aa7d0-a121-4a2c-92dd-2c79c6cbaa4a"
	Sep 18 21:17:44 default-k8s-diff-port-828868 kubelet[2919]: E0918 21:17:44.273898    2919 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726694264273527583,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 21:17:44 default-k8s-diff-port-828868 kubelet[2919]: E0918 21:17:44.274203    2919 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726694264273527583,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 21:17:50 default-k8s-diff-port-828868 kubelet[2919]: E0918 21:17:50.108121    2919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-hdt52" podUID="bf3aa7d0-a121-4a2c-92dd-2c79c6cbaa4a"
	Sep 18 21:17:54 default-k8s-diff-port-828868 kubelet[2919]: E0918 21:17:54.276561    2919 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726694274276062060,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 21:17:54 default-k8s-diff-port-828868 kubelet[2919]: E0918 21:17:54.276612    2919 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726694274276062060,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 21:18:04 default-k8s-diff-port-828868 kubelet[2919]: E0918 21:18:04.278004    2919 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726694284277631475,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 21:18:04 default-k8s-diff-port-828868 kubelet[2919]: E0918 21:18:04.278043    2919 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726694284277631475,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 21:18:05 default-k8s-diff-port-828868 kubelet[2919]: E0918 21:18:05.107510    2919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-hdt52" podUID="bf3aa7d0-a121-4a2c-92dd-2c79c6cbaa4a"
	Sep 18 21:18:14 default-k8s-diff-port-828868 kubelet[2919]: E0918 21:18:14.279991    2919 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726694294279622013,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 21:18:14 default-k8s-diff-port-828868 kubelet[2919]: E0918 21:18:14.280465    2919 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726694294279622013,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 21:18:20 default-k8s-diff-port-828868 kubelet[2919]: E0918 21:18:20.108426    2919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-hdt52" podUID="bf3aa7d0-a121-4a2c-92dd-2c79c6cbaa4a"
	Sep 18 21:18:24 default-k8s-diff-port-828868 kubelet[2919]: E0918 21:18:24.148257    2919 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 18 21:18:24 default-k8s-diff-port-828868 kubelet[2919]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 18 21:18:24 default-k8s-diff-port-828868 kubelet[2919]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 18 21:18:24 default-k8s-diff-port-828868 kubelet[2919]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 18 21:18:24 default-k8s-diff-port-828868 kubelet[2919]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 18 21:18:24 default-k8s-diff-port-828868 kubelet[2919]: E0918 21:18:24.281854    2919 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726694304281535881,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 21:18:24 default-k8s-diff-port-828868 kubelet[2919]: E0918 21:18:24.281911    2919 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726694304281535881,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 21:18:34 default-k8s-diff-port-828868 kubelet[2919]: E0918 21:18:34.108145    2919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-hdt52" podUID="bf3aa7d0-a121-4a2c-92dd-2c79c6cbaa4a"
	Sep 18 21:18:34 default-k8s-diff-port-828868 kubelet[2919]: E0918 21:18:34.283812    2919 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726694314283234780,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 21:18:34 default-k8s-diff-port-828868 kubelet[2919]: E0918 21:18:34.283934    2919 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726694314283234780,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [0d0b97e9f72af5a0bd052549df0f2c3ea4d140c9ef1e8b1ccc76c58a461ee37c] <==
	I0918 21:09:31.173059       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0918 21:09:31.186322       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0918 21:09:31.186718       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0918 21:09:31.203351       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0918 21:09:31.203606       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-828868_0e992c30-dd4d-4bb2-9c51-0fe02b0d69ee!
	I0918 21:09:31.205746       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d95df135-fbfe-405c-94e8-b9c522473029", APIVersion:"v1", ResourceVersion:"430", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-828868_0e992c30-dd4d-4bb2-9c51-0fe02b0d69ee became leader
	I0918 21:09:31.306831       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-828868_0e992c30-dd4d-4bb2-9c51-0fe02b0d69ee!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-828868 -n default-k8s-diff-port-828868
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-828868 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-hdt52
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-828868 describe pod metrics-server-6867b74b74-hdt52
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-828868 describe pod metrics-server-6867b74b74-hdt52: exit status 1 (63.77396ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-hdt52" not found

** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-828868 describe pod metrics-server-6867b74b74-hdt52: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.23s)

x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.24s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-255556 -n embed-certs-255556
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-09-18 21:19:05.221837078 +0000 UTC m=+6067.992178320
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-255556 -n embed-certs-255556
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-255556 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-255556 logs -n 25: (2.141978538s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p cert-options-347585                                 | cert-options-347585          | jenkins | v1.34.0 | 18 Sep 24 20:55 UTC | 18 Sep 24 20:55 UTC |
	| start   | -p no-preload-331658                                   | no-preload-331658            | jenkins | v1.34.0 | 18 Sep 24 20:55 UTC | 18 Sep 24 20:56 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-878094                           | kubernetes-upgrade-878094    | jenkins | v1.34.0 | 18 Sep 24 20:55 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-878094                           | kubernetes-upgrade-878094    | jenkins | v1.34.0 | 18 Sep 24 20:55 UTC | 18 Sep 24 20:56 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p pause-543700                                        | pause-543700                 | jenkins | v1.34.0 | 18 Sep 24 20:56 UTC | 18 Sep 24 20:56 UTC |
	| start   | -p embed-certs-255556                                  | embed-certs-255556           | jenkins | v1.34.0 | 18 Sep 24 20:56 UTC | 18 Sep 24 20:57 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-878094                           | kubernetes-upgrade-878094    | jenkins | v1.34.0 | 18 Sep 24 20:56 UTC | 18 Sep 24 20:56 UTC |
	| delete  | -p                                                     | disable-driver-mounts-335923 | jenkins | v1.34.0 | 18 Sep 24 20:56 UTC | 18 Sep 24 20:56 UTC |
	|         | disable-driver-mounts-335923                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-828868 | jenkins | v1.34.0 | 18 Sep 24 20:56 UTC | 18 Sep 24 20:57 UTC |
	|         | default-k8s-diff-port-828868                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-331658             | no-preload-331658            | jenkins | v1.34.0 | 18 Sep 24 20:56 UTC | 18 Sep 24 20:57 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-331658                                   | no-preload-331658            | jenkins | v1.34.0 | 18 Sep 24 20:57 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-828868  | default-k8s-diff-port-828868 | jenkins | v1.34.0 | 18 Sep 24 20:57 UTC | 18 Sep 24 20:57 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-828868 | jenkins | v1.34.0 | 18 Sep 24 20:57 UTC |                     |
	|         | default-k8s-diff-port-828868                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-255556            | embed-certs-255556           | jenkins | v1.34.0 | 18 Sep 24 20:57 UTC | 18 Sep 24 20:57 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-255556                                  | embed-certs-255556           | jenkins | v1.34.0 | 18 Sep 24 20:57 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-740194        | old-k8s-version-740194       | jenkins | v1.34.0 | 18 Sep 24 20:59 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-331658                  | no-preload-331658            | jenkins | v1.34.0 | 18 Sep 24 20:59 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-331658                                   | no-preload-331658            | jenkins | v1.34.0 | 18 Sep 24 20:59 UTC | 18 Sep 24 21:10 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-828868       | default-k8s-diff-port-828868 | jenkins | v1.34.0 | 18 Sep 24 21:00 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-255556                 | embed-certs-255556           | jenkins | v1.34.0 | 18 Sep 24 21:00 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-828868 | jenkins | v1.34.0 | 18 Sep 24 21:00 UTC | 18 Sep 24 21:09 UTC |
	|         | default-k8s-diff-port-828868                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-255556                                  | embed-certs-255556           | jenkins | v1.34.0 | 18 Sep 24 21:00 UTC | 18 Sep 24 21:10 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-740194                              | old-k8s-version-740194       | jenkins | v1.34.0 | 18 Sep 24 21:00 UTC | 18 Sep 24 21:00 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-740194             | old-k8s-version-740194       | jenkins | v1.34.0 | 18 Sep 24 21:00 UTC | 18 Sep 24 21:00 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-740194                              | old-k8s-version-740194       | jenkins | v1.34.0 | 18 Sep 24 21:00 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/18 21:00:59
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0918 21:00:59.486726   62061 out.go:345] Setting OutFile to fd 1 ...
	I0918 21:00:59.486839   62061 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 21:00:59.486848   62061 out.go:358] Setting ErrFile to fd 2...
	I0918 21:00:59.486854   62061 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 21:00:59.487063   62061 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19667-7671/.minikube/bin
	I0918 21:00:59.487631   62061 out.go:352] Setting JSON to false
	I0918 21:00:59.488570   62061 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6203,"bootTime":1726687056,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0918 21:00:59.488665   62061 start.go:139] virtualization: kvm guest
	I0918 21:00:59.490927   62061 out.go:177] * [old-k8s-version-740194] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0918 21:00:59.492124   62061 out.go:177]   - MINIKUBE_LOCATION=19667
	I0918 21:00:59.492192   62061 notify.go:220] Checking for updates...
	I0918 21:00:59.494436   62061 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 21:00:59.495887   62061 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19667-7671/kubeconfig
	I0918 21:00:59.496929   62061 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19667-7671/.minikube
	I0918 21:00:59.498172   62061 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0918 21:00:59.499384   62061 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0918 21:00:59.500900   62061 config.go:182] Loaded profile config "old-k8s-version-740194": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0918 21:00:59.501295   62061 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:00:59.501375   62061 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:00:59.516448   62061 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45397
	I0918 21:00:59.516855   62061 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:00:59.517347   62061 main.go:141] libmachine: Using API Version  1
	I0918 21:00:59.517362   62061 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:00:59.517673   62061 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:00:59.517848   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .DriverName
	I0918 21:00:59.519750   62061 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0918 21:00:59.521126   62061 driver.go:394] Setting default libvirt URI to qemu:///system
	I0918 21:00:59.521433   62061 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:00:59.521468   62061 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:00:59.537580   62061 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39433
	I0918 21:00:59.537973   62061 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:00:59.538452   62061 main.go:141] libmachine: Using API Version  1
	I0918 21:00:59.538471   62061 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:00:59.538852   62061 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:00:59.539083   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .DriverName
	I0918 21:00:59.575494   62061 out.go:177] * Using the kvm2 driver based on existing profile
	I0918 21:00:59.576555   62061 start.go:297] selected driver: kvm2
	I0918 21:00:59.576572   62061 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-740194 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-740194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.53 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280
h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 21:00:59.576682   62061 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 21:00:59.577339   62061 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 21:00:59.577400   62061 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19667-7671/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0918 21:00:59.593287   62061 install.go:137] /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I0918 21:00:59.593810   62061 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0918 21:00:59.593855   62061 cni.go:84] Creating CNI manager for ""
	I0918 21:00:59.593911   62061 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0918 21:00:59.593972   62061 start.go:340] cluster config:
	{Name:old-k8s-version-740194 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-740194 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.53 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p20
00.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 21:00:59.594104   62061 iso.go:125] acquiring lock: {Name:mkbcf4d84f3091567a39802cd5e773cb10b8c545 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 21:00:59.595936   62061 out.go:177] * Starting "old-k8s-version-740194" primary control-plane node in "old-k8s-version-740194" cluster
	I0918 21:00:59.932315   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:00:59.597210   62061 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0918 21:00:59.597243   62061 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0918 21:00:59.597250   62061 cache.go:56] Caching tarball of preloaded images
	I0918 21:00:59.597325   62061 preload.go:172] Found /home/jenkins/minikube-integration/19667-7671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0918 21:00:59.597338   62061 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0918 21:00:59.597439   62061 profile.go:143] Saving config to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/old-k8s-version-740194/config.json ...
	I0918 21:00:59.597658   62061 start.go:360] acquireMachinesLock for old-k8s-version-740194: {Name:mk1b0d68a0b7d9d4bc8204d5d81cb9eb77222526 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 21:01:03.004316   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:01:09.084327   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:01:12.156358   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:01:18.236353   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:01:21.308245   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:01:27.388302   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:01:30.460341   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:01:36.540285   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:01:39.612345   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:01:45.692338   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:01:48.764308   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:01:54.844344   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:01:57.916346   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:02:03.996351   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:02:07.068377   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:02:13.148269   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:02:16.220321   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:02:22.300282   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:02:25.372352   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:02:31.452275   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:02:34.524362   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:02:40.604332   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:02:43.676372   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:02:49.756305   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:02:52.828321   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:02:58.908358   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:03:01.980309   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:03:08.060301   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:03:11.132322   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:03:17.212232   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:03:20.284342   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:03:26.364312   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:03:29.436328   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:03:35.516323   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:03:38.588372   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:03:44.668300   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:03:47.740379   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:03:53.820363   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:03:56.892355   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:04:02.972312   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:04:06.044373   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:04:09.048392   61659 start.go:364] duration metric: took 3m56.738592157s to acquireMachinesLock for "default-k8s-diff-port-828868"
	I0918 21:04:09.048461   61659 start.go:96] Skipping create...Using existing machine configuration
	I0918 21:04:09.048469   61659 fix.go:54] fixHost starting: 
	I0918 21:04:09.048788   61659 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:04:09.048827   61659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:04:09.064428   61659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41181
	I0918 21:04:09.064856   61659 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:04:09.065395   61659 main.go:141] libmachine: Using API Version  1
	I0918 21:04:09.065421   61659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:04:09.065751   61659 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:04:09.065961   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .DriverName
	I0918 21:04:09.066108   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetState
	I0918 21:04:09.067874   61659 fix.go:112] recreateIfNeeded on default-k8s-diff-port-828868: state=Stopped err=<nil>
	I0918 21:04:09.067915   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .DriverName
	W0918 21:04:09.068096   61659 fix.go:138] unexpected machine state, will restart: <nil>
	I0918 21:04:09.069985   61659 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-828868" ...
	I0918 21:04:09.045944   61273 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0918 21:04:09.045978   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetMachineName
	I0918 21:04:09.046314   61273 buildroot.go:166] provisioning hostname "no-preload-331658"
	I0918 21:04:09.046350   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetMachineName
	I0918 21:04:09.046602   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHHostname
	I0918 21:04:09.048253   61273 machine.go:96] duration metric: took 4m37.423609251s to provisionDockerMachine
	I0918 21:04:09.048293   61273 fix.go:56] duration metric: took 4m37.446130108s for fixHost
	I0918 21:04:09.048301   61273 start.go:83] releasing machines lock for "no-preload-331658", held for 4m37.44629145s
	W0918 21:04:09.048329   61273 start.go:714] error starting host: provision: host is not running
	W0918 21:04:09.048451   61273 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0918 21:04:09.048465   61273 start.go:729] Will try again in 5 seconds ...
	I0918 21:04:09.071488   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .Start
	I0918 21:04:09.071699   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Ensuring networks are active...
	I0918 21:04:09.072473   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Ensuring network default is active
	I0918 21:04:09.072816   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Ensuring network mk-default-k8s-diff-port-828868 is active
	I0918 21:04:09.073204   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Getting domain xml...
	I0918 21:04:09.073977   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Creating domain...
	I0918 21:04:10.321507   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Waiting to get IP...
	I0918 21:04:10.322390   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:10.322863   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | unable to find current IP address of domain default-k8s-diff-port-828868 in network mk-default-k8s-diff-port-828868
	I0918 21:04:10.322907   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | I0918 21:04:10.322821   62722 retry.go:31] will retry after 272.805092ms: waiting for machine to come up
	I0918 21:04:10.597434   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:10.597861   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | unable to find current IP address of domain default-k8s-diff-port-828868 in network mk-default-k8s-diff-port-828868
	I0918 21:04:10.597888   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | I0918 21:04:10.597825   62722 retry.go:31] will retry after 302.631333ms: waiting for machine to come up
	I0918 21:04:10.902544   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:10.903002   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | unable to find current IP address of domain default-k8s-diff-port-828868 in network mk-default-k8s-diff-port-828868
	I0918 21:04:10.903030   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | I0918 21:04:10.902943   62722 retry.go:31] will retry after 325.769954ms: waiting for machine to come up
	I0918 21:04:11.230182   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:11.230602   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | unable to find current IP address of domain default-k8s-diff-port-828868 in network mk-default-k8s-diff-port-828868
	I0918 21:04:11.230652   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | I0918 21:04:11.230557   62722 retry.go:31] will retry after 396.395153ms: waiting for machine to come up
	I0918 21:04:11.628135   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:11.628520   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | unable to find current IP address of domain default-k8s-diff-port-828868 in network mk-default-k8s-diff-port-828868
	I0918 21:04:11.628544   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | I0918 21:04:11.628495   62722 retry.go:31] will retry after 578.74167ms: waiting for machine to come up
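The retry.go lines above show libmachine polling libvirt's DHCP leases for the domain's IP address, sleeping a little longer after each miss. A minimal Go sketch of that poll-with-growing-backoff pattern follows; the function name, initial delay, and growth factor are illustrative assumptions, not minikube's actual retry.go implementation.

package main

import (
	"fmt"
	"time"
)

// waitForIP polls getIP until it returns an address or the deadline passes,
// increasing the wait between attempts, as the retry.go log lines above do.
func waitForIP(getIP func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 300 * time.Millisecond // illustrative first backoff
	for time.Now().Before(deadline) {
		if ip, err := getIP(); err == nil {
			return ip, nil
		}
		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
		time.Sleep(delay)
		delay = delay * 3 / 2 // grow the wait a little each round
	}
	return "", fmt.Errorf("timed out waiting for machine IP")
}

func main() {
	attempts := 0
	ip, err := waitForIP(func() (string, error) {
		attempts++
		if attempts < 4 {
			return "", fmt.Errorf("unable to find current IP address yet")
		}
		return "192.168.50.109", nil
	}, 2*time.Minute)
	fmt.Println(ip, err)
}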
	I0918 21:04:14.050009   61273 start.go:360] acquireMachinesLock for no-preload-331658: {Name:mk1b0d68a0b7d9d4bc8204d5d81cb9eb77222526 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 21:04:12.209844   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:12.209911   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | unable to find current IP address of domain default-k8s-diff-port-828868 in network mk-default-k8s-diff-port-828868
	I0918 21:04:12.209937   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | I0918 21:04:12.209841   62722 retry.go:31] will retry after 779.0434ms: waiting for machine to come up
	I0918 21:04:12.990688   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:12.991106   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | unable to find current IP address of domain default-k8s-diff-port-828868 in network mk-default-k8s-diff-port-828868
	I0918 21:04:12.991141   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | I0918 21:04:12.991045   62722 retry.go:31] will retry after 772.165771ms: waiting for machine to come up
	I0918 21:04:13.764946   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:13.765460   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | unable to find current IP address of domain default-k8s-diff-port-828868 in network mk-default-k8s-diff-port-828868
	I0918 21:04:13.765493   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | I0918 21:04:13.765404   62722 retry.go:31] will retry after 1.017078101s: waiting for machine to come up
	I0918 21:04:14.783920   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:14.784320   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | unable to find current IP address of domain default-k8s-diff-port-828868 in network mk-default-k8s-diff-port-828868
	I0918 21:04:14.784348   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | I0918 21:04:14.784276   62722 retry.go:31] will retry after 1.775982574s: waiting for machine to come up
	I0918 21:04:16.562037   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:16.562413   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | unable to find current IP address of domain default-k8s-diff-port-828868 in network mk-default-k8s-diff-port-828868
	I0918 21:04:16.562451   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | I0918 21:04:16.562369   62722 retry.go:31] will retry after 1.609664062s: waiting for machine to come up
	I0918 21:04:18.174149   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:18.174759   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | unable to find current IP address of domain default-k8s-diff-port-828868 in network mk-default-k8s-diff-port-828868
	I0918 21:04:18.174788   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | I0918 21:04:18.174710   62722 retry.go:31] will retry after 2.26359536s: waiting for machine to come up
	I0918 21:04:20.440599   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:20.441000   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | unable to find current IP address of domain default-k8s-diff-port-828868 in network mk-default-k8s-diff-port-828868
	I0918 21:04:20.441027   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | I0918 21:04:20.440955   62722 retry.go:31] will retry after 3.387446315s: waiting for machine to come up
	I0918 21:04:23.832623   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:23.833134   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | unable to find current IP address of domain default-k8s-diff-port-828868 in network mk-default-k8s-diff-port-828868
	I0918 21:04:23.833162   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | I0918 21:04:23.833097   62722 retry.go:31] will retry after 3.312983418s: waiting for machine to come up
	I0918 21:04:27.150091   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:27.150658   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Found IP for machine: 192.168.50.109
	I0918 21:04:27.150682   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has current primary IP address 192.168.50.109 and MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:27.150703   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Reserving static IP address...
	I0918 21:04:27.151248   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-828868", mac: "52:54:00:c0:39:06", ip: "192.168.50.109"} in network mk-default-k8s-diff-port-828868: {Iface:virbr4 ExpiryTime:2024-09-18 22:04:19 +0000 UTC Type:0 Mac:52:54:00:c0:39:06 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:default-k8s-diff-port-828868 Clientid:01:52:54:00:c0:39:06}
	I0918 21:04:27.151276   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Reserved static IP address: 192.168.50.109
	I0918 21:04:27.151297   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | skip adding static IP to network mk-default-k8s-diff-port-828868 - found existing host DHCP lease matching {name: "default-k8s-diff-port-828868", mac: "52:54:00:c0:39:06", ip: "192.168.50.109"}
	I0918 21:04:27.151317   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | Getting to WaitForSSH function...
	I0918 21:04:27.151330   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Waiting for SSH to be available...
	I0918 21:04:27.153633   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:27.154006   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:39:06", ip: ""} in network mk-default-k8s-diff-port-828868: {Iface:virbr4 ExpiryTime:2024-09-18 22:04:19 +0000 UTC Type:0 Mac:52:54:00:c0:39:06 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:default-k8s-diff-port-828868 Clientid:01:52:54:00:c0:39:06}
	I0918 21:04:27.154036   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined IP address 192.168.50.109 and MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:27.154127   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | Using SSH client type: external
	I0918 21:04:27.154153   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | Using SSH private key: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/default-k8s-diff-port-828868/id_rsa (-rw-------)
	I0918 21:04:27.154196   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.109 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19667-7671/.minikube/machines/default-k8s-diff-port-828868/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0918 21:04:27.154211   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | About to run SSH command:
	I0918 21:04:27.154225   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | exit 0
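The DBG lines above show the driver checking SSH availability by invoking the external ssh binary against the machine and running `exit 0` until the command succeeds. A hedged Go sketch of that probe, reusing the options and paths visible in the log; the helper name and the two-second poll interval are assumptions.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// sshReachable runs `ssh ... docker@<ip> exit 0` with options like those in
// the log above and reports whether the command exited successfully.
func sshReachable(ip, keyPath string) bool {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "PasswordAuthentication=no",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		"docker@" + ip,
		"exit 0",
	}
	return exec.Command("/usr/bin/ssh", args...).Run() == nil
}

func main() {
	ip := "192.168.50.109"
	key := "/home/jenkins/minikube-integration/19667-7671/.minikube/machines/default-k8s-diff-port-828868/id_rsa"
	for !sshReachable(ip, key) {
		fmt.Println("SSH not available yet, retrying...")
		time.Sleep(2 * time.Second) // illustrative poll interval
	}
	fmt.Println("SSH is available")
}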
	I0918 21:04:28.308967   61740 start.go:364] duration metric: took 4m9.856658805s to acquireMachinesLock for "embed-certs-255556"
	I0918 21:04:28.309052   61740 start.go:96] Skipping create...Using existing machine configuration
	I0918 21:04:28.309066   61740 fix.go:54] fixHost starting: 
	I0918 21:04:28.309548   61740 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:04:28.309609   61740 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:04:28.326972   61740 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41115
	I0918 21:04:28.327375   61740 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:04:28.327941   61740 main.go:141] libmachine: Using API Version  1
	I0918 21:04:28.327974   61740 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:04:28.328300   61740 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:04:28.328538   61740 main.go:141] libmachine: (embed-certs-255556) Calling .DriverName
	I0918 21:04:28.328676   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetState
	I0918 21:04:28.330265   61740 fix.go:112] recreateIfNeeded on embed-certs-255556: state=Stopped err=<nil>
	I0918 21:04:28.330312   61740 main.go:141] libmachine: (embed-certs-255556) Calling .DriverName
	W0918 21:04:28.330482   61740 fix.go:138] unexpected machine state, will restart: <nil>
	I0918 21:04:28.332680   61740 out.go:177] * Restarting existing kvm2 VM for "embed-certs-255556" ...
	I0918 21:04:28.333692   61740 main.go:141] libmachine: (embed-certs-255556) Calling .Start
	I0918 21:04:28.333865   61740 main.go:141] libmachine: (embed-certs-255556) Ensuring networks are active...
	I0918 21:04:28.334536   61740 main.go:141] libmachine: (embed-certs-255556) Ensuring network default is active
	I0918 21:04:28.334987   61740 main.go:141] libmachine: (embed-certs-255556) Ensuring network mk-embed-certs-255556 is active
	I0918 21:04:28.335491   61740 main.go:141] libmachine: (embed-certs-255556) Getting domain xml...
	I0918 21:04:28.336206   61740 main.go:141] libmachine: (embed-certs-255556) Creating domain...
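The embed-certs-255556 lines above walk through restarting a stopped libvirt domain: ensure the default and cluster networks are active, fetch the domain XML, and create the domain from it. Minikube does this through its libvirt bindings; a rough, hedged equivalent driven through the virsh CLI from Go is sketched below. The network and domain names come from the log; for brevity the sketch starts the already-defined domain rather than re-creating it from its dumped XML, which is what the "Getting domain xml... / Creating domain..." steps correspond to.

package main

import (
	"fmt"
	"os/exec"
)

// run executes a virsh command and prints its combined output.
func run(args ...string) error {
	out, err := exec.Command("virsh", args...).CombinedOutput()
	fmt.Printf("virsh %v: %s\n", args, out)
	return err
}

func main() {
	domain := "embed-certs-255556"
	network := "mk-embed-certs-255556"

	// Ensure the networks the domain attaches to are active.
	_ = run("net-start", "default") // may already be active
	_ = run("net-start", network)   // may already be active

	// Start the existing (defined but stopped) domain.
	if err := run("start", domain); err != nil {
		fmt.Println("failed to start domain:", err)
	}
}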
	I0918 21:04:27.280056   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | SSH cmd err, output: <nil>: 
	I0918 21:04:27.280448   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetConfigRaw
	I0918 21:04:27.281097   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetIP
	I0918 21:04:27.283491   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:27.283933   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:39:06", ip: ""} in network mk-default-k8s-diff-port-828868: {Iface:virbr4 ExpiryTime:2024-09-18 22:04:19 +0000 UTC Type:0 Mac:52:54:00:c0:39:06 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:default-k8s-diff-port-828868 Clientid:01:52:54:00:c0:39:06}
	I0918 21:04:27.283968   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined IP address 192.168.50.109 and MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:27.284242   61659 profile.go:143] Saving config to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/default-k8s-diff-port-828868/config.json ...
	I0918 21:04:27.284483   61659 machine.go:93] provisionDockerMachine start ...
	I0918 21:04:27.284527   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .DriverName
	I0918 21:04:27.284740   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHHostname
	I0918 21:04:27.287263   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:27.287640   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:39:06", ip: ""} in network mk-default-k8s-diff-port-828868: {Iface:virbr4 ExpiryTime:2024-09-18 22:04:19 +0000 UTC Type:0 Mac:52:54:00:c0:39:06 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:default-k8s-diff-port-828868 Clientid:01:52:54:00:c0:39:06}
	I0918 21:04:27.287671   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined IP address 192.168.50.109 and MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:27.287831   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHPort
	I0918 21:04:27.288053   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHKeyPath
	I0918 21:04:27.288230   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHKeyPath
	I0918 21:04:27.288348   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHUsername
	I0918 21:04:27.288497   61659 main.go:141] libmachine: Using SSH client type: native
	I0918 21:04:27.288759   61659 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.109 22 <nil> <nil>}
	I0918 21:04:27.288774   61659 main.go:141] libmachine: About to run SSH command:
	hostname
	I0918 21:04:27.396110   61659 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0918 21:04:27.396140   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetMachineName
	I0918 21:04:27.396439   61659 buildroot.go:166] provisioning hostname "default-k8s-diff-port-828868"
	I0918 21:04:27.396472   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetMachineName
	I0918 21:04:27.396655   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHHostname
	I0918 21:04:27.399285   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:27.399629   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:39:06", ip: ""} in network mk-default-k8s-diff-port-828868: {Iface:virbr4 ExpiryTime:2024-09-18 22:04:19 +0000 UTC Type:0 Mac:52:54:00:c0:39:06 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:default-k8s-diff-port-828868 Clientid:01:52:54:00:c0:39:06}
	I0918 21:04:27.399670   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined IP address 192.168.50.109 and MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:27.399746   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHPort
	I0918 21:04:27.399947   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHKeyPath
	I0918 21:04:27.400158   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHKeyPath
	I0918 21:04:27.400295   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHUsername
	I0918 21:04:27.400476   61659 main.go:141] libmachine: Using SSH client type: native
	I0918 21:04:27.400701   61659 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.109 22 <nil> <nil>}
	I0918 21:04:27.400714   61659 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-828868 && echo "default-k8s-diff-port-828868" | sudo tee /etc/hostname
	I0918 21:04:27.518553   61659 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-828868
	
	I0918 21:04:27.518579   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHHostname
	I0918 21:04:27.521274   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:27.521714   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:39:06", ip: ""} in network mk-default-k8s-diff-port-828868: {Iface:virbr4 ExpiryTime:2024-09-18 22:04:19 +0000 UTC Type:0 Mac:52:54:00:c0:39:06 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:default-k8s-diff-port-828868 Clientid:01:52:54:00:c0:39:06}
	I0918 21:04:27.521746   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined IP address 192.168.50.109 and MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:27.521918   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHPort
	I0918 21:04:27.522085   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHKeyPath
	I0918 21:04:27.522298   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHKeyPath
	I0918 21:04:27.522469   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHUsername
	I0918 21:04:27.522689   61659 main.go:141] libmachine: Using SSH client type: native
	I0918 21:04:27.522867   61659 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.109 22 <nil> <nil>}
	I0918 21:04:27.522885   61659 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-828868' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-828868/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-828868' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0918 21:04:27.636264   61659 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0918 21:04:27.636296   61659 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19667-7671/.minikube CaCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19667-7671/.minikube}
	I0918 21:04:27.636325   61659 buildroot.go:174] setting up certificates
	I0918 21:04:27.636335   61659 provision.go:84] configureAuth start
	I0918 21:04:27.636343   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetMachineName
	I0918 21:04:27.636629   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetIP
	I0918 21:04:27.639186   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:27.639622   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:39:06", ip: ""} in network mk-default-k8s-diff-port-828868: {Iface:virbr4 ExpiryTime:2024-09-18 22:04:19 +0000 UTC Type:0 Mac:52:54:00:c0:39:06 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:default-k8s-diff-port-828868 Clientid:01:52:54:00:c0:39:06}
	I0918 21:04:27.639646   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined IP address 192.168.50.109 and MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:27.639858   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHHostname
	I0918 21:04:27.642158   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:27.642421   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:39:06", ip: ""} in network mk-default-k8s-diff-port-828868: {Iface:virbr4 ExpiryTime:2024-09-18 22:04:19 +0000 UTC Type:0 Mac:52:54:00:c0:39:06 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:default-k8s-diff-port-828868 Clientid:01:52:54:00:c0:39:06}
	I0918 21:04:27.642448   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined IP address 192.168.50.109 and MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:27.642626   61659 provision.go:143] copyHostCerts
	I0918 21:04:27.642706   61659 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem, removing ...
	I0918 21:04:27.642869   61659 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem
	I0918 21:04:27.642966   61659 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem (1082 bytes)
	I0918 21:04:27.643099   61659 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem, removing ...
	I0918 21:04:27.643111   61659 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem
	I0918 21:04:27.643150   61659 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem (1123 bytes)
	I0918 21:04:27.643270   61659 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem, removing ...
	I0918 21:04:27.643280   61659 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem
	I0918 21:04:27.643320   61659 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem (1679 bytes)
	I0918 21:04:27.643387   61659 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-828868 san=[127.0.0.1 192.168.50.109 default-k8s-diff-port-828868 localhost minikube]
	I0918 21:04:27.693367   61659 provision.go:177] copyRemoteCerts
	I0918 21:04:27.693426   61659 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0918 21:04:27.693463   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHHostname
	I0918 21:04:27.696331   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:27.696661   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:39:06", ip: ""} in network mk-default-k8s-diff-port-828868: {Iface:virbr4 ExpiryTime:2024-09-18 22:04:19 +0000 UTC Type:0 Mac:52:54:00:c0:39:06 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:default-k8s-diff-port-828868 Clientid:01:52:54:00:c0:39:06}
	I0918 21:04:27.696693   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined IP address 192.168.50.109 and MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:27.696835   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHPort
	I0918 21:04:27.697028   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHKeyPath
	I0918 21:04:27.697212   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHUsername
	I0918 21:04:27.697317   61659 sshutil.go:53] new ssh client: &{IP:192.168.50.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/default-k8s-diff-port-828868/id_rsa Username:docker}
	I0918 21:04:27.777944   61659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0918 21:04:27.801476   61659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0918 21:04:27.825025   61659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0918 21:04:27.848244   61659 provision.go:87] duration metric: took 211.897185ms to configureAuth
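configureAuth above copies the host CA material and generates a fresh server certificate whose SANs cover the machine IP and hostnames shown in the san=[...] list. A hedged sketch of what that amounts to using Go's crypto/x509; the certificate is self-signed here for brevity, whereas minikube signs it with the CA key referenced in the log.

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Illustrative key; minikube uses the CA and server keys from its store.
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-828868"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config above
		KeyUsage:     x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs matching the san=[...] list in the provision.go line above.
		DNSNames:    []string{"default-k8s-diff-port-828868", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.109")},
	}
	// Self-signed here; minikube passes its CA cert and key as the parent instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Println("server cert bytes:", len(der))
}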
	I0918 21:04:27.848274   61659 buildroot.go:189] setting minikube options for container-runtime
	I0918 21:04:27.848434   61659 config.go:182] Loaded profile config "default-k8s-diff-port-828868": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0918 21:04:27.848513   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHHostname
	I0918 21:04:27.851119   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:27.851472   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:39:06", ip: ""} in network mk-default-k8s-diff-port-828868: {Iface:virbr4 ExpiryTime:2024-09-18 22:04:19 +0000 UTC Type:0 Mac:52:54:00:c0:39:06 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:default-k8s-diff-port-828868 Clientid:01:52:54:00:c0:39:06}
	I0918 21:04:27.851509   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined IP address 192.168.50.109 and MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:27.851788   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHPort
	I0918 21:04:27.852007   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHKeyPath
	I0918 21:04:27.852216   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHKeyPath
	I0918 21:04:27.852420   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHUsername
	I0918 21:04:27.852670   61659 main.go:141] libmachine: Using SSH client type: native
	I0918 21:04:27.852852   61659 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.109 22 <nil> <nil>}
	I0918 21:04:27.852870   61659 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0918 21:04:28.072808   61659 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0918 21:04:28.072843   61659 machine.go:96] duration metric: took 788.346091ms to provisionDockerMachine
	I0918 21:04:28.072858   61659 start.go:293] postStartSetup for "default-k8s-diff-port-828868" (driver="kvm2")
	I0918 21:04:28.072874   61659 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0918 21:04:28.072898   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .DriverName
	I0918 21:04:28.073246   61659 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0918 21:04:28.073287   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHHostname
	I0918 21:04:28.075998   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:28.076389   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:39:06", ip: ""} in network mk-default-k8s-diff-port-828868: {Iface:virbr4 ExpiryTime:2024-09-18 22:04:19 +0000 UTC Type:0 Mac:52:54:00:c0:39:06 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:default-k8s-diff-port-828868 Clientid:01:52:54:00:c0:39:06}
	I0918 21:04:28.076416   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined IP address 192.168.50.109 and MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:28.076561   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHPort
	I0918 21:04:28.076780   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHKeyPath
	I0918 21:04:28.076939   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHUsername
	I0918 21:04:28.077063   61659 sshutil.go:53] new ssh client: &{IP:192.168.50.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/default-k8s-diff-port-828868/id_rsa Username:docker}
	I0918 21:04:28.158946   61659 ssh_runner.go:195] Run: cat /etc/os-release
	I0918 21:04:28.163200   61659 info.go:137] Remote host: Buildroot 2023.02.9
	I0918 21:04:28.163231   61659 filesync.go:126] Scanning /home/jenkins/minikube-integration/19667-7671/.minikube/addons for local assets ...
	I0918 21:04:28.163290   61659 filesync.go:126] Scanning /home/jenkins/minikube-integration/19667-7671/.minikube/files for local assets ...
	I0918 21:04:28.163368   61659 filesync.go:149] local asset: /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem -> 148782.pem in /etc/ssl/certs
	I0918 21:04:28.163464   61659 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0918 21:04:28.172987   61659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem --> /etc/ssl/certs/148782.pem (1708 bytes)
	I0918 21:04:28.198647   61659 start.go:296] duration metric: took 125.77566ms for postStartSetup
	I0918 21:04:28.198685   61659 fix.go:56] duration metric: took 19.150217303s for fixHost
	I0918 21:04:28.198704   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHHostname
	I0918 21:04:28.201549   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:28.201904   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:39:06", ip: ""} in network mk-default-k8s-diff-port-828868: {Iface:virbr4 ExpiryTime:2024-09-18 22:04:19 +0000 UTC Type:0 Mac:52:54:00:c0:39:06 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:default-k8s-diff-port-828868 Clientid:01:52:54:00:c0:39:06}
	I0918 21:04:28.201934   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined IP address 192.168.50.109 and MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:28.202093   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHPort
	I0918 21:04:28.202278   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHKeyPath
	I0918 21:04:28.202435   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHKeyPath
	I0918 21:04:28.202588   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHUsername
	I0918 21:04:28.202714   61659 main.go:141] libmachine: Using SSH client type: native
	I0918 21:04:28.202871   61659 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.109 22 <nil> <nil>}
	I0918 21:04:28.202879   61659 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0918 21:04:28.308752   61659 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726693468.285343658
	
	I0918 21:04:28.308778   61659 fix.go:216] guest clock: 1726693468.285343658
	I0918 21:04:28.308786   61659 fix.go:229] Guest: 2024-09-18 21:04:28.285343658 +0000 UTC Remote: 2024-09-18 21:04:28.198688962 +0000 UTC m=+256.035220061 (delta=86.654696ms)
	I0918 21:04:28.308821   61659 fix.go:200] guest clock delta is within tolerance: 86.654696ms
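fix.go above reads `date +%s.%N` from the guest and compares it with the host-side reference time, accepting the machine when the delta is within tolerance. A tiny Go sketch of that check using the Guest and Remote timestamps from the log lines above; the 2-second tolerance is an assumption for illustration.

package main

import (
	"fmt"
	"time"
)

// clockDeltaOK converts the guest's `date +%s.%N` output into a time and
// checks that it is within tolerance of the host-side reference time.
func clockDeltaOK(guestUnixSeconds float64, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	guest := time.Unix(0, int64(guestUnixSeconds*float64(time.Second)))
	delta := host.Sub(guest)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	// Remote (host-side) timestamp from the log above.
	host := time.Date(2024, 9, 18, 21, 4, 28, 198688962, time.UTC)
	// Guest timestamp from the `date +%s.%N` output above; 2s tolerance is illustrative.
	delta, ok := clockDeltaOK(1726693468.285343658, host, 2*time.Second)
	fmt.Println(delta, ok)
}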
	I0918 21:04:28.308829   61659 start.go:83] releasing machines lock for "default-k8s-diff-port-828868", held for 19.260404228s
	I0918 21:04:28.308857   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .DriverName
	I0918 21:04:28.309175   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetIP
	I0918 21:04:28.312346   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:28.312725   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:39:06", ip: ""} in network mk-default-k8s-diff-port-828868: {Iface:virbr4 ExpiryTime:2024-09-18 22:04:19 +0000 UTC Type:0 Mac:52:54:00:c0:39:06 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:default-k8s-diff-port-828868 Clientid:01:52:54:00:c0:39:06}
	I0918 21:04:28.312753   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined IP address 192.168.50.109 and MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:28.312951   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .DriverName
	I0918 21:04:28.313506   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .DriverName
	I0918 21:04:28.313702   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .DriverName
	I0918 21:04:28.313792   61659 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0918 21:04:28.313849   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHHostname
	I0918 21:04:28.313966   61659 ssh_runner.go:195] Run: cat /version.json
	I0918 21:04:28.314001   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHHostname
	I0918 21:04:28.316698   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:28.316882   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:28.317016   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:39:06", ip: ""} in network mk-default-k8s-diff-port-828868: {Iface:virbr4 ExpiryTime:2024-09-18 22:04:19 +0000 UTC Type:0 Mac:52:54:00:c0:39:06 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:default-k8s-diff-port-828868 Clientid:01:52:54:00:c0:39:06}
	I0918 21:04:28.317038   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined IP address 192.168.50.109 and MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:28.317239   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHPort
	I0918 21:04:28.317357   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:39:06", ip: ""} in network mk-default-k8s-diff-port-828868: {Iface:virbr4 ExpiryTime:2024-09-18 22:04:19 +0000 UTC Type:0 Mac:52:54:00:c0:39:06 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:default-k8s-diff-port-828868 Clientid:01:52:54:00:c0:39:06}
	I0918 21:04:28.317408   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined IP address 192.168.50.109 and MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:28.317410   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHKeyPath
	I0918 21:04:28.317596   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHPort
	I0918 21:04:28.317598   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHUsername
	I0918 21:04:28.317743   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHKeyPath
	I0918 21:04:28.317783   61659 sshutil.go:53] new ssh client: &{IP:192.168.50.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/default-k8s-diff-port-828868/id_rsa Username:docker}
	I0918 21:04:28.317905   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHUsername
	I0918 21:04:28.318060   61659 sshutil.go:53] new ssh client: &{IP:192.168.50.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/default-k8s-diff-port-828868/id_rsa Username:docker}
	I0918 21:04:28.439960   61659 ssh_runner.go:195] Run: systemctl --version
	I0918 21:04:28.446111   61659 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0918 21:04:28.593574   61659 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0918 21:04:28.599542   61659 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0918 21:04:28.599623   61659 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0918 21:04:28.615775   61659 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0918 21:04:28.615802   61659 start.go:495] detecting cgroup driver to use...
	I0918 21:04:28.615965   61659 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0918 21:04:28.636924   61659 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0918 21:04:28.655681   61659 docker.go:217] disabling cri-docker service (if available) ...
	I0918 21:04:28.655775   61659 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0918 21:04:28.670090   61659 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0918 21:04:28.684780   61659 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0918 21:04:28.807355   61659 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0918 21:04:28.941753   61659 docker.go:233] disabling docker service ...
	I0918 21:04:28.941836   61659 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0918 21:04:28.956786   61659 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0918 21:04:28.970301   61659 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0918 21:04:29.119605   61659 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0918 21:04:29.245330   61659 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0918 21:04:29.259626   61659 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0918 21:04:29.278104   61659 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0918 21:04:29.278162   61659 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:04:29.288761   61659 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0918 21:04:29.288837   61659 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:04:29.299631   61659 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:04:29.310244   61659 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:04:29.321220   61659 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0918 21:04:29.332722   61659 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:04:29.343590   61659 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:04:29.366099   61659 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:04:29.381180   61659 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0918 21:04:29.394427   61659 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0918 21:04:29.394494   61659 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0918 21:04:29.410069   61659 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0918 21:04:29.421207   61659 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 21:04:29.543870   61659 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0918 21:04:29.642149   61659 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0918 21:04:29.642205   61659 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0918 21:04:29.647336   61659 start.go:563] Will wait 60s for crictl version
	I0918 21:04:29.647400   61659 ssh_runner.go:195] Run: which crictl
	I0918 21:04:29.651148   61659 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0918 21:04:29.690903   61659 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0918 21:04:29.690992   61659 ssh_runner.go:195] Run: crio --version
	I0918 21:04:29.717176   61659 ssh_runner.go:195] Run: crio --version
	I0918 21:04:29.747416   61659 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0918 21:04:29.748825   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetIP
	I0918 21:04:29.751828   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:29.752238   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:39:06", ip: ""} in network mk-default-k8s-diff-port-828868: {Iface:virbr4 ExpiryTime:2024-09-18 22:04:19 +0000 UTC Type:0 Mac:52:54:00:c0:39:06 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:default-k8s-diff-port-828868 Clientid:01:52:54:00:c0:39:06}
	I0918 21:04:29.752288   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined IP address 192.168.50.109 and MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:29.752533   61659 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0918 21:04:29.756672   61659 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0918 21:04:29.768691   61659 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-828868 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-828868 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.109 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0918 21:04:29.768822   61659 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0918 21:04:29.768867   61659 ssh_runner.go:195] Run: sudo crictl images --output json
	I0918 21:04:29.803885   61659 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0918 21:04:29.803964   61659 ssh_runner.go:195] Run: which lz4
	I0918 21:04:29.808051   61659 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0918 21:04:29.812324   61659 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0918 21:04:29.812363   61659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0918 21:04:31.172721   61659 crio.go:462] duration metric: took 1.364736071s to copy over tarball
	I0918 21:04:31.172837   61659 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
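The lines above decide whether the CRI-O image store already holds the preloaded images by parsing `sudo crictl images --output json`, and fall back to copying and extracting the preload tarball when a required image such as registry.k8s.io/kube-apiserver:v1.31.1 is missing. A hedged Go sketch of that presence check; the struct field names are assumed from crictl's JSON output rather than taken from minikube's code.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

// crictlImages mirrors the assumed shape of `crictl images --output json`.
type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// hasImage reports whether any listed image tag contains the given reference,
// which is roughly how the log above decides images are (not) preloaded.
func hasImage(out []byte, ref string) (bool, error) {
	var imgs crictlImages
	if err := json.Unmarshal(out, &imgs); err != nil {
		return false, err
	}
	for _, img := range imgs.Images {
		for _, tag := range img.RepoTags {
			if strings.Contains(tag, ref) {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		fmt.Println("crictl not available:", err)
		return
	}
	ok, err := hasImage(out, "registry.k8s.io/kube-apiserver:v1.31.1")
	fmt.Println("preloaded:", ok, err)
}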
	I0918 21:04:29.637411   61740 main.go:141] libmachine: (embed-certs-255556) Waiting to get IP...
	I0918 21:04:29.638427   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:29.638877   61740 main.go:141] libmachine: (embed-certs-255556) DBG | unable to find current IP address of domain embed-certs-255556 in network mk-embed-certs-255556
	I0918 21:04:29.638973   61740 main.go:141] libmachine: (embed-certs-255556) DBG | I0918 21:04:29.638868   62857 retry.go:31] will retry after 298.087525ms: waiting for machine to come up
	I0918 21:04:29.938543   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:29.938923   61740 main.go:141] libmachine: (embed-certs-255556) DBG | unable to find current IP address of domain embed-certs-255556 in network mk-embed-certs-255556
	I0918 21:04:29.938946   61740 main.go:141] libmachine: (embed-certs-255556) DBG | I0918 21:04:29.938889   62857 retry.go:31] will retry after 362.887862ms: waiting for machine to come up
	I0918 21:04:30.303379   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:30.303867   61740 main.go:141] libmachine: (embed-certs-255556) DBG | unable to find current IP address of domain embed-certs-255556 in network mk-embed-certs-255556
	I0918 21:04:30.303898   61740 main.go:141] libmachine: (embed-certs-255556) DBG | I0918 21:04:30.303820   62857 retry.go:31] will retry after 452.771021ms: waiting for machine to come up
	I0918 21:04:30.758353   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:30.758897   61740 main.go:141] libmachine: (embed-certs-255556) DBG | unable to find current IP address of domain embed-certs-255556 in network mk-embed-certs-255556
	I0918 21:04:30.758928   61740 main.go:141] libmachine: (embed-certs-255556) DBG | I0918 21:04:30.758856   62857 retry.go:31] will retry after 506.010985ms: waiting for machine to come up
	I0918 21:04:31.266443   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:31.266934   61740 main.go:141] libmachine: (embed-certs-255556) DBG | unable to find current IP address of domain embed-certs-255556 in network mk-embed-certs-255556
	I0918 21:04:31.266964   61740 main.go:141] libmachine: (embed-certs-255556) DBG | I0918 21:04:31.266893   62857 retry.go:31] will retry after 584.679329ms: waiting for machine to come up
	I0918 21:04:31.853811   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:31.854371   61740 main.go:141] libmachine: (embed-certs-255556) DBG | unable to find current IP address of domain embed-certs-255556 in network mk-embed-certs-255556
	I0918 21:04:31.854402   61740 main.go:141] libmachine: (embed-certs-255556) DBG | I0918 21:04:31.854309   62857 retry.go:31] will retry after 786.010743ms: waiting for machine to come up
	I0918 21:04:32.642494   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:32.643068   61740 main.go:141] libmachine: (embed-certs-255556) DBG | unable to find current IP address of domain embed-certs-255556 in network mk-embed-certs-255556
	I0918 21:04:32.643100   61740 main.go:141] libmachine: (embed-certs-255556) DBG | I0918 21:04:32.643013   62857 retry.go:31] will retry after 1.010762944s: waiting for machine to come up
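The retry lines above show libmachine polling the KVM domain for a DHCP lease, lengthening the delay between attempts while the guest has no IP yet. A minimal Go sketch of that wait-with-growing-backoff pattern (lookupIP and the growth factor are illustrative, not minikube's actual code):

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// lookupIP stands in for querying the hypervisor's DHCP leases for the
	// machine's current IP address; it fails until the guest is up.
	func lookupIP() (string, error) {
		return "", errors.New("unable to find current IP address")
	}

	// waitForIP retries lookupIP with a growing delay until the machine
	// comes up or the overall deadline expires.
	func waitForIP(timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		delay := 300 * time.Millisecond
		for time.Now().Before(deadline) {
			if ip, err := lookupIP(); err == nil {
				return ip, nil
			}
			fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
			time.Sleep(delay)
			delay = delay * 3 / 2 // lengthen the wait between attempts
		}
		return "", errors.New("timed out waiting for machine to come up")
	}

	func main() {
		if ip, err := waitForIP(2 * time.Second); err != nil {
			fmt.Println(err)
		} else {
			fmt.Println("got IP:", ip)
		}
	}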
	I0918 21:04:33.299563   61659 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.126697598s)
	I0918 21:04:33.299596   61659 crio.go:469] duration metric: took 2.126840983s to extract the tarball
	I0918 21:04:33.299602   61659 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0918 21:04:33.336428   61659 ssh_runner.go:195] Run: sudo crictl images --output json
	I0918 21:04:33.377303   61659 crio.go:514] all images are preloaded for cri-o runtime.
	I0918 21:04:33.377342   61659 cache_images.go:84] Images are preloaded, skipping loading
	I0918 21:04:33.377352   61659 kubeadm.go:934] updating node { 192.168.50.109 8444 v1.31.1 crio true true} ...
	I0918 21:04:33.377490   61659 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-828868 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.109
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-828868 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0918 21:04:33.377574   61659 ssh_runner.go:195] Run: crio config
	I0918 21:04:33.423773   61659 cni.go:84] Creating CNI manager for ""
	I0918 21:04:33.423800   61659 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0918 21:04:33.423816   61659 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0918 21:04:33.423835   61659 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.109 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-828868 NodeName:default-k8s-diff-port-828868 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.109"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.109 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0918 21:04:33.423976   61659 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.109
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-828868"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.109
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.109"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0918 21:04:33.424058   61659 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0918 21:04:33.434047   61659 binaries.go:44] Found k8s binaries, skipping transfer
	I0918 21:04:33.434119   61659 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0918 21:04:33.443535   61659 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0918 21:04:33.460116   61659 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0918 21:04:33.475883   61659 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0918 21:04:33.492311   61659 ssh_runner.go:195] Run: grep 192.168.50.109	control-plane.minikube.internal$ /etc/hosts
	I0918 21:04:33.495940   61659 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.109	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0918 21:04:33.507411   61659 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 21:04:33.625104   61659 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0918 21:04:33.641530   61659 certs.go:68] Setting up /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/default-k8s-diff-port-828868 for IP: 192.168.50.109
	I0918 21:04:33.641556   61659 certs.go:194] generating shared ca certs ...
	I0918 21:04:33.641572   61659 certs.go:226] acquiring lock for ca certs: {Name:mkf54e0e71ed884c4372f9d3d4cb308ea4600185 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 21:04:33.641757   61659 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.key
	I0918 21:04:33.641804   61659 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.key
	I0918 21:04:33.641822   61659 certs.go:256] generating profile certs ...
	I0918 21:04:33.641944   61659 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/default-k8s-diff-port-828868/client.key
	I0918 21:04:33.642036   61659 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/default-k8s-diff-port-828868/apiserver.key.df92be3a
	I0918 21:04:33.642087   61659 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/default-k8s-diff-port-828868/proxy-client.key
	I0918 21:04:33.642255   61659 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878.pem (1338 bytes)
	W0918 21:04:33.642297   61659 certs.go:480] ignoring /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878_empty.pem, impossibly tiny 0 bytes
	I0918 21:04:33.642306   61659 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem (1675 bytes)
	I0918 21:04:33.642337   61659 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem (1082 bytes)
	I0918 21:04:33.642370   61659 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem (1123 bytes)
	I0918 21:04:33.642404   61659 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem (1679 bytes)
	I0918 21:04:33.642454   61659 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem (1708 bytes)
	I0918 21:04:33.643116   61659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0918 21:04:33.682428   61659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0918 21:04:33.710444   61659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0918 21:04:33.759078   61659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0918 21:04:33.797727   61659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/default-k8s-diff-port-828868/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0918 21:04:33.821989   61659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/default-k8s-diff-port-828868/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0918 21:04:33.844210   61659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/default-k8s-diff-port-828868/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0918 21:04:33.866843   61659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/default-k8s-diff-port-828868/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0918 21:04:33.896125   61659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem --> /usr/share/ca-certificates/148782.pem (1708 bytes)
	I0918 21:04:33.918667   61659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0918 21:04:33.940790   61659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878.pem --> /usr/share/ca-certificates/14878.pem (1338 bytes)
	I0918 21:04:33.963660   61659 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0918 21:04:33.980348   61659 ssh_runner.go:195] Run: openssl version
	I0918 21:04:33.985856   61659 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148782.pem && ln -fs /usr/share/ca-certificates/148782.pem /etc/ssl/certs/148782.pem"
	I0918 21:04:33.996472   61659 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148782.pem
	I0918 21:04:34.000732   61659 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 18 19:57 /usr/share/ca-certificates/148782.pem
	I0918 21:04:34.000788   61659 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148782.pem
	I0918 21:04:34.006282   61659 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148782.pem /etc/ssl/certs/3ec20f2e.0"
	I0918 21:04:34.016612   61659 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0918 21:04:34.026689   61659 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0918 21:04:34.030650   61659 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 18 19:39 /usr/share/ca-certificates/minikubeCA.pem
	I0918 21:04:34.030705   61659 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0918 21:04:34.035940   61659 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0918 21:04:34.046516   61659 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14878.pem && ln -fs /usr/share/ca-certificates/14878.pem /etc/ssl/certs/14878.pem"
	I0918 21:04:34.056755   61659 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14878.pem
	I0918 21:04:34.061189   61659 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 18 19:57 /usr/share/ca-certificates/14878.pem
	I0918 21:04:34.061264   61659 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14878.pem
	I0918 21:04:34.066973   61659 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14878.pem /etc/ssl/certs/51391683.0"
	I0918 21:04:34.078781   61659 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0918 21:04:34.083129   61659 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0918 21:04:34.089249   61659 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0918 21:04:34.095211   61659 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0918 21:04:34.101350   61659 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0918 21:04:34.107269   61659 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0918 21:04:34.113177   61659 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
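Each openssl x509 -checkend 86400 call above verifies that a certificate remains valid for at least another 24 hours before the existing configuration is reused. An equivalent check in Go with crypto/x509 (the certificate path below is just one example taken from the log):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// validFor reports whether the PEM-encoded certificate at path is still
	// valid for at least the given duration (openssl's -checkend semantics).
	func validFor(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM data in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).Before(cert.NotAfter), nil
	}

	func main() {
		ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Println("error:", err)
			return
		}
		fmt.Println("valid for another 24h:", ok)
	}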
	I0918 21:04:34.119005   61659 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-828868 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-828868 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.109 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 21:04:34.119093   61659 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0918 21:04:34.119147   61659 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0918 21:04:34.162792   61659 cri.go:89] found id: ""
	I0918 21:04:34.162895   61659 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0918 21:04:34.174325   61659 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0918 21:04:34.174358   61659 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0918 21:04:34.174420   61659 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0918 21:04:34.183708   61659 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0918 21:04:34.184680   61659 kubeconfig.go:125] found "default-k8s-diff-port-828868" server: "https://192.168.50.109:8444"
	I0918 21:04:34.186781   61659 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0918 21:04:34.195823   61659 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.109
	I0918 21:04:34.195856   61659 kubeadm.go:1160] stopping kube-system containers ...
	I0918 21:04:34.195866   61659 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0918 21:04:34.195907   61659 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0918 21:04:34.235799   61659 cri.go:89] found id: ""
	I0918 21:04:34.235882   61659 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0918 21:04:34.251412   61659 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0918 21:04:34.261361   61659 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0918 21:04:34.261390   61659 kubeadm.go:157] found existing configuration files:
	
	I0918 21:04:34.261435   61659 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0918 21:04:34.272201   61659 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0918 21:04:34.272272   61659 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0918 21:04:34.283030   61659 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0918 21:04:34.293227   61659 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0918 21:04:34.293321   61659 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0918 21:04:34.303749   61659 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0918 21:04:34.314027   61659 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0918 21:04:34.314116   61659 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0918 21:04:34.324585   61659 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0918 21:04:34.334524   61659 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0918 21:04:34.334594   61659 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0918 21:04:34.344923   61659 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0918 21:04:34.355422   61659 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:04:34.480395   61659 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:04:35.320827   61659 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:04:35.542013   61659 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:04:35.610886   61659 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:04:35.694501   61659 api_server.go:52] waiting for apiserver process to appear ...
	I0918 21:04:35.694610   61659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:04:36.195441   61659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:04:36.694978   61659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:04:37.195220   61659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:04:33.655864   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:33.656375   61740 main.go:141] libmachine: (embed-certs-255556) DBG | unable to find current IP address of domain embed-certs-255556 in network mk-embed-certs-255556
	I0918 21:04:33.656407   61740 main.go:141] libmachine: (embed-certs-255556) DBG | I0918 21:04:33.656347   62857 retry.go:31] will retry after 1.375317123s: waiting for machine to come up
	I0918 21:04:35.033882   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:35.034266   61740 main.go:141] libmachine: (embed-certs-255556) DBG | unable to find current IP address of domain embed-certs-255556 in network mk-embed-certs-255556
	I0918 21:04:35.034293   61740 main.go:141] libmachine: (embed-certs-255556) DBG | I0918 21:04:35.034232   62857 retry.go:31] will retry after 1.142237895s: waiting for machine to come up
	I0918 21:04:36.178371   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:36.178837   61740 main.go:141] libmachine: (embed-certs-255556) DBG | unable to find current IP address of domain embed-certs-255556 in network mk-embed-certs-255556
	I0918 21:04:36.178865   61740 main.go:141] libmachine: (embed-certs-255556) DBG | I0918 21:04:36.178804   62857 retry.go:31] will retry after 1.983853904s: waiting for machine to come up
	I0918 21:04:38.165113   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:38.165662   61740 main.go:141] libmachine: (embed-certs-255556) DBG | unable to find current IP address of domain embed-certs-255556 in network mk-embed-certs-255556
	I0918 21:04:38.165697   61740 main.go:141] libmachine: (embed-certs-255556) DBG | I0918 21:04:38.165601   62857 retry.go:31] will retry after 2.407286782s: waiting for machine to come up
	I0918 21:04:37.694916   61659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:04:37.713724   61659 api_server.go:72] duration metric: took 2.019221095s to wait for apiserver process to appear ...
	I0918 21:04:37.713756   61659 api_server.go:88] waiting for apiserver healthz status ...
	I0918 21:04:37.713782   61659 api_server.go:253] Checking apiserver healthz at https://192.168.50.109:8444/healthz ...
	I0918 21:04:37.714297   61659 api_server.go:269] stopped: https://192.168.50.109:8444/healthz: Get "https://192.168.50.109:8444/healthz": dial tcp 192.168.50.109:8444: connect: connection refused
	I0918 21:04:38.213883   61659 api_server.go:253] Checking apiserver healthz at https://192.168.50.109:8444/healthz ...
	I0918 21:04:40.396513   61659 api_server.go:279] https://192.168.50.109:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0918 21:04:40.396564   61659 api_server.go:103] status: https://192.168.50.109:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0918 21:04:40.396584   61659 api_server.go:253] Checking apiserver healthz at https://192.168.50.109:8444/healthz ...
	I0918 21:04:40.409718   61659 api_server.go:279] https://192.168.50.109:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0918 21:04:40.409750   61659 api_server.go:103] status: https://192.168.50.109:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0918 21:04:40.714176   61659 api_server.go:253] Checking apiserver healthz at https://192.168.50.109:8444/healthz ...
	I0918 21:04:40.719353   61659 api_server.go:279] https://192.168.50.109:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0918 21:04:40.719391   61659 api_server.go:103] status: https://192.168.50.109:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0918 21:04:41.214596   61659 api_server.go:253] Checking apiserver healthz at https://192.168.50.109:8444/healthz ...
	I0918 21:04:41.219579   61659 api_server.go:279] https://192.168.50.109:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0918 21:04:41.219608   61659 api_server.go:103] status: https://192.168.50.109:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0918 21:04:41.713951   61659 api_server.go:253] Checking apiserver healthz at https://192.168.50.109:8444/healthz ...
	I0918 21:04:41.719212   61659 api_server.go:279] https://192.168.50.109:8444/healthz returned 200:
	ok
	I0918 21:04:41.726647   61659 api_server.go:141] control plane version: v1.31.1
	I0918 21:04:41.726679   61659 api_server.go:131] duration metric: took 4.012914861s to wait for apiserver health ...
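The healthz sequence above polls the apiserver at https://192.168.50.109:8444/healthz, treating connection refused, 403 (anonymous access before the RBAC bootstrap roles exist), and 500 (failed poststarthooks) as "not ready yet" and stopping once it returns 200. A minimal sketch of that polling loop, assuming TLS verification is skipped because the illustrative client does not load the cluster CA:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitHealthz polls url until it returns HTTP 200 or the timeout expires.
	// Any other status (403 before RBAC bootstrap, 500 while poststarthooks
	// are still running) and transport errors count as "not ready yet".
	func waitHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// The apiserver cert is signed by the cluster CA, which this
				// sketch does not load, so verification is skipped here.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
			} else {
				fmt.Println("healthz not reachable:", err)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out waiting for %s", url)
	}

	func main() {
		if err := waitHealthz("https://192.168.50.109:8444/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}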
	I0918 21:04:41.726689   61659 cni.go:84] Creating CNI manager for ""
	I0918 21:04:41.726707   61659 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0918 21:04:41.728312   61659 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0918 21:04:41.729613   61659 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0918 21:04:41.741932   61659 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0918 21:04:41.763195   61659 system_pods.go:43] waiting for kube-system pods to appear ...
	I0918 21:04:41.775167   61659 system_pods.go:59] 8 kube-system pods found
	I0918 21:04:41.775210   61659 system_pods.go:61] "coredns-7c65d6cfc9-xzjd7" [bd8252df-707c-41e6-84b7-cc74480177a8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0918 21:04:41.775219   61659 system_pods.go:61] "etcd-default-k8s-diff-port-828868" [aa8e221d-abba-48a5-8814-246df0776408] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0918 21:04:41.775227   61659 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-828868" [b44966ac-3478-40c4-b67f-1824bff2bec7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0918 21:04:41.775233   61659 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-828868" [7af8fbad-3aa2-497e-90df-33facaee6b17] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0918 21:04:41.775239   61659 system_pods.go:61] "kube-proxy-jz7ls" [f931ae9a-0b9c-4754-8b7b-d52c267b018c] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0918 21:04:41.775247   61659 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-828868" [ee89c713-c689-4de3-b1a5-4e08470ff6fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0918 21:04:41.775252   61659 system_pods.go:61] "metrics-server-6867b74b74-cqp47" [1ccf8c85-183a-4bea-abbc-eb7bcedca7f0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0918 21:04:41.775257   61659 system_pods.go:61] "storage-provisioner" [9744cbfa-6b9a-42f0-aa80-0821b87a33d4] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0918 21:04:41.775270   61659 system_pods.go:74] duration metric: took 12.058758ms to wait for pod list to return data ...
	I0918 21:04:41.775280   61659 node_conditions.go:102] verifying NodePressure condition ...
	I0918 21:04:41.779525   61659 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0918 21:04:41.779559   61659 node_conditions.go:123] node cpu capacity is 2
	I0918 21:04:41.779580   61659 node_conditions.go:105] duration metric: took 4.292138ms to run NodePressure ...
	I0918 21:04:41.779615   61659 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:04:42.079279   61659 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0918 21:04:42.084311   61659 kubeadm.go:739] kubelet initialised
	I0918 21:04:42.084338   61659 kubeadm.go:740] duration metric: took 5.024999ms waiting for restarted kubelet to initialise ...
	I0918 21:04:42.084351   61659 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0918 21:04:42.089113   61659 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-xzjd7" in "kube-system" namespace to be "Ready" ...
	I0918 21:04:42.095539   61659 pod_ready.go:98] node "default-k8s-diff-port-828868" hosting pod "coredns-7c65d6cfc9-xzjd7" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-828868" has status "Ready":"False"
	I0918 21:04:42.095565   61659 pod_ready.go:82] duration metric: took 6.405251ms for pod "coredns-7c65d6cfc9-xzjd7" in "kube-system" namespace to be "Ready" ...
	E0918 21:04:42.095575   61659 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-828868" hosting pod "coredns-7c65d6cfc9-xzjd7" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-828868" has status "Ready":"False"
	I0918 21:04:42.095581   61659 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-828868" in "kube-system" namespace to be "Ready" ...
	I0918 21:04:42.100447   61659 pod_ready.go:98] node "default-k8s-diff-port-828868" hosting pod "etcd-default-k8s-diff-port-828868" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-828868" has status "Ready":"False"
	I0918 21:04:42.100469   61659 pod_ready.go:82] duration metric: took 4.879955ms for pod "etcd-default-k8s-diff-port-828868" in "kube-system" namespace to be "Ready" ...
	E0918 21:04:42.100480   61659 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-828868" hosting pod "etcd-default-k8s-diff-port-828868" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-828868" has status "Ready":"False"
	I0918 21:04:42.100485   61659 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-828868" in "kube-system" namespace to be "Ready" ...
	I0918 21:04:42.104889   61659 pod_ready.go:98] node "default-k8s-diff-port-828868" hosting pod "kube-apiserver-default-k8s-diff-port-828868" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-828868" has status "Ready":"False"
	I0918 21:04:42.104914   61659 pod_ready.go:82] duration metric: took 4.421708ms for pod "kube-apiserver-default-k8s-diff-port-828868" in "kube-system" namespace to be "Ready" ...
	E0918 21:04:42.104926   61659 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-828868" hosting pod "kube-apiserver-default-k8s-diff-port-828868" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-828868" has status "Ready":"False"
	I0918 21:04:42.104934   61659 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-828868" in "kube-system" namespace to be "Ready" ...
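The pod_ready waits above repeatedly fetch each control-plane pod and check its Ready condition, skipping pods whose node still reports Ready=False. A minimal client-go sketch of the same condition check (the kubeconfig path is illustrative; the pod name is taken from the log):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether the pod's PodReady condition is True.
	func podReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // illustrative path
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		// Poll the pod until it reports Ready or the timeout elapses.
		deadline := time.Now().Add(4 * time.Minute)
		for time.Now().Before(deadline) {
			pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(),
				"kube-controller-manager-default-k8s-diff-port-828868", metav1.GetOptions{})
			if err == nil && podReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for pod to become Ready")
	}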
	I0918 21:04:40.574813   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:40.575265   61740 main.go:141] libmachine: (embed-certs-255556) DBG | unable to find current IP address of domain embed-certs-255556 in network mk-embed-certs-255556
	I0918 21:04:40.575295   61740 main.go:141] libmachine: (embed-certs-255556) DBG | I0918 21:04:40.575215   62857 retry.go:31] will retry after 2.249084169s: waiting for machine to come up
	I0918 21:04:42.827547   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:42.827966   61740 main.go:141] libmachine: (embed-certs-255556) DBG | unable to find current IP address of domain embed-certs-255556 in network mk-embed-certs-255556
	I0918 21:04:42.828028   61740 main.go:141] libmachine: (embed-certs-255556) DBG | I0918 21:04:42.827923   62857 retry.go:31] will retry after 4.512161859s: waiting for machine to come up
	I0918 21:04:44.113739   61659 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-828868" in "kube-system" namespace has status "Ready":"False"
	I0918 21:04:46.611013   61659 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-828868" in "kube-system" namespace has status "Ready":"False"
	I0918 21:04:47.345046   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:47.345426   61740 main.go:141] libmachine: (embed-certs-255556) Found IP for machine: 192.168.39.21
	I0918 21:04:47.345444   61740 main.go:141] libmachine: (embed-certs-255556) Reserving static IP address...
	I0918 21:04:47.345457   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has current primary IP address 192.168.39.21 and MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:47.345824   61740 main.go:141] libmachine: (embed-certs-255556) DBG | found host DHCP lease matching {name: "embed-certs-255556", mac: "52:54:00:e8:c2:b7", ip: "192.168.39.21"} in network mk-embed-certs-255556: {Iface:virbr1 ExpiryTime:2024-09-18 22:04:39 +0000 UTC Type:0 Mac:52:54:00:e8:c2:b7 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:embed-certs-255556 Clientid:01:52:54:00:e8:c2:b7}
	I0918 21:04:47.345846   61740 main.go:141] libmachine: (embed-certs-255556) DBG | skip adding static IP to network mk-embed-certs-255556 - found existing host DHCP lease matching {name: "embed-certs-255556", mac: "52:54:00:e8:c2:b7", ip: "192.168.39.21"}
	I0918 21:04:47.345856   61740 main.go:141] libmachine: (embed-certs-255556) Reserved static IP address: 192.168.39.21
	I0918 21:04:47.345866   61740 main.go:141] libmachine: (embed-certs-255556) Waiting for SSH to be available...
	I0918 21:04:47.345874   61740 main.go:141] libmachine: (embed-certs-255556) DBG | Getting to WaitForSSH function...
	I0918 21:04:47.347972   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:47.348327   61740 main.go:141] libmachine: (embed-certs-255556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:c2:b7", ip: ""} in network mk-embed-certs-255556: {Iface:virbr1 ExpiryTime:2024-09-18 22:04:39 +0000 UTC Type:0 Mac:52:54:00:e8:c2:b7 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:embed-certs-255556 Clientid:01:52:54:00:e8:c2:b7}
	I0918 21:04:47.348367   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined IP address 192.168.39.21 and MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:47.348437   61740 main.go:141] libmachine: (embed-certs-255556) DBG | Using SSH client type: external
	I0918 21:04:47.348469   61740 main.go:141] libmachine: (embed-certs-255556) DBG | Using SSH private key: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/embed-certs-255556/id_rsa (-rw-------)
	I0918 21:04:47.348511   61740 main.go:141] libmachine: (embed-certs-255556) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.21 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19667-7671/.minikube/machines/embed-certs-255556/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0918 21:04:47.348526   61740 main.go:141] libmachine: (embed-certs-255556) DBG | About to run SSH command:
	I0918 21:04:47.348554   61740 main.go:141] libmachine: (embed-certs-255556) DBG | exit 0
	I0918 21:04:47.476457   61740 main.go:141] libmachine: (embed-certs-255556) DBG | SSH cmd err, output: <nil>: 
	I0918 21:04:47.476858   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetConfigRaw
	I0918 21:04:47.477533   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetIP
	I0918 21:04:47.480221   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:47.480601   61740 main.go:141] libmachine: (embed-certs-255556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:c2:b7", ip: ""} in network mk-embed-certs-255556: {Iface:virbr1 ExpiryTime:2024-09-18 22:04:39 +0000 UTC Type:0 Mac:52:54:00:e8:c2:b7 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:embed-certs-255556 Clientid:01:52:54:00:e8:c2:b7}
	I0918 21:04:47.480644   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined IP address 192.168.39.21 and MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:47.480966   61740 profile.go:143] Saving config to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/embed-certs-255556/config.json ...
	I0918 21:04:47.481172   61740 machine.go:93] provisionDockerMachine start ...
	I0918 21:04:47.481189   61740 main.go:141] libmachine: (embed-certs-255556) Calling .DriverName
	I0918 21:04:47.481440   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHHostname
	I0918 21:04:47.483916   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:47.484299   61740 main.go:141] libmachine: (embed-certs-255556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:c2:b7", ip: ""} in network mk-embed-certs-255556: {Iface:virbr1 ExpiryTime:2024-09-18 22:04:39 +0000 UTC Type:0 Mac:52:54:00:e8:c2:b7 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:embed-certs-255556 Clientid:01:52:54:00:e8:c2:b7}
	I0918 21:04:47.484328   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined IP address 192.168.39.21 and MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:47.484467   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHPort
	I0918 21:04:47.484703   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHKeyPath
	I0918 21:04:47.484898   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHKeyPath
	I0918 21:04:47.485043   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHUsername
	I0918 21:04:47.485185   61740 main.go:141] libmachine: Using SSH client type: native
	I0918 21:04:47.485386   61740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.21 22 <nil> <nil>}
	I0918 21:04:47.485399   61740 main.go:141] libmachine: About to run SSH command:
	hostname
	I0918 21:04:47.596243   61740 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0918 21:04:47.596272   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetMachineName
	I0918 21:04:47.596531   61740 buildroot.go:166] provisioning hostname "embed-certs-255556"
	I0918 21:04:47.596560   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetMachineName
	I0918 21:04:47.596775   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHHostname
	I0918 21:04:47.599159   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:47.599508   61740 main.go:141] libmachine: (embed-certs-255556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:c2:b7", ip: ""} in network mk-embed-certs-255556: {Iface:virbr1 ExpiryTime:2024-09-18 22:04:39 +0000 UTC Type:0 Mac:52:54:00:e8:c2:b7 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:embed-certs-255556 Clientid:01:52:54:00:e8:c2:b7}
	I0918 21:04:47.599532   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined IP address 192.168.39.21 and MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:47.599706   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHPort
	I0918 21:04:47.599888   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHKeyPath
	I0918 21:04:47.600090   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHKeyPath
	I0918 21:04:47.600229   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHUsername
	I0918 21:04:47.600406   61740 main.go:141] libmachine: Using SSH client type: native
	I0918 21:04:47.600589   61740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.21 22 <nil> <nil>}
	I0918 21:04:47.600602   61740 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-255556 && echo "embed-certs-255556" | sudo tee /etc/hostname
	I0918 21:04:47.726173   61740 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-255556
	
	I0918 21:04:47.726213   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHHostname
	I0918 21:04:47.729209   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:47.729575   61740 main.go:141] libmachine: (embed-certs-255556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:c2:b7", ip: ""} in network mk-embed-certs-255556: {Iface:virbr1 ExpiryTime:2024-09-18 22:04:39 +0000 UTC Type:0 Mac:52:54:00:e8:c2:b7 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:embed-certs-255556 Clientid:01:52:54:00:e8:c2:b7}
	I0918 21:04:47.729609   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined IP address 192.168.39.21 and MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:47.729775   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHPort
	I0918 21:04:47.729952   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHKeyPath
	I0918 21:04:47.730212   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHKeyPath
	I0918 21:04:47.730386   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHUsername
	I0918 21:04:47.730583   61740 main.go:141] libmachine: Using SSH client type: native
	I0918 21:04:47.730755   61740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.21 22 <nil> <nil>}
	I0918 21:04:47.730771   61740 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-255556' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-255556/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-255556' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0918 21:04:47.849894   61740 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0918 21:04:47.849928   61740 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19667-7671/.minikube CaCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19667-7671/.minikube}
	I0918 21:04:47.849954   61740 buildroot.go:174] setting up certificates
	I0918 21:04:47.849961   61740 provision.go:84] configureAuth start
	I0918 21:04:47.849971   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetMachineName
	I0918 21:04:47.850307   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetIP
	I0918 21:04:47.852989   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:47.853389   61740 main.go:141] libmachine: (embed-certs-255556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:c2:b7", ip: ""} in network mk-embed-certs-255556: {Iface:virbr1 ExpiryTime:2024-09-18 22:04:39 +0000 UTC Type:0 Mac:52:54:00:e8:c2:b7 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:embed-certs-255556 Clientid:01:52:54:00:e8:c2:b7}
	I0918 21:04:47.853423   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined IP address 192.168.39.21 and MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:47.853555   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHHostname
	I0918 21:04:47.856032   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:47.856389   61740 main.go:141] libmachine: (embed-certs-255556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:c2:b7", ip: ""} in network mk-embed-certs-255556: {Iface:virbr1 ExpiryTime:2024-09-18 22:04:39 +0000 UTC Type:0 Mac:52:54:00:e8:c2:b7 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:embed-certs-255556 Clientid:01:52:54:00:e8:c2:b7}
	I0918 21:04:47.856410   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined IP address 192.168.39.21 and MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:47.856556   61740 provision.go:143] copyHostCerts
	I0918 21:04:47.856617   61740 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem, removing ...
	I0918 21:04:47.856627   61740 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem
	I0918 21:04:47.856686   61740 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem (1082 bytes)
	I0918 21:04:47.856778   61740 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem, removing ...
	I0918 21:04:47.856786   61740 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem
	I0918 21:04:47.856805   61740 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem (1123 bytes)
	I0918 21:04:47.856855   61740 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem, removing ...
	I0918 21:04:47.856862   61740 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem
	I0918 21:04:47.856881   61740 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem (1679 bytes)
	I0918 21:04:47.856929   61740 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem org=jenkins.embed-certs-255556 san=[127.0.0.1 192.168.39.21 embed-certs-255556 localhost minikube]
	I0918 21:04:48.145689   61740 provision.go:177] copyRemoteCerts
	I0918 21:04:48.145750   61740 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0918 21:04:48.145779   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHHostname
	I0918 21:04:48.148420   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:48.148785   61740 main.go:141] libmachine: (embed-certs-255556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:c2:b7", ip: ""} in network mk-embed-certs-255556: {Iface:virbr1 ExpiryTime:2024-09-18 22:04:39 +0000 UTC Type:0 Mac:52:54:00:e8:c2:b7 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:embed-certs-255556 Clientid:01:52:54:00:e8:c2:b7}
	I0918 21:04:48.148812   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined IP address 192.168.39.21 and MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:48.148983   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHPort
	I0918 21:04:48.149194   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHKeyPath
	I0918 21:04:48.149371   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHUsername
	I0918 21:04:48.149486   61740 sshutil.go:53] new ssh client: &{IP:192.168.39.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/embed-certs-255556/id_rsa Username:docker}
	I0918 21:04:48.234451   61740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0918 21:04:48.260660   61740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0918 21:04:48.283305   61740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0918 21:04:48.305919   61740 provision.go:87] duration metric: took 455.946794ms to configureAuth
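
Note: the configureAuth step above regenerates the machine's server certificate with the SANs listed at provision.go:117 ([127.0.0.1 192.168.39.21 embed-certs-255556 localhost minikube]) and then copies ca.pem/server.pem/server-key.pem into /etc/docker on the guest. minikube generates these certificates in-process rather than shelling out to openssl; purely as an illustrative equivalent (file names here are hypothetical, bash process substitution assumed):

    openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem -out server.csr \
      -subj "/O=jenkins.embed-certs-255556"
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
      -out server.pem -days 1095 \
      -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.39.21,DNS:embed-certs-255556,DNS:localhost,DNS:minikube')
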
	I0918 21:04:48.305954   61740 buildroot.go:189] setting minikube options for container-runtime
	I0918 21:04:48.306183   61740 config.go:182] Loaded profile config "embed-certs-255556": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0918 21:04:48.306284   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHHostname
	I0918 21:04:48.308853   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:48.309319   61740 main.go:141] libmachine: (embed-certs-255556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:c2:b7", ip: ""} in network mk-embed-certs-255556: {Iface:virbr1 ExpiryTime:2024-09-18 22:04:39 +0000 UTC Type:0 Mac:52:54:00:e8:c2:b7 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:embed-certs-255556 Clientid:01:52:54:00:e8:c2:b7}
	I0918 21:04:48.309359   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined IP address 192.168.39.21 and MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:48.309488   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHPort
	I0918 21:04:48.309706   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHKeyPath
	I0918 21:04:48.309860   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHKeyPath
	I0918 21:04:48.309976   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHUsername
	I0918 21:04:48.310134   61740 main.go:141] libmachine: Using SSH client type: native
	I0918 21:04:48.310349   61740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.21 22 <nil> <nil>}
	I0918 21:04:48.310372   61740 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0918 21:04:48.782438   62061 start.go:364] duration metric: took 3m49.184727821s to acquireMachinesLock for "old-k8s-version-740194"
	I0918 21:04:48.782503   62061 start.go:96] Skipping create...Using existing machine configuration
	I0918 21:04:48.782514   62061 fix.go:54] fixHost starting: 
	I0918 21:04:48.782993   62061 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:04:48.783053   62061 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:04:48.802299   62061 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44463
	I0918 21:04:48.802787   62061 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:04:48.803286   62061 main.go:141] libmachine: Using API Version  1
	I0918 21:04:48.803313   62061 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:04:48.803681   62061 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:04:48.803873   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .DriverName
	I0918 21:04:48.804007   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetState
	I0918 21:04:48.805714   62061 fix.go:112] recreateIfNeeded on old-k8s-version-740194: state=Stopped err=<nil>
	I0918 21:04:48.805744   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .DriverName
	W0918 21:04:48.805910   62061 fix.go:138] unexpected machine state, will restart: <nil>
	I0918 21:04:48.835402   62061 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-740194" ...
	I0918 21:04:48.836753   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .Start
	I0918 21:04:48.837090   62061 main.go:141] libmachine: (old-k8s-version-740194) Ensuring networks are active...
	I0918 21:04:48.838014   62061 main.go:141] libmachine: (old-k8s-version-740194) Ensuring network default is active
	I0918 21:04:48.838375   62061 main.go:141] libmachine: (old-k8s-version-740194) Ensuring network mk-old-k8s-version-740194 is active
	I0918 21:04:48.839035   62061 main.go:141] libmachine: (old-k8s-version-740194) Getting domain xml...
	I0918 21:04:48.839832   62061 main.go:141] libmachine: (old-k8s-version-740194) Creating domain...
	I0918 21:04:48.532928   61740 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0918 21:04:48.532952   61740 machine.go:96] duration metric: took 1.051769616s to provisionDockerMachine
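
Note: the CRIO_MINIKUBE_OPTIONS drop-in written to /etc/sysconfig/crio.minikube during the provisioning above passes --insecure-registry 10.96.0.0/12 (the cluster's service CIDR) to CRI-O, so images can be pulled over plain HTTP from registries exposed on ClusterIP services (e.g. the registry addon). Presumably the buildroot image's crio.service sources that file; a quick way to check on the guest, under that assumption:

    cat /etc/sysconfig/crio.minikube
    systemctl cat crio | grep -i -A1 EnvironmentFile   # only meaningful if the unit actually sources the sysconfig file
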
	I0918 21:04:48.532962   61740 start.go:293] postStartSetup for "embed-certs-255556" (driver="kvm2")
	I0918 21:04:48.532973   61740 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0918 21:04:48.532991   61740 main.go:141] libmachine: (embed-certs-255556) Calling .DriverName
	I0918 21:04:48.533310   61740 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0918 21:04:48.533342   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHHostname
	I0918 21:04:48.536039   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:48.536529   61740 main.go:141] libmachine: (embed-certs-255556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:c2:b7", ip: ""} in network mk-embed-certs-255556: {Iface:virbr1 ExpiryTime:2024-09-18 22:04:39 +0000 UTC Type:0 Mac:52:54:00:e8:c2:b7 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:embed-certs-255556 Clientid:01:52:54:00:e8:c2:b7}
	I0918 21:04:48.536558   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined IP address 192.168.39.21 and MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:48.536631   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHPort
	I0918 21:04:48.536806   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHKeyPath
	I0918 21:04:48.536971   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHUsername
	I0918 21:04:48.537148   61740 sshutil.go:53] new ssh client: &{IP:192.168.39.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/embed-certs-255556/id_rsa Username:docker}
	I0918 21:04:48.623154   61740 ssh_runner.go:195] Run: cat /etc/os-release
	I0918 21:04:48.627520   61740 info.go:137] Remote host: Buildroot 2023.02.9
	I0918 21:04:48.627544   61740 filesync.go:126] Scanning /home/jenkins/minikube-integration/19667-7671/.minikube/addons for local assets ...
	I0918 21:04:48.627617   61740 filesync.go:126] Scanning /home/jenkins/minikube-integration/19667-7671/.minikube/files for local assets ...
	I0918 21:04:48.627711   61740 filesync.go:149] local asset: /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem -> 148782.pem in /etc/ssl/certs
	I0918 21:04:48.627827   61740 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0918 21:04:48.637145   61740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem --> /etc/ssl/certs/148782.pem (1708 bytes)
	I0918 21:04:48.661971   61740 start.go:296] duration metric: took 128.997987ms for postStartSetup
	I0918 21:04:48.662012   61740 fix.go:56] duration metric: took 20.352947161s for fixHost
	I0918 21:04:48.662034   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHHostname
	I0918 21:04:48.665153   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:48.665637   61740 main.go:141] libmachine: (embed-certs-255556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:c2:b7", ip: ""} in network mk-embed-certs-255556: {Iface:virbr1 ExpiryTime:2024-09-18 22:04:39 +0000 UTC Type:0 Mac:52:54:00:e8:c2:b7 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:embed-certs-255556 Clientid:01:52:54:00:e8:c2:b7}
	I0918 21:04:48.665668   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined IP address 192.168.39.21 and MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:48.665853   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHPort
	I0918 21:04:48.666090   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHKeyPath
	I0918 21:04:48.666289   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHKeyPath
	I0918 21:04:48.666607   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHUsername
	I0918 21:04:48.666784   61740 main.go:141] libmachine: Using SSH client type: native
	I0918 21:04:48.667024   61740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.21 22 <nil> <nil>}
	I0918 21:04:48.667040   61740 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0918 21:04:48.782245   61740 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726693488.758182538
	
	I0918 21:04:48.782286   61740 fix.go:216] guest clock: 1726693488.758182538
	I0918 21:04:48.782297   61740 fix.go:229] Guest: 2024-09-18 21:04:48.758182538 +0000 UTC Remote: 2024-09-18 21:04:48.662016609 +0000 UTC m=+270.354724953 (delta=96.165929ms)
	I0918 21:04:48.782322   61740 fix.go:200] guest clock delta is within tolerance: 96.165929ms
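
Note: the clock check above runs `date +%s.%N` inside the guest over SSH and compares it with the host-side timestamp taken just before; the ~96 ms delta is within tolerance, so no clock adjustment is made. A rough host-side rendition of the same comparison (SSH key and user taken from the log, `bc` assumed available):

    key=/home/jenkins/minikube-integration/19667-7671/.minikube/machines/embed-certs-255556/id_rsa
    guest=$(ssh -i "$key" docker@192.168.39.21 'date +%s.%N')
    host=$(date +%s.%N)
    echo "guest-host delta: $(echo "$guest - $host" | bc)s"
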
	I0918 21:04:48.782329   61740 start.go:83] releasing machines lock for "embed-certs-255556", held for 20.47331123s
	I0918 21:04:48.782358   61740 main.go:141] libmachine: (embed-certs-255556) Calling .DriverName
	I0918 21:04:48.782655   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetIP
	I0918 21:04:48.785572   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:48.785959   61740 main.go:141] libmachine: (embed-certs-255556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:c2:b7", ip: ""} in network mk-embed-certs-255556: {Iface:virbr1 ExpiryTime:2024-09-18 22:04:39 +0000 UTC Type:0 Mac:52:54:00:e8:c2:b7 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:embed-certs-255556 Clientid:01:52:54:00:e8:c2:b7}
	I0918 21:04:48.785988   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined IP address 192.168.39.21 and MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:48.786181   61740 main.go:141] libmachine: (embed-certs-255556) Calling .DriverName
	I0918 21:04:48.786653   61740 main.go:141] libmachine: (embed-certs-255556) Calling .DriverName
	I0918 21:04:48.786859   61740 main.go:141] libmachine: (embed-certs-255556) Calling .DriverName
	I0918 21:04:48.787019   61740 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0918 21:04:48.787083   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHHostname
	I0918 21:04:48.787118   61740 ssh_runner.go:195] Run: cat /version.json
	I0918 21:04:48.787142   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHHostname
	I0918 21:04:48.789834   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:48.790239   61740 main.go:141] libmachine: (embed-certs-255556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:c2:b7", ip: ""} in network mk-embed-certs-255556: {Iface:virbr1 ExpiryTime:2024-09-18 22:04:39 +0000 UTC Type:0 Mac:52:54:00:e8:c2:b7 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:embed-certs-255556 Clientid:01:52:54:00:e8:c2:b7}
	I0918 21:04:48.790290   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined IP address 192.168.39.21 and MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:48.790326   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:48.790625   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHPort
	I0918 21:04:48.790805   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHKeyPath
	I0918 21:04:48.790828   61740 main.go:141] libmachine: (embed-certs-255556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:c2:b7", ip: ""} in network mk-embed-certs-255556: {Iface:virbr1 ExpiryTime:2024-09-18 22:04:39 +0000 UTC Type:0 Mac:52:54:00:e8:c2:b7 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:embed-certs-255556 Clientid:01:52:54:00:e8:c2:b7}
	I0918 21:04:48.790860   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined IP address 192.168.39.21 and MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:48.791012   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHUsername
	I0918 21:04:48.791035   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHPort
	I0918 21:04:48.791172   61740 sshutil.go:53] new ssh client: &{IP:192.168.39.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/embed-certs-255556/id_rsa Username:docker}
	I0918 21:04:48.791251   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHKeyPath
	I0918 21:04:48.791406   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHUsername
	I0918 21:04:48.791537   61740 sshutil.go:53] new ssh client: &{IP:192.168.39.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/embed-certs-255556/id_rsa Username:docker}
	I0918 21:04:48.911282   61740 ssh_runner.go:195] Run: systemctl --version
	I0918 21:04:48.917459   61740 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0918 21:04:49.062272   61740 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0918 21:04:49.068629   61740 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0918 21:04:49.068709   61740 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0918 21:04:49.085575   61740 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
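
Note: the find invocation above renames any bridge/podman CNI configs to *.mk_disabled so they cannot conflict with the CNI config minikube installs; as logged, the parentheses and globs appear without their shell escaping. A directly runnable form of the same step would be roughly:

    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -o -name '*podman*' \) -a -not -name '*.mk_disabled' \) \
      -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
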
	I0918 21:04:49.085607   61740 start.go:495] detecting cgroup driver to use...
	I0918 21:04:49.085677   61740 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0918 21:04:49.102455   61740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0918 21:04:49.117869   61740 docker.go:217] disabling cri-docker service (if available) ...
	I0918 21:04:49.117958   61740 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0918 21:04:49.135361   61740 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0918 21:04:49.150861   61740 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0918 21:04:49.285901   61740 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0918 21:04:49.438312   61740 docker.go:233] disabling docker service ...
	I0918 21:04:49.438390   61740 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0918 21:04:49.454560   61740 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0918 21:04:49.471109   61740 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0918 21:04:49.631711   61740 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0918 21:04:49.760860   61740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0918 21:04:49.778574   61740 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0918 21:04:49.797293   61740 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0918 21:04:49.797365   61740 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:04:49.808796   61740 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0918 21:04:49.808872   61740 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:04:49.821451   61740 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:04:49.834678   61740 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:04:49.847521   61740 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0918 21:04:49.860918   61740 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:04:49.873942   61740 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:04:49.892983   61740 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:04:49.904925   61740 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0918 21:04:49.916195   61740 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0918 21:04:49.916310   61740 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0918 21:04:49.931084   61740 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0918 21:04:49.942692   61740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 21:04:50.065013   61740 ssh_runner.go:195] Run: sudo systemctl restart crio
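
Note: the sequence above points crictl at the CRI-O socket, rewrites /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroupfs cgroup manager, conmon_cgroup = "pod", the unprivileged-port sysctl), loads br_netfilter after the sysctl probe failed, enables IP forwarding, and restarts CRI-O. Condensed, the guest-side commands it amounts to (values taken from the log) are:

    echo 'runtime-endpoint: unix:///var/run/crio/crio.sock' | sudo tee /etc/crictl.yaml
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo modprobe br_netfilter          # bridge-nf-call-iptables was missing, per the sysctl error above
    echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
    sudo systemctl daemon-reload && sudo systemctl restart crio
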
	I0918 21:04:50.168347   61740 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0918 21:04:50.168440   61740 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0918 21:04:50.174948   61740 start.go:563] Will wait 60s for crictl version
	I0918 21:04:50.175017   61740 ssh_runner.go:195] Run: which crictl
	I0918 21:04:50.180139   61740 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0918 21:04:50.221578   61740 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0918 21:04:50.221687   61740 ssh_runner.go:195] Run: crio --version
	I0918 21:04:50.251587   61740 ssh_runner.go:195] Run: crio --version
	I0918 21:04:50.282931   61740 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0918 21:04:48.112865   61659 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-828868" in "kube-system" namespace has status "Ready":"True"
	I0918 21:04:48.112895   61659 pod_ready.go:82] duration metric: took 6.007950768s for pod "kube-controller-manager-default-k8s-diff-port-828868" in "kube-system" namespace to be "Ready" ...
	I0918 21:04:48.112909   61659 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-jz7ls" in "kube-system" namespace to be "Ready" ...
	I0918 21:04:48.118606   61659 pod_ready.go:93] pod "kube-proxy-jz7ls" in "kube-system" namespace has status "Ready":"True"
	I0918 21:04:48.118628   61659 pod_ready.go:82] duration metric: took 5.710918ms for pod "kube-proxy-jz7ls" in "kube-system" namespace to be "Ready" ...
	I0918 21:04:48.118647   61659 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-828868" in "kube-system" namespace to be "Ready" ...
	I0918 21:04:49.626081   61659 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-828868" in "kube-system" namespace has status "Ready":"True"
	I0918 21:04:49.626116   61659 pod_ready.go:82] duration metric: took 1.507459822s for pod "kube-scheduler-default-k8s-diff-port-828868" in "kube-system" namespace to be "Ready" ...
	I0918 21:04:49.626130   61659 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace to be "Ready" ...
	I0918 21:04:51.635306   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:04:50.284258   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetIP
	I0918 21:04:50.287321   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:50.287754   61740 main.go:141] libmachine: (embed-certs-255556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:c2:b7", ip: ""} in network mk-embed-certs-255556: {Iface:virbr1 ExpiryTime:2024-09-18 22:04:39 +0000 UTC Type:0 Mac:52:54:00:e8:c2:b7 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:embed-certs-255556 Clientid:01:52:54:00:e8:c2:b7}
	I0918 21:04:50.287782   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined IP address 192.168.39.21 and MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:50.288116   61740 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0918 21:04:50.292221   61740 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0918 21:04:50.304472   61740 kubeadm.go:883] updating cluster {Name:embed-certs-255556 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.1 ClusterName:embed-certs-255556 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.21 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0918 21:04:50.304604   61740 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0918 21:04:50.304675   61740 ssh_runner.go:195] Run: sudo crictl images --output json
	I0918 21:04:50.343445   61740 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0918 21:04:50.343527   61740 ssh_runner.go:195] Run: which lz4
	I0918 21:04:50.347600   61740 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0918 21:04:50.351647   61740 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0918 21:04:50.351679   61740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0918 21:04:51.665892   61740 crio.go:462] duration metric: took 1.318339658s to copy over tarball
	I0918 21:04:51.665970   61740 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
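
Note: because crictl reported no preloaded kube images (crio.go:510 above), the ~388 MB preload tarball is copied to the guest and unpacked into /var, populating CRI-O's image store without any network pulls; the extraction completes about 2.2 s later in the log. The guest-side equivalent of the extract-and-verify step is roughly:

    # on the guest, after the tarball has been copied to /preloaded.tar.lz4:
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    sudo rm /preloaded.tar.lz4
    sudo crictl images --output json   # should now list registry.k8s.io/kube-apiserver:v1.31.1 and friends
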
	I0918 21:04:50.157515   62061 main.go:141] libmachine: (old-k8s-version-740194) Waiting to get IP...
	I0918 21:04:50.158369   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:04:50.158796   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | unable to find current IP address of domain old-k8s-version-740194 in network mk-old-k8s-version-740194
	I0918 21:04:50.158931   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | I0918 21:04:50.158765   63026 retry.go:31] will retry after 214.617426ms: waiting for machine to come up
	I0918 21:04:50.375418   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:04:50.375882   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | unable to find current IP address of domain old-k8s-version-740194 in network mk-old-k8s-version-740194
	I0918 21:04:50.375910   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | I0918 21:04:50.375834   63026 retry.go:31] will retry after 311.080996ms: waiting for machine to come up
	I0918 21:04:50.688569   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:04:50.689151   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | unable to find current IP address of domain old-k8s-version-740194 in network mk-old-k8s-version-740194
	I0918 21:04:50.689182   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | I0918 21:04:50.689112   63026 retry.go:31] will retry after 386.384815ms: waiting for machine to come up
	I0918 21:04:51.076846   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:04:51.077336   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | unable to find current IP address of domain old-k8s-version-740194 in network mk-old-k8s-version-740194
	I0918 21:04:51.077359   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | I0918 21:04:51.077280   63026 retry.go:31] will retry after 556.210887ms: waiting for machine to come up
	I0918 21:04:51.634919   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:04:51.635423   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | unable to find current IP address of domain old-k8s-version-740194 in network mk-old-k8s-version-740194
	I0918 21:04:51.635462   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | I0918 21:04:51.635325   63026 retry.go:31] will retry after 607.960565ms: waiting for machine to come up
	I0918 21:04:52.245179   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:04:52.245741   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | unable to find current IP address of domain old-k8s-version-740194 in network mk-old-k8s-version-740194
	I0918 21:04:52.245774   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | I0918 21:04:52.245692   63026 retry.go:31] will retry after 620.243825ms: waiting for machine to come up
	I0918 21:04:52.867067   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:04:52.867658   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | unable to find current IP address of domain old-k8s-version-740194 in network mk-old-k8s-version-740194
	I0918 21:04:52.867680   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | I0918 21:04:52.867608   63026 retry.go:31] will retry after 1.091814923s: waiting for machine to come up
	I0918 21:04:53.961395   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:04:53.961819   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | unable to find current IP address of domain old-k8s-version-740194 in network mk-old-k8s-version-740194
	I0918 21:04:53.961842   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | I0918 21:04:53.961784   63026 retry.go:31] will retry after 1.130552716s: waiting for machine to come up
	I0918 21:04:54.133598   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:04:56.134938   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:04:53.837558   61740 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.171557505s)
	I0918 21:04:53.837589   61740 crio.go:469] duration metric: took 2.171667234s to extract the tarball
	I0918 21:04:53.837610   61740 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0918 21:04:53.876381   61740 ssh_runner.go:195] Run: sudo crictl images --output json
	I0918 21:04:53.924938   61740 crio.go:514] all images are preloaded for cri-o runtime.
	I0918 21:04:53.924968   61740 cache_images.go:84] Images are preloaded, skipping loading
	I0918 21:04:53.924979   61740 kubeadm.go:934] updating node { 192.168.39.21 8443 v1.31.1 crio true true} ...
	I0918 21:04:53.925115   61740 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-255556 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.21
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-255556 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
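
Note: the kubelet unit text above relies on the standard systemd override idiom: the bare `ExecStart=` line clears whatever ExecStart the base unit defined before the minikube-specific command line is set. The rendered text is split between /lib/systemd/system/kubelet.service (352 bytes) and the 10-kubeadm.conf drop-in (317 bytes) scp'd a few lines below; inspecting the merged result on the guest is simply:

    systemctl cat kubelet                  # shows kubelet.service plus the 10-kubeadm.conf drop-in
    systemctl show kubelet -p ExecStart    # the effective (overridden) command line
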
	I0918 21:04:53.925203   61740 ssh_runner.go:195] Run: crio config
	I0918 21:04:53.969048   61740 cni.go:84] Creating CNI manager for ""
	I0918 21:04:53.969076   61740 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0918 21:04:53.969086   61740 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0918 21:04:53.969105   61740 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.21 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-255556 NodeName:embed-certs-255556 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.21"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.21 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0918 21:04:53.969240   61740 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.21
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-255556"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.21
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.21"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0918 21:04:53.969298   61740 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0918 21:04:53.978636   61740 binaries.go:44] Found k8s binaries, skipping transfer
	I0918 21:04:53.978702   61740 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0918 21:04:53.988580   61740 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0918 21:04:54.005819   61740 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0918 21:04:54.021564   61740 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
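
Note: the kubeadm config rendered above (InitConfiguration + ClusterConfiguration + KubeletConfiguration + KubeProxyConfiguration) is written to /var/tmp/minikube/kubeadm.yaml.new and only copied over /var/tmp/minikube/kubeadm.yaml further down. One way to sanity-check such a file by hand, not something this run does, is a kubeadm dry-run:

    sudo /var/lib/minikube/binaries/v1.31.1/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run
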
	I0918 21:04:54.038702   61740 ssh_runner.go:195] Run: grep 192.168.39.21	control-plane.minikube.internal$ /etc/hosts
	I0918 21:04:54.042536   61740 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.21	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0918 21:04:54.053896   61740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 21:04:54.180842   61740 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0918 21:04:54.197701   61740 certs.go:68] Setting up /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/embed-certs-255556 for IP: 192.168.39.21
	I0918 21:04:54.197731   61740 certs.go:194] generating shared ca certs ...
	I0918 21:04:54.197754   61740 certs.go:226] acquiring lock for ca certs: {Name:mkf54e0e71ed884c4372f9d3d4cb308ea4600185 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 21:04:54.197953   61740 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.key
	I0918 21:04:54.198020   61740 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.key
	I0918 21:04:54.198034   61740 certs.go:256] generating profile certs ...
	I0918 21:04:54.198129   61740 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/embed-certs-255556/client.key
	I0918 21:04:54.198191   61740 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/embed-certs-255556/apiserver.key.4704fd19
	I0918 21:04:54.198225   61740 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/embed-certs-255556/proxy-client.key
	I0918 21:04:54.198326   61740 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878.pem (1338 bytes)
	W0918 21:04:54.198358   61740 certs.go:480] ignoring /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878_empty.pem, impossibly tiny 0 bytes
	I0918 21:04:54.198370   61740 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem (1675 bytes)
	I0918 21:04:54.198420   61740 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem (1082 bytes)
	I0918 21:04:54.198463   61740 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem (1123 bytes)
	I0918 21:04:54.198498   61740 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem (1679 bytes)
	I0918 21:04:54.198566   61740 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem (1708 bytes)
	I0918 21:04:54.199258   61740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0918 21:04:54.231688   61740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0918 21:04:54.276366   61740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0918 21:04:54.320929   61740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0918 21:04:54.348698   61740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/embed-certs-255556/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0918 21:04:54.375168   61740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/embed-certs-255556/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0918 21:04:54.399159   61740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/embed-certs-255556/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0918 21:04:54.427975   61740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/embed-certs-255556/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0918 21:04:54.454648   61740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878.pem --> /usr/share/ca-certificates/14878.pem (1338 bytes)
	I0918 21:04:54.477518   61740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem --> /usr/share/ca-certificates/148782.pem (1708 bytes)
	I0918 21:04:54.500703   61740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0918 21:04:54.523380   61740 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0918 21:04:54.540053   61740 ssh_runner.go:195] Run: openssl version
	I0918 21:04:54.545818   61740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14878.pem && ln -fs /usr/share/ca-certificates/14878.pem /etc/ssl/certs/14878.pem"
	I0918 21:04:54.557138   61740 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14878.pem
	I0918 21:04:54.561973   61740 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 18 19:57 /usr/share/ca-certificates/14878.pem
	I0918 21:04:54.562030   61740 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14878.pem
	I0918 21:04:54.568133   61740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14878.pem /etc/ssl/certs/51391683.0"
	I0918 21:04:54.578964   61740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148782.pem && ln -fs /usr/share/ca-certificates/148782.pem /etc/ssl/certs/148782.pem"
	I0918 21:04:54.590254   61740 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148782.pem
	I0918 21:04:54.594944   61740 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 18 19:57 /usr/share/ca-certificates/148782.pem
	I0918 21:04:54.595022   61740 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148782.pem
	I0918 21:04:54.600797   61740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148782.pem /etc/ssl/certs/3ec20f2e.0"
	I0918 21:04:54.612078   61740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0918 21:04:54.623280   61740 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0918 21:04:54.628636   61740 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 18 19:39 /usr/share/ca-certificates/minikubeCA.pem
	I0918 21:04:54.628711   61740 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0918 21:04:54.634847   61740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
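
Note: each of the three ln -fs blocks above links a CA-style cert into /etc/ssl/certs twice: once under its own name and once under its OpenSSL subject hash (b5213941.0 for minikubeCA.pem, 51391683.0 and 3ec20f2e.0 for the 14878.pem/148782.pem certs), which is the lookup scheme OpenSSL-based clients use for trust stores. The hash in those names comes directly from:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem    # prints b5213941
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
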
	I0918 21:04:54.645647   61740 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0918 21:04:54.650004   61740 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0918 21:04:54.656906   61740 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0918 21:04:54.662778   61740 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0918 21:04:54.668744   61740 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0918 21:04:54.674676   61740 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0918 21:04:54.680431   61740 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
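
Note: the six openssl runs above all pass `-checkend 86400`, which exits non-zero if the certificate will expire within the next 86400 seconds (24 h), so a control-plane cert close to expiry is caught here before the restart continues. Standalone form:

    sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
      && echo "valid for at least 24h" || echo "expires within 24h"
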
	I0918 21:04:54.686242   61740 kubeadm.go:392] StartCluster: {Name:embed-certs-255556 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.1 ClusterName:embed-certs-255556 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.21 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 21:04:54.686364   61740 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0918 21:04:54.686439   61740 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0918 21:04:54.724228   61740 cri.go:89] found id: ""
	I0918 21:04:54.724319   61740 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0918 21:04:54.734427   61740 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0918 21:04:54.734458   61740 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0918 21:04:54.734511   61740 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0918 21:04:54.747453   61740 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0918 21:04:54.748449   61740 kubeconfig.go:125] found "embed-certs-255556" server: "https://192.168.39.21:8443"
	I0918 21:04:54.750481   61740 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0918 21:04:54.760549   61740 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.21
	I0918 21:04:54.760585   61740 kubeadm.go:1160] stopping kube-system containers ...
	I0918 21:04:54.760599   61740 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0918 21:04:54.760659   61740 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0918 21:04:54.796334   61740 cri.go:89] found id: ""
	I0918 21:04:54.796426   61740 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0918 21:04:54.820854   61740 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0918 21:04:54.831959   61740 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0918 21:04:54.831982   61740 kubeadm.go:157] found existing configuration files:
	
	I0918 21:04:54.832075   61740 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0918 21:04:54.841872   61740 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0918 21:04:54.841952   61740 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0918 21:04:54.852032   61740 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0918 21:04:54.862101   61740 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0918 21:04:54.862176   61740 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0918 21:04:54.872575   61740 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0918 21:04:54.882283   61740 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0918 21:04:54.882386   61740 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0918 21:04:54.895907   61740 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0918 21:04:54.905410   61740 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0918 21:04:54.905484   61740 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0918 21:04:54.914938   61740 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0918 21:04:54.924536   61740 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:04:55.035830   61740 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:04:55.975305   61740 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:04:56.227988   61740 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:04:56.304760   61740 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:04:56.375088   61740 api_server.go:52] waiting for apiserver process to appear ...
	I0918 21:04:56.375185   61740 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:04:56.875319   61740 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:04:57.375240   61740 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:04:57.875532   61740 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:04:55.093491   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:04:55.093956   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | unable to find current IP address of domain old-k8s-version-740194 in network mk-old-k8s-version-740194
	I0918 21:04:55.093982   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | I0918 21:04:55.093923   63026 retry.go:31] will retry after 1.824664154s: waiting for machine to come up
	I0918 21:04:56.920959   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:04:56.921371   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | unable to find current IP address of domain old-k8s-version-740194 in network mk-old-k8s-version-740194
	I0918 21:04:56.921422   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | I0918 21:04:56.921354   63026 retry.go:31] will retry after 1.591260677s: waiting for machine to come up
	I0918 21:04:58.514832   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:04:58.515294   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | unable to find current IP address of domain old-k8s-version-740194 in network mk-old-k8s-version-740194
	I0918 21:04:58.515322   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | I0918 21:04:58.515262   63026 retry.go:31] will retry after 1.868763497s: waiting for machine to come up
	I0918 21:04:58.135056   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:00.633540   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:04:58.375400   61740 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:04:58.392935   61740 api_server.go:72] duration metric: took 2.017847705s to wait for apiserver process to appear ...
	I0918 21:04:58.393110   61740 api_server.go:88] waiting for apiserver healthz status ...
	I0918 21:04:58.393152   61740 api_server.go:253] Checking apiserver healthz at https://192.168.39.21:8443/healthz ...
	I0918 21:04:58.393699   61740 api_server.go:269] stopped: https://192.168.39.21:8443/healthz: Get "https://192.168.39.21:8443/healthz": dial tcp 192.168.39.21:8443: connect: connection refused
	I0918 21:04:58.893291   61740 api_server.go:253] Checking apiserver healthz at https://192.168.39.21:8443/healthz ...
	I0918 21:05:01.124915   61740 api_server.go:279] https://192.168.39.21:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0918 21:05:01.124954   61740 api_server.go:103] status: https://192.168.39.21:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0918 21:05:01.124991   61740 api_server.go:253] Checking apiserver healthz at https://192.168.39.21:8443/healthz ...
	I0918 21:05:01.179199   61740 api_server.go:279] https://192.168.39.21:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0918 21:05:01.179225   61740 api_server.go:103] status: https://192.168.39.21:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0918 21:05:01.393537   61740 api_server.go:253] Checking apiserver healthz at https://192.168.39.21:8443/healthz ...
	I0918 21:05:01.399577   61740 api_server.go:279] https://192.168.39.21:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0918 21:05:01.399610   61740 api_server.go:103] status: https://192.168.39.21:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0918 21:05:01.894174   61740 api_server.go:253] Checking apiserver healthz at https://192.168.39.21:8443/healthz ...
	I0918 21:05:01.899086   61740 api_server.go:279] https://192.168.39.21:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0918 21:05:01.899110   61740 api_server.go:103] status: https://192.168.39.21:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0918 21:05:02.393672   61740 api_server.go:253] Checking apiserver healthz at https://192.168.39.21:8443/healthz ...
	I0918 21:05:02.401942   61740 api_server.go:279] https://192.168.39.21:8443/healthz returned 200:
	ok
	I0918 21:05:02.408523   61740 api_server.go:141] control plane version: v1.31.1
	I0918 21:05:02.408553   61740 api_server.go:131] duration metric: took 4.015427901s to wait for apiserver health ...
	I0918 21:05:02.408562   61740 cni.go:84] Creating CNI manager for ""
	I0918 21:05:02.408568   61740 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0918 21:05:02.410199   61740 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0918 21:05:02.411470   61740 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0918 21:05:02.424617   61740 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0918 21:05:02.443819   61740 system_pods.go:43] waiting for kube-system pods to appear ...
	I0918 21:05:02.458892   61740 system_pods.go:59] 8 kube-system pods found
	I0918 21:05:02.458939   61740 system_pods.go:61] "coredns-7c65d6cfc9-xwn8w" [773b9a83-bb43-40d3-b3a3-40603c3b22b2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0918 21:05:02.458949   61740 system_pods.go:61] "etcd-embed-certs-255556" [ee3e7dc9-fb5a-4faa-a0b5-b84b7cd506b6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0918 21:05:02.458961   61740 system_pods.go:61] "kube-apiserver-embed-certs-255556" [c60ce069-c7a0-42d7-a7de-ce3cf91a3d43] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0918 21:05:02.458970   61740 system_pods.go:61] "kube-controller-manager-embed-certs-255556" [ac8f6b42-caa3-4815-9a90-3f7bb1f0060b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0918 21:05:02.458980   61740 system_pods.go:61] "kube-proxy-v8szm" [367f743a-399b-4d04-8604-dcd441999581] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0918 21:05:02.458993   61740 system_pods.go:61] "kube-scheduler-embed-certs-255556" [b5dd211b-7963-41ac-8b43-0a5451e3e848] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0918 21:05:02.459001   61740 system_pods.go:61] "metrics-server-6867b74b74-z8rm7" [d1b6823e-4ac5-4ac6-88ae-7f8eac622fdf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0918 21:05:02.459009   61740 system_pods.go:61] "storage-provisioner" [1575f899-35a7-4eb2-ad5f-660183f75aa6] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0918 21:05:02.459015   61740 system_pods.go:74] duration metric: took 15.172393ms to wait for pod list to return data ...
	I0918 21:05:02.459025   61740 node_conditions.go:102] verifying NodePressure condition ...
	I0918 21:05:02.463140   61740 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0918 21:05:02.463177   61740 node_conditions.go:123] node cpu capacity is 2
	I0918 21:05:02.463192   61740 node_conditions.go:105] duration metric: took 4.162401ms to run NodePressure ...
	I0918 21:05:02.463214   61740 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:05:02.757153   61740 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0918 21:05:02.761949   61740 kubeadm.go:739] kubelet initialised
	I0918 21:05:02.761977   61740 kubeadm.go:740] duration metric: took 4.79396ms waiting for restarted kubelet to initialise ...
	I0918 21:05:02.761985   61740 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0918 21:05:02.767197   61740 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-xwn8w" in "kube-system" namespace to be "Ready" ...
	I0918 21:05:00.385891   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:00.386451   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | unable to find current IP address of domain old-k8s-version-740194 in network mk-old-k8s-version-740194
	I0918 21:05:00.386482   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | I0918 21:05:00.386384   63026 retry.go:31] will retry after 3.274467583s: waiting for machine to come up
	I0918 21:05:03.664788   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:03.665255   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | unable to find current IP address of domain old-k8s-version-740194 in network mk-old-k8s-version-740194
	I0918 21:05:03.665286   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | I0918 21:05:03.665210   63026 retry.go:31] will retry after 4.112908346s: waiting for machine to come up
	I0918 21:05:02.634177   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:05.133431   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:07.133941   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:04.774196   61740 pod_ready.go:103] pod "coredns-7c65d6cfc9-xwn8w" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:07.273045   61740 pod_ready.go:103] pod "coredns-7c65d6cfc9-xwn8w" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:09.245246   61273 start.go:364] duration metric: took 55.195169549s to acquireMachinesLock for "no-preload-331658"
	I0918 21:05:09.245300   61273 start.go:96] Skipping create...Using existing machine configuration
	I0918 21:05:09.245311   61273 fix.go:54] fixHost starting: 
	I0918 21:05:09.245741   61273 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:05:09.245778   61273 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:05:09.263998   61273 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37335
	I0918 21:05:09.264565   61273 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:05:09.265118   61273 main.go:141] libmachine: Using API Version  1
	I0918 21:05:09.265142   61273 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:05:09.265505   61273 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:05:09.265732   61273 main.go:141] libmachine: (no-preload-331658) Calling .DriverName
	I0918 21:05:09.265901   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetState
	I0918 21:05:09.269500   61273 fix.go:112] recreateIfNeeded on no-preload-331658: state=Stopped err=<nil>
	I0918 21:05:09.269525   61273 main.go:141] libmachine: (no-preload-331658) Calling .DriverName
	W0918 21:05:09.269730   61273 fix.go:138] unexpected machine state, will restart: <nil>
	I0918 21:05:09.271448   61273 out.go:177] * Restarting existing kvm2 VM for "no-preload-331658" ...
	I0918 21:05:07.782616   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:07.783108   62061 main.go:141] libmachine: (old-k8s-version-740194) Found IP for machine: 192.168.72.53
	I0918 21:05:07.783125   62061 main.go:141] libmachine: (old-k8s-version-740194) Reserving static IP address...
	I0918 21:05:07.783135   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has current primary IP address 192.168.72.53 and MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:07.783509   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | found host DHCP lease matching {name: "old-k8s-version-740194", mac: "52:54:00:3f:a7:11", ip: "192.168.72.53"} in network mk-old-k8s-version-740194: {Iface:virbr3 ExpiryTime:2024-09-18 22:05:00 +0000 UTC Type:0 Mac:52:54:00:3f:a7:11 Iaid: IPaddr:192.168.72.53 Prefix:24 Hostname:old-k8s-version-740194 Clientid:01:52:54:00:3f:a7:11}
	I0918 21:05:07.783542   62061 main.go:141] libmachine: (old-k8s-version-740194) Reserved static IP address: 192.168.72.53
	I0918 21:05:07.783565   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | skip adding static IP to network mk-old-k8s-version-740194 - found existing host DHCP lease matching {name: "old-k8s-version-740194", mac: "52:54:00:3f:a7:11", ip: "192.168.72.53"}
	I0918 21:05:07.783583   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | Getting to WaitForSSH function...
	I0918 21:05:07.783614   62061 main.go:141] libmachine: (old-k8s-version-740194) Waiting for SSH to be available...
	I0918 21:05:07.785492   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:07.785856   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a7:11", ip: ""} in network mk-old-k8s-version-740194: {Iface:virbr3 ExpiryTime:2024-09-18 22:05:00 +0000 UTC Type:0 Mac:52:54:00:3f:a7:11 Iaid: IPaddr:192.168.72.53 Prefix:24 Hostname:old-k8s-version-740194 Clientid:01:52:54:00:3f:a7:11}
	I0918 21:05:07.785885   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined IP address 192.168.72.53 and MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:07.786000   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | Using SSH client type: external
	I0918 21:05:07.786021   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | Using SSH private key: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/old-k8s-version-740194/id_rsa (-rw-------)
	I0918 21:05:07.786044   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.53 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19667-7671/.minikube/machines/old-k8s-version-740194/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0918 21:05:07.786055   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | About to run SSH command:
	I0918 21:05:07.786065   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | exit 0
	I0918 21:05:07.915953   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | SSH cmd err, output: <nil>: 
	I0918 21:05:07.916454   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetConfigRaw
	I0918 21:05:07.917059   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetIP
	I0918 21:05:07.919749   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:07.920244   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a7:11", ip: ""} in network mk-old-k8s-version-740194: {Iface:virbr3 ExpiryTime:2024-09-18 22:05:00 +0000 UTC Type:0 Mac:52:54:00:3f:a7:11 Iaid: IPaddr:192.168.72.53 Prefix:24 Hostname:old-k8s-version-740194 Clientid:01:52:54:00:3f:a7:11}
	I0918 21:05:07.920280   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined IP address 192.168.72.53 and MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:07.920639   62061 profile.go:143] Saving config to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/old-k8s-version-740194/config.json ...
	I0918 21:05:07.920871   62061 machine.go:93] provisionDockerMachine start ...
	I0918 21:05:07.920893   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .DriverName
	I0918 21:05:07.921100   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHHostname
	I0918 21:05:07.923129   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:07.923434   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a7:11", ip: ""} in network mk-old-k8s-version-740194: {Iface:virbr3 ExpiryTime:2024-09-18 22:05:00 +0000 UTC Type:0 Mac:52:54:00:3f:a7:11 Iaid: IPaddr:192.168.72.53 Prefix:24 Hostname:old-k8s-version-740194 Clientid:01:52:54:00:3f:a7:11}
	I0918 21:05:07.923458   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined IP address 192.168.72.53 and MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:07.923573   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHPort
	I0918 21:05:07.923727   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHKeyPath
	I0918 21:05:07.923889   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHKeyPath
	I0918 21:05:07.924036   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHUsername
	I0918 21:05:07.924185   62061 main.go:141] libmachine: Using SSH client type: native
	I0918 21:05:07.924368   62061 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.53 22 <nil> <nil>}
	I0918 21:05:07.924378   62061 main.go:141] libmachine: About to run SSH command:
	hostname
	I0918 21:05:08.035941   62061 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0918 21:05:08.035972   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetMachineName
	I0918 21:05:08.036242   62061 buildroot.go:166] provisioning hostname "old-k8s-version-740194"
	I0918 21:05:08.036273   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetMachineName
	I0918 21:05:08.036512   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHHostname
	I0918 21:05:08.038902   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:08.039239   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a7:11", ip: ""} in network mk-old-k8s-version-740194: {Iface:virbr3 ExpiryTime:2024-09-18 22:05:00 +0000 UTC Type:0 Mac:52:54:00:3f:a7:11 Iaid: IPaddr:192.168.72.53 Prefix:24 Hostname:old-k8s-version-740194 Clientid:01:52:54:00:3f:a7:11}
	I0918 21:05:08.039266   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined IP address 192.168.72.53 and MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:08.039438   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHPort
	I0918 21:05:08.039632   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHKeyPath
	I0918 21:05:08.039899   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHKeyPath
	I0918 21:05:08.040072   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHUsername
	I0918 21:05:08.040248   62061 main.go:141] libmachine: Using SSH client type: native
	I0918 21:05:08.040415   62061 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.53 22 <nil> <nil>}
	I0918 21:05:08.040428   62061 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-740194 && echo "old-k8s-version-740194" | sudo tee /etc/hostname
	I0918 21:05:08.165391   62061 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-740194
	
	I0918 21:05:08.165424   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHHostname
	I0918 21:05:08.168059   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:08.168396   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a7:11", ip: ""} in network mk-old-k8s-version-740194: {Iface:virbr3 ExpiryTime:2024-09-18 22:05:00 +0000 UTC Type:0 Mac:52:54:00:3f:a7:11 Iaid: IPaddr:192.168.72.53 Prefix:24 Hostname:old-k8s-version-740194 Clientid:01:52:54:00:3f:a7:11}
	I0918 21:05:08.168428   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined IP address 192.168.72.53 and MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:08.168621   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHPort
	I0918 21:05:08.168837   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHKeyPath
	I0918 21:05:08.168988   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHKeyPath
	I0918 21:05:08.169231   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHUsername
	I0918 21:05:08.169413   62061 main.go:141] libmachine: Using SSH client type: native
	I0918 21:05:08.169579   62061 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.53 22 <nil> <nil>}
	I0918 21:05:08.169594   62061 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-740194' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-740194/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-740194' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0918 21:05:08.288591   62061 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0918 21:05:08.288644   62061 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19667-7671/.minikube CaCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19667-7671/.minikube}
	I0918 21:05:08.288671   62061 buildroot.go:174] setting up certificates
	I0918 21:05:08.288682   62061 provision.go:84] configureAuth start
	I0918 21:05:08.288694   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetMachineName
	I0918 21:05:08.289017   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetIP
	I0918 21:05:08.291949   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:08.292358   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a7:11", ip: ""} in network mk-old-k8s-version-740194: {Iface:virbr3 ExpiryTime:2024-09-18 22:05:00 +0000 UTC Type:0 Mac:52:54:00:3f:a7:11 Iaid: IPaddr:192.168.72.53 Prefix:24 Hostname:old-k8s-version-740194 Clientid:01:52:54:00:3f:a7:11}
	I0918 21:05:08.292405   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined IP address 192.168.72.53 and MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:08.292526   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHHostname
	I0918 21:05:08.295013   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:08.295378   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a7:11", ip: ""} in network mk-old-k8s-version-740194: {Iface:virbr3 ExpiryTime:2024-09-18 22:05:00 +0000 UTC Type:0 Mac:52:54:00:3f:a7:11 Iaid: IPaddr:192.168.72.53 Prefix:24 Hostname:old-k8s-version-740194 Clientid:01:52:54:00:3f:a7:11}
	I0918 21:05:08.295399   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined IP address 192.168.72.53 and MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:08.295567   62061 provision.go:143] copyHostCerts
	I0918 21:05:08.295630   62061 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem, removing ...
	I0918 21:05:08.295640   62061 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem
	I0918 21:05:08.295692   62061 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem (1082 bytes)
	I0918 21:05:08.295783   62061 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem, removing ...
	I0918 21:05:08.295790   62061 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem
	I0918 21:05:08.295810   62061 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem (1123 bytes)
	I0918 21:05:08.295917   62061 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem, removing ...
	I0918 21:05:08.295926   62061 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem
	I0918 21:05:08.295949   62061 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem (1679 bytes)
	I0918 21:05:08.296009   62061 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-740194 san=[127.0.0.1 192.168.72.53 localhost minikube old-k8s-version-740194]
	I0918 21:05:08.560726   62061 provision.go:177] copyRemoteCerts
	I0918 21:05:08.560786   62061 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0918 21:05:08.560816   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHHostname
	I0918 21:05:08.563798   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:08.564231   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a7:11", ip: ""} in network mk-old-k8s-version-740194: {Iface:virbr3 ExpiryTime:2024-09-18 22:05:00 +0000 UTC Type:0 Mac:52:54:00:3f:a7:11 Iaid: IPaddr:192.168.72.53 Prefix:24 Hostname:old-k8s-version-740194 Clientid:01:52:54:00:3f:a7:11}
	I0918 21:05:08.564266   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined IP address 192.168.72.53 and MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:08.564473   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHPort
	I0918 21:05:08.564704   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHKeyPath
	I0918 21:05:08.564876   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHUsername
	I0918 21:05:08.565016   62061 sshutil.go:53] new ssh client: &{IP:192.168.72.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/old-k8s-version-740194/id_rsa Username:docker}
	I0918 21:05:08.654357   62061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0918 21:05:08.680551   62061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0918 21:05:08.704324   62061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0918 21:05:08.727966   62061 provision.go:87] duration metric: took 439.269312ms to configureAuth
	I0918 21:05:08.728003   62061 buildroot.go:189] setting minikube options for container-runtime
	I0918 21:05:08.728235   62061 config.go:182] Loaded profile config "old-k8s-version-740194": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0918 21:05:08.728305   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHHostname
	I0918 21:05:08.730895   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:08.731318   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a7:11", ip: ""} in network mk-old-k8s-version-740194: {Iface:virbr3 ExpiryTime:2024-09-18 22:05:00 +0000 UTC Type:0 Mac:52:54:00:3f:a7:11 Iaid: IPaddr:192.168.72.53 Prefix:24 Hostname:old-k8s-version-740194 Clientid:01:52:54:00:3f:a7:11}
	I0918 21:05:08.731348   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined IP address 192.168.72.53 and MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:08.731448   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHPort
	I0918 21:05:08.731668   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHKeyPath
	I0918 21:05:08.731884   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHKeyPath
	I0918 21:05:08.732057   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHUsername
	I0918 21:05:08.732223   62061 main.go:141] libmachine: Using SSH client type: native
	I0918 21:05:08.732396   62061 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.53 22 <nil> <nil>}
	I0918 21:05:08.732411   62061 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0918 21:05:08.978962   62061 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0918 21:05:08.978995   62061 machine.go:96] duration metric: took 1.05811075s to provisionDockerMachine
	I0918 21:05:08.979008   62061 start.go:293] postStartSetup for "old-k8s-version-740194" (driver="kvm2")
	I0918 21:05:08.979020   62061 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0918 21:05:08.979050   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .DriverName
	I0918 21:05:08.979409   62061 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0918 21:05:08.979436   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHHostname
	I0918 21:05:08.982472   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:08.982791   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a7:11", ip: ""} in network mk-old-k8s-version-740194: {Iface:virbr3 ExpiryTime:2024-09-18 22:05:00 +0000 UTC Type:0 Mac:52:54:00:3f:a7:11 Iaid: IPaddr:192.168.72.53 Prefix:24 Hostname:old-k8s-version-740194 Clientid:01:52:54:00:3f:a7:11}
	I0918 21:05:08.982830   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined IP address 192.168.72.53 and MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:08.982996   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHPort
	I0918 21:05:08.983192   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHKeyPath
	I0918 21:05:08.983341   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHUsername
	I0918 21:05:08.983510   62061 sshutil.go:53] new ssh client: &{IP:192.168.72.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/old-k8s-version-740194/id_rsa Username:docker}
	I0918 21:05:09.074995   62061 ssh_runner.go:195] Run: cat /etc/os-release
	I0918 21:05:09.080207   62061 info.go:137] Remote host: Buildroot 2023.02.9
	I0918 21:05:09.080240   62061 filesync.go:126] Scanning /home/jenkins/minikube-integration/19667-7671/.minikube/addons for local assets ...
	I0918 21:05:09.080327   62061 filesync.go:126] Scanning /home/jenkins/minikube-integration/19667-7671/.minikube/files for local assets ...
	I0918 21:05:09.080451   62061 filesync.go:149] local asset: /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem -> 148782.pem in /etc/ssl/certs
	I0918 21:05:09.080583   62061 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0918 21:05:09.091374   62061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem --> /etc/ssl/certs/148782.pem (1708 bytes)
	I0918 21:05:09.119546   62061 start.go:296] duration metric: took 140.521158ms for postStartSetup
	I0918 21:05:09.119593   62061 fix.go:56] duration metric: took 20.337078765s for fixHost
	I0918 21:05:09.119620   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHHostname
	I0918 21:05:09.122534   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:09.123157   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a7:11", ip: ""} in network mk-old-k8s-version-740194: {Iface:virbr3 ExpiryTime:2024-09-18 22:05:00 +0000 UTC Type:0 Mac:52:54:00:3f:a7:11 Iaid: IPaddr:192.168.72.53 Prefix:24 Hostname:old-k8s-version-740194 Clientid:01:52:54:00:3f:a7:11}
	I0918 21:05:09.123187   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined IP address 192.168.72.53 and MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:09.123447   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHPort
	I0918 21:05:09.123695   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHKeyPath
	I0918 21:05:09.123891   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHKeyPath
	I0918 21:05:09.124095   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHUsername
	I0918 21:05:09.124373   62061 main.go:141] libmachine: Using SSH client type: native
	I0918 21:05:09.124582   62061 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.53 22 <nil> <nil>}
	I0918 21:05:09.124595   62061 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0918 21:05:09.245082   62061 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726693509.219779478
	
	I0918 21:05:09.245109   62061 fix.go:216] guest clock: 1726693509.219779478
	I0918 21:05:09.245119   62061 fix.go:229] Guest: 2024-09-18 21:05:09.219779478 +0000 UTC Remote: 2024-09-18 21:05:09.11959842 +0000 UTC m=+249.666759777 (delta=100.181058ms)
	I0918 21:05:09.245139   62061 fix.go:200] guest clock delta is within tolerance: 100.181058ms
	I0918 21:05:09.245146   62061 start.go:83] releasing machines lock for "old-k8s-version-740194", held for 20.462669229s
	I0918 21:05:09.245176   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .DriverName
	I0918 21:05:09.245602   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetIP
	I0918 21:05:09.248653   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:09.249110   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a7:11", ip: ""} in network mk-old-k8s-version-740194: {Iface:virbr3 ExpiryTime:2024-09-18 22:05:00 +0000 UTC Type:0 Mac:52:54:00:3f:a7:11 Iaid: IPaddr:192.168.72.53 Prefix:24 Hostname:old-k8s-version-740194 Clientid:01:52:54:00:3f:a7:11}
	I0918 21:05:09.249156   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined IP address 192.168.72.53 and MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:09.249345   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .DriverName
	I0918 21:05:09.249838   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .DriverName
	I0918 21:05:09.250047   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .DriverName
	I0918 21:05:09.250167   62061 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0918 21:05:09.250230   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHHostname
	I0918 21:05:09.250286   62061 ssh_runner.go:195] Run: cat /version.json
	I0918 21:05:09.250312   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHHostname
	I0918 21:05:09.253242   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:09.253408   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:09.253687   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a7:11", ip: ""} in network mk-old-k8s-version-740194: {Iface:virbr3 ExpiryTime:2024-09-18 22:05:00 +0000 UTC Type:0 Mac:52:54:00:3f:a7:11 Iaid: IPaddr:192.168.72.53 Prefix:24 Hostname:old-k8s-version-740194 Clientid:01:52:54:00:3f:a7:11}
	I0918 21:05:09.253749   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined IP address 192.168.72.53 and MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:09.253882   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a7:11", ip: ""} in network mk-old-k8s-version-740194: {Iface:virbr3 ExpiryTime:2024-09-18 22:05:00 +0000 UTC Type:0 Mac:52:54:00:3f:a7:11 Iaid: IPaddr:192.168.72.53 Prefix:24 Hostname:old-k8s-version-740194 Clientid:01:52:54:00:3f:a7:11}
	I0918 21:05:09.253901   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined IP address 192.168.72.53 and MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:09.253944   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHPort
	I0918 21:05:09.254026   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHPort
	I0918 21:05:09.254221   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHKeyPath
	I0918 21:05:09.254243   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHKeyPath
	I0918 21:05:09.254372   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHUsername
	I0918 21:05:09.254426   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHUsername
	I0918 21:05:09.254533   62061 sshutil.go:53] new ssh client: &{IP:192.168.72.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/old-k8s-version-740194/id_rsa Username:docker}
	I0918 21:05:09.254695   62061 sshutil.go:53] new ssh client: &{IP:192.168.72.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/old-k8s-version-740194/id_rsa Username:docker}
	I0918 21:05:09.374728   62061 ssh_runner.go:195] Run: systemctl --version
	I0918 21:05:09.381117   62061 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0918 21:05:09.532059   62061 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0918 21:05:09.538041   62061 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0918 21:05:09.538124   62061 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0918 21:05:09.553871   62061 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0918 21:05:09.553907   62061 start.go:495] detecting cgroup driver to use...
	I0918 21:05:09.553982   62061 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0918 21:05:09.573554   62061 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0918 21:05:09.591963   62061 docker.go:217] disabling cri-docker service (if available) ...
	I0918 21:05:09.592057   62061 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0918 21:05:09.610813   62061 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0918 21:05:09.627153   62061 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0918 21:05:09.763978   62061 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0918 21:05:09.945185   62061 docker.go:233] disabling docker service ...
	I0918 21:05:09.945257   62061 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0918 21:05:09.961076   62061 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0918 21:05:09.974660   62061 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0918 21:05:10.111191   62061 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0918 21:05:10.235058   62061 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0918 21:05:10.251389   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0918 21:05:10.270949   62061 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0918 21:05:10.271047   62061 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:05:10.284743   62061 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0918 21:05:10.284811   62061 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:05:10.295221   62061 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:05:10.305897   62061 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:05:10.317726   62061 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0918 21:05:10.330136   62061 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0918 21:05:10.339480   62061 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0918 21:05:10.339543   62061 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0918 21:05:10.352954   62061 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0918 21:05:10.370764   62061 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 21:05:10.524315   62061 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0918 21:05:10.617374   62061 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0918 21:05:10.617466   62061 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0918 21:05:10.624164   62061 start.go:563] Will wait 60s for crictl version
	I0918 21:05:10.624222   62061 ssh_runner.go:195] Run: which crictl
	I0918 21:05:10.629583   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0918 21:05:10.673613   62061 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0918 21:05:10.673702   62061 ssh_runner.go:195] Run: crio --version
	I0918 21:05:10.703948   62061 ssh_runner.go:195] Run: crio --version
	I0918 21:05:10.733924   62061 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0918 21:05:09.272840   61273 main.go:141] libmachine: (no-preload-331658) Calling .Start
	I0918 21:05:09.273067   61273 main.go:141] libmachine: (no-preload-331658) Ensuring networks are active...
	I0918 21:05:09.274115   61273 main.go:141] libmachine: (no-preload-331658) Ensuring network default is active
	I0918 21:05:09.274576   61273 main.go:141] libmachine: (no-preload-331658) Ensuring network mk-no-preload-331658 is active
	I0918 21:05:09.275108   61273 main.go:141] libmachine: (no-preload-331658) Getting domain xml...
	I0918 21:05:09.276003   61273 main.go:141] libmachine: (no-preload-331658) Creating domain...
	I0918 21:05:10.665647   61273 main.go:141] libmachine: (no-preload-331658) Waiting to get IP...
	I0918 21:05:10.666710   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:10.667187   61273 main.go:141] libmachine: (no-preload-331658) DBG | unable to find current IP address of domain no-preload-331658 in network mk-no-preload-331658
	I0918 21:05:10.667261   61273 main.go:141] libmachine: (no-preload-331658) DBG | I0918 21:05:10.667162   63200 retry.go:31] will retry after 215.232953ms: waiting for machine to come up
	I0918 21:05:10.883691   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:10.884249   61273 main.go:141] libmachine: (no-preload-331658) DBG | unable to find current IP address of domain no-preload-331658 in network mk-no-preload-331658
	I0918 21:05:10.884283   61273 main.go:141] libmachine: (no-preload-331658) DBG | I0918 21:05:10.884185   63200 retry.go:31] will retry after 289.698979ms: waiting for machine to come up
	I0918 21:05:11.175936   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:11.176656   61273 main.go:141] libmachine: (no-preload-331658) DBG | unable to find current IP address of domain no-preload-331658 in network mk-no-preload-331658
	I0918 21:05:11.176680   61273 main.go:141] libmachine: (no-preload-331658) DBG | I0918 21:05:11.176553   63200 retry.go:31] will retry after 424.473311ms: waiting for machine to come up
	I0918 21:05:09.633671   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:11.634755   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:09.274214   61740 pod_ready.go:103] pod "coredns-7c65d6cfc9-xwn8w" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:11.275099   61740 pod_ready.go:103] pod "coredns-7c65d6cfc9-xwn8w" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:10.735500   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetIP
	I0918 21:05:10.738130   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:10.738488   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a7:11", ip: ""} in network mk-old-k8s-version-740194: {Iface:virbr3 ExpiryTime:2024-09-18 22:05:00 +0000 UTC Type:0 Mac:52:54:00:3f:a7:11 Iaid: IPaddr:192.168.72.53 Prefix:24 Hostname:old-k8s-version-740194 Clientid:01:52:54:00:3f:a7:11}
	I0918 21:05:10.738516   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined IP address 192.168.72.53 and MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:10.738831   62061 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0918 21:05:10.742780   62061 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0918 21:05:10.754785   62061 kubeadm.go:883] updating cluster {Name:old-k8s-version-740194 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-740194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.53 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0918 21:05:10.754939   62061 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0918 21:05:10.755002   62061 ssh_runner.go:195] Run: sudo crictl images --output json
	I0918 21:05:10.800452   62061 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0918 21:05:10.800526   62061 ssh_runner.go:195] Run: which lz4
	I0918 21:05:10.804522   62061 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0918 21:05:10.809179   62061 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0918 21:05:10.809214   62061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0918 21:05:12.306958   62061 crio.go:462] duration metric: took 1.502481615s to copy over tarball
	I0918 21:05:12.307049   62061 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0918 21:05:11.603153   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:11.603791   61273 main.go:141] libmachine: (no-preload-331658) DBG | unable to find current IP address of domain no-preload-331658 in network mk-no-preload-331658
	I0918 21:05:11.603817   61273 main.go:141] libmachine: (no-preload-331658) DBG | I0918 21:05:11.603742   63200 retry.go:31] will retry after 425.818515ms: waiting for machine to come up
	I0918 21:05:12.031622   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:12.032425   61273 main.go:141] libmachine: (no-preload-331658) DBG | unable to find current IP address of domain no-preload-331658 in network mk-no-preload-331658
	I0918 21:05:12.032458   61273 main.go:141] libmachine: (no-preload-331658) DBG | I0918 21:05:12.032357   63200 retry.go:31] will retry after 701.564015ms: waiting for machine to come up
	I0918 21:05:12.735295   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:12.735852   61273 main.go:141] libmachine: (no-preload-331658) DBG | unable to find current IP address of domain no-preload-331658 in network mk-no-preload-331658
	I0918 21:05:12.735882   61273 main.go:141] libmachine: (no-preload-331658) DBG | I0918 21:05:12.735814   63200 retry.go:31] will retry after 904.737419ms: waiting for machine to come up
	I0918 21:05:13.642383   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:13.642913   61273 main.go:141] libmachine: (no-preload-331658) DBG | unable to find current IP address of domain no-preload-331658 in network mk-no-preload-331658
	I0918 21:05:13.642935   61273 main.go:141] libmachine: (no-preload-331658) DBG | I0918 21:05:13.642872   63200 retry.go:31] will retry after 891.091353ms: waiting for machine to come up
	I0918 21:05:14.536200   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:14.536797   61273 main.go:141] libmachine: (no-preload-331658) DBG | unable to find current IP address of domain no-preload-331658 in network mk-no-preload-331658
	I0918 21:05:14.536849   61273 main.go:141] libmachine: (no-preload-331658) DBG | I0918 21:05:14.536761   63200 retry.go:31] will retry after 1.01795417s: waiting for machine to come up
	I0918 21:05:15.555787   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:15.556287   61273 main.go:141] libmachine: (no-preload-331658) DBG | unable to find current IP address of domain no-preload-331658 in network mk-no-preload-331658
	I0918 21:05:15.556315   61273 main.go:141] libmachine: (no-preload-331658) DBG | I0918 21:05:15.556243   63200 retry.go:31] will retry after 1.598926126s: waiting for machine to come up
	I0918 21:05:14.132957   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:16.133323   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:13.778274   61740 pod_ready.go:93] pod "coredns-7c65d6cfc9-xwn8w" in "kube-system" namespace has status "Ready":"True"
	I0918 21:05:13.778310   61740 pod_ready.go:82] duration metric: took 11.011085965s for pod "coredns-7c65d6cfc9-xwn8w" in "kube-system" namespace to be "Ready" ...
	I0918 21:05:13.778325   61740 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-255556" in "kube-system" namespace to be "Ready" ...
	I0918 21:05:13.785089   61740 pod_ready.go:93] pod "etcd-embed-certs-255556" in "kube-system" namespace has status "Ready":"True"
	I0918 21:05:13.785121   61740 pod_ready.go:82] duration metric: took 6.787649ms for pod "etcd-embed-certs-255556" in "kube-system" namespace to be "Ready" ...
	I0918 21:05:13.785135   61740 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-255556" in "kube-system" namespace to be "Ready" ...
	I0918 21:05:15.793479   61740 pod_ready.go:103] pod "kube-apiserver-embed-certs-255556" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:15.357114   62061 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.050037005s)
	I0918 21:05:15.357141   62061 crio.go:469] duration metric: took 3.050151648s to extract the tarball
	I0918 21:05:15.357148   62061 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0918 21:05:15.399373   62061 ssh_runner.go:195] Run: sudo crictl images --output json
	I0918 21:05:15.434204   62061 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
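The "couldn't find preloaded image" message above comes from a presence check against the runtime's image store after the tarball was extracted. A rough manual equivalent (image tag and crictl invocation from the log; the fallback message is only illustrative):

    sudo crictl images --output json | grep -q 'registry.k8s.io/kube-apiserver:v1.20.0' \
      || echo "preloaded images missing; falling back to the local image cache"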
	I0918 21:05:15.434238   62061 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0918 21:05:15.434332   62061 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 21:05:15.434369   62061 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0918 21:05:15.434385   62061 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0918 21:05:15.434398   62061 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0918 21:05:15.434339   62061 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0918 21:05:15.434438   62061 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0918 21:05:15.434443   62061 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0918 21:05:15.434491   62061 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0918 21:05:15.436820   62061 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0918 21:05:15.436824   62061 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0918 21:05:15.436829   62061 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 21:05:15.436856   62061 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0918 21:05:15.436867   62061 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0918 21:05:15.436904   62061 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0918 21:05:15.436952   62061 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0918 21:05:15.437278   62061 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0918 21:05:15.747423   62061 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0918 21:05:15.747705   62061 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0918 21:05:15.748375   62061 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0918 21:05:15.750816   62061 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0918 21:05:15.752244   62061 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0918 21:05:15.754038   62061 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0918 21:05:15.770881   62061 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0918 21:05:15.911654   62061 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0918 21:05:15.911725   62061 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0918 21:05:15.911795   62061 ssh_runner.go:195] Run: which crictl
	I0918 21:05:15.938412   62061 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0918 21:05:15.938464   62061 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0918 21:05:15.938534   62061 ssh_runner.go:195] Run: which crictl
	I0918 21:05:15.944615   62061 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0918 21:05:15.944706   62061 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0918 21:05:15.944736   62061 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0918 21:05:15.944749   62061 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0918 21:05:15.944786   62061 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0918 21:05:15.944809   62061 ssh_runner.go:195] Run: which crictl
	I0918 21:05:15.944820   62061 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0918 21:05:15.944844   62061 ssh_runner.go:195] Run: which crictl
	I0918 21:05:15.944857   62061 ssh_runner.go:195] Run: which crictl
	I0918 21:05:15.944661   62061 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0918 21:05:15.944914   62061 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0918 21:05:15.944950   62061 ssh_runner.go:195] Run: which crictl
	I0918 21:05:15.965316   62061 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0918 21:05:15.965366   62061 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0918 21:05:15.965418   62061 ssh_runner.go:195] Run: which crictl
	I0918 21:05:15.965461   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0918 21:05:15.965419   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0918 21:05:15.965544   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0918 21:05:15.965558   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0918 21:05:15.965613   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0918 21:05:15.965621   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0918 21:05:16.101533   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0918 21:05:16.101536   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0918 21:05:16.105174   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0918 21:05:16.105198   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0918 21:05:16.109044   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0918 21:05:16.109048   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0918 21:05:16.109152   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0918 21:05:16.220653   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0918 21:05:16.255418   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0918 21:05:16.259931   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0918 21:05:16.259986   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0918 21:05:16.260039   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0918 21:05:16.260121   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0918 21:05:16.260337   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0918 21:05:16.352969   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0918 21:05:16.399180   62061 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0918 21:05:16.445003   62061 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0918 21:05:16.445078   62061 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0918 21:05:16.445163   62061 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0918 21:05:16.445266   62061 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0918 21:05:16.445387   62061 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0918 21:05:16.445398   62061 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0918 21:05:16.633221   62061 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 21:05:16.782331   62061 cache_images.go:92] duration metric: took 1.348045359s to LoadCachedImages
	W0918 21:05:16.782445   62061 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
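The warning above points at a missing archive in minikube's on-disk image cache rather than a problem on the node. A quick way to see which cached archives actually exist (directory path taken from the log):

    ls -l /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/
    # coredns_1.7.0 is the entry reported missing above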
	I0918 21:05:16.782464   62061 kubeadm.go:934] updating node { 192.168.72.53 8443 v1.20.0 crio true true} ...
	I0918 21:05:16.782608   62061 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-740194 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.53
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-740194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0918 21:05:16.782679   62061 ssh_runner.go:195] Run: crio config
	I0918 21:05:16.838946   62061 cni.go:84] Creating CNI manager for ""
	I0918 21:05:16.838978   62061 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0918 21:05:16.838990   62061 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0918 21:05:16.839008   62061 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.53 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-740194 NodeName:old-k8s-version-740194 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.53"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.53 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0918 21:05:16.839163   62061 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.53
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-740194"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.53
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.53"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0918 21:05:16.839232   62061 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0918 21:05:16.849868   62061 binaries.go:44] Found k8s binaries, skipping transfer
	I0918 21:05:16.849955   62061 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0918 21:05:16.859716   62061 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0918 21:05:16.877857   62061 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0918 21:05:16.895573   62061 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
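One consistency point in the config rendered above: the kubelet's cgroupDriver has to agree with the cgroup_manager written into the CRI-O drop-in earlier in this run. A hedged check of both sides (the kubelet config path matches the --config flag in the kubelet unit above; the expected values come from this log):

    grep cgroup_manager /etc/crio/crio.conf.d/02-crio.conf   # cgroup_manager = "cgroupfs"
    sudo grep cgroupDriver /var/lib/kubelet/config.yaml      # cgroupDriver: cgroupfs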
	I0918 21:05:16.913545   62061 ssh_runner.go:195] Run: grep 192.168.72.53	control-plane.minikube.internal$ /etc/hosts
	I0918 21:05:16.917476   62061 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.53	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
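A small sketch to confirm the control-plane alias written into /etc/hosts above resolves as intended (name and address from the log):

    getent hosts control-plane.minikube.internal   # expect 192.168.72.53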
	I0918 21:05:16.931318   62061 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 21:05:17.057636   62061 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0918 21:05:17.076534   62061 certs.go:68] Setting up /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/old-k8s-version-740194 for IP: 192.168.72.53
	I0918 21:05:17.076571   62061 certs.go:194] generating shared ca certs ...
	I0918 21:05:17.076594   62061 certs.go:226] acquiring lock for ca certs: {Name:mkf54e0e71ed884c4372f9d3d4cb308ea4600185 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 21:05:17.076796   62061 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.key
	I0918 21:05:17.076855   62061 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.key
	I0918 21:05:17.076871   62061 certs.go:256] generating profile certs ...
	I0918 21:05:17.076999   62061 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/old-k8s-version-740194/client.key
	I0918 21:05:17.077083   62061 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/old-k8s-version-740194/apiserver.key.424b07d9
	I0918 21:05:17.077149   62061 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/old-k8s-version-740194/proxy-client.key
	I0918 21:05:17.077321   62061 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878.pem (1338 bytes)
	W0918 21:05:17.077371   62061 certs.go:480] ignoring /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878_empty.pem, impossibly tiny 0 bytes
	I0918 21:05:17.077386   62061 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem (1675 bytes)
	I0918 21:05:17.077421   62061 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem (1082 bytes)
	I0918 21:05:17.077465   62061 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem (1123 bytes)
	I0918 21:05:17.077499   62061 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem (1679 bytes)
	I0918 21:05:17.077585   62061 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem (1708 bytes)
	I0918 21:05:17.078553   62061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0918 21:05:17.116000   62061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0918 21:05:17.150848   62061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0918 21:05:17.190616   62061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0918 21:05:17.222411   62061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/old-k8s-version-740194/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0918 21:05:17.266923   62061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/old-k8s-version-740194/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0918 21:05:17.303973   62061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/old-k8s-version-740194/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0918 21:05:17.334390   62061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/old-k8s-version-740194/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0918 21:05:17.358732   62061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0918 21:05:17.385693   62061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878.pem --> /usr/share/ca-certificates/14878.pem (1338 bytes)
	I0918 21:05:17.411396   62061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem --> /usr/share/ca-certificates/148782.pem (1708 bytes)
	I0918 21:05:17.440205   62061 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0918 21:05:17.459639   62061 ssh_runner.go:195] Run: openssl version
	I0918 21:05:17.465846   62061 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0918 21:05:17.477482   62061 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0918 21:05:17.482579   62061 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 18 19:39 /usr/share/ca-certificates/minikubeCA.pem
	I0918 21:05:17.482650   62061 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0918 21:05:17.488822   62061 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0918 21:05:17.499729   62061 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14878.pem && ln -fs /usr/share/ca-certificates/14878.pem /etc/ssl/certs/14878.pem"
	I0918 21:05:17.511718   62061 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14878.pem
	I0918 21:05:17.516361   62061 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 18 19:57 /usr/share/ca-certificates/14878.pem
	I0918 21:05:17.516438   62061 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14878.pem
	I0918 21:05:17.522542   62061 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14878.pem /etc/ssl/certs/51391683.0"
	I0918 21:05:17.534450   62061 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148782.pem && ln -fs /usr/share/ca-certificates/148782.pem /etc/ssl/certs/148782.pem"
	I0918 21:05:17.545916   62061 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148782.pem
	I0918 21:05:17.550672   62061 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 18 19:57 /usr/share/ca-certificates/148782.pem
	I0918 21:05:17.550737   62061 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148782.pem
	I0918 21:05:17.556802   62061 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148782.pem /etc/ssl/certs/3ec20f2e.0"
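The .0 symlink names above (b5213941.0, 51391683.0, 3ec20f2e.0) are OpenSSL subject-hash values for the corresponding CA files. A sketch reproducing the first pairing, inferred from the hash-then-link ordering in the log:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0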
	I0918 21:05:17.568991   62061 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0918 21:05:17.574098   62061 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0918 21:05:17.581458   62061 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0918 21:05:17.588242   62061 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0918 21:05:17.594889   62061 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0918 21:05:17.601499   62061 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0918 21:05:17.608193   62061 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
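For context on the -checkend runs above: openssl exits 0 only if the certificate will not expire within the given number of seconds (here 86400, i.e. 24 hours). A sketch of the same check with an explicit message (cert path from the log):

    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
      && echo "valid for at least another 24h" \
      || echo "expires within 24h"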
	I0918 21:05:17.614196   62061 kubeadm.go:392] StartCluster: {Name:old-k8s-version-740194 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-740194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.53 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 21:05:17.614298   62061 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0918 21:05:17.614363   62061 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0918 21:05:17.661551   62061 cri.go:89] found id: ""
	I0918 21:05:17.661627   62061 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0918 21:05:17.673087   62061 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0918 21:05:17.673110   62061 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0918 21:05:17.673153   62061 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0918 21:05:17.683213   62061 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0918 21:05:17.684537   62061 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-740194" does not appear in /home/jenkins/minikube-integration/19667-7671/kubeconfig
	I0918 21:05:17.685136   62061 kubeconfig.go:62] /home/jenkins/minikube-integration/19667-7671/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-740194" cluster setting kubeconfig missing "old-k8s-version-740194" context setting]
	I0918 21:05:17.686023   62061 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/kubeconfig: {Name:mk3a3eecadb0164dfdd5d3c4392b5b473a2a9bb0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 21:05:17.711337   62061 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0918 21:05:17.723894   62061 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.53
	I0918 21:05:17.723936   62061 kubeadm.go:1160] stopping kube-system containers ...
	I0918 21:05:17.723949   62061 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0918 21:05:17.724005   62061 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0918 21:05:17.761087   62061 cri.go:89] found id: ""
	I0918 21:05:17.761166   62061 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0918 21:05:17.779689   62061 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0918 21:05:17.791699   62061 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0918 21:05:17.791723   62061 kubeadm.go:157] found existing configuration files:
	
	I0918 21:05:17.791773   62061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0918 21:05:17.801804   62061 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0918 21:05:17.801879   62061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0918 21:05:17.812806   62061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0918 21:05:17.822813   62061 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0918 21:05:17.822902   62061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0918 21:05:17.833267   62061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0918 21:05:17.845148   62061 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0918 21:05:17.845230   62061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0918 21:05:17.855974   62061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0918 21:05:17.866490   62061 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0918 21:05:17.866593   62061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0918 21:05:17.877339   62061 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0918 21:05:17.887825   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:05:18.021691   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:05:18.726750   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:05:18.970635   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:05:19.081869   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
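After the control-plane and etcd phases above, static-pod manifests should exist under the staticPodPath from the kubeadm config earlier in this log. A quick sketch (the file names are the standard kubeadm ones, not taken from this log):

    sudo ls /etc/kubernetes/manifests
    # kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml  etcd.yaml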
	I0918 21:05:19.203071   62061 api_server.go:52] waiting for apiserver process to appear ...
	I0918 21:05:19.203166   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
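The pgrep lines repeated below are a poll loop waiting for the API server process to appear. A rough shell equivalent (the pattern comes from the log; the interval and retry count are assumptions):

    for _ in $(seq 1 120); do
      sudo pgrep -xnf 'kube-apiserver.*minikube.*' && break
      sleep 0.5
    done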
	I0918 21:05:17.156934   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:17.157481   61273 main.go:141] libmachine: (no-preload-331658) DBG | unable to find current IP address of domain no-preload-331658 in network mk-no-preload-331658
	I0918 21:05:17.157509   61273 main.go:141] libmachine: (no-preload-331658) DBG | I0918 21:05:17.157429   63200 retry.go:31] will retry after 1.586399944s: waiting for machine to come up
	I0918 21:05:18.746155   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:18.746620   61273 main.go:141] libmachine: (no-preload-331658) DBG | unable to find current IP address of domain no-preload-331658 in network mk-no-preload-331658
	I0918 21:05:18.746650   61273 main.go:141] libmachine: (no-preload-331658) DBG | I0918 21:05:18.746571   63200 retry.go:31] will retry after 2.204220189s: waiting for machine to come up
	I0918 21:05:20.953669   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:20.954223   61273 main.go:141] libmachine: (no-preload-331658) DBG | unable to find current IP address of domain no-preload-331658 in network mk-no-preload-331658
	I0918 21:05:20.954287   61273 main.go:141] libmachine: (no-preload-331658) DBG | I0918 21:05:20.954209   63200 retry.go:31] will retry after 2.418479665s: waiting for machine to come up
	I0918 21:05:18.634113   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:21.133516   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:18.365915   61740 pod_ready.go:93] pod "kube-apiserver-embed-certs-255556" in "kube-system" namespace has status "Ready":"True"
	I0918 21:05:18.365943   61740 pod_ready.go:82] duration metric: took 4.580799395s for pod "kube-apiserver-embed-certs-255556" in "kube-system" namespace to be "Ready" ...
	I0918 21:05:18.365956   61740 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-255556" in "kube-system" namespace to be "Ready" ...
	I0918 21:05:18.371010   61740 pod_ready.go:93] pod "kube-controller-manager-embed-certs-255556" in "kube-system" namespace has status "Ready":"True"
	I0918 21:05:18.371035   61740 pod_ready.go:82] duration metric: took 5.070331ms for pod "kube-controller-manager-embed-certs-255556" in "kube-system" namespace to be "Ready" ...
	I0918 21:05:18.371046   61740 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-v8szm" in "kube-system" namespace to be "Ready" ...
	I0918 21:05:18.375632   61740 pod_ready.go:93] pod "kube-proxy-v8szm" in "kube-system" namespace has status "Ready":"True"
	I0918 21:05:18.375658   61740 pod_ready.go:82] duration metric: took 4.603787ms for pod "kube-proxy-v8szm" in "kube-system" namespace to be "Ready" ...
	I0918 21:05:18.375671   61740 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-255556" in "kube-system" namespace to be "Ready" ...
	I0918 21:05:18.380527   61740 pod_ready.go:93] pod "kube-scheduler-embed-certs-255556" in "kube-system" namespace has status "Ready":"True"
	I0918 21:05:18.380551   61740 pod_ready.go:82] duration metric: took 4.872699ms for pod "kube-scheduler-embed-certs-255556" in "kube-system" namespace to be "Ready" ...
	I0918 21:05:18.380563   61740 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace to be "Ready" ...
	I0918 21:05:20.388600   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:22.887122   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:19.704128   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:20.203595   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:20.703244   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:21.204288   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:21.703469   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:22.203224   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:22.703968   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:23.204097   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:23.703933   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:24.204272   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:23.375904   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:23.376450   61273 main.go:141] libmachine: (no-preload-331658) DBG | unable to find current IP address of domain no-preload-331658 in network mk-no-preload-331658
	I0918 21:05:23.376476   61273 main.go:141] libmachine: (no-preload-331658) DBG | I0918 21:05:23.376397   63200 retry.go:31] will retry after 4.431211335s: waiting for machine to come up
	I0918 21:05:23.633093   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:25.633913   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:24.887771   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:27.386891   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:24.704240   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:25.203880   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:25.703983   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:26.204273   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:26.703861   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:27.204064   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:27.703276   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:28.204289   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:28.703701   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:29.203604   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:27.811234   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:27.811698   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has current primary IP address 192.168.61.31 and MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:27.811719   61273 main.go:141] libmachine: (no-preload-331658) Found IP for machine: 192.168.61.31
	I0918 21:05:27.811729   61273 main.go:141] libmachine: (no-preload-331658) Reserving static IP address...
	I0918 21:05:27.812131   61273 main.go:141] libmachine: (no-preload-331658) DBG | found host DHCP lease matching {name: "no-preload-331658", mac: "52:54:00:2a:47:d0", ip: "192.168.61.31"} in network mk-no-preload-331658: {Iface:virbr2 ExpiryTime:2024-09-18 21:55:52 +0000 UTC Type:0 Mac:52:54:00:2a:47:d0 Iaid: IPaddr:192.168.61.31 Prefix:24 Hostname:no-preload-331658 Clientid:01:52:54:00:2a:47:d0}
	I0918 21:05:27.812150   61273 main.go:141] libmachine: (no-preload-331658) Reserved static IP address: 192.168.61.31
	I0918 21:05:27.812163   61273 main.go:141] libmachine: (no-preload-331658) DBG | skip adding static IP to network mk-no-preload-331658 - found existing host DHCP lease matching {name: "no-preload-331658", mac: "52:54:00:2a:47:d0", ip: "192.168.61.31"}
	I0918 21:05:27.812170   61273 main.go:141] libmachine: (no-preload-331658) Waiting for SSH to be available...
	I0918 21:05:27.812178   61273 main.go:141] libmachine: (no-preload-331658) DBG | Getting to WaitForSSH function...
	I0918 21:05:27.814300   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:27.814735   61273 main.go:141] libmachine: (no-preload-331658) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:47:d0", ip: ""} in network mk-no-preload-331658: {Iface:virbr2 ExpiryTime:2024-09-18 21:55:52 +0000 UTC Type:0 Mac:52:54:00:2a:47:d0 Iaid: IPaddr:192.168.61.31 Prefix:24 Hostname:no-preload-331658 Clientid:01:52:54:00:2a:47:d0}
	I0918 21:05:27.814767   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined IP address 192.168.61.31 and MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:27.814891   61273 main.go:141] libmachine: (no-preload-331658) DBG | Using SSH client type: external
	I0918 21:05:27.814922   61273 main.go:141] libmachine: (no-preload-331658) DBG | Using SSH private key: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/no-preload-331658/id_rsa (-rw-------)
	I0918 21:05:27.814945   61273 main.go:141] libmachine: (no-preload-331658) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.31 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19667-7671/.minikube/machines/no-preload-331658/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0918 21:05:27.814972   61273 main.go:141] libmachine: (no-preload-331658) DBG | About to run SSH command:
	I0918 21:05:27.814985   61273 main.go:141] libmachine: (no-preload-331658) DBG | exit 0
	I0918 21:05:27.939949   61273 main.go:141] libmachine: (no-preload-331658) DBG | SSH cmd err, output: <nil>: 
	I0918 21:05:27.940365   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetConfigRaw
	I0918 21:05:27.941187   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetIP
	I0918 21:05:27.943976   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:27.944375   61273 main.go:141] libmachine: (no-preload-331658) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:47:d0", ip: ""} in network mk-no-preload-331658: {Iface:virbr2 ExpiryTime:2024-09-18 21:55:52 +0000 UTC Type:0 Mac:52:54:00:2a:47:d0 Iaid: IPaddr:192.168.61.31 Prefix:24 Hostname:no-preload-331658 Clientid:01:52:54:00:2a:47:d0}
	I0918 21:05:27.944399   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined IP address 192.168.61.31 and MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:27.944670   61273 profile.go:143] Saving config to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/no-preload-331658/config.json ...
	I0918 21:05:27.944942   61273 machine.go:93] provisionDockerMachine start ...
	I0918 21:05:27.944963   61273 main.go:141] libmachine: (no-preload-331658) Calling .DriverName
	I0918 21:05:27.945228   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHHostname
	I0918 21:05:27.947444   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:27.947810   61273 main.go:141] libmachine: (no-preload-331658) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:47:d0", ip: ""} in network mk-no-preload-331658: {Iface:virbr2 ExpiryTime:2024-09-18 21:55:52 +0000 UTC Type:0 Mac:52:54:00:2a:47:d0 Iaid: IPaddr:192.168.61.31 Prefix:24 Hostname:no-preload-331658 Clientid:01:52:54:00:2a:47:d0}
	I0918 21:05:27.947843   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined IP address 192.168.61.31 and MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:27.947974   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHPort
	I0918 21:05:27.948196   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHKeyPath
	I0918 21:05:27.948404   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHKeyPath
	I0918 21:05:27.948664   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHUsername
	I0918 21:05:27.948845   61273 main.go:141] libmachine: Using SSH client type: native
	I0918 21:05:27.949078   61273 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.31 22 <nil> <nil>}
	I0918 21:05:27.949099   61273 main.go:141] libmachine: About to run SSH command:
	hostname
	I0918 21:05:28.052352   61273 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0918 21:05:28.052378   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetMachineName
	I0918 21:05:28.052638   61273 buildroot.go:166] provisioning hostname "no-preload-331658"
	I0918 21:05:28.052668   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetMachineName
	I0918 21:05:28.052923   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHHostname
	I0918 21:05:28.056168   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:28.056599   61273 main.go:141] libmachine: (no-preload-331658) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:47:d0", ip: ""} in network mk-no-preload-331658: {Iface:virbr2 ExpiryTime:2024-09-18 21:55:52 +0000 UTC Type:0 Mac:52:54:00:2a:47:d0 Iaid: IPaddr:192.168.61.31 Prefix:24 Hostname:no-preload-331658 Clientid:01:52:54:00:2a:47:d0}
	I0918 21:05:28.056631   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined IP address 192.168.61.31 and MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:28.056805   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHPort
	I0918 21:05:28.057009   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHKeyPath
	I0918 21:05:28.057168   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHKeyPath
	I0918 21:05:28.057305   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHUsername
	I0918 21:05:28.057478   61273 main.go:141] libmachine: Using SSH client type: native
	I0918 21:05:28.057652   61273 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.31 22 <nil> <nil>}
	I0918 21:05:28.057665   61273 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-331658 && echo "no-preload-331658" | sudo tee /etc/hostname
	I0918 21:05:28.174245   61273 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-331658
	
	I0918 21:05:28.174282   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHHostname
	I0918 21:05:28.177373   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:28.177753   61273 main.go:141] libmachine: (no-preload-331658) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:47:d0", ip: ""} in network mk-no-preload-331658: {Iface:virbr2 ExpiryTime:2024-09-18 21:55:52 +0000 UTC Type:0 Mac:52:54:00:2a:47:d0 Iaid: IPaddr:192.168.61.31 Prefix:24 Hostname:no-preload-331658 Clientid:01:52:54:00:2a:47:d0}
	I0918 21:05:28.177781   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined IP address 192.168.61.31 and MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:28.177981   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHPort
	I0918 21:05:28.178202   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHKeyPath
	I0918 21:05:28.178380   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHKeyPath
	I0918 21:05:28.178523   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHUsername
	I0918 21:05:28.178752   61273 main.go:141] libmachine: Using SSH client type: native
	I0918 21:05:28.178948   61273 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.31 22 <nil> <nil>}
	I0918 21:05:28.178965   61273 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-331658' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-331658/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-331658' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0918 21:05:28.292659   61273 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0918 21:05:28.292691   61273 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19667-7671/.minikube CaCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19667-7671/.minikube}
	I0918 21:05:28.292714   61273 buildroot.go:174] setting up certificates
	I0918 21:05:28.292725   61273 provision.go:84] configureAuth start
	I0918 21:05:28.292734   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetMachineName
	I0918 21:05:28.293091   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetIP
	I0918 21:05:28.295792   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:28.296192   61273 main.go:141] libmachine: (no-preload-331658) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:47:d0", ip: ""} in network mk-no-preload-331658: {Iface:virbr2 ExpiryTime:2024-09-18 21:55:52 +0000 UTC Type:0 Mac:52:54:00:2a:47:d0 Iaid: IPaddr:192.168.61.31 Prefix:24 Hostname:no-preload-331658 Clientid:01:52:54:00:2a:47:d0}
	I0918 21:05:28.296219   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined IP address 192.168.61.31 and MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:28.296405   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHHostname
	I0918 21:05:28.298446   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:28.298788   61273 main.go:141] libmachine: (no-preload-331658) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:47:d0", ip: ""} in network mk-no-preload-331658: {Iface:virbr2 ExpiryTime:2024-09-18 21:55:52 +0000 UTC Type:0 Mac:52:54:00:2a:47:d0 Iaid: IPaddr:192.168.61.31 Prefix:24 Hostname:no-preload-331658 Clientid:01:52:54:00:2a:47:d0}
	I0918 21:05:28.298815   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined IP address 192.168.61.31 and MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:28.298938   61273 provision.go:143] copyHostCerts
	I0918 21:05:28.299013   61273 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem, removing ...
	I0918 21:05:28.299026   61273 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem
	I0918 21:05:28.299078   61273 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem (1082 bytes)
	I0918 21:05:28.299170   61273 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem, removing ...
	I0918 21:05:28.299178   61273 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem
	I0918 21:05:28.299199   61273 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem (1123 bytes)
	I0918 21:05:28.299252   61273 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem, removing ...
	I0918 21:05:28.299258   61273 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem
	I0918 21:05:28.299278   61273 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem (1679 bytes)
	I0918 21:05:28.299325   61273 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem org=jenkins.no-preload-331658 san=[127.0.0.1 192.168.61.31 localhost minikube no-preload-331658]
	I0918 21:05:28.606565   61273 provision.go:177] copyRemoteCerts
	I0918 21:05:28.606629   61273 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0918 21:05:28.606653   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHHostname
	I0918 21:05:28.609156   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:28.609533   61273 main.go:141] libmachine: (no-preload-331658) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:47:d0", ip: ""} in network mk-no-preload-331658: {Iface:virbr2 ExpiryTime:2024-09-18 21:55:52 +0000 UTC Type:0 Mac:52:54:00:2a:47:d0 Iaid: IPaddr:192.168.61.31 Prefix:24 Hostname:no-preload-331658 Clientid:01:52:54:00:2a:47:d0}
	I0918 21:05:28.609564   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined IP address 192.168.61.31 and MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:28.609690   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHPort
	I0918 21:05:28.609891   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHKeyPath
	I0918 21:05:28.610102   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHUsername
	I0918 21:05:28.610332   61273 sshutil.go:53] new ssh client: &{IP:192.168.61.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/no-preload-331658/id_rsa Username:docker}
	I0918 21:05:28.690571   61273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0918 21:05:28.719257   61273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0918 21:05:28.744119   61273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0918 21:05:28.768692   61273 provision.go:87] duration metric: took 475.955066ms to configureAuth
	I0918 21:05:28.768720   61273 buildroot.go:189] setting minikube options for container-runtime
	I0918 21:05:28.768941   61273 config.go:182] Loaded profile config "no-preload-331658": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0918 21:05:28.769031   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHHostname
	I0918 21:05:28.771437   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:28.771747   61273 main.go:141] libmachine: (no-preload-331658) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:47:d0", ip: ""} in network mk-no-preload-331658: {Iface:virbr2 ExpiryTime:2024-09-18 21:55:52 +0000 UTC Type:0 Mac:52:54:00:2a:47:d0 Iaid: IPaddr:192.168.61.31 Prefix:24 Hostname:no-preload-331658 Clientid:01:52:54:00:2a:47:d0}
	I0918 21:05:28.771786   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined IP address 192.168.61.31 and MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:28.771906   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHPort
	I0918 21:05:28.772127   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHKeyPath
	I0918 21:05:28.772330   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHKeyPath
	I0918 21:05:28.772496   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHUsername
	I0918 21:05:28.772717   61273 main.go:141] libmachine: Using SSH client type: native
	I0918 21:05:28.772886   61273 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.31 22 <nil> <nil>}
	I0918 21:05:28.772902   61273 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0918 21:05:29.001137   61273 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0918 21:05:29.001160   61273 machine.go:96] duration metric: took 1.056205004s to provisionDockerMachine
	I0918 21:05:29.001171   61273 start.go:293] postStartSetup for "no-preload-331658" (driver="kvm2")
	I0918 21:05:29.001181   61273 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0918 21:05:29.001194   61273 main.go:141] libmachine: (no-preload-331658) Calling .DriverName
	I0918 21:05:29.001531   61273 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0918 21:05:29.001556   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHHostname
	I0918 21:05:29.004307   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:29.004656   61273 main.go:141] libmachine: (no-preload-331658) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:47:d0", ip: ""} in network mk-no-preload-331658: {Iface:virbr2 ExpiryTime:2024-09-18 21:55:52 +0000 UTC Type:0 Mac:52:54:00:2a:47:d0 Iaid: IPaddr:192.168.61.31 Prefix:24 Hostname:no-preload-331658 Clientid:01:52:54:00:2a:47:d0}
	I0918 21:05:29.004686   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined IP address 192.168.61.31 and MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:29.004877   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHPort
	I0918 21:05:29.005128   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHKeyPath
	I0918 21:05:29.005379   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHUsername
	I0918 21:05:29.005556   61273 sshutil.go:53] new ssh client: &{IP:192.168.61.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/no-preload-331658/id_rsa Username:docker}
	I0918 21:05:29.087453   61273 ssh_runner.go:195] Run: cat /etc/os-release
	I0918 21:05:29.091329   61273 info.go:137] Remote host: Buildroot 2023.02.9
	I0918 21:05:29.091356   61273 filesync.go:126] Scanning /home/jenkins/minikube-integration/19667-7671/.minikube/addons for local assets ...
	I0918 21:05:29.091422   61273 filesync.go:126] Scanning /home/jenkins/minikube-integration/19667-7671/.minikube/files for local assets ...
	I0918 21:05:29.091493   61273 filesync.go:149] local asset: /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem -> 148782.pem in /etc/ssl/certs
	I0918 21:05:29.091578   61273 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0918 21:05:29.101039   61273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem --> /etc/ssl/certs/148782.pem (1708 bytes)
	I0918 21:05:29.125451   61273 start.go:296] duration metric: took 124.264463ms for postStartSetup
	I0918 21:05:29.125492   61273 fix.go:56] duration metric: took 19.880181743s for fixHost
	I0918 21:05:29.125514   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHHostname
	I0918 21:05:29.128543   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:29.128968   61273 main.go:141] libmachine: (no-preload-331658) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:47:d0", ip: ""} in network mk-no-preload-331658: {Iface:virbr2 ExpiryTime:2024-09-18 21:55:52 +0000 UTC Type:0 Mac:52:54:00:2a:47:d0 Iaid: IPaddr:192.168.61.31 Prefix:24 Hostname:no-preload-331658 Clientid:01:52:54:00:2a:47:d0}
	I0918 21:05:29.129022   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined IP address 192.168.61.31 and MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:29.129185   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHPort
	I0918 21:05:29.129385   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHKeyPath
	I0918 21:05:29.129580   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHKeyPath
	I0918 21:05:29.129739   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHUsername
	I0918 21:05:29.129919   61273 main.go:141] libmachine: Using SSH client type: native
	I0918 21:05:29.130155   61273 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.31 22 <nil> <nil>}
	I0918 21:05:29.130172   61273 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0918 21:05:29.240857   61273 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726693529.214864261
	
	I0918 21:05:29.240886   61273 fix.go:216] guest clock: 1726693529.214864261
	I0918 21:05:29.240897   61273 fix.go:229] Guest: 2024-09-18 21:05:29.214864261 +0000 UTC Remote: 2024-09-18 21:05:29.125495769 +0000 UTC m=+357.666326175 (delta=89.368492ms)
	I0918 21:05:29.240943   61273 fix.go:200] guest clock delta is within tolerance: 89.368492ms
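
	(The fix.go lines above read the guest clock over SSH with "date +%s.%N" and compare it to the host's reference time for the same instant; the 89.368492ms delta is accepted as within tolerance. Below is a minimal Go sketch of that comparison using the two timestamps from the log; the 2-second tolerance is an illustrative assumption, not the value minikube applies.)

	package main

	import (
		"fmt"
		"time"
	)

	// clockDeltaWithinTolerance reports the drift between the guest's clock and
	// the host's reference time, and whether it stays inside maxDrift.
	func clockDeltaWithinTolerance(guest, host time.Time, maxDrift time.Duration) (time.Duration, bool) {
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		return delta, delta <= maxDrift
	}

	func main() {
		// Timestamps taken from the log lines above (2024-09-18 21:05:29 UTC).
		host := time.Date(2024, 9, 18, 21, 5, 29, 125495769, time.UTC)
		guest := time.Date(2024, 9, 18, 21, 5, 29, 214864261, time.UTC)

		// 2s is an assumed tolerance, used here for illustration only.
		delta, ok := clockDeltaWithinTolerance(guest, host, 2*time.Second)
		fmt.Printf("delta=%v within tolerance: %v\n", delta, ok) // prints delta=89.368492ms within tolerance: true
	}
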
	I0918 21:05:29.240949   61273 start.go:83] releasing machines lock for "no-preload-331658", held for 19.99567651s
	I0918 21:05:29.240969   61273 main.go:141] libmachine: (no-preload-331658) Calling .DriverName
	I0918 21:05:29.241256   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetIP
	I0918 21:05:29.243922   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:29.244347   61273 main.go:141] libmachine: (no-preload-331658) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:47:d0", ip: ""} in network mk-no-preload-331658: {Iface:virbr2 ExpiryTime:2024-09-18 21:55:52 +0000 UTC Type:0 Mac:52:54:00:2a:47:d0 Iaid: IPaddr:192.168.61.31 Prefix:24 Hostname:no-preload-331658 Clientid:01:52:54:00:2a:47:d0}
	I0918 21:05:29.244376   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined IP address 192.168.61.31 and MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:29.244575   61273 main.go:141] libmachine: (no-preload-331658) Calling .DriverName
	I0918 21:05:29.245157   61273 main.go:141] libmachine: (no-preload-331658) Calling .DriverName
	I0918 21:05:29.245380   61273 main.go:141] libmachine: (no-preload-331658) Calling .DriverName
	I0918 21:05:29.245492   61273 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0918 21:05:29.245548   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHHostname
	I0918 21:05:29.245640   61273 ssh_runner.go:195] Run: cat /version.json
	I0918 21:05:29.245665   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHHostname
	I0918 21:05:29.248511   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:29.248927   61273 main.go:141] libmachine: (no-preload-331658) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:47:d0", ip: ""} in network mk-no-preload-331658: {Iface:virbr2 ExpiryTime:2024-09-18 21:55:52 +0000 UTC Type:0 Mac:52:54:00:2a:47:d0 Iaid: IPaddr:192.168.61.31 Prefix:24 Hostname:no-preload-331658 Clientid:01:52:54:00:2a:47:d0}
	I0918 21:05:29.248954   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:29.248984   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined IP address 192.168.61.31 and MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:29.249198   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHPort
	I0918 21:05:29.249423   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHKeyPath
	I0918 21:05:29.249506   61273 main.go:141] libmachine: (no-preload-331658) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:47:d0", ip: ""} in network mk-no-preload-331658: {Iface:virbr2 ExpiryTime:2024-09-18 21:55:52 +0000 UTC Type:0 Mac:52:54:00:2a:47:d0 Iaid: IPaddr:192.168.61.31 Prefix:24 Hostname:no-preload-331658 Clientid:01:52:54:00:2a:47:d0}
	I0918 21:05:29.249538   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined IP address 192.168.61.31 and MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:29.249608   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHUsername
	I0918 21:05:29.249692   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHPort
	I0918 21:05:29.249791   61273 sshutil.go:53] new ssh client: &{IP:192.168.61.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/no-preload-331658/id_rsa Username:docker}
	I0918 21:05:29.249899   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHKeyPath
	I0918 21:05:29.250076   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHUsername
	I0918 21:05:29.250228   61273 sshutil.go:53] new ssh client: &{IP:192.168.61.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/no-preload-331658/id_rsa Username:docker}
	I0918 21:05:29.365104   61273 ssh_runner.go:195] Run: systemctl --version
	I0918 21:05:29.371202   61273 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0918 21:05:29.518067   61273 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0918 21:05:29.524126   61273 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0918 21:05:29.524207   61273 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0918 21:05:29.540977   61273 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0918 21:05:29.541007   61273 start.go:495] detecting cgroup driver to use...
	I0918 21:05:29.541072   61273 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0918 21:05:29.558893   61273 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0918 21:05:29.576084   61273 docker.go:217] disabling cri-docker service (if available) ...
	I0918 21:05:29.576161   61273 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0918 21:05:29.591212   61273 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0918 21:05:29.605765   61273 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0918 21:05:29.734291   61273 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0918 21:05:29.892707   61273 docker.go:233] disabling docker service ...
	I0918 21:05:29.892771   61273 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0918 21:05:29.907575   61273 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0918 21:05:29.920545   61273 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0918 21:05:30.058604   61273 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0918 21:05:30.196896   61273 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0918 21:05:30.211398   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0918 21:05:30.231791   61273 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0918 21:05:30.231917   61273 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:05:30.243369   61273 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0918 21:05:30.243465   61273 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:05:30.254911   61273 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:05:30.266839   61273 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:05:30.278532   61273 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0918 21:05:30.290173   61273 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:05:30.301068   61273 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:05:30.318589   61273 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:05:30.329022   61273 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0918 21:05:30.338645   61273 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0918 21:05:30.338720   61273 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0918 21:05:30.351797   61273 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
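
	(The sequence just above probes net.bridge.bridge-nf-call-iptables, tolerates the failure because /proc/sys/net/bridge does not exist yet, loads br_netfilter, and then enables IPv4 forwarding. Below is a small Go sketch of that probe-then-load pattern; it shells out with the same command strings as the log and is not minikube's actual crio.go code.)

	package main

	import (
		"fmt"
		"os/exec"
	)

	// ensureBrNetfilter probes the sysctl first and only loads the br_netfilter
	// module when the probe fails, then enables IPv4 forwarding, mirroring the
	// sequence in the log above.
	func ensureBrNetfilter(run func(name string, args ...string) error) error {
		if err := run("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
			// /proc/sys/net/bridge only appears once the module is loaded.
			if err := run("sudo", "modprobe", "br_netfilter"); err != nil {
				return fmt.Errorf("modprobe br_netfilter: %w", err)
			}
		}
		return run("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward")
	}

	func main() {
		run := func(name string, args ...string) error {
			return exec.Command(name, args...).Run()
		}
		if err := ensureBrNetfilter(run); err != nil {
			fmt.Println("netfilter setup failed:", err)
		}
	}
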
	I0918 21:05:30.363412   61273 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 21:05:30.504035   61273 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0918 21:05:30.606470   61273 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0918 21:05:30.606547   61273 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0918 21:05:30.611499   61273 start.go:563] Will wait 60s for crictl version
	I0918 21:05:30.611559   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:05:30.615485   61273 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0918 21:05:30.659735   61273 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0918 21:05:30.659835   61273 ssh_runner.go:195] Run: crio --version
	I0918 21:05:30.690573   61273 ssh_runner.go:195] Run: crio --version
	I0918 21:05:30.723342   61273 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0918 21:05:30.724604   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetIP
	I0918 21:05:30.727445   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:30.727885   61273 main.go:141] libmachine: (no-preload-331658) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:47:d0", ip: ""} in network mk-no-preload-331658: {Iface:virbr2 ExpiryTime:2024-09-18 21:55:52 +0000 UTC Type:0 Mac:52:54:00:2a:47:d0 Iaid: IPaddr:192.168.61.31 Prefix:24 Hostname:no-preload-331658 Clientid:01:52:54:00:2a:47:d0}
	I0918 21:05:30.727919   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined IP address 192.168.61.31 and MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:30.728132   61273 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0918 21:05:30.732134   61273 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0918 21:05:30.745695   61273 kubeadm.go:883] updating cluster {Name:no-preload-331658 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-331658 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.31 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0918 21:05:30.745813   61273 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0918 21:05:30.745849   61273 ssh_runner.go:195] Run: sudo crictl images --output json
	I0918 21:05:30.788504   61273 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0918 21:05:30.788537   61273 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.1 registry.k8s.io/kube-controller-manager:v1.31.1 registry.k8s.io/kube-scheduler:v1.31.1 registry.k8s.io/kube-proxy:v1.31.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0918 21:05:30.788634   61273 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 21:05:30.788655   61273 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0918 21:05:30.788673   61273 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I0918 21:05:30.788685   61273 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0918 21:05:30.788655   61273 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I0918 21:05:30.788796   61273 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I0918 21:05:30.788804   61273 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0918 21:05:30.788655   61273 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0918 21:05:30.790173   61273 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 21:05:30.790181   61273 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I0918 21:05:30.790199   61273 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0918 21:05:30.790170   61273 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I0918 21:05:30.790222   61273 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0918 21:05:30.790237   61273 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0918 21:05:30.790268   61273 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0918 21:05:30.790542   61273 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I0918 21:05:31.049150   61273 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0918 21:05:31.052046   61273 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.1
	I0918 21:05:31.099660   61273 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I0918 21:05:31.099861   61273 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.1
	I0918 21:05:31.111308   61273 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.1
	I0918 21:05:31.111439   61273 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0918 21:05:31.112293   61273 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.1
	I0918 21:05:31.203873   61273 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.1" needs transfer: "registry.k8s.io/kube-proxy:v1.31.1" does not exist at hash "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561" in container runtime
	I0918 21:05:31.203934   61273 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.1
	I0918 21:05:31.204042   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:05:31.208912   61273 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I0918 21:05:31.208937   61273 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.1" does not exist at hash "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b" in container runtime
	I0918 21:05:31.208968   61273 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.1
	I0918 21:05:31.208960   61273 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I0918 21:05:31.209020   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:05:31.209029   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:05:31.249355   61273 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0918 21:05:31.249408   61273 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0918 21:05:31.249459   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:05:31.253214   61273 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.1" does not exist at hash "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1" in container runtime
	I0918 21:05:31.253244   61273 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.1" does not exist at hash "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee" in container runtime
	I0918 21:05:31.253286   61273 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.1
	I0918 21:05:31.253274   61273 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0918 21:05:31.253335   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:05:31.253339   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:05:31.253351   61273 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0918 21:05:31.253405   61273 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0918 21:05:31.253419   61273 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0918 21:05:31.255163   61273 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0918 21:05:31.330929   61273 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0918 21:05:31.330999   61273 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0918 21:05:31.349540   61273 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0918 21:05:31.349558   61273 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0918 21:05:31.350088   61273 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0918 21:05:31.353763   61273 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0918 21:05:31.447057   61273 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0918 21:05:31.457171   61273 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0918 21:05:31.457239   61273 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0918 21:05:31.483087   61273 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0918 21:05:31.483097   61273 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0918 21:05:31.483210   61273 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0918 21:05:28.131874   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:30.133067   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:32.134557   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:29.389052   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:31.887032   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:29.704274   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:30.203372   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:30.703751   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:31.203670   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:31.704097   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:32.203611   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:32.703968   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:33.203356   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:33.704260   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:34.204082   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:31.573784   61273 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I0918 21:05:31.573906   61273 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1
	I0918 21:05:31.573927   61273 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0918 21:05:31.573951   61273 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I0918 21:05:31.574038   61273 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0918 21:05:31.605972   61273 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I0918 21:05:31.606077   61273 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0918 21:05:31.606086   61273 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I0918 21:05:31.613640   61273 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0918 21:05:31.613769   61273 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0918 21:05:31.641105   61273 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I0918 21:05:31.641109   61273 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.1 (exists)
	I0918 21:05:31.641199   61273 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.1
	I0918 21:05:31.641223   61273 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0918 21:05:31.641244   61273 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1
	I0918 21:05:31.641175   61273 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.1 (exists)
	I0918 21:05:31.666586   61273 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I0918 21:05:31.666661   61273 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I0918 21:05:31.666792   61273 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0918 21:05:31.666821   61273 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.1 (exists)
	I0918 21:05:31.666795   61273 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0918 21:05:32.009797   61273 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 21:05:33.610028   61273 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1: (1.968756977s)
	I0918 21:05:33.610065   61273 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 from cache
	I0918 21:05:33.610080   61273 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1: (1.943261692s)
	I0918 21:05:33.610111   61273 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.1 (exists)
	I0918 21:05:33.610090   61273 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0918 21:05:33.610122   61273 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.600294362s)
	I0918 21:05:33.610161   61273 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0918 21:05:33.610174   61273 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0918 21:05:33.610193   61273 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 21:05:33.610242   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:05:35.571685   61273 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1: (1.96147024s)
	I0918 21:05:35.571722   61273 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 from cache
	I0918 21:05:35.571748   61273 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I0918 21:05:35.571802   61273 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I0918 21:05:35.571802   61273 ssh_runner.go:235] Completed: which crictl: (1.961540517s)
	I0918 21:05:35.571882   61273 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 21:05:34.632853   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:36.633341   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:33.887591   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:36.387534   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:34.703445   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:35.203660   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:35.703374   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:36.203304   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:36.704191   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:37.204129   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:37.703912   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:38.203310   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:38.703616   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:39.203258   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:37.536622   61273 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.96470192s)
	I0918 21:05:37.536666   61273 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (1.96484474s)
	I0918 21:05:37.536690   61273 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I0918 21:05:37.536713   61273 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 21:05:37.536721   61273 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0918 21:05:37.536766   61273 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0918 21:05:39.615751   61273 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1: (2.078954836s)
	I0918 21:05:39.615791   61273 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 from cache
	I0918 21:05:39.615823   61273 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.079084749s)
	I0918 21:05:39.615902   61273 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 21:05:39.615829   61273 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0918 21:05:39.615972   61273 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0918 21:05:39.676258   61273 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0918 21:05:39.676355   61273 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0918 21:05:38.634158   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:40.634292   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:38.888255   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:41.387766   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:39.704105   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:40.204102   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:40.704073   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:41.203654   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:41.703947   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:42.203722   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:42.703303   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:43.203847   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:43.704163   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:44.203216   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:42.909577   61273 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (3.233201912s)
	I0918 21:05:42.909617   61273 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0918 21:05:42.909722   61273 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.293701319s)
	I0918 21:05:42.909748   61273 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0918 21:05:42.909781   61273 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0918 21:05:42.909859   61273 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0918 21:05:44.767646   61273 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1: (1.857764218s)
	I0918 21:05:44.767673   61273 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 from cache
	I0918 21:05:44.767705   61273 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0918 21:05:44.767787   61273 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0918 21:05:45.419210   61273 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0918 21:05:45.419257   61273 cache_images.go:123] Successfully loaded all cached images
	I0918 21:05:45.419265   61273 cache_images.go:92] duration metric: took 14.630712818s to LoadCachedImages
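
	(Each cached image above goes through the same cycle: the stale copy is removed from the runtime with crictl rmi, the tarball under /var/lib/minikube/images is reused when it already exists on the guest, and podman load imports it. Below is a Go sketch of that per-image cycle; the command strings mirror the log, but this is not minikube's actual cache_images.go, and the scp step for a missing tarball is omitted.)

	package main

	import (
		"fmt"
		"path"
	)

	// runner abstracts the ssh_runner calls seen in the log; a non-nil error
	// means the command failed on the guest.
	type runner func(cmd string) error

	// loadCachedImage removes the stale image, skips the copy when the tarball
	// already exists on the guest, then loads it with podman.
	func loadCachedImage(run runner, image, tarball string) error {
		if err := run("sudo /usr/bin/crictl rmi " + image); err != nil {
			return fmt.Errorf("rmi %s: %w", image, err)
		}
		remote := path.Join("/var/lib/minikube/images", tarball)
		if err := run(`stat -c "%s %y" ` + remote); err == nil {
			fmt.Println("copy: skipping", remote, "(exists)")
		} // otherwise the tarball would first be copied over from the host cache
		if err := run("sudo podman load -i " + remote); err != nil {
			return fmt.Errorf("podman load %s: %w", remote, err)
		}
		fmt.Println("Transferred and loaded", image, "from cache")
		return nil
	}

	func main() {
		echo := func(cmd string) error { fmt.Println("Run:", cmd); return nil }
		_ = loadCachedImage(echo, "registry.k8s.io/kube-proxy:v1.31.1", "kube-proxy_v1.31.1")
	}
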
	I0918 21:05:45.419278   61273 kubeadm.go:934] updating node { 192.168.61.31 8443 v1.31.1 crio true true} ...
	I0918 21:05:45.419399   61273 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-331658 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.31
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-331658 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0918 21:05:45.419479   61273 ssh_runner.go:195] Run: crio config
	I0918 21:05:45.468525   61273 cni.go:84] Creating CNI manager for ""
	I0918 21:05:45.468549   61273 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0918 21:05:45.468558   61273 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0918 21:05:45.468579   61273 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.31 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-331658 NodeName:no-preload-331658 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.31"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.31 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0918 21:05:45.468706   61273 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.31
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-331658"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.31
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.31"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0918 21:05:45.468781   61273 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0918 21:05:45.479592   61273 binaries.go:44] Found k8s binaries, skipping transfer
	I0918 21:05:45.479662   61273 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0918 21:05:45.488586   61273 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0918 21:05:45.507027   61273 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0918 21:05:45.525430   61273 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0918 21:05:45.543854   61273 ssh_runner.go:195] Run: grep 192.168.61.31	control-plane.minikube.internal$ /etc/hosts
	I0918 21:05:45.547792   61273 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.31	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0918 21:05:45.559968   61273 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 21:05:45.686602   61273 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0918 21:05:45.702793   61273 certs.go:68] Setting up /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/no-preload-331658 for IP: 192.168.61.31
	I0918 21:05:45.702814   61273 certs.go:194] generating shared ca certs ...
	I0918 21:05:45.702829   61273 certs.go:226] acquiring lock for ca certs: {Name:mkf54e0e71ed884c4372f9d3d4cb308ea4600185 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 21:05:45.703005   61273 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.key
	I0918 21:05:45.703071   61273 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.key
	I0918 21:05:45.703085   61273 certs.go:256] generating profile certs ...
	I0918 21:05:45.703159   61273 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/no-preload-331658/client.key
	I0918 21:05:45.703228   61273 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/no-preload-331658/apiserver.key.1a336b78
	I0918 21:05:45.703263   61273 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/no-preload-331658/proxy-client.key
	I0918 21:05:45.703384   61273 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878.pem (1338 bytes)
	W0918 21:05:45.703417   61273 certs.go:480] ignoring /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878_empty.pem, impossibly tiny 0 bytes
	I0918 21:05:45.703430   61273 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem (1675 bytes)
	I0918 21:05:45.703463   61273 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem (1082 bytes)
	I0918 21:05:45.703493   61273 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem (1123 bytes)
	I0918 21:05:45.703521   61273 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem (1679 bytes)
	I0918 21:05:45.703582   61273 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem (1708 bytes)
	I0918 21:05:45.704338   61273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0918 21:05:45.757217   61273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0918 21:05:45.791588   61273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0918 21:05:45.825543   61273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0918 21:05:45.859322   61273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/no-preload-331658/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0918 21:05:45.892890   61273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/no-preload-331658/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0918 21:05:45.922841   61273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/no-preload-331658/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0918 21:05:45.947670   61273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/no-preload-331658/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0918 21:05:45.973315   61273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem --> /usr/share/ca-certificates/148782.pem (1708 bytes)
	I0918 21:05:45.997699   61273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0918 21:05:46.022802   61273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878.pem --> /usr/share/ca-certificates/14878.pem (1338 bytes)
	I0918 21:05:46.046646   61273 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0918 21:05:46.063329   61273 ssh_runner.go:195] Run: openssl version
	I0918 21:05:46.069432   61273 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148782.pem && ln -fs /usr/share/ca-certificates/148782.pem /etc/ssl/certs/148782.pem"
	I0918 21:05:46.081104   61273 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148782.pem
	I0918 21:05:46.086180   61273 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 18 19:57 /usr/share/ca-certificates/148782.pem
	I0918 21:05:46.086241   61273 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148782.pem
	I0918 21:05:46.092527   61273 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148782.pem /etc/ssl/certs/3ec20f2e.0"
	I0918 21:05:46.103601   61273 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0918 21:05:46.114656   61273 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0918 21:05:46.118788   61273 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 18 19:39 /usr/share/ca-certificates/minikubeCA.pem
	I0918 21:05:46.118855   61273 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0918 21:05:46.124094   61273 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0918 21:05:46.135442   61273 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14878.pem && ln -fs /usr/share/ca-certificates/14878.pem /etc/ssl/certs/14878.pem"
	I0918 21:05:46.146105   61273 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14878.pem
	I0918 21:05:46.150661   61273 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 18 19:57 /usr/share/ca-certificates/14878.pem
	I0918 21:05:46.150714   61273 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14878.pem
	I0918 21:05:46.156247   61273 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14878.pem /etc/ssl/certs/51391683.0"
	I0918 21:05:46.167475   61273 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0918 21:05:46.172172   61273 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0918 21:05:46.178638   61273 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0918 21:05:46.184644   61273 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0918 21:05:46.190704   61273 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0918 21:05:46.196414   61273 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0918 21:05:46.202467   61273 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
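
	The six openssl x509 -noout -in <cert> -checkend 86400 runs above verify that none of the existing control-plane certificates expire within the next 24 hours, which is why the restart path can reuse them instead of regenerating. A minimal Go sketch of the same expiry check follows (illustrative only, not minikube's implementation; the certificate paths are whatever files you pass on the command line):

	// certcheck.go — illustrative sketch, not minikube source. Mirrors the effect of
	// `openssl x509 -noout -in <cert> -checkend 86400`: report whether the
	// certificate expires within the next 24 hours.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path expires within d.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		raw, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(raw)
		if block == nil {
			return false, fmt.Errorf("no PEM data in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		// Example usage: go run certcheck.go /var/lib/minikube/certs/etcd/server.crt ...
		for _, p := range os.Args[1:] {
			soon, err := expiresWithin(p, 24*time.Hour)
			if err != nil {
				log.Fatalf("%s: %v", p, err)
			}
			fmt.Printf("%s expires within 24h: %v\n", p, soon)
		}
	}
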
	I0918 21:05:46.208306   61273 kubeadm.go:392] StartCluster: {Name:no-preload-331658 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-331658 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.31 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 21:05:46.208405   61273 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0918 21:05:46.208472   61273 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0918 21:05:46.247189   61273 cri.go:89] found id: ""
	I0918 21:05:46.247267   61273 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0918 21:05:46.258228   61273 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0918 21:05:46.258253   61273 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0918 21:05:46.258309   61273 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0918 21:05:46.268703   61273 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0918 21:05:46.269728   61273 kubeconfig.go:125] found "no-preload-331658" server: "https://192.168.61.31:8443"
	I0918 21:05:46.271749   61273 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0918 21:05:46.282051   61273 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.31
	I0918 21:05:46.282105   61273 kubeadm.go:1160] stopping kube-system containers ...
	I0918 21:05:46.282122   61273 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0918 21:05:46.282191   61273 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0918 21:05:46.319805   61273 cri.go:89] found id: ""
	I0918 21:05:46.319880   61273 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0918 21:05:46.336130   61273 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0918 21:05:46.345940   61273 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0918 21:05:46.345962   61273 kubeadm.go:157] found existing configuration files:
	
	I0918 21:05:46.346008   61273 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0918 21:05:46.355577   61273 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0918 21:05:46.355658   61273 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0918 21:05:46.367154   61273 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0918 21:05:46.377062   61273 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0918 21:05:46.377126   61273 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0918 21:05:46.387180   61273 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0918 21:05:46.396578   61273 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0918 21:05:46.396642   61273 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0918 21:05:46.406687   61273 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0918 21:05:46.416545   61273 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0918 21:05:46.416617   61273 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0918 21:05:46.426405   61273 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0918 21:05:46.436343   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:05:43.132484   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:45.132905   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:47.132942   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:43.890245   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:46.386955   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:44.703267   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:45.203924   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:45.703945   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:46.203386   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:46.703674   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:47.203387   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:47.704034   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:48.203348   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:48.703715   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:49.203984   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:46.563094   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:05:47.663823   61273 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.100694645s)
	I0918 21:05:47.663857   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:05:47.895962   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:05:47.978862   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:05:48.095438   61273 api_server.go:52] waiting for apiserver process to appear ...
	I0918 21:05:48.095530   61273 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:48.595581   61273 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:49.095761   61273 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:49.122304   61273 api_server.go:72] duration metric: took 1.026867171s to wait for apiserver process to appear ...
	I0918 21:05:49.122343   61273 api_server.go:88] waiting for apiserver healthz status ...
	I0918 21:05:49.122361   61273 api_server.go:253] Checking apiserver healthz at https://192.168.61.31:8443/healthz ...
	I0918 21:05:49.133503   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:51.133761   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:48.386996   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:50.387697   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:52.886989   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:52.253818   61273 api_server.go:279] https://192.168.61.31:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0918 21:05:52.253850   61273 api_server.go:103] status: https://192.168.61.31:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0918 21:05:52.253864   61273 api_server.go:253] Checking apiserver healthz at https://192.168.61.31:8443/healthz ...
	I0918 21:05:52.290586   61273 api_server.go:279] https://192.168.61.31:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0918 21:05:52.290617   61273 api_server.go:103] status: https://192.168.61.31:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0918 21:05:52.623078   61273 api_server.go:253] Checking apiserver healthz at https://192.168.61.31:8443/healthz ...
	I0918 21:05:52.631774   61273 api_server.go:279] https://192.168.61.31:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0918 21:05:52.631811   61273 api_server.go:103] status: https://192.168.61.31:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0918 21:05:53.123498   61273 api_server.go:253] Checking apiserver healthz at https://192.168.61.31:8443/healthz ...
	I0918 21:05:53.132091   61273 api_server.go:279] https://192.168.61.31:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0918 21:05:53.132120   61273 api_server.go:103] status: https://192.168.61.31:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0918 21:05:53.622597   61273 api_server.go:253] Checking apiserver healthz at https://192.168.61.31:8443/healthz ...
	I0918 21:05:53.628896   61273 api_server.go:279] https://192.168.61.31:8443/healthz returned 200:
	ok
	I0918 21:05:53.638315   61273 api_server.go:141] control plane version: v1.31.1
	I0918 21:05:53.638354   61273 api_server.go:131] duration metric: took 4.516002991s to wait for apiserver health ...
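
	The healthz polling above shows the usual restart pattern: an initial 403 from the anonymous probe, then 500s while poststarthooks such as rbac/bootstrap-roles finish, and finally a 200 that ends the wait. A rough Go sketch of such a wait loop (an illustration with assumed inputs, not minikube's api_server.go; the URL and the skipped TLS verification are placeholders for the sketch):

	// healthwait.go — illustrative sketch of an apiserver /healthz wait loop.
	// Polls the endpoint until it returns 200 OK or the deadline passes.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"log"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only; verify the CA in real code
			},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
				// 403 or 500 with failing poststarthooks means the server is up
				// but still bootstrapping; keep polling.
				fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver not healthy after %s", timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.61.31:8443/healthz", 2*time.Minute); err != nil {
			log.Fatal(err)
		}
	}
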
	I0918 21:05:53.638367   61273 cni.go:84] Creating CNI manager for ""
	I0918 21:05:53.638376   61273 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0918 21:05:53.639948   61273 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0918 21:05:49.703565   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:50.204192   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:50.704248   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:51.203335   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:51.703761   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:52.203474   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:52.703901   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:53.203856   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:53.704192   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:54.204243   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:53.641376   61273 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0918 21:05:53.667828   61273 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0918 21:05:53.701667   61273 system_pods.go:43] waiting for kube-system pods to appear ...
	I0918 21:05:53.714053   61273 system_pods.go:59] 8 kube-system pods found
	I0918 21:05:53.714101   61273 system_pods.go:61] "coredns-7c65d6cfc9-dgnw2" [085d5a98-0a61-4678-830f-384780a0d7ef] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0918 21:05:53.714113   61273 system_pods.go:61] "etcd-no-preload-331658" [d6a53ff4-fcf0-46c0-9e77-9af6d0363e08] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0918 21:05:53.714126   61273 system_pods.go:61] "kube-apiserver-no-preload-331658" [1953598f-4c3f-4a98-ab75-b8ca1b8093ad] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0918 21:05:53.714135   61273 system_pods.go:61] "kube-controller-manager-no-preload-331658" [8fc655be-a192-4250-a31d-2e533a2b5e41] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0918 21:05:53.714145   61273 system_pods.go:61] "kube-proxy-hx25w" [a26512ff-f695-4452-8974-577479257160] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0918 21:05:53.714157   61273 system_pods.go:61] "kube-scheduler-no-preload-331658" [64e5347e-e7fd-4a0d-ae41-539c3989c29e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0918 21:05:53.714169   61273 system_pods.go:61] "metrics-server-6867b74b74-n27vc" [b1de76ec-8987-49ce-ae66-eedda2705cde] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0918 21:05:53.714181   61273 system_pods.go:61] "storage-provisioner" [e110aeb3-e9bc-4bb9-9b49-5579558bdda2] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0918 21:05:53.714191   61273 system_pods.go:74] duration metric: took 12.499195ms to wait for pod list to return data ...
	I0918 21:05:53.714206   61273 node_conditions.go:102] verifying NodePressure condition ...
	I0918 21:05:53.720251   61273 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0918 21:05:53.720283   61273 node_conditions.go:123] node cpu capacity is 2
	I0918 21:05:53.720296   61273 node_conditions.go:105] duration metric: took 6.082637ms to run NodePressure ...
	I0918 21:05:53.720317   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:05:54.056981   61273 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0918 21:05:54.062413   61273 kubeadm.go:739] kubelet initialised
	I0918 21:05:54.062436   61273 kubeadm.go:740] duration metric: took 5.424693ms waiting for restarted kubelet to initialise ...
	I0918 21:05:54.062443   61273 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0918 21:05:54.069721   61273 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-dgnw2" in "kube-system" namespace to be "Ready" ...
	I0918 21:05:54.089970   61273 pod_ready.go:98] node "no-preload-331658" hosting pod "coredns-7c65d6cfc9-dgnw2" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-331658" has status "Ready":"False"
	I0918 21:05:54.090005   61273 pod_ready.go:82] duration metric: took 20.250586ms for pod "coredns-7c65d6cfc9-dgnw2" in "kube-system" namespace to be "Ready" ...
	E0918 21:05:54.090017   61273 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-331658" hosting pod "coredns-7c65d6cfc9-dgnw2" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-331658" has status "Ready":"False"
	I0918 21:05:54.090046   61273 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-331658" in "kube-system" namespace to be "Ready" ...
	I0918 21:05:54.105121   61273 pod_ready.go:98] node "no-preload-331658" hosting pod "etcd-no-preload-331658" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-331658" has status "Ready":"False"
	I0918 21:05:54.105156   61273 pod_ready.go:82] duration metric: took 15.097714ms for pod "etcd-no-preload-331658" in "kube-system" namespace to be "Ready" ...
	E0918 21:05:54.105170   61273 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-331658" hosting pod "etcd-no-preload-331658" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-331658" has status "Ready":"False"
	I0918 21:05:54.105180   61273 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-331658" in "kube-system" namespace to be "Ready" ...
	I0918 21:05:54.112687   61273 pod_ready.go:98] node "no-preload-331658" hosting pod "kube-apiserver-no-preload-331658" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-331658" has status "Ready":"False"
	I0918 21:05:54.112711   61273 pod_ready.go:82] duration metric: took 7.523191ms for pod "kube-apiserver-no-preload-331658" in "kube-system" namespace to be "Ready" ...
	E0918 21:05:54.112722   61273 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-331658" hosting pod "kube-apiserver-no-preload-331658" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-331658" has status "Ready":"False"
	I0918 21:05:54.112730   61273 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-331658" in "kube-system" namespace to be "Ready" ...
	I0918 21:05:54.119681   61273 pod_ready.go:98] node "no-preload-331658" hosting pod "kube-controller-manager-no-preload-331658" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-331658" has status "Ready":"False"
	I0918 21:05:54.119707   61273 pod_ready.go:82] duration metric: took 6.967275ms for pod "kube-controller-manager-no-preload-331658" in "kube-system" namespace to be "Ready" ...
	E0918 21:05:54.119716   61273 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-331658" hosting pod "kube-controller-manager-no-preload-331658" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-331658" has status "Ready":"False"
	I0918 21:05:54.119723   61273 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-hx25w" in "kube-system" namespace to be "Ready" ...
	I0918 21:05:54.505099   61273 pod_ready.go:98] node "no-preload-331658" hosting pod "kube-proxy-hx25w" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-331658" has status "Ready":"False"
	I0918 21:05:54.505127   61273 pod_ready.go:82] duration metric: took 385.395528ms for pod "kube-proxy-hx25w" in "kube-system" namespace to be "Ready" ...
	E0918 21:05:54.505140   61273 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-331658" hosting pod "kube-proxy-hx25w" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-331658" has status "Ready":"False"
	I0918 21:05:54.505147   61273 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-331658" in "kube-system" namespace to be "Ready" ...
	I0918 21:05:54.905748   61273 pod_ready.go:98] node "no-preload-331658" hosting pod "kube-scheduler-no-preload-331658" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-331658" has status "Ready":"False"
	I0918 21:05:54.905774   61273 pod_ready.go:82] duration metric: took 400.618175ms for pod "kube-scheduler-no-preload-331658" in "kube-system" namespace to be "Ready" ...
	E0918 21:05:54.905785   61273 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-331658" hosting pod "kube-scheduler-no-preload-331658" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-331658" has status "Ready":"False"
	I0918 21:05:54.905794   61273 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace to be "Ready" ...
	I0918 21:05:55.305077   61273 pod_ready.go:98] node "no-preload-331658" hosting pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-331658" has status "Ready":"False"
	I0918 21:05:55.305106   61273 pod_ready.go:82] duration metric: took 399.301293ms for pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace to be "Ready" ...
	E0918 21:05:55.305118   61273 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-331658" hosting pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-331658" has status "Ready":"False"
	I0918 21:05:55.305126   61273 pod_ready.go:39] duration metric: took 1.242662699s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
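
	The pod_ready wait above checks each system-critical pod's Ready condition and skips early because the node itself is not yet Ready. A comparable wait can be written against the Kubernetes API with client-go; the sketch below is illustrative only (the kubeconfig path is an example, the pod name is taken from the log above, and this is not the code that produced these lines):

	// podready.go — illustrative sketch of waiting for a pod's Ready condition with client-go.
	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitPodReady polls the pod until its Ready condition is True or timeout elapses.
	func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
			if err == nil {
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						return nil
					}
				}
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("pod %s/%s not Ready after %s", ns, name, timeout)
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // example path
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		if err := waitPodReady(cs, "kube-system", "etcd-no-preload-331658", 4*time.Minute); err != nil {
			log.Fatal(err)
		}
		fmt.Println("pod is Ready")
	}
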
	I0918 21:05:55.305150   61273 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0918 21:05:55.317568   61273 ops.go:34] apiserver oom_adj: -16
	I0918 21:05:55.317597   61273 kubeadm.go:597] duration metric: took 9.0593375s to restartPrimaryControlPlane
	I0918 21:05:55.317616   61273 kubeadm.go:394] duration metric: took 9.109322119s to StartCluster
	I0918 21:05:55.317643   61273 settings.go:142] acquiring lock: {Name:mk6ae95a69dcbe00c28846409e9f46945a88de2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 21:05:55.317720   61273 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19667-7671/kubeconfig
	I0918 21:05:55.320228   61273 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/kubeconfig: {Name:mk3a3eecadb0164dfdd5d3c4392b5b473a2a9bb0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 21:05:55.320552   61273 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.31 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0918 21:05:55.320609   61273 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0918 21:05:55.320716   61273 addons.go:69] Setting storage-provisioner=true in profile "no-preload-331658"
	I0918 21:05:55.320725   61273 addons.go:69] Setting default-storageclass=true in profile "no-preload-331658"
	I0918 21:05:55.320739   61273 addons.go:234] Setting addon storage-provisioner=true in "no-preload-331658"
	W0918 21:05:55.320747   61273 addons.go:243] addon storage-provisioner should already be in state true
	I0918 21:05:55.320765   61273 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-331658"
	I0918 21:05:55.320785   61273 host.go:66] Checking if "no-preload-331658" exists ...
	I0918 21:05:55.320769   61273 addons.go:69] Setting metrics-server=true in profile "no-preload-331658"
	I0918 21:05:55.320799   61273 config.go:182] Loaded profile config "no-preload-331658": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0918 21:05:55.320808   61273 addons.go:234] Setting addon metrics-server=true in "no-preload-331658"
	W0918 21:05:55.320863   61273 addons.go:243] addon metrics-server should already be in state true
	I0918 21:05:55.320889   61273 host.go:66] Checking if "no-preload-331658" exists ...
	I0918 21:05:55.321208   61273 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:05:55.321228   61273 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:05:55.321208   61273 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:05:55.321262   61273 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:05:55.321282   61273 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:05:55.321357   61273 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:05:55.323762   61273 out.go:177] * Verifying Kubernetes components...
	I0918 21:05:55.325718   61273 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 21:05:55.348485   61273 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43725
	I0918 21:05:55.349072   61273 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:05:55.349611   61273 main.go:141] libmachine: Using API Version  1
	I0918 21:05:55.349641   61273 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:05:55.349978   61273 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:05:55.350556   61273 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:05:55.350606   61273 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:05:55.368807   61273 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34285
	I0918 21:05:55.369340   61273 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:05:55.369826   61273 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35389
	I0918 21:05:55.369908   61273 main.go:141] libmachine: Using API Version  1
	I0918 21:05:55.369928   61273 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:05:55.369949   61273 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42891
	I0918 21:05:55.370195   61273 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:05:55.370303   61273 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:05:55.370408   61273 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:05:55.370494   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetState
	I0918 21:05:55.370772   61273 main.go:141] libmachine: Using API Version  1
	I0918 21:05:55.370797   61273 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:05:55.370908   61273 main.go:141] libmachine: Using API Version  1
	I0918 21:05:55.370929   61273 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:05:55.371790   61273 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:05:55.371833   61273 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:05:55.371996   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetState
	I0918 21:05:55.372415   61273 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:05:55.372470   61273 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:05:55.372532   61273 main.go:141] libmachine: (no-preload-331658) Calling .DriverName
	I0918 21:05:55.375524   61273 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 21:05:55.375574   61273 addons.go:234] Setting addon default-storageclass=true in "no-preload-331658"
	W0918 21:05:55.375593   61273 addons.go:243] addon default-storageclass should already be in state true
	I0918 21:05:55.375626   61273 host.go:66] Checking if "no-preload-331658" exists ...
	I0918 21:05:55.376008   61273 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:05:55.376097   61273 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:05:55.377828   61273 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0918 21:05:55.377848   61273 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0918 21:05:55.377864   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHHostname
	I0918 21:05:55.381877   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:55.382379   61273 main.go:141] libmachine: (no-preload-331658) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:47:d0", ip: ""} in network mk-no-preload-331658: {Iface:virbr2 ExpiryTime:2024-09-18 21:55:52 +0000 UTC Type:0 Mac:52:54:00:2a:47:d0 Iaid: IPaddr:192.168.61.31 Prefix:24 Hostname:no-preload-331658 Clientid:01:52:54:00:2a:47:d0}
	I0918 21:05:55.382438   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined IP address 192.168.61.31 and MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:55.382767   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHPort
	I0918 21:05:55.384470   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHKeyPath
	I0918 21:05:55.384700   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHUsername
	I0918 21:05:55.384863   61273 sshutil.go:53] new ssh client: &{IP:192.168.61.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/no-preload-331658/id_rsa Username:docker}
	I0918 21:05:55.399531   61273 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38229
	I0918 21:05:55.400009   61273 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:05:55.400532   61273 main.go:141] libmachine: Using API Version  1
	I0918 21:05:55.400552   61273 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:05:55.400918   61273 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:05:55.401097   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetState
	I0918 21:05:55.403124   61273 main.go:141] libmachine: (no-preload-331658) Calling .DriverName
	I0918 21:05:55.404237   61273 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45209
	I0918 21:05:55.404637   61273 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:05:55.405088   61273 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0918 21:05:55.405422   61273 main.go:141] libmachine: Using API Version  1
	I0918 21:05:55.405443   61273 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:05:55.405906   61273 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:05:55.406570   61273 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:05:55.406620   61273 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:05:55.406959   61273 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0918 21:05:55.406973   61273 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0918 21:05:55.407380   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHHostname
	I0918 21:05:55.411410   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:55.411430   61273 main.go:141] libmachine: (no-preload-331658) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:47:d0", ip: ""} in network mk-no-preload-331658: {Iface:virbr2 ExpiryTime:2024-09-18 21:55:52 +0000 UTC Type:0 Mac:52:54:00:2a:47:d0 Iaid: IPaddr:192.168.61.31 Prefix:24 Hostname:no-preload-331658 Clientid:01:52:54:00:2a:47:d0}
	I0918 21:05:55.411440   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined IP address 192.168.61.31 and MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:55.411727   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHPort
	I0918 21:05:55.411965   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHKeyPath
	I0918 21:05:55.412171   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHUsername
	I0918 21:05:55.412377   61273 sshutil.go:53] new ssh client: &{IP:192.168.61.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/no-preload-331658/id_rsa Username:docker}
	I0918 21:05:55.426166   61273 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38277
	I0918 21:05:55.426704   61273 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:05:55.427211   61273 main.go:141] libmachine: Using API Version  1
	I0918 21:05:55.427232   61273 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:05:55.427610   61273 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:05:55.427805   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetState
	I0918 21:05:55.429864   61273 main.go:141] libmachine: (no-preload-331658) Calling .DriverName
	I0918 21:05:55.430238   61273 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0918 21:05:55.430256   61273 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0918 21:05:55.430278   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHHostname
	I0918 21:05:55.433576   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:55.433894   61273 main.go:141] libmachine: (no-preload-331658) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:47:d0", ip: ""} in network mk-no-preload-331658: {Iface:virbr2 ExpiryTime:2024-09-18 21:55:52 +0000 UTC Type:0 Mac:52:54:00:2a:47:d0 Iaid: IPaddr:192.168.61.31 Prefix:24 Hostname:no-preload-331658 Clientid:01:52:54:00:2a:47:d0}
	I0918 21:05:55.433918   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined IP address 192.168.61.31 and MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:55.434411   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHPort
	I0918 21:05:55.434650   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHKeyPath
	I0918 21:05:55.434798   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHUsername
	I0918 21:05:55.434942   61273 sshutil.go:53] new ssh client: &{IP:192.168.61.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/no-preload-331658/id_rsa Username:docker}
	I0918 21:05:55.528033   61273 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0918 21:05:55.545524   61273 node_ready.go:35] waiting up to 6m0s for node "no-preload-331658" to be "Ready" ...
	I0918 21:05:55.606477   61273 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0918 21:05:55.606498   61273 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0918 21:05:55.628256   61273 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0918 21:05:55.636122   61273 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0918 21:05:55.636154   61273 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0918 21:05:55.663081   61273 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0918 21:05:55.663108   61273 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0918 21:05:55.715011   61273 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0918 21:05:55.738192   61273 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0918 21:05:56.247539   61273 main.go:141] libmachine: Making call to close driver server
	I0918 21:05:56.247568   61273 main.go:141] libmachine: (no-preload-331658) Calling .Close
	I0918 21:05:56.247900   61273 main.go:141] libmachine: (no-preload-331658) DBG | Closing plugin on server side
	I0918 21:05:56.247922   61273 main.go:141] libmachine: Successfully made call to close driver server
	I0918 21:05:56.247937   61273 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 21:05:56.247948   61273 main.go:141] libmachine: Making call to close driver server
	I0918 21:05:56.247960   61273 main.go:141] libmachine: (no-preload-331658) Calling .Close
	I0918 21:05:56.248225   61273 main.go:141] libmachine: Successfully made call to close driver server
	I0918 21:05:56.248240   61273 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 21:05:56.248273   61273 main.go:141] libmachine: (no-preload-331658) DBG | Closing plugin on server side
	I0918 21:05:56.261942   61273 main.go:141] libmachine: Making call to close driver server
	I0918 21:05:56.261972   61273 main.go:141] libmachine: (no-preload-331658) Calling .Close
	I0918 21:05:56.262269   61273 main.go:141] libmachine: (no-preload-331658) DBG | Closing plugin on server side
	I0918 21:05:56.262344   61273 main.go:141] libmachine: Successfully made call to close driver server
	I0918 21:05:56.262361   61273 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 21:05:56.944008   61273 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.22895695s)
	I0918 21:05:56.944084   61273 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.205856091s)
	I0918 21:05:56.944121   61273 main.go:141] libmachine: Making call to close driver server
	I0918 21:05:56.944138   61273 main.go:141] libmachine: (no-preload-331658) Calling .Close
	I0918 21:05:56.944087   61273 main.go:141] libmachine: Making call to close driver server
	I0918 21:05:56.944186   61273 main.go:141] libmachine: (no-preload-331658) Calling .Close
	I0918 21:05:56.944489   61273 main.go:141] libmachine: (no-preload-331658) DBG | Closing plugin on server side
	I0918 21:05:56.944539   61273 main.go:141] libmachine: Successfully made call to close driver server
	I0918 21:05:56.944553   61273 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 21:05:56.944561   61273 main.go:141] libmachine: Making call to close driver server
	I0918 21:05:56.944572   61273 main.go:141] libmachine: (no-preload-331658) Calling .Close
	I0918 21:05:56.944559   61273 main.go:141] libmachine: (no-preload-331658) DBG | Closing plugin on server side
	I0918 21:05:56.944570   61273 main.go:141] libmachine: Successfully made call to close driver server
	I0918 21:05:56.944654   61273 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 21:05:56.944669   61273 main.go:141] libmachine: Making call to close driver server
	I0918 21:05:56.944678   61273 main.go:141] libmachine: (no-preload-331658) Calling .Close
	I0918 21:05:56.944794   61273 main.go:141] libmachine: Successfully made call to close driver server
	I0918 21:05:56.944808   61273 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 21:05:56.944823   61273 main.go:141] libmachine: (no-preload-331658) DBG | Closing plugin on server side
	I0918 21:05:56.944965   61273 main.go:141] libmachine: (no-preload-331658) DBG | Closing plugin on server side
	I0918 21:05:56.944988   61273 main.go:141] libmachine: Successfully made call to close driver server
	I0918 21:05:56.944998   61273 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 21:05:56.945010   61273 addons.go:475] Verifying addon metrics-server=true in "no-preload-331658"
	I0918 21:05:56.946962   61273 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0918 21:05:53.135068   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:55.633160   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:55.393859   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:57.888366   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:54.703227   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:55.204057   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:55.704178   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:56.203443   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:56.703517   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:57.203499   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:57.703598   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:58.203660   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:58.703897   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:59.203256   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:56.948595   61273 addons.go:510] duration metric: took 1.627989207s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0918 21:05:57.549092   61273 node_ready.go:53] node "no-preload-331658" has status "Ready":"False"
	I0918 21:06:00.050199   61273 node_ready.go:53] node "no-preload-331658" has status "Ready":"False"
	I0918 21:05:58.134289   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:00.632302   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:00.386644   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:02.387972   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:59.704149   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:00.203356   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:00.703750   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:01.203765   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:01.704295   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:02.203759   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:02.703342   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:03.204083   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:03.703777   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:04.203340   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:02.549111   61273 node_ready.go:49] node "no-preload-331658" has status "Ready":"True"
	I0918 21:06:02.549153   61273 node_ready.go:38] duration metric: took 7.003597589s for node "no-preload-331658" to be "Ready" ...
	I0918 21:06:02.549162   61273 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0918 21:06:02.554487   61273 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-dgnw2" in "kube-system" namespace to be "Ready" ...
	I0918 21:06:02.560130   61273 pod_ready.go:93] pod "coredns-7c65d6cfc9-dgnw2" in "kube-system" namespace has status "Ready":"True"
	I0918 21:06:02.560160   61273 pod_ready.go:82] duration metric: took 5.643145ms for pod "coredns-7c65d6cfc9-dgnw2" in "kube-system" namespace to be "Ready" ...
	I0918 21:06:02.560173   61273 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-331658" in "kube-system" namespace to be "Ready" ...
	I0918 21:06:02.567971   61273 pod_ready.go:93] pod "etcd-no-preload-331658" in "kube-system" namespace has status "Ready":"True"
	I0918 21:06:02.567992   61273 pod_ready.go:82] duration metric: took 7.811385ms for pod "etcd-no-preload-331658" in "kube-system" namespace to be "Ready" ...
	I0918 21:06:02.568001   61273 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-331658" in "kube-system" namespace to be "Ready" ...
	I0918 21:06:02.572606   61273 pod_ready.go:93] pod "kube-apiserver-no-preload-331658" in "kube-system" namespace has status "Ready":"True"
	I0918 21:06:02.572633   61273 pod_ready.go:82] duration metric: took 4.625414ms for pod "kube-apiserver-no-preload-331658" in "kube-system" namespace to be "Ready" ...
	I0918 21:06:02.572644   61273 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-331658" in "kube-system" namespace to be "Ready" ...
	I0918 21:06:02.577222   61273 pod_ready.go:93] pod "kube-controller-manager-no-preload-331658" in "kube-system" namespace has status "Ready":"True"
	I0918 21:06:02.577243   61273 pod_ready.go:82] duration metric: took 4.591499ms for pod "kube-controller-manager-no-preload-331658" in "kube-system" namespace to be "Ready" ...
	I0918 21:06:02.577252   61273 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-hx25w" in "kube-system" namespace to be "Ready" ...
	I0918 21:06:02.949682   61273 pod_ready.go:93] pod "kube-proxy-hx25w" in "kube-system" namespace has status "Ready":"True"
	I0918 21:06:02.949707   61273 pod_ready.go:82] duration metric: took 372.449094ms for pod "kube-proxy-hx25w" in "kube-system" namespace to be "Ready" ...
	I0918 21:06:02.949716   61273 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-331658" in "kube-system" namespace to be "Ready" ...
	I0918 21:06:03.350071   61273 pod_ready.go:93] pod "kube-scheduler-no-preload-331658" in "kube-system" namespace has status "Ready":"True"
	I0918 21:06:03.350104   61273 pod_ready.go:82] duration metric: took 400.380059ms for pod "kube-scheduler-no-preload-331658" in "kube-system" namespace to be "Ready" ...
	I0918 21:06:03.350118   61273 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace to be "Ready" ...
	I0918 21:06:05.357041   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:02.634105   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:05.132860   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:04.887184   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:06.887596   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:04.703762   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:05.203639   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:05.703335   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:06.204156   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:06.703735   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:07.203278   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:07.704072   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:08.203299   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:08.703528   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:09.203725   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:07.857844   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:10.356822   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:07.633985   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:10.133861   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:08.887695   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:11.387735   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:09.703712   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:10.203930   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:10.704216   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:11.203706   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:11.703494   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:12.204098   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:12.703927   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:13.203379   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:13.703248   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:14.204000   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:12.356878   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:14.360285   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:12.631731   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:15.132229   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:17.132802   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:13.887296   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:16.386306   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:14.704159   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:15.203401   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:15.703942   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:16.204043   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:16.703691   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:17.203508   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:17.703445   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:18.203689   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:18.704249   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:19.203290   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:06:19.203401   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:06:19.239033   62061 cri.go:89] found id: ""
	I0918 21:06:19.239065   62061 logs.go:276] 0 containers: []
	W0918 21:06:19.239073   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:06:19.239079   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:06:19.239141   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:06:19.274781   62061 cri.go:89] found id: ""
	I0918 21:06:19.274809   62061 logs.go:276] 0 containers: []
	W0918 21:06:19.274819   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:06:19.274833   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:06:19.274895   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:06:19.307894   62061 cri.go:89] found id: ""
	I0918 21:06:19.307928   62061 logs.go:276] 0 containers: []
	W0918 21:06:19.307940   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:06:19.307948   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:06:19.308002   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:06:19.340572   62061 cri.go:89] found id: ""
	I0918 21:06:19.340602   62061 logs.go:276] 0 containers: []
	W0918 21:06:19.340610   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:06:19.340615   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:06:19.340672   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:06:19.375448   62061 cri.go:89] found id: ""
	I0918 21:06:19.375475   62061 logs.go:276] 0 containers: []
	W0918 21:06:19.375483   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:06:19.375489   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:06:19.375536   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:06:19.413102   62061 cri.go:89] found id: ""
	I0918 21:06:19.413133   62061 logs.go:276] 0 containers: []
	W0918 21:06:19.413158   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:06:19.413166   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:06:19.413238   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:06:19.447497   62061 cri.go:89] found id: ""
	I0918 21:06:19.447526   62061 logs.go:276] 0 containers: []
	W0918 21:06:19.447536   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:06:19.447544   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:06:19.447605   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:06:19.480848   62061 cri.go:89] found id: ""
	I0918 21:06:19.480880   62061 logs.go:276] 0 containers: []
	W0918 21:06:19.480892   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:06:19.480903   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:06:19.480916   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:06:16.856845   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:19.358010   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:19.632608   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:22.132792   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:18.387488   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:20.887832   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:19.533573   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:06:19.533625   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:06:19.546578   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:06:19.546604   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:06:19.667439   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:06:19.667459   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:06:19.667472   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:06:19.739525   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:06:19.739563   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:06:22.280913   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:22.296045   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:06:22.296132   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:06:22.332361   62061 cri.go:89] found id: ""
	I0918 21:06:22.332401   62061 logs.go:276] 0 containers: []
	W0918 21:06:22.332413   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:06:22.332421   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:06:22.332485   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:06:22.392138   62061 cri.go:89] found id: ""
	I0918 21:06:22.392170   62061 logs.go:276] 0 containers: []
	W0918 21:06:22.392178   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:06:22.392184   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:06:22.392237   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:06:22.431265   62061 cri.go:89] found id: ""
	I0918 21:06:22.431296   62061 logs.go:276] 0 containers: []
	W0918 21:06:22.431306   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:06:22.431313   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:06:22.431376   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:06:22.463685   62061 cri.go:89] found id: ""
	I0918 21:06:22.463718   62061 logs.go:276] 0 containers: []
	W0918 21:06:22.463730   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:06:22.463738   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:06:22.463808   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:06:22.498031   62061 cri.go:89] found id: ""
	I0918 21:06:22.498058   62061 logs.go:276] 0 containers: []
	W0918 21:06:22.498069   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:06:22.498076   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:06:22.498140   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:06:22.537701   62061 cri.go:89] found id: ""
	I0918 21:06:22.537732   62061 logs.go:276] 0 containers: []
	W0918 21:06:22.537740   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:06:22.537746   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:06:22.537803   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:06:22.574085   62061 cri.go:89] found id: ""
	I0918 21:06:22.574118   62061 logs.go:276] 0 containers: []
	W0918 21:06:22.574130   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:06:22.574138   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:06:22.574202   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:06:22.606313   62061 cri.go:89] found id: ""
	I0918 21:06:22.606341   62061 logs.go:276] 0 containers: []
	W0918 21:06:22.606349   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:06:22.606359   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:06:22.606372   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:06:22.678484   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:06:22.678511   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:06:22.678524   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:06:22.756730   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:06:22.756767   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:06:22.793537   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:06:22.793573   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:06:22.845987   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:06:22.846021   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:06:21.857010   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:23.857823   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:26.358268   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:24.133063   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:26.632474   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:23.387764   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:25.886548   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:27.887108   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:25.360980   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:25.374930   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:06:25.375010   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:06:25.413100   62061 cri.go:89] found id: ""
	I0918 21:06:25.413135   62061 logs.go:276] 0 containers: []
	W0918 21:06:25.413147   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:06:25.413155   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:06:25.413221   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:06:25.450999   62061 cri.go:89] found id: ""
	I0918 21:06:25.451024   62061 logs.go:276] 0 containers: []
	W0918 21:06:25.451032   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:06:25.451038   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:06:25.451087   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:06:25.496114   62061 cri.go:89] found id: ""
	I0918 21:06:25.496147   62061 logs.go:276] 0 containers: []
	W0918 21:06:25.496155   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:06:25.496161   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:06:25.496218   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:06:25.529186   62061 cri.go:89] found id: ""
	I0918 21:06:25.529211   62061 logs.go:276] 0 containers: []
	W0918 21:06:25.529218   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:06:25.529223   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:06:25.529294   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:06:25.569710   62061 cri.go:89] found id: ""
	I0918 21:06:25.569735   62061 logs.go:276] 0 containers: []
	W0918 21:06:25.569743   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:06:25.569749   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:06:25.569796   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:06:25.606915   62061 cri.go:89] found id: ""
	I0918 21:06:25.606951   62061 logs.go:276] 0 containers: []
	W0918 21:06:25.606964   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:06:25.606972   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:06:25.607034   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:06:25.642705   62061 cri.go:89] found id: ""
	I0918 21:06:25.642736   62061 logs.go:276] 0 containers: []
	W0918 21:06:25.642744   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:06:25.642750   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:06:25.642801   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:06:25.676418   62061 cri.go:89] found id: ""
	I0918 21:06:25.676448   62061 logs.go:276] 0 containers: []
	W0918 21:06:25.676457   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:06:25.676466   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:06:25.676477   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:06:25.689913   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:06:25.689945   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:06:25.771579   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:06:25.771607   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:06:25.771622   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:06:25.850904   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:06:25.850945   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:06:25.890820   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:06:25.890849   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:06:28.438958   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:28.451961   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:06:28.452043   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:06:28.485000   62061 cri.go:89] found id: ""
	I0918 21:06:28.485059   62061 logs.go:276] 0 containers: []
	W0918 21:06:28.485070   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:06:28.485077   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:06:28.485138   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:06:28.517852   62061 cri.go:89] found id: ""
	I0918 21:06:28.517885   62061 logs.go:276] 0 containers: []
	W0918 21:06:28.517895   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:06:28.517903   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:06:28.517957   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:06:28.549914   62061 cri.go:89] found id: ""
	I0918 21:06:28.549940   62061 logs.go:276] 0 containers: []
	W0918 21:06:28.549949   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:06:28.549955   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:06:28.550000   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:06:28.584133   62061 cri.go:89] found id: ""
	I0918 21:06:28.584156   62061 logs.go:276] 0 containers: []
	W0918 21:06:28.584163   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:06:28.584169   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:06:28.584213   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:06:28.620031   62061 cri.go:89] found id: ""
	I0918 21:06:28.620061   62061 logs.go:276] 0 containers: []
	W0918 21:06:28.620071   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:06:28.620078   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:06:28.620143   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:06:28.653393   62061 cri.go:89] found id: ""
	I0918 21:06:28.653424   62061 logs.go:276] 0 containers: []
	W0918 21:06:28.653435   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:06:28.653443   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:06:28.653505   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:06:28.687696   62061 cri.go:89] found id: ""
	I0918 21:06:28.687729   62061 logs.go:276] 0 containers: []
	W0918 21:06:28.687741   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:06:28.687752   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:06:28.687809   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:06:28.724824   62061 cri.go:89] found id: ""
	I0918 21:06:28.724854   62061 logs.go:276] 0 containers: []
	W0918 21:06:28.724864   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:06:28.724875   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:06:28.724890   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:06:28.785489   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:06:28.785529   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:06:28.804170   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:06:28.804211   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:06:28.895508   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:06:28.895538   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:06:28.895554   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:06:28.974643   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:06:28.974685   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:06:28.858259   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:31.356644   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:28.633851   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:31.133612   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:30.392038   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:32.886708   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:31.517305   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:31.530374   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:06:31.530435   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:06:31.568335   62061 cri.go:89] found id: ""
	I0918 21:06:31.568374   62061 logs.go:276] 0 containers: []
	W0918 21:06:31.568385   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:06:31.568393   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:06:31.568462   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:06:31.603597   62061 cri.go:89] found id: ""
	I0918 21:06:31.603624   62061 logs.go:276] 0 containers: []
	W0918 21:06:31.603633   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:06:31.603639   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:06:31.603694   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:06:31.639355   62061 cri.go:89] found id: ""
	I0918 21:06:31.639380   62061 logs.go:276] 0 containers: []
	W0918 21:06:31.639388   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:06:31.639393   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:06:31.639445   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:06:31.674197   62061 cri.go:89] found id: ""
	I0918 21:06:31.674223   62061 logs.go:276] 0 containers: []
	W0918 21:06:31.674231   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:06:31.674237   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:06:31.674286   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:06:31.710563   62061 cri.go:89] found id: ""
	I0918 21:06:31.710592   62061 logs.go:276] 0 containers: []
	W0918 21:06:31.710606   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:06:31.710612   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:06:31.710663   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:06:31.746923   62061 cri.go:89] found id: ""
	I0918 21:06:31.746958   62061 logs.go:276] 0 containers: []
	W0918 21:06:31.746969   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:06:31.746976   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:06:31.747039   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:06:31.781913   62061 cri.go:89] found id: ""
	I0918 21:06:31.781946   62061 logs.go:276] 0 containers: []
	W0918 21:06:31.781961   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:06:31.781969   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:06:31.782023   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:06:31.817293   62061 cri.go:89] found id: ""
	I0918 21:06:31.817318   62061 logs.go:276] 0 containers: []
	W0918 21:06:31.817327   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:06:31.817338   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:06:31.817375   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:06:31.869479   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:06:31.869518   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:06:31.884187   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:06:31.884219   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:06:31.967159   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:06:31.967189   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:06:31.967202   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:06:32.050404   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:06:32.050458   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:06:33.357380   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:35.856960   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:33.633434   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:36.133740   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:34.888738   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:37.386351   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:34.593135   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:34.613917   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:06:34.614001   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:06:34.649649   62061 cri.go:89] found id: ""
	I0918 21:06:34.649676   62061 logs.go:276] 0 containers: []
	W0918 21:06:34.649684   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:06:34.649690   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:06:34.649738   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:06:34.684898   62061 cri.go:89] found id: ""
	I0918 21:06:34.684932   62061 logs.go:276] 0 containers: []
	W0918 21:06:34.684940   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:06:34.684948   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:06:34.685018   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:06:34.717519   62061 cri.go:89] found id: ""
	I0918 21:06:34.717549   62061 logs.go:276] 0 containers: []
	W0918 21:06:34.717559   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:06:34.717573   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:06:34.717626   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:06:34.751242   62061 cri.go:89] found id: ""
	I0918 21:06:34.751275   62061 logs.go:276] 0 containers: []
	W0918 21:06:34.751289   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:06:34.751298   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:06:34.751368   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:06:34.786700   62061 cri.go:89] found id: ""
	I0918 21:06:34.786729   62061 logs.go:276] 0 containers: []
	W0918 21:06:34.786737   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:06:34.786742   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:06:34.786792   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:06:34.822400   62061 cri.go:89] found id: ""
	I0918 21:06:34.822446   62061 logs.go:276] 0 containers: []
	W0918 21:06:34.822459   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:06:34.822468   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:06:34.822537   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:06:34.860509   62061 cri.go:89] found id: ""
	I0918 21:06:34.860540   62061 logs.go:276] 0 containers: []
	W0918 21:06:34.860549   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:06:34.860554   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:06:34.860626   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:06:34.900682   62061 cri.go:89] found id: ""
	I0918 21:06:34.900712   62061 logs.go:276] 0 containers: []
	W0918 21:06:34.900719   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:06:34.900727   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:06:34.900739   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:06:34.975975   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:06:34.976009   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:06:34.976043   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:06:35.058972   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:06:35.059012   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:06:35.101690   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:06:35.101716   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:06:35.154486   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:06:35.154525   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:06:37.669814   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:37.683235   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:06:37.683294   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:06:37.720666   62061 cri.go:89] found id: ""
	I0918 21:06:37.720697   62061 logs.go:276] 0 containers: []
	W0918 21:06:37.720708   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:06:37.720715   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:06:37.720777   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:06:37.758288   62061 cri.go:89] found id: ""
	I0918 21:06:37.758327   62061 logs.go:276] 0 containers: []
	W0918 21:06:37.758335   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:06:37.758341   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:06:37.758436   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:06:37.793394   62061 cri.go:89] found id: ""
	I0918 21:06:37.793421   62061 logs.go:276] 0 containers: []
	W0918 21:06:37.793430   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:06:37.793437   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:06:37.793501   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:06:37.828258   62061 cri.go:89] found id: ""
	I0918 21:06:37.828291   62061 logs.go:276] 0 containers: []
	W0918 21:06:37.828300   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:06:37.828307   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:06:37.828361   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:06:37.863164   62061 cri.go:89] found id: ""
	I0918 21:06:37.863189   62061 logs.go:276] 0 containers: []
	W0918 21:06:37.863197   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:06:37.863203   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:06:37.863251   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:06:37.899585   62061 cri.go:89] found id: ""
	I0918 21:06:37.899614   62061 logs.go:276] 0 containers: []
	W0918 21:06:37.899622   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:06:37.899628   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:06:37.899675   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:06:37.936246   62061 cri.go:89] found id: ""
	I0918 21:06:37.936282   62061 logs.go:276] 0 containers: []
	W0918 21:06:37.936292   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:06:37.936299   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:06:37.936362   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:06:37.969913   62061 cri.go:89] found id: ""
	I0918 21:06:37.969942   62061 logs.go:276] 0 containers: []
	W0918 21:06:37.969950   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:06:37.969958   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:06:37.969968   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:06:38.023055   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:06:38.023094   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:06:38.036006   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:06:38.036047   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:06:38.116926   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:06:38.116957   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:06:38.116972   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:06:38.192632   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:06:38.192668   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:06:37.860654   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:40.357107   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:38.633432   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:41.131957   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:39.387927   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:41.886904   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:40.729602   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:40.743291   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:06:40.743368   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:06:40.778138   62061 cri.go:89] found id: ""
	I0918 21:06:40.778166   62061 logs.go:276] 0 containers: []
	W0918 21:06:40.778176   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:06:40.778184   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:06:40.778248   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:06:40.813026   62061 cri.go:89] found id: ""
	I0918 21:06:40.813062   62061 logs.go:276] 0 containers: []
	W0918 21:06:40.813073   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:06:40.813081   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:06:40.813146   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:06:40.847609   62061 cri.go:89] found id: ""
	I0918 21:06:40.847641   62061 logs.go:276] 0 containers: []
	W0918 21:06:40.847651   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:06:40.847658   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:06:40.847727   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:06:40.885403   62061 cri.go:89] found id: ""
	I0918 21:06:40.885432   62061 logs.go:276] 0 containers: []
	W0918 21:06:40.885441   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:06:40.885448   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:06:40.885515   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:06:40.922679   62061 cri.go:89] found id: ""
	I0918 21:06:40.922705   62061 logs.go:276] 0 containers: []
	W0918 21:06:40.922714   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:06:40.922719   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:06:40.922776   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:06:40.957187   62061 cri.go:89] found id: ""
	I0918 21:06:40.957215   62061 logs.go:276] 0 containers: []
	W0918 21:06:40.957222   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:06:40.957228   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:06:40.957291   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:06:40.991722   62061 cri.go:89] found id: ""
	I0918 21:06:40.991751   62061 logs.go:276] 0 containers: []
	W0918 21:06:40.991762   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:06:40.991769   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:06:40.991835   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:06:41.027210   62061 cri.go:89] found id: ""
	I0918 21:06:41.027234   62061 logs.go:276] 0 containers: []
	W0918 21:06:41.027242   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:06:41.027250   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:06:41.027275   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:06:41.077183   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:06:41.077221   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:06:41.090707   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:06:41.090734   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:06:41.166206   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:06:41.166228   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:06:41.166241   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:06:41.240308   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:06:41.240346   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:06:43.777517   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:43.790901   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:06:43.790970   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:06:43.827127   62061 cri.go:89] found id: ""
	I0918 21:06:43.827156   62061 logs.go:276] 0 containers: []
	W0918 21:06:43.827164   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:06:43.827170   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:06:43.827218   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:06:43.861218   62061 cri.go:89] found id: ""
	I0918 21:06:43.861244   62061 logs.go:276] 0 containers: []
	W0918 21:06:43.861252   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:06:43.861257   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:06:43.861308   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:06:43.899669   62061 cri.go:89] found id: ""
	I0918 21:06:43.899694   62061 logs.go:276] 0 containers: []
	W0918 21:06:43.899701   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:06:43.899707   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:06:43.899755   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:06:43.934698   62061 cri.go:89] found id: ""
	I0918 21:06:43.934731   62061 logs.go:276] 0 containers: []
	W0918 21:06:43.934741   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:06:43.934748   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:06:43.934819   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:06:43.971715   62061 cri.go:89] found id: ""
	I0918 21:06:43.971742   62061 logs.go:276] 0 containers: []
	W0918 21:06:43.971755   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:06:43.971760   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:06:43.971817   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:06:44.005880   62061 cri.go:89] found id: ""
	I0918 21:06:44.005915   62061 logs.go:276] 0 containers: []
	W0918 21:06:44.005927   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:06:44.005935   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:06:44.006003   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:06:44.038144   62061 cri.go:89] found id: ""
	I0918 21:06:44.038171   62061 logs.go:276] 0 containers: []
	W0918 21:06:44.038180   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:06:44.038186   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:06:44.038239   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:06:44.073920   62061 cri.go:89] found id: ""
	I0918 21:06:44.073953   62061 logs.go:276] 0 containers: []
	W0918 21:06:44.073966   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:06:44.073978   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:06:44.073992   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:06:44.142881   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:06:44.142910   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:06:44.142926   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:06:44.222302   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:06:44.222341   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:06:44.265914   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:06:44.265952   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:06:44.316229   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:06:44.316266   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:06:42.856192   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:44.857673   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:43.132992   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:45.134509   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:43.888282   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:45.889414   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:46.830793   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:46.843829   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:06:46.843886   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:06:46.878566   62061 cri.go:89] found id: ""
	I0918 21:06:46.878607   62061 logs.go:276] 0 containers: []
	W0918 21:06:46.878621   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:06:46.878630   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:06:46.878710   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:06:46.915576   62061 cri.go:89] found id: ""
	I0918 21:06:46.915603   62061 logs.go:276] 0 containers: []
	W0918 21:06:46.915611   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:06:46.915616   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:06:46.915663   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:06:46.952982   62061 cri.go:89] found id: ""
	I0918 21:06:46.953014   62061 logs.go:276] 0 containers: []
	W0918 21:06:46.953032   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:06:46.953039   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:06:46.953104   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:06:46.987718   62061 cri.go:89] found id: ""
	I0918 21:06:46.987749   62061 logs.go:276] 0 containers: []
	W0918 21:06:46.987757   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:06:46.987763   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:06:46.987815   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:06:47.022695   62061 cri.go:89] found id: ""
	I0918 21:06:47.022726   62061 logs.go:276] 0 containers: []
	W0918 21:06:47.022735   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:06:47.022741   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:06:47.022799   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:06:47.058058   62061 cri.go:89] found id: ""
	I0918 21:06:47.058086   62061 logs.go:276] 0 containers: []
	W0918 21:06:47.058094   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:06:47.058101   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:06:47.058159   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:06:47.093314   62061 cri.go:89] found id: ""
	I0918 21:06:47.093352   62061 logs.go:276] 0 containers: []
	W0918 21:06:47.093361   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:06:47.093367   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:06:47.093424   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:06:47.126997   62061 cri.go:89] found id: ""
	I0918 21:06:47.127023   62061 logs.go:276] 0 containers: []
	W0918 21:06:47.127033   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:06:47.127041   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:06:47.127053   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:06:47.203239   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:06:47.203282   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:06:47.240982   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:06:47.241013   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:06:47.293553   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:06:47.293591   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:06:47.307462   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:06:47.307493   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:06:47.379500   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:06:47.357136   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:49.359981   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:47.633023   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:49.633350   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:52.134627   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:48.387568   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:50.886679   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:52.887065   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:49.880441   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:49.895816   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:06:49.895903   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:06:49.931916   62061 cri.go:89] found id: ""
	I0918 21:06:49.931941   62061 logs.go:276] 0 containers: []
	W0918 21:06:49.931950   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:06:49.931955   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:06:49.932007   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:06:49.967053   62061 cri.go:89] found id: ""
	I0918 21:06:49.967083   62061 logs.go:276] 0 containers: []
	W0918 21:06:49.967093   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:06:49.967101   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:06:49.967194   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:06:50.002943   62061 cri.go:89] found id: ""
	I0918 21:06:50.003004   62061 logs.go:276] 0 containers: []
	W0918 21:06:50.003015   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:06:50.003023   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:06:50.003085   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:06:50.039445   62061 cri.go:89] found id: ""
	I0918 21:06:50.039478   62061 logs.go:276] 0 containers: []
	W0918 21:06:50.039489   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:06:50.039497   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:06:50.039561   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:06:50.073529   62061 cri.go:89] found id: ""
	I0918 21:06:50.073562   62061 logs.go:276] 0 containers: []
	W0918 21:06:50.073572   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:06:50.073578   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:06:50.073645   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:06:50.109363   62061 cri.go:89] found id: ""
	I0918 21:06:50.109394   62061 logs.go:276] 0 containers: []
	W0918 21:06:50.109406   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:06:50.109413   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:06:50.109485   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:06:50.144387   62061 cri.go:89] found id: ""
	I0918 21:06:50.144418   62061 logs.go:276] 0 containers: []
	W0918 21:06:50.144429   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:06:50.144450   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:06:50.144524   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:06:50.178079   62061 cri.go:89] found id: ""
	I0918 21:06:50.178110   62061 logs.go:276] 0 containers: []
	W0918 21:06:50.178119   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:06:50.178128   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:06:50.178140   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:06:50.228857   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:06:50.228893   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:06:50.242077   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:06:50.242108   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:06:50.312868   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:06:50.312903   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:06:50.312920   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:06:50.392880   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:06:50.392924   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:06:52.931623   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:52.945298   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:06:52.945394   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:06:52.980129   62061 cri.go:89] found id: ""
	I0918 21:06:52.980162   62061 logs.go:276] 0 containers: []
	W0918 21:06:52.980171   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:06:52.980176   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:06:52.980224   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:06:53.017899   62061 cri.go:89] found id: ""
	I0918 21:06:53.017932   62061 logs.go:276] 0 containers: []
	W0918 21:06:53.017941   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:06:53.017947   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:06:53.018015   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:06:53.056594   62061 cri.go:89] found id: ""
	I0918 21:06:53.056619   62061 logs.go:276] 0 containers: []
	W0918 21:06:53.056627   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:06:53.056635   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:06:53.056684   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:06:53.089876   62061 cri.go:89] found id: ""
	I0918 21:06:53.089907   62061 logs.go:276] 0 containers: []
	W0918 21:06:53.089915   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:06:53.089920   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:06:53.089967   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:06:53.122845   62061 cri.go:89] found id: ""
	I0918 21:06:53.122881   62061 logs.go:276] 0 containers: []
	W0918 21:06:53.122890   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:06:53.122904   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:06:53.122956   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:06:53.162103   62061 cri.go:89] found id: ""
	I0918 21:06:53.162137   62061 logs.go:276] 0 containers: []
	W0918 21:06:53.162148   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:06:53.162156   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:06:53.162218   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:06:53.195034   62061 cri.go:89] found id: ""
	I0918 21:06:53.195067   62061 logs.go:276] 0 containers: []
	W0918 21:06:53.195078   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:06:53.195085   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:06:53.195144   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:06:53.229473   62061 cri.go:89] found id: ""
	I0918 21:06:53.229518   62061 logs.go:276] 0 containers: []
	W0918 21:06:53.229542   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:06:53.229556   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:06:53.229577   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:06:53.283929   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:06:53.283973   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:06:53.296932   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:06:53.296965   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:06:53.379339   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:06:53.379369   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:06:53.379410   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:06:53.469608   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:06:53.469652   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:06:51.855788   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:53.857284   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:55.860982   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:54.633423   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:56.633695   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:54.888052   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:57.387393   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:56.009227   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:56.023451   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:06:56.023510   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:06:56.056839   62061 cri.go:89] found id: ""
	I0918 21:06:56.056866   62061 logs.go:276] 0 containers: []
	W0918 21:06:56.056875   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:06:56.056881   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:06:56.056931   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:06:56.091893   62061 cri.go:89] found id: ""
	I0918 21:06:56.091919   62061 logs.go:276] 0 containers: []
	W0918 21:06:56.091928   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:06:56.091933   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:06:56.091980   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:06:56.127432   62061 cri.go:89] found id: ""
	I0918 21:06:56.127456   62061 logs.go:276] 0 containers: []
	W0918 21:06:56.127464   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:06:56.127470   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:06:56.127518   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:06:56.162085   62061 cri.go:89] found id: ""
	I0918 21:06:56.162109   62061 logs.go:276] 0 containers: []
	W0918 21:06:56.162118   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:06:56.162123   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:06:56.162176   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:06:56.195711   62061 cri.go:89] found id: ""
	I0918 21:06:56.195743   62061 logs.go:276] 0 containers: []
	W0918 21:06:56.195753   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:06:56.195759   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:06:56.195809   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:06:56.234481   62061 cri.go:89] found id: ""
	I0918 21:06:56.234507   62061 logs.go:276] 0 containers: []
	W0918 21:06:56.234515   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:06:56.234522   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:06:56.234567   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:06:56.268566   62061 cri.go:89] found id: ""
	I0918 21:06:56.268596   62061 logs.go:276] 0 containers: []
	W0918 21:06:56.268608   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:06:56.268617   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:06:56.268681   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:06:56.309785   62061 cri.go:89] found id: ""
	I0918 21:06:56.309813   62061 logs.go:276] 0 containers: []
	W0918 21:06:56.309824   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:06:56.309835   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:06:56.309874   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:06:56.364834   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:06:56.364880   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:06:56.378424   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:06:56.378451   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:06:56.450482   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:06:56.450511   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:06:56.450522   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:06:56.536261   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:06:56.536305   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
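	(Editor's note: the rounds above all repeat the same per-component container check: `pgrep` finds a kube-apiserver-like process, but every `sudo crictl ps -a --quiet --name=<component>` returns no IDs, so the subsequent `kubectl describe nodes` fails with "connection refused" on localhost:8443. As a rough aid for reproducing that check by hand on the node — e.g. via `minikube ssh` — here is a minimal sketch, not minikube's own code, that runs the same `crictl` invocations the log records and reports whether any container IDs come back. It assumes `crictl` is on PATH and that the caller has enough privileges to reach the CRI socket.)

	```go
	// check_components.go: a minimal sketch mirroring the check loop visible in
	// this log. For each control-plane component it runs
	// `crictl ps -a --quiet --name=<component>` and reports whether any
	// container IDs are returned (the log's `found id: ""` / `0 containers` case).
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
		}
		for _, name := range components {
			// Same invocation the log shows via ssh_runner: list containers in any
			// state whose name matches, printing only their IDs.
			out, err := exec.Command("crictl", "ps", "-a", "--quiet", "--name="+name).CombinedOutput()
			if err != nil {
				fmt.Printf("%s: crictl failed: %v\n", name, err)
				continue
			}
			ids := strings.Fields(string(out))
			if len(ids) == 0 {
				// Corresponds to the log line: No container was found matching "<name>"
				fmt.Printf("%s: no containers found\n", name)
			} else {
				fmt.Printf("%s: %d container(s): %v\n", name, len(ids), ids)
			}
		}
	}
	```

	(When every component prints "no containers found" while the apiserver port still refuses connections, the log falls back to gathering kubelet, dmesg, CRI-O, and container-status output, as the rounds below continue to show.)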
	I0918 21:06:59.074494   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:59.087553   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:06:59.087622   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:06:59.128323   62061 cri.go:89] found id: ""
	I0918 21:06:59.128357   62061 logs.go:276] 0 containers: []
	W0918 21:06:59.128368   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:06:59.128375   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:06:59.128436   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:06:59.161135   62061 cri.go:89] found id: ""
	I0918 21:06:59.161161   62061 logs.go:276] 0 containers: []
	W0918 21:06:59.161170   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:06:59.161178   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:06:59.161240   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:06:59.201558   62061 cri.go:89] found id: ""
	I0918 21:06:59.201595   62061 logs.go:276] 0 containers: []
	W0918 21:06:59.201607   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:06:59.201614   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:06:59.201678   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:06:59.235330   62061 cri.go:89] found id: ""
	I0918 21:06:59.235356   62061 logs.go:276] 0 containers: []
	W0918 21:06:59.235378   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:06:59.235385   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:06:59.235450   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:06:59.270957   62061 cri.go:89] found id: ""
	I0918 21:06:59.270994   62061 logs.go:276] 0 containers: []
	W0918 21:06:59.271007   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:06:59.271016   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:06:59.271088   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:06:59.305068   62061 cri.go:89] found id: ""
	I0918 21:06:59.305103   62061 logs.go:276] 0 containers: []
	W0918 21:06:59.305115   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:06:59.305123   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:06:59.305177   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:06:59.338763   62061 cri.go:89] found id: ""
	I0918 21:06:59.338796   62061 logs.go:276] 0 containers: []
	W0918 21:06:59.338809   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:06:59.338818   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:06:59.338887   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:06:59.376557   62061 cri.go:89] found id: ""
	I0918 21:06:59.376585   62061 logs.go:276] 0 containers: []
	W0918 21:06:59.376593   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:06:59.376602   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:06:59.376615   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:06:59.455228   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:06:59.455264   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:06:58.356648   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:00.357253   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:59.133274   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:01.632548   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:59.388183   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:01.886834   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:59.494461   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:06:59.494488   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:06:59.543143   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:06:59.543177   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:06:59.556031   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:06:59.556062   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:06:59.623888   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:02.124743   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:07:02.139392   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:07:02.139451   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:07:02.172176   62061 cri.go:89] found id: ""
	I0918 21:07:02.172201   62061 logs.go:276] 0 containers: []
	W0918 21:07:02.172209   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:07:02.172215   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:07:02.172263   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:07:02.206480   62061 cri.go:89] found id: ""
	I0918 21:07:02.206507   62061 logs.go:276] 0 containers: []
	W0918 21:07:02.206518   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:07:02.206525   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:07:02.206586   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:07:02.240256   62061 cri.go:89] found id: ""
	I0918 21:07:02.240281   62061 logs.go:276] 0 containers: []
	W0918 21:07:02.240289   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:07:02.240295   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:07:02.240353   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:07:02.274001   62061 cri.go:89] found id: ""
	I0918 21:07:02.274034   62061 logs.go:276] 0 containers: []
	W0918 21:07:02.274046   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:07:02.274056   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:07:02.274115   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:07:02.307491   62061 cri.go:89] found id: ""
	I0918 21:07:02.307520   62061 logs.go:276] 0 containers: []
	W0918 21:07:02.307528   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:07:02.307534   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:07:02.307597   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:07:02.344688   62061 cri.go:89] found id: ""
	I0918 21:07:02.344720   62061 logs.go:276] 0 containers: []
	W0918 21:07:02.344731   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:07:02.344739   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:07:02.344805   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:07:02.384046   62061 cri.go:89] found id: ""
	I0918 21:07:02.384077   62061 logs.go:276] 0 containers: []
	W0918 21:07:02.384088   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:07:02.384095   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:07:02.384154   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:07:02.422530   62061 cri.go:89] found id: ""
	I0918 21:07:02.422563   62061 logs.go:276] 0 containers: []
	W0918 21:07:02.422575   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:07:02.422586   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:07:02.422604   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:07:02.461236   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:07:02.461266   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:07:02.512280   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:07:02.512320   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:07:02.525404   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:07:02.525435   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:07:02.599746   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:02.599773   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:07:02.599785   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:07:02.856077   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:04.858098   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:04.133240   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:06.135937   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:03.887306   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:05.888675   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:05.183459   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:07:05.197615   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:07:05.197681   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:07:05.234313   62061 cri.go:89] found id: ""
	I0918 21:07:05.234354   62061 logs.go:276] 0 containers: []
	W0918 21:07:05.234365   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:07:05.234371   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:07:05.234429   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:07:05.271436   62061 cri.go:89] found id: ""
	I0918 21:07:05.271466   62061 logs.go:276] 0 containers: []
	W0918 21:07:05.271477   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:07:05.271484   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:07:05.271554   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:07:05.307007   62061 cri.go:89] found id: ""
	I0918 21:07:05.307038   62061 logs.go:276] 0 containers: []
	W0918 21:07:05.307050   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:07:05.307056   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:07:05.307120   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:07:05.341764   62061 cri.go:89] found id: ""
	I0918 21:07:05.341810   62061 logs.go:276] 0 containers: []
	W0918 21:07:05.341831   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:07:05.341840   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:07:05.341908   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:07:05.381611   62061 cri.go:89] found id: ""
	I0918 21:07:05.381646   62061 logs.go:276] 0 containers: []
	W0918 21:07:05.381658   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:07:05.381666   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:07:05.381747   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:07:05.421468   62061 cri.go:89] found id: ""
	I0918 21:07:05.421499   62061 logs.go:276] 0 containers: []
	W0918 21:07:05.421520   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:07:05.421528   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:07:05.421590   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:07:05.457320   62061 cri.go:89] found id: ""
	I0918 21:07:05.457348   62061 logs.go:276] 0 containers: []
	W0918 21:07:05.457359   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:07:05.457367   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:07:05.457425   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:07:05.493879   62061 cri.go:89] found id: ""
	I0918 21:07:05.493915   62061 logs.go:276] 0 containers: []
	W0918 21:07:05.493928   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:07:05.493943   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:07:05.493955   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:07:05.543825   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:07:05.543867   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:07:05.558254   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:07:05.558285   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:07:05.627065   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:05.627091   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:07:05.627103   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:07:05.703838   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:07:05.703876   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:07:08.244087   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:07:08.257807   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:07:08.257897   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:07:08.292034   62061 cri.go:89] found id: ""
	I0918 21:07:08.292064   62061 logs.go:276] 0 containers: []
	W0918 21:07:08.292076   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:07:08.292084   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:07:08.292154   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:07:08.325858   62061 cri.go:89] found id: ""
	I0918 21:07:08.325897   62061 logs.go:276] 0 containers: []
	W0918 21:07:08.325910   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:07:08.325918   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:07:08.325987   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:07:08.369427   62061 cri.go:89] found id: ""
	I0918 21:07:08.369457   62061 logs.go:276] 0 containers: []
	W0918 21:07:08.369468   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:07:08.369475   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:07:08.369536   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:07:08.409402   62061 cri.go:89] found id: ""
	I0918 21:07:08.409434   62061 logs.go:276] 0 containers: []
	W0918 21:07:08.409444   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:07:08.409451   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:07:08.409515   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:07:08.445530   62061 cri.go:89] found id: ""
	I0918 21:07:08.445565   62061 logs.go:276] 0 containers: []
	W0918 21:07:08.445575   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:07:08.445584   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:07:08.445646   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:07:08.481905   62061 cri.go:89] found id: ""
	I0918 21:07:08.481935   62061 logs.go:276] 0 containers: []
	W0918 21:07:08.481945   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:07:08.481952   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:07:08.482020   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:07:08.516710   62061 cri.go:89] found id: ""
	I0918 21:07:08.516738   62061 logs.go:276] 0 containers: []
	W0918 21:07:08.516746   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:07:08.516752   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:07:08.516802   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:07:08.550707   62061 cri.go:89] found id: ""
	I0918 21:07:08.550740   62061 logs.go:276] 0 containers: []
	W0918 21:07:08.550760   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:07:08.550768   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:07:08.550782   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:07:08.624821   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:08.624843   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:07:08.624854   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:07:08.705347   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:07:08.705383   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:07:08.744394   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:07:08.744425   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:07:08.799336   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:07:08.799375   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:07:07.358154   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:09.857118   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:08.633211   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:11.132676   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:08.388884   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:10.887356   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:11.312920   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:07:11.327524   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:07:11.327598   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:07:11.366177   62061 cri.go:89] found id: ""
	I0918 21:07:11.366209   62061 logs.go:276] 0 containers: []
	W0918 21:07:11.366221   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:07:11.366227   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:07:11.366274   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:07:11.405296   62061 cri.go:89] found id: ""
	I0918 21:07:11.405326   62061 logs.go:276] 0 containers: []
	W0918 21:07:11.405344   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:07:11.405351   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:07:11.405413   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:07:11.441786   62061 cri.go:89] found id: ""
	I0918 21:07:11.441818   62061 logs.go:276] 0 containers: []
	W0918 21:07:11.441829   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:07:11.441837   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:07:11.441891   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:07:11.476144   62061 cri.go:89] found id: ""
	I0918 21:07:11.476167   62061 logs.go:276] 0 containers: []
	W0918 21:07:11.476175   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:07:11.476181   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:07:11.476224   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:07:11.512115   62061 cri.go:89] found id: ""
	I0918 21:07:11.512143   62061 logs.go:276] 0 containers: []
	W0918 21:07:11.512154   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:07:11.512160   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:07:11.512221   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:07:11.545466   62061 cri.go:89] found id: ""
	I0918 21:07:11.545494   62061 logs.go:276] 0 containers: []
	W0918 21:07:11.545504   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:07:11.545512   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:07:11.545575   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:07:11.579940   62061 cri.go:89] found id: ""
	I0918 21:07:11.579965   62061 logs.go:276] 0 containers: []
	W0918 21:07:11.579975   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:07:11.579994   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:07:11.580080   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:07:11.613454   62061 cri.go:89] found id: ""
	I0918 21:07:11.613480   62061 logs.go:276] 0 containers: []
	W0918 21:07:11.613491   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:07:11.613512   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:07:11.613535   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:07:11.669079   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:07:11.669140   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:07:11.682614   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:07:11.682639   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:07:11.756876   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:11.756901   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:07:11.756914   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:07:11.835046   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:07:11.835086   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
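The cycle above repeats for the whole wait: the log collector probes each control-plane component with `sudo crictl ps -a --quiet --name=<component>`, finds no containers, then gathers kubelet, dmesg, CRI-O and container-status output, and every `kubectl describe nodes` attempt fails with "connection refused" on localhost:8443, consistent with no kube-apiserver container ever coming up. A minimal Go sketch of the same probe loop, for reproducing the check by hand on the node (the local `exec.Command` shortcut is an assumption; the harness runs these commands over SSH):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// The control-plane components the log collector looks for, as seen in the log above.
var components = []string{
	"kube-apiserver", "etcd", "coredns", "kube-scheduler",
	"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
}

func main() {
	for _, name := range components {
		// Equivalent of: sudo crictl ps -a --quiet --name=<component>
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		ids := strings.Fields(string(out))
		if err != nil || len(ids) == 0 {
			fmt.Printf("no container found matching %q\n", name)
			continue
		}
		fmt.Printf("%s: %d container(s): %v\n", name, len(ids), ids)
	}
}
```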
	I0918 21:07:14.377541   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:07:14.392083   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:07:14.392161   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:07:14.431752   62061 cri.go:89] found id: ""
	I0918 21:07:14.431786   62061 logs.go:276] 0 containers: []
	W0918 21:07:14.431795   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:07:14.431800   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:07:14.431856   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:07:14.468530   62061 cri.go:89] found id: ""
	I0918 21:07:14.468562   62061 logs.go:276] 0 containers: []
	W0918 21:07:14.468573   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:07:14.468580   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:07:14.468643   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:07:11.857763   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:14.357253   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:13.132895   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:15.133426   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:13.386537   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:15.387844   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:17.888743   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
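Interleaved with the collector output, three other test processes (log prefixes 61273, 61659 and 61740) keep polling their metrics-server pods, which never report Ready. A hedged sketch of the same readiness check done by hand (the pod name is copied from the log lines above and differs on every run; access to the matching kubeconfig is assumed):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Pod name taken from the log lines above; it changes on every run.
	pod := "metrics-server-6867b74b74-z8rm7"
	// Read the pod's Ready condition, which is what the pod_ready.go poll reports.
	out, err := exec.Command("kubectl", "--namespace", "kube-system", "get", "pod", pod,
		"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
	if err != nil {
		fmt.Println("kubectl failed:", err)
		return
	}
	if strings.TrimSpace(string(out)) == "True" {
		fmt.Printf("pod %s is Ready\n", pod)
	} else {
		fmt.Printf("pod %s is not Ready\n", pod)
	}
}
```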
	I0918 21:07:14.506510   62061 cri.go:89] found id: ""
	I0918 21:07:14.506540   62061 logs.go:276] 0 containers: []
	W0918 21:07:14.506550   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:07:14.506557   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:07:14.506625   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:07:14.538986   62061 cri.go:89] found id: ""
	I0918 21:07:14.539021   62061 logs.go:276] 0 containers: []
	W0918 21:07:14.539032   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:07:14.539039   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:07:14.539103   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:07:14.572390   62061 cri.go:89] found id: ""
	I0918 21:07:14.572421   62061 logs.go:276] 0 containers: []
	W0918 21:07:14.572432   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:07:14.572440   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:07:14.572499   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:07:14.607875   62061 cri.go:89] found id: ""
	I0918 21:07:14.607905   62061 logs.go:276] 0 containers: []
	W0918 21:07:14.607917   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:07:14.607924   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:07:14.607988   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:07:14.642584   62061 cri.go:89] found id: ""
	I0918 21:07:14.642616   62061 logs.go:276] 0 containers: []
	W0918 21:07:14.642625   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:07:14.642630   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:07:14.642685   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:07:14.682117   62061 cri.go:89] found id: ""
	I0918 21:07:14.682144   62061 logs.go:276] 0 containers: []
	W0918 21:07:14.682152   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:07:14.682163   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:07:14.682177   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:07:14.694780   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:07:14.694808   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:07:14.767988   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:14.768036   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:07:14.768055   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:07:14.860620   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:07:14.860655   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:07:14.905071   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:07:14.905102   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:07:17.455582   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:07:17.468713   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:07:17.468774   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:07:17.503682   62061 cri.go:89] found id: ""
	I0918 21:07:17.503709   62061 logs.go:276] 0 containers: []
	W0918 21:07:17.503721   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:07:17.503729   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:07:17.503794   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:07:17.536581   62061 cri.go:89] found id: ""
	I0918 21:07:17.536611   62061 logs.go:276] 0 containers: []
	W0918 21:07:17.536619   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:07:17.536625   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:07:17.536673   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:07:17.570483   62061 cri.go:89] found id: ""
	I0918 21:07:17.570510   62061 logs.go:276] 0 containers: []
	W0918 21:07:17.570518   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:07:17.570523   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:07:17.570591   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:07:17.604102   62061 cri.go:89] found id: ""
	I0918 21:07:17.604140   62061 logs.go:276] 0 containers: []
	W0918 21:07:17.604152   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:07:17.604160   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:07:17.604229   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:07:17.638633   62061 cri.go:89] found id: ""
	I0918 21:07:17.638661   62061 logs.go:276] 0 containers: []
	W0918 21:07:17.638672   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:07:17.638678   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:07:17.638725   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:07:17.673016   62061 cri.go:89] found id: ""
	I0918 21:07:17.673048   62061 logs.go:276] 0 containers: []
	W0918 21:07:17.673057   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:07:17.673063   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:07:17.673116   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:07:17.708576   62061 cri.go:89] found id: ""
	I0918 21:07:17.708601   62061 logs.go:276] 0 containers: []
	W0918 21:07:17.708609   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:07:17.708620   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:07:17.708672   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:07:17.742820   62061 cri.go:89] found id: ""
	I0918 21:07:17.742843   62061 logs.go:276] 0 containers: []
	W0918 21:07:17.742851   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:07:17.742859   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:07:17.742870   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:07:17.757091   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:07:17.757120   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:07:17.824116   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:17.824137   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:07:17.824151   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:07:17.911013   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:07:17.911065   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:07:17.950517   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:07:17.950553   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:07:16.857284   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:19.357336   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:17.635033   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:20.134331   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:20.388498   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:22.887115   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:20.502423   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:07:20.529214   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:07:20.529297   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:07:20.583618   62061 cri.go:89] found id: ""
	I0918 21:07:20.583660   62061 logs.go:276] 0 containers: []
	W0918 21:07:20.583672   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:07:20.583682   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:07:20.583751   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:07:20.633051   62061 cri.go:89] found id: ""
	I0918 21:07:20.633079   62061 logs.go:276] 0 containers: []
	W0918 21:07:20.633091   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:07:20.633099   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:07:20.633158   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:07:20.668513   62061 cri.go:89] found id: ""
	I0918 21:07:20.668543   62061 logs.go:276] 0 containers: []
	W0918 21:07:20.668552   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:07:20.668558   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:07:20.668611   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:07:20.702648   62061 cri.go:89] found id: ""
	I0918 21:07:20.702683   62061 logs.go:276] 0 containers: []
	W0918 21:07:20.702698   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:07:20.702706   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:07:20.702767   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:07:20.740639   62061 cri.go:89] found id: ""
	I0918 21:07:20.740666   62061 logs.go:276] 0 containers: []
	W0918 21:07:20.740674   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:07:20.740680   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:07:20.740731   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:07:20.771819   62061 cri.go:89] found id: ""
	I0918 21:07:20.771847   62061 logs.go:276] 0 containers: []
	W0918 21:07:20.771855   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:07:20.771863   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:07:20.771913   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:07:20.803613   62061 cri.go:89] found id: ""
	I0918 21:07:20.803639   62061 logs.go:276] 0 containers: []
	W0918 21:07:20.803647   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:07:20.803653   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:07:20.803708   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:07:20.835671   62061 cri.go:89] found id: ""
	I0918 21:07:20.835695   62061 logs.go:276] 0 containers: []
	W0918 21:07:20.835702   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:07:20.835710   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:07:20.835720   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:07:20.886159   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:07:20.886191   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:07:20.901412   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:07:20.901443   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:07:20.979009   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:20.979034   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:07:20.979045   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:07:21.059895   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:07:21.059930   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:07:23.598133   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:07:23.611038   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:07:23.611102   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:07:23.648037   62061 cri.go:89] found id: ""
	I0918 21:07:23.648067   62061 logs.go:276] 0 containers: []
	W0918 21:07:23.648075   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:07:23.648081   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:07:23.648135   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:07:23.683956   62061 cri.go:89] found id: ""
	I0918 21:07:23.683985   62061 logs.go:276] 0 containers: []
	W0918 21:07:23.683995   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:07:23.684003   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:07:23.684083   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:07:23.719274   62061 cri.go:89] found id: ""
	I0918 21:07:23.719301   62061 logs.go:276] 0 containers: []
	W0918 21:07:23.719309   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:07:23.719315   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:07:23.719378   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:07:23.757101   62061 cri.go:89] found id: ""
	I0918 21:07:23.757133   62061 logs.go:276] 0 containers: []
	W0918 21:07:23.757145   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:07:23.757152   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:07:23.757202   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:07:23.790115   62061 cri.go:89] found id: ""
	I0918 21:07:23.790149   62061 logs.go:276] 0 containers: []
	W0918 21:07:23.790160   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:07:23.790168   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:07:23.790231   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:07:23.823089   62061 cri.go:89] found id: ""
	I0918 21:07:23.823120   62061 logs.go:276] 0 containers: []
	W0918 21:07:23.823131   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:07:23.823140   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:07:23.823192   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:07:23.858566   62061 cri.go:89] found id: ""
	I0918 21:07:23.858591   62061 logs.go:276] 0 containers: []
	W0918 21:07:23.858601   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:07:23.858613   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:07:23.858674   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:07:23.892650   62061 cri.go:89] found id: ""
	I0918 21:07:23.892679   62061 logs.go:276] 0 containers: []
	W0918 21:07:23.892690   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:07:23.892699   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:07:23.892713   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:07:23.941087   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:07:23.941123   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:07:23.955674   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:07:23.955712   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:07:24.026087   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:24.026115   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:07:24.026132   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:07:24.105237   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:07:24.105272   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:07:21.857391   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:23.857954   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:26.356553   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:22.633058   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:25.133773   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:25.387123   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:27.886688   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:26.646822   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:07:26.659978   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:07:26.660087   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:07:26.695776   62061 cri.go:89] found id: ""
	I0918 21:07:26.695804   62061 logs.go:276] 0 containers: []
	W0918 21:07:26.695817   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:07:26.695824   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:07:26.695875   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:07:26.729717   62061 cri.go:89] found id: ""
	I0918 21:07:26.729747   62061 logs.go:276] 0 containers: []
	W0918 21:07:26.729756   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:07:26.729762   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:07:26.729830   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:07:26.763848   62061 cri.go:89] found id: ""
	I0918 21:07:26.763886   62061 logs.go:276] 0 containers: []
	W0918 21:07:26.763907   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:07:26.763915   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:07:26.763974   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:07:26.798670   62061 cri.go:89] found id: ""
	I0918 21:07:26.798703   62061 logs.go:276] 0 containers: []
	W0918 21:07:26.798713   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:07:26.798720   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:07:26.798789   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:07:26.833778   62061 cri.go:89] found id: ""
	I0918 21:07:26.833804   62061 logs.go:276] 0 containers: []
	W0918 21:07:26.833815   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:07:26.833822   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:07:26.833905   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:07:26.869657   62061 cri.go:89] found id: ""
	I0918 21:07:26.869688   62061 logs.go:276] 0 containers: []
	W0918 21:07:26.869699   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:07:26.869707   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:07:26.869772   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:07:26.908163   62061 cri.go:89] found id: ""
	I0918 21:07:26.908194   62061 logs.go:276] 0 containers: []
	W0918 21:07:26.908205   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:07:26.908212   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:07:26.908269   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:07:26.943416   62061 cri.go:89] found id: ""
	I0918 21:07:26.943442   62061 logs.go:276] 0 containers: []
	W0918 21:07:26.943451   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:07:26.943459   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:07:26.943471   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:07:26.993796   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:07:26.993833   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:07:27.007619   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:07:27.007661   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:07:27.072880   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:27.072904   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:07:27.072919   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:07:27.148984   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:07:27.149031   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:07:28.357006   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:30.857527   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:27.632697   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:30.133718   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:29.887981   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:32.387478   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:29.690106   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:07:29.702853   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:07:29.702932   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:07:29.736427   62061 cri.go:89] found id: ""
	I0918 21:07:29.736461   62061 logs.go:276] 0 containers: []
	W0918 21:07:29.736473   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:07:29.736480   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:07:29.736537   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:07:29.771287   62061 cri.go:89] found id: ""
	I0918 21:07:29.771317   62061 logs.go:276] 0 containers: []
	W0918 21:07:29.771328   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:07:29.771334   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:07:29.771398   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:07:29.804826   62061 cri.go:89] found id: ""
	I0918 21:07:29.804861   62061 logs.go:276] 0 containers: []
	W0918 21:07:29.804875   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:07:29.804882   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:07:29.804934   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:07:29.838570   62061 cri.go:89] found id: ""
	I0918 21:07:29.838598   62061 logs.go:276] 0 containers: []
	W0918 21:07:29.838608   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:07:29.838614   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:07:29.838659   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:07:29.871591   62061 cri.go:89] found id: ""
	I0918 21:07:29.871621   62061 logs.go:276] 0 containers: []
	W0918 21:07:29.871631   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:07:29.871638   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:07:29.871699   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:07:29.905789   62061 cri.go:89] found id: ""
	I0918 21:07:29.905824   62061 logs.go:276] 0 containers: []
	W0918 21:07:29.905835   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:07:29.905846   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:07:29.905910   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:07:29.945914   62061 cri.go:89] found id: ""
	I0918 21:07:29.945941   62061 logs.go:276] 0 containers: []
	W0918 21:07:29.945950   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:07:29.945955   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:07:29.946004   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:07:29.979365   62061 cri.go:89] found id: ""
	I0918 21:07:29.979396   62061 logs.go:276] 0 containers: []
	W0918 21:07:29.979405   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:07:29.979413   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:07:29.979425   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:07:30.026925   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:07:30.026956   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:07:30.040589   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:07:30.040623   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:07:30.112900   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:30.112928   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:07:30.112948   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:07:30.195208   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:07:30.195256   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:07:32.735787   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:07:32.748969   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:07:32.749049   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:07:32.782148   62061 cri.go:89] found id: ""
	I0918 21:07:32.782179   62061 logs.go:276] 0 containers: []
	W0918 21:07:32.782189   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:07:32.782196   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:07:32.782262   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:07:32.816119   62061 cri.go:89] found id: ""
	I0918 21:07:32.816144   62061 logs.go:276] 0 containers: []
	W0918 21:07:32.816152   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:07:32.816158   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:07:32.816203   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:07:32.849817   62061 cri.go:89] found id: ""
	I0918 21:07:32.849844   62061 logs.go:276] 0 containers: []
	W0918 21:07:32.849853   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:07:32.849859   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:07:32.849911   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:07:32.884464   62061 cri.go:89] found id: ""
	I0918 21:07:32.884494   62061 logs.go:276] 0 containers: []
	W0918 21:07:32.884506   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:07:32.884513   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:07:32.884576   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:07:32.917942   62061 cri.go:89] found id: ""
	I0918 21:07:32.917973   62061 logs.go:276] 0 containers: []
	W0918 21:07:32.917983   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:07:32.917990   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:07:32.918051   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:07:32.950748   62061 cri.go:89] found id: ""
	I0918 21:07:32.950780   62061 logs.go:276] 0 containers: []
	W0918 21:07:32.950791   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:07:32.950800   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:07:32.950864   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:07:32.985059   62061 cri.go:89] found id: ""
	I0918 21:07:32.985092   62061 logs.go:276] 0 containers: []
	W0918 21:07:32.985100   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:07:32.985106   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:07:32.985167   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:07:33.021496   62061 cri.go:89] found id: ""
	I0918 21:07:33.021526   62061 logs.go:276] 0 containers: []
	W0918 21:07:33.021536   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:07:33.021546   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:07:33.021560   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:07:33.071744   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:07:33.071793   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:07:33.086533   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:07:33.086565   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:07:33.155274   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:33.155307   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:07:33.155326   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:07:33.238301   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:07:33.238342   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:07:33.356874   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:35.357445   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:32.631814   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:34.631954   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:36.633057   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:34.387725   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:36.887031   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:35.777905   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:07:35.792442   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:07:35.792510   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:07:35.827310   62061 cri.go:89] found id: ""
	I0918 21:07:35.827339   62061 logs.go:276] 0 containers: []
	W0918 21:07:35.827350   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:07:35.827357   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:07:35.827422   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:07:35.861873   62061 cri.go:89] found id: ""
	I0918 21:07:35.861897   62061 logs.go:276] 0 containers: []
	W0918 21:07:35.861905   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:07:35.861916   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:07:35.861966   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:07:35.895295   62061 cri.go:89] found id: ""
	I0918 21:07:35.895324   62061 logs.go:276] 0 containers: []
	W0918 21:07:35.895345   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:07:35.895353   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:07:35.895410   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:07:35.927690   62061 cri.go:89] found id: ""
	I0918 21:07:35.927717   62061 logs.go:276] 0 containers: []
	W0918 21:07:35.927728   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:07:35.927736   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:07:35.927788   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:07:35.963954   62061 cri.go:89] found id: ""
	I0918 21:07:35.963979   62061 logs.go:276] 0 containers: []
	W0918 21:07:35.963986   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:07:35.963992   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:07:35.964059   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:07:36.002287   62061 cri.go:89] found id: ""
	I0918 21:07:36.002313   62061 logs.go:276] 0 containers: []
	W0918 21:07:36.002322   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:07:36.002328   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:07:36.002380   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:07:36.036748   62061 cri.go:89] found id: ""
	I0918 21:07:36.036778   62061 logs.go:276] 0 containers: []
	W0918 21:07:36.036790   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:07:36.036797   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:07:36.036861   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:07:36.072616   62061 cri.go:89] found id: ""
	I0918 21:07:36.072647   62061 logs.go:276] 0 containers: []
	W0918 21:07:36.072658   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:07:36.072668   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:07:36.072683   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:07:36.121723   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:07:36.121762   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:07:36.136528   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:07:36.136560   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:07:36.202878   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:36.202911   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:07:36.202926   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:07:36.287882   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:07:36.287924   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:07:38.825888   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:07:38.838437   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:07:38.838510   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:07:38.872312   62061 cri.go:89] found id: ""
	I0918 21:07:38.872348   62061 logs.go:276] 0 containers: []
	W0918 21:07:38.872358   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:07:38.872366   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:07:38.872417   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:07:38.905927   62061 cri.go:89] found id: ""
	I0918 21:07:38.905962   62061 logs.go:276] 0 containers: []
	W0918 21:07:38.905975   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:07:38.905982   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:07:38.906042   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:07:38.938245   62061 cri.go:89] found id: ""
	I0918 21:07:38.938274   62061 logs.go:276] 0 containers: []
	W0918 21:07:38.938286   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:07:38.938293   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:07:38.938358   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:07:38.971497   62061 cri.go:89] found id: ""
	I0918 21:07:38.971536   62061 logs.go:276] 0 containers: []
	W0918 21:07:38.971548   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:07:38.971555   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:07:38.971625   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:07:39.004755   62061 cri.go:89] found id: ""
	I0918 21:07:39.004784   62061 logs.go:276] 0 containers: []
	W0918 21:07:39.004793   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:07:39.004799   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:07:39.004854   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:07:39.036237   62061 cri.go:89] found id: ""
	I0918 21:07:39.036265   62061 logs.go:276] 0 containers: []
	W0918 21:07:39.036273   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:07:39.036279   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:07:39.036328   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:07:39.071504   62061 cri.go:89] found id: ""
	I0918 21:07:39.071537   62061 logs.go:276] 0 containers: []
	W0918 21:07:39.071549   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:07:39.071557   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:07:39.071623   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:07:39.107035   62061 cri.go:89] found id: ""
	I0918 21:07:39.107063   62061 logs.go:276] 0 containers: []
	W0918 21:07:39.107071   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:07:39.107080   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:07:39.107090   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:07:39.158078   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:07:39.158113   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:07:39.172846   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:07:39.172875   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:07:39.240577   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:39.240602   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:07:39.240618   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:07:39.319762   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:07:39.319797   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:07:37.857371   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:40.356710   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:39.133586   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:41.632538   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:38.887485   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:41.386252   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:41.856586   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:07:41.870308   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:07:41.870388   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:07:41.905657   62061 cri.go:89] found id: ""
	I0918 21:07:41.905688   62061 logs.go:276] 0 containers: []
	W0918 21:07:41.905699   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:07:41.905706   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:07:41.905766   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:07:41.944507   62061 cri.go:89] found id: ""
	I0918 21:07:41.944544   62061 logs.go:276] 0 containers: []
	W0918 21:07:41.944555   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:07:41.944566   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:07:41.944634   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:07:41.979217   62061 cri.go:89] found id: ""
	I0918 21:07:41.979252   62061 logs.go:276] 0 containers: []
	W0918 21:07:41.979271   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:07:41.979279   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:07:41.979346   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:07:42.013613   62061 cri.go:89] found id: ""
	I0918 21:07:42.013641   62061 logs.go:276] 0 containers: []
	W0918 21:07:42.013652   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:07:42.013659   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:07:42.013725   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:07:42.049225   62061 cri.go:89] found id: ""
	I0918 21:07:42.049259   62061 logs.go:276] 0 containers: []
	W0918 21:07:42.049271   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:07:42.049279   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:07:42.049364   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:07:42.085737   62061 cri.go:89] found id: ""
	I0918 21:07:42.085763   62061 logs.go:276] 0 containers: []
	W0918 21:07:42.085775   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:07:42.085782   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:07:42.085843   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:07:42.121326   62061 cri.go:89] found id: ""
	I0918 21:07:42.121356   62061 logs.go:276] 0 containers: []
	W0918 21:07:42.121365   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:07:42.121371   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:07:42.121428   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:07:42.157070   62061 cri.go:89] found id: ""
	I0918 21:07:42.157097   62061 logs.go:276] 0 containers: []
	W0918 21:07:42.157107   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:07:42.157118   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:07:42.157133   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:07:42.232110   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:07:42.232162   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:07:42.270478   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:07:42.270517   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:07:42.324545   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:07:42.324586   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:07:42.339928   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:07:42.339962   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:07:42.415124   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:42.356847   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:44.856845   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:43.633029   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:46.134786   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:43.387596   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:45.887071   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:44.915316   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:07:44.928261   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:07:44.928331   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:07:44.959641   62061 cri.go:89] found id: ""
	I0918 21:07:44.959673   62061 logs.go:276] 0 containers: []
	W0918 21:07:44.959682   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:07:44.959689   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:07:44.959791   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:07:44.991744   62061 cri.go:89] found id: ""
	I0918 21:07:44.991778   62061 logs.go:276] 0 containers: []
	W0918 21:07:44.991787   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:07:44.991803   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:07:44.991877   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:07:45.024228   62061 cri.go:89] found id: ""
	I0918 21:07:45.024261   62061 logs.go:276] 0 containers: []
	W0918 21:07:45.024272   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:07:45.024280   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:07:45.024355   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:07:45.061905   62061 cri.go:89] found id: ""
	I0918 21:07:45.061931   62061 logs.go:276] 0 containers: []
	W0918 21:07:45.061940   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:07:45.061946   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:07:45.061994   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:07:45.099224   62061 cri.go:89] found id: ""
	I0918 21:07:45.099251   62061 logs.go:276] 0 containers: []
	W0918 21:07:45.099259   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:07:45.099266   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:07:45.099327   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:07:45.137235   62061 cri.go:89] found id: ""
	I0918 21:07:45.137262   62061 logs.go:276] 0 containers: []
	W0918 21:07:45.137270   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:07:45.137278   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:07:45.137333   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:07:45.174343   62061 cri.go:89] found id: ""
	I0918 21:07:45.174370   62061 logs.go:276] 0 containers: []
	W0918 21:07:45.174379   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:07:45.174385   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:07:45.174432   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:07:45.209278   62061 cri.go:89] found id: ""
	I0918 21:07:45.209306   62061 logs.go:276] 0 containers: []
	W0918 21:07:45.209316   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:07:45.209326   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:07:45.209341   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:07:45.222376   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:07:45.222409   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:07:45.294562   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:45.294595   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:07:45.294607   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:07:45.382582   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:07:45.382631   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:07:45.424743   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:07:45.424780   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:07:47.978171   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:07:47.990485   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:07:47.990554   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:07:48.024465   62061 cri.go:89] found id: ""
	I0918 21:07:48.024494   62061 logs.go:276] 0 containers: []
	W0918 21:07:48.024505   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:07:48.024512   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:07:48.024575   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:07:48.058838   62061 cri.go:89] found id: ""
	I0918 21:07:48.058871   62061 logs.go:276] 0 containers: []
	W0918 21:07:48.058899   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:07:48.058905   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:07:48.058959   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:07:48.094307   62061 cri.go:89] found id: ""
	I0918 21:07:48.094335   62061 logs.go:276] 0 containers: []
	W0918 21:07:48.094343   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:07:48.094349   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:07:48.094398   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:07:48.127497   62061 cri.go:89] found id: ""
	I0918 21:07:48.127529   62061 logs.go:276] 0 containers: []
	W0918 21:07:48.127538   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:07:48.127544   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:07:48.127590   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:07:48.160440   62061 cri.go:89] found id: ""
	I0918 21:07:48.160465   62061 logs.go:276] 0 containers: []
	W0918 21:07:48.160473   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:07:48.160478   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:07:48.160527   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:07:48.195581   62061 cri.go:89] found id: ""
	I0918 21:07:48.195615   62061 logs.go:276] 0 containers: []
	W0918 21:07:48.195624   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:07:48.195631   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:07:48.195675   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:07:48.226478   62061 cri.go:89] found id: ""
	I0918 21:07:48.226506   62061 logs.go:276] 0 containers: []
	W0918 21:07:48.226514   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:07:48.226519   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:07:48.226566   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:07:48.259775   62061 cri.go:89] found id: ""
	I0918 21:07:48.259804   62061 logs.go:276] 0 containers: []
	W0918 21:07:48.259815   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:07:48.259825   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:07:48.259839   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:07:48.274364   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:07:48.274391   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:07:48.347131   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:48.347151   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:07:48.347163   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:07:48.430569   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:07:48.430607   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:07:48.472972   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:07:48.473007   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:07:47.356907   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:49.857984   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:48.633550   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:51.133639   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:48.388136   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:50.888317   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:51.024429   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:07:51.037249   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:07:51.037328   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:07:51.070137   62061 cri.go:89] found id: ""
	I0918 21:07:51.070173   62061 logs.go:276] 0 containers: []
	W0918 21:07:51.070195   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:07:51.070203   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:07:51.070276   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:07:51.105014   62061 cri.go:89] found id: ""
	I0918 21:07:51.105039   62061 logs.go:276] 0 containers: []
	W0918 21:07:51.105048   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:07:51.105054   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:07:51.105101   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:07:51.143287   62061 cri.go:89] found id: ""
	I0918 21:07:51.143310   62061 logs.go:276] 0 containers: []
	W0918 21:07:51.143318   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:07:51.143325   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:07:51.143372   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:07:51.176811   62061 cri.go:89] found id: ""
	I0918 21:07:51.176838   62061 logs.go:276] 0 containers: []
	W0918 21:07:51.176846   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:07:51.176852   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:07:51.176898   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:07:51.210817   62061 cri.go:89] found id: ""
	I0918 21:07:51.210842   62061 logs.go:276] 0 containers: []
	W0918 21:07:51.210850   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:07:51.210856   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:07:51.210916   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:07:51.243968   62061 cri.go:89] found id: ""
	I0918 21:07:51.244002   62061 logs.go:276] 0 containers: []
	W0918 21:07:51.244035   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:07:51.244043   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:07:51.244104   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:07:51.279078   62061 cri.go:89] found id: ""
	I0918 21:07:51.279108   62061 logs.go:276] 0 containers: []
	W0918 21:07:51.279117   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:07:51.279122   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:07:51.279173   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:07:51.315033   62061 cri.go:89] found id: ""
	I0918 21:07:51.315082   62061 logs.go:276] 0 containers: []
	W0918 21:07:51.315091   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:07:51.315100   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:07:51.315111   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:07:51.391927   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:07:51.391976   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:07:51.435515   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:07:51.435542   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:07:51.488404   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:07:51.488442   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:07:51.502019   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:07:51.502065   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:07:51.567893   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:54.068142   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:07:54.082544   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:07:54.082619   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:07:54.118600   62061 cri.go:89] found id: ""
	I0918 21:07:54.118629   62061 logs.go:276] 0 containers: []
	W0918 21:07:54.118638   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:07:54.118644   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:07:54.118695   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:07:54.154086   62061 cri.go:89] found id: ""
	I0918 21:07:54.154115   62061 logs.go:276] 0 containers: []
	W0918 21:07:54.154127   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:07:54.154135   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:07:54.154187   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:07:54.188213   62061 cri.go:89] found id: ""
	I0918 21:07:54.188246   62061 logs.go:276] 0 containers: []
	W0918 21:07:54.188257   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:07:54.188266   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:07:54.188329   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:07:54.221682   62061 cri.go:89] found id: ""
	I0918 21:07:54.221712   62061 logs.go:276] 0 containers: []
	W0918 21:07:54.221721   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:07:54.221728   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:07:54.221791   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:07:54.254746   62061 cri.go:89] found id: ""
	I0918 21:07:54.254770   62061 logs.go:276] 0 containers: []
	W0918 21:07:54.254778   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:07:54.254783   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:07:54.254844   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:07:54.288249   62061 cri.go:89] found id: ""
	I0918 21:07:54.288273   62061 logs.go:276] 0 containers: []
	W0918 21:07:54.288281   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:07:54.288288   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:07:54.288349   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:07:54.323390   62061 cri.go:89] found id: ""
	I0918 21:07:54.323422   62061 logs.go:276] 0 containers: []
	W0918 21:07:54.323433   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:07:54.323440   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:07:54.323507   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:07:54.357680   62061 cri.go:89] found id: ""
	I0918 21:07:54.357709   62061 logs.go:276] 0 containers: []
	W0918 21:07:54.357719   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:07:54.357734   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:07:54.357748   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:07:54.396644   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:07:54.396683   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:07:54.469136   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:54.469175   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:07:54.469191   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:07:52.357187   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:54.857437   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:53.633161   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:56.132554   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:53.386646   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:55.387377   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:57.387524   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:54.547848   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:07:54.547884   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:07:54.585758   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:07:54.585788   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:07:57.139198   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:07:57.152342   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:07:57.152429   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:07:57.186747   62061 cri.go:89] found id: ""
	I0918 21:07:57.186778   62061 logs.go:276] 0 containers: []
	W0918 21:07:57.186789   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:07:57.186796   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:07:57.186855   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:07:57.219211   62061 cri.go:89] found id: ""
	I0918 21:07:57.219239   62061 logs.go:276] 0 containers: []
	W0918 21:07:57.219248   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:07:57.219256   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:07:57.219315   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:07:57.251361   62061 cri.go:89] found id: ""
	I0918 21:07:57.251388   62061 logs.go:276] 0 containers: []
	W0918 21:07:57.251396   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:07:57.251401   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:07:57.251450   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:07:57.285467   62061 cri.go:89] found id: ""
	I0918 21:07:57.285501   62061 logs.go:276] 0 containers: []
	W0918 21:07:57.285512   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:07:57.285519   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:07:57.285580   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:07:57.317973   62061 cri.go:89] found id: ""
	I0918 21:07:57.317999   62061 logs.go:276] 0 containers: []
	W0918 21:07:57.318006   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:07:57.318012   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:07:57.318071   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:07:57.352172   62061 cri.go:89] found id: ""
	I0918 21:07:57.352202   62061 logs.go:276] 0 containers: []
	W0918 21:07:57.352215   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:07:57.352223   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:07:57.352277   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:07:57.388117   62061 cri.go:89] found id: ""
	I0918 21:07:57.388137   62061 logs.go:276] 0 containers: []
	W0918 21:07:57.388144   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:07:57.388150   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:07:57.388205   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:07:57.424814   62061 cri.go:89] found id: ""
	I0918 21:07:57.424846   62061 logs.go:276] 0 containers: []
	W0918 21:07:57.424857   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:07:57.424868   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:07:57.424882   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:07:57.437317   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:07:57.437351   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:07:57.503393   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:57.503417   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:07:57.503429   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:07:57.584167   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:07:57.584203   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:07:57.620761   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:07:57.620792   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:07:57.357989   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:59.856413   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:58.133077   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:00.633233   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:59.886455   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:01.887882   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:00.174933   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:00.188098   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:00.188177   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:00.221852   62061 cri.go:89] found id: ""
	I0918 21:08:00.221877   62061 logs.go:276] 0 containers: []
	W0918 21:08:00.221890   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:00.221895   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:00.221947   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:00.255947   62061 cri.go:89] found id: ""
	I0918 21:08:00.255974   62061 logs.go:276] 0 containers: []
	W0918 21:08:00.255982   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:00.255987   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:00.256056   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:00.290919   62061 cri.go:89] found id: ""
	I0918 21:08:00.290953   62061 logs.go:276] 0 containers: []
	W0918 21:08:00.290961   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:00.290966   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:00.291017   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:00.328175   62061 cri.go:89] found id: ""
	I0918 21:08:00.328200   62061 logs.go:276] 0 containers: []
	W0918 21:08:00.328208   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:00.328214   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:00.328261   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:00.364863   62061 cri.go:89] found id: ""
	I0918 21:08:00.364892   62061 logs.go:276] 0 containers: []
	W0918 21:08:00.364903   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:00.364912   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:00.364967   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:00.400368   62061 cri.go:89] found id: ""
	I0918 21:08:00.400397   62061 logs.go:276] 0 containers: []
	W0918 21:08:00.400408   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:00.400415   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:00.400480   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:00.435341   62061 cri.go:89] found id: ""
	I0918 21:08:00.435375   62061 logs.go:276] 0 containers: []
	W0918 21:08:00.435386   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:00.435394   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:00.435459   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:00.469981   62061 cri.go:89] found id: ""
	I0918 21:08:00.470010   62061 logs.go:276] 0 containers: []
	W0918 21:08:00.470019   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:00.470028   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:00.470041   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:00.538006   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:00.538037   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:00.538053   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:00.618497   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:00.618543   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:00.656884   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:00.656912   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:00.708836   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:00.708870   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:03.222489   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:03.236904   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:03.236972   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:03.270559   62061 cri.go:89] found id: ""
	I0918 21:08:03.270588   62061 logs.go:276] 0 containers: []
	W0918 21:08:03.270596   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:03.270602   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:03.270649   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:03.305908   62061 cri.go:89] found id: ""
	I0918 21:08:03.305933   62061 logs.go:276] 0 containers: []
	W0918 21:08:03.305940   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:03.305946   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:03.306004   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:03.339442   62061 cri.go:89] found id: ""
	I0918 21:08:03.339468   62061 logs.go:276] 0 containers: []
	W0918 21:08:03.339476   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:03.339482   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:03.339550   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:03.377460   62061 cri.go:89] found id: ""
	I0918 21:08:03.377486   62061 logs.go:276] 0 containers: []
	W0918 21:08:03.377495   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:03.377501   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:03.377552   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:03.414815   62061 cri.go:89] found id: ""
	I0918 21:08:03.414850   62061 logs.go:276] 0 containers: []
	W0918 21:08:03.414861   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:03.414869   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:03.414930   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:03.448654   62061 cri.go:89] found id: ""
	I0918 21:08:03.448680   62061 logs.go:276] 0 containers: []
	W0918 21:08:03.448690   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:03.448698   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:03.448759   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:03.483598   62061 cri.go:89] found id: ""
	I0918 21:08:03.483628   62061 logs.go:276] 0 containers: []
	W0918 21:08:03.483639   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:03.483646   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:03.483717   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:03.518557   62061 cri.go:89] found id: ""
	I0918 21:08:03.518585   62061 logs.go:276] 0 containers: []
	W0918 21:08:03.518601   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:03.518612   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:03.518627   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:03.555922   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:03.555958   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:03.608173   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:03.608208   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:03.621251   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:03.621278   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:03.688773   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:03.688796   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:03.688812   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:01.857289   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:03.857768   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:06.356504   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:03.132376   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:05.134169   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:04.386905   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:06.891459   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:06.272727   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:06.286033   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:06.286115   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:06.319058   62061 cri.go:89] found id: ""
	I0918 21:08:06.319084   62061 logs.go:276] 0 containers: []
	W0918 21:08:06.319092   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:06.319099   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:06.319167   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:06.352572   62061 cri.go:89] found id: ""
	I0918 21:08:06.352606   62061 logs.go:276] 0 containers: []
	W0918 21:08:06.352627   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:06.352638   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:06.352709   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:06.389897   62061 cri.go:89] found id: ""
	I0918 21:08:06.389922   62061 logs.go:276] 0 containers: []
	W0918 21:08:06.389929   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:06.389935   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:06.389993   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:06.427265   62061 cri.go:89] found id: ""
	I0918 21:08:06.427294   62061 logs.go:276] 0 containers: []
	W0918 21:08:06.427306   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:06.427314   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:06.427375   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:06.462706   62061 cri.go:89] found id: ""
	I0918 21:08:06.462738   62061 logs.go:276] 0 containers: []
	W0918 21:08:06.462746   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:06.462753   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:06.462816   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:06.499672   62061 cri.go:89] found id: ""
	I0918 21:08:06.499707   62061 logs.go:276] 0 containers: []
	W0918 21:08:06.499719   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:06.499727   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:06.499781   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:06.540386   62061 cri.go:89] found id: ""
	I0918 21:08:06.540415   62061 logs.go:276] 0 containers: []
	W0918 21:08:06.540426   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:06.540433   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:06.540492   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:06.582919   62061 cri.go:89] found id: ""
	I0918 21:08:06.582946   62061 logs.go:276] 0 containers: []
	W0918 21:08:06.582957   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:06.582966   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:06.582982   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:06.637315   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:06.637355   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:06.650626   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:06.650662   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:06.725605   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:06.725641   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:06.725656   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:06.805431   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:06.805471   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:09.343498   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:09.357396   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:09.357472   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:09.391561   62061 cri.go:89] found id: ""
	I0918 21:08:09.391590   62061 logs.go:276] 0 containers: []
	W0918 21:08:09.391600   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:09.391605   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:09.391655   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:09.429154   62061 cri.go:89] found id: ""
	I0918 21:08:09.429181   62061 logs.go:276] 0 containers: []
	W0918 21:08:09.429190   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:09.429196   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:09.429259   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:09.464807   62061 cri.go:89] found id: ""
	I0918 21:08:09.464848   62061 logs.go:276] 0 containers: []
	W0918 21:08:09.464859   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:09.464866   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:09.464927   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:08.856578   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:10.856650   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:07.633438   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:10.132651   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:12.132903   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:09.387482   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:11.886885   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:09.499512   62061 cri.go:89] found id: ""
	I0918 21:08:09.499540   62061 logs.go:276] 0 containers: []
	W0918 21:08:09.499549   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:09.499556   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:09.499619   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:09.534569   62061 cri.go:89] found id: ""
	I0918 21:08:09.534593   62061 logs.go:276] 0 containers: []
	W0918 21:08:09.534601   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:09.534607   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:09.534660   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:09.570387   62061 cri.go:89] found id: ""
	I0918 21:08:09.570414   62061 logs.go:276] 0 containers: []
	W0918 21:08:09.570422   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:09.570428   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:09.570489   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:09.603832   62061 cri.go:89] found id: ""
	I0918 21:08:09.603863   62061 logs.go:276] 0 containers: []
	W0918 21:08:09.603871   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:09.603877   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:09.603923   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:09.638887   62061 cri.go:89] found id: ""
	I0918 21:08:09.638918   62061 logs.go:276] 0 containers: []
	W0918 21:08:09.638930   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:09.638940   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:09.638953   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:09.691197   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:09.691237   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:09.705470   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:09.705500   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:09.776826   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:09.776858   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:09.776871   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:09.851287   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:09.851321   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:12.390895   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:12.406734   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:12.406809   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:12.459302   62061 cri.go:89] found id: ""
	I0918 21:08:12.459331   62061 logs.go:276] 0 containers: []
	W0918 21:08:12.459349   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:12.459360   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:12.459420   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:12.517527   62061 cri.go:89] found id: ""
	I0918 21:08:12.517557   62061 logs.go:276] 0 containers: []
	W0918 21:08:12.517567   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:12.517573   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:12.517628   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:12.549661   62061 cri.go:89] found id: ""
	I0918 21:08:12.549689   62061 logs.go:276] 0 containers: []
	W0918 21:08:12.549696   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:12.549702   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:12.549747   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:12.582970   62061 cri.go:89] found id: ""
	I0918 21:08:12.582996   62061 logs.go:276] 0 containers: []
	W0918 21:08:12.583004   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:12.583009   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:12.583054   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:12.617059   62061 cri.go:89] found id: ""
	I0918 21:08:12.617089   62061 logs.go:276] 0 containers: []
	W0918 21:08:12.617098   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:12.617103   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:12.617153   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:12.651115   62061 cri.go:89] found id: ""
	I0918 21:08:12.651143   62061 logs.go:276] 0 containers: []
	W0918 21:08:12.651156   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:12.651164   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:12.651217   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:12.684727   62061 cri.go:89] found id: ""
	I0918 21:08:12.684758   62061 logs.go:276] 0 containers: []
	W0918 21:08:12.684765   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:12.684771   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:12.684829   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:12.718181   62061 cri.go:89] found id: ""
	I0918 21:08:12.718215   62061 logs.go:276] 0 containers: []
	W0918 21:08:12.718226   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:12.718237   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:12.718252   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:12.760350   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:12.760379   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:12.810028   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:12.810067   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:12.823785   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:12.823815   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:12.898511   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:12.898532   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:12.898545   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:12.856697   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:15.356381   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:14.632694   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:17.131888   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:13.887157   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:15.887190   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:17.890618   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:15.476840   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:15.489386   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:15.489470   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:15.524549   62061 cri.go:89] found id: ""
	I0918 21:08:15.524578   62061 logs.go:276] 0 containers: []
	W0918 21:08:15.524585   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:15.524591   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:15.524642   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:15.557834   62061 cri.go:89] found id: ""
	I0918 21:08:15.557867   62061 logs.go:276] 0 containers: []
	W0918 21:08:15.557876   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:15.557883   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:15.557933   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:15.598774   62061 cri.go:89] found id: ""
	I0918 21:08:15.598805   62061 logs.go:276] 0 containers: []
	W0918 21:08:15.598818   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:15.598828   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:15.598882   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:15.633123   62061 cri.go:89] found id: ""
	I0918 21:08:15.633147   62061 logs.go:276] 0 containers: []
	W0918 21:08:15.633155   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:15.633161   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:15.633208   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:15.667281   62061 cri.go:89] found id: ""
	I0918 21:08:15.667307   62061 logs.go:276] 0 containers: []
	W0918 21:08:15.667317   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:15.667323   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:15.667381   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:15.705945   62061 cri.go:89] found id: ""
	I0918 21:08:15.705977   62061 logs.go:276] 0 containers: []
	W0918 21:08:15.705990   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:15.706015   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:15.706093   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:15.739795   62061 cri.go:89] found id: ""
	I0918 21:08:15.739826   62061 logs.go:276] 0 containers: []
	W0918 21:08:15.739836   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:15.739843   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:15.739904   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:15.778520   62061 cri.go:89] found id: ""
	I0918 21:08:15.778556   62061 logs.go:276] 0 containers: []
	W0918 21:08:15.778567   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:15.778579   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:15.778592   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:15.829357   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:15.829394   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:15.842852   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:15.842882   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:15.922438   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:15.922471   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:15.922483   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:16.001687   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:16.001726   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:18.542067   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:18.554783   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:18.554870   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:18.589555   62061 cri.go:89] found id: ""
	I0918 21:08:18.589581   62061 logs.go:276] 0 containers: []
	W0918 21:08:18.589592   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:18.589604   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:18.589667   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:18.623035   62061 cri.go:89] found id: ""
	I0918 21:08:18.623059   62061 logs.go:276] 0 containers: []
	W0918 21:08:18.623067   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:18.623073   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:18.623127   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:18.655875   62061 cri.go:89] found id: ""
	I0918 21:08:18.655901   62061 logs.go:276] 0 containers: []
	W0918 21:08:18.655909   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:18.655915   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:18.655973   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:18.688964   62061 cri.go:89] found id: ""
	I0918 21:08:18.688997   62061 logs.go:276] 0 containers: []
	W0918 21:08:18.689008   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:18.689016   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:18.689080   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:18.723164   62061 cri.go:89] found id: ""
	I0918 21:08:18.723186   62061 logs.go:276] 0 containers: []
	W0918 21:08:18.723196   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:18.723201   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:18.723246   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:18.755022   62061 cri.go:89] found id: ""
	I0918 21:08:18.755048   62061 logs.go:276] 0 containers: []
	W0918 21:08:18.755057   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:18.755063   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:18.755113   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:18.791614   62061 cri.go:89] found id: ""
	I0918 21:08:18.791645   62061 logs.go:276] 0 containers: []
	W0918 21:08:18.791655   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:18.791663   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:18.791731   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:18.823553   62061 cri.go:89] found id: ""
	I0918 21:08:18.823583   62061 logs.go:276] 0 containers: []
	W0918 21:08:18.823590   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:18.823597   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:18.823609   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:18.864528   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:18.864567   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:18.916590   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:18.916628   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:18.930077   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:18.930107   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:18.999958   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:18.999982   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:18.999997   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:17.358190   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:19.856605   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:19.132382   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:21.634433   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:20.387223   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:22.387374   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:21.573718   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:21.588164   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:21.588252   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:21.630044   62061 cri.go:89] found id: ""
	I0918 21:08:21.630075   62061 logs.go:276] 0 containers: []
	W0918 21:08:21.630086   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:21.630094   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:21.630154   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:21.666992   62061 cri.go:89] found id: ""
	I0918 21:08:21.667021   62061 logs.go:276] 0 containers: []
	W0918 21:08:21.667029   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:21.667035   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:21.667083   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:21.702379   62061 cri.go:89] found id: ""
	I0918 21:08:21.702403   62061 logs.go:276] 0 containers: []
	W0918 21:08:21.702411   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:21.702416   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:21.702463   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:21.739877   62061 cri.go:89] found id: ""
	I0918 21:08:21.739908   62061 logs.go:276] 0 containers: []
	W0918 21:08:21.739918   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:21.739923   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:21.739974   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:21.777536   62061 cri.go:89] found id: ""
	I0918 21:08:21.777573   62061 logs.go:276] 0 containers: []
	W0918 21:08:21.777584   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:21.777592   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:21.777652   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:21.812284   62061 cri.go:89] found id: ""
	I0918 21:08:21.812316   62061 logs.go:276] 0 containers: []
	W0918 21:08:21.812325   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:21.812332   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:21.812401   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:21.848143   62061 cri.go:89] found id: ""
	I0918 21:08:21.848176   62061 logs.go:276] 0 containers: []
	W0918 21:08:21.848185   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:21.848191   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:21.848250   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:21.887151   62061 cri.go:89] found id: ""
	I0918 21:08:21.887177   62061 logs.go:276] 0 containers: []
	W0918 21:08:21.887188   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:21.887199   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:21.887213   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:21.939969   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:21.940008   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:21.954128   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:21.954164   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:22.022827   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:22.022853   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:22.022865   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:22.103131   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:22.103172   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:22.356641   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:24.358204   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:24.133101   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:26.633701   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:24.888715   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:27.386901   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:24.642045   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:24.655273   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:24.655343   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:24.687816   62061 cri.go:89] found id: ""
	I0918 21:08:24.687847   62061 logs.go:276] 0 containers: []
	W0918 21:08:24.687858   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:24.687865   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:24.687927   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:24.721276   62061 cri.go:89] found id: ""
	I0918 21:08:24.721303   62061 logs.go:276] 0 containers: []
	W0918 21:08:24.721311   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:24.721316   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:24.721366   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:24.753874   62061 cri.go:89] found id: ""
	I0918 21:08:24.753904   62061 logs.go:276] 0 containers: []
	W0918 21:08:24.753911   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:24.753917   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:24.753967   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:24.789107   62061 cri.go:89] found id: ""
	I0918 21:08:24.789148   62061 logs.go:276] 0 containers: []
	W0918 21:08:24.789163   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:24.789170   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:24.789219   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:24.826283   62061 cri.go:89] found id: ""
	I0918 21:08:24.826316   62061 logs.go:276] 0 containers: []
	W0918 21:08:24.826329   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:24.826337   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:24.826401   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:24.859878   62061 cri.go:89] found id: ""
	I0918 21:08:24.859907   62061 logs.go:276] 0 containers: []
	W0918 21:08:24.859917   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:24.859924   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:24.859982   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:24.895717   62061 cri.go:89] found id: ""
	I0918 21:08:24.895747   62061 logs.go:276] 0 containers: []
	W0918 21:08:24.895758   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:24.895766   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:24.895830   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:24.930207   62061 cri.go:89] found id: ""
	I0918 21:08:24.930239   62061 logs.go:276] 0 containers: []
	W0918 21:08:24.930250   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:24.930262   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:24.930279   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:24.967939   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:24.967981   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:25.022526   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:25.022569   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:25.036175   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:25.036214   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:25.104251   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:25.104277   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:25.104294   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:27.686943   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:27.700071   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:27.700147   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:27.735245   62061 cri.go:89] found id: ""
	I0918 21:08:27.735277   62061 logs.go:276] 0 containers: []
	W0918 21:08:27.735286   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:27.735291   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:27.735349   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:27.768945   62061 cri.go:89] found id: ""
	I0918 21:08:27.768974   62061 logs.go:276] 0 containers: []
	W0918 21:08:27.768985   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:27.768993   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:27.769055   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:27.805425   62061 cri.go:89] found id: ""
	I0918 21:08:27.805457   62061 logs.go:276] 0 containers: []
	W0918 21:08:27.805468   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:27.805474   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:27.805523   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:27.838049   62061 cri.go:89] found id: ""
	I0918 21:08:27.838081   62061 logs.go:276] 0 containers: []
	W0918 21:08:27.838091   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:27.838098   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:27.838163   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:27.873961   62061 cri.go:89] found id: ""
	I0918 21:08:27.873986   62061 logs.go:276] 0 containers: []
	W0918 21:08:27.873994   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:27.874001   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:27.874064   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:27.908822   62061 cri.go:89] found id: ""
	I0918 21:08:27.908846   62061 logs.go:276] 0 containers: []
	W0918 21:08:27.908854   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:27.908860   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:27.908915   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:27.943758   62061 cri.go:89] found id: ""
	I0918 21:08:27.943794   62061 logs.go:276] 0 containers: []
	W0918 21:08:27.943802   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:27.943808   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:27.943875   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:27.978136   62061 cri.go:89] found id: ""
	I0918 21:08:27.978168   62061 logs.go:276] 0 containers: []
	W0918 21:08:27.978179   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:27.978189   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:27.978202   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:27.992744   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:27.992773   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:28.070339   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:28.070362   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:28.070374   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:28.152405   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:28.152452   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:28.191190   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:28.191220   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:26.857256   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:29.356662   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:29.132577   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:31.133108   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:29.387068   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:31.886962   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:30.742507   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:30.756451   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:30.756553   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:30.790746   62061 cri.go:89] found id: ""
	I0918 21:08:30.790771   62061 logs.go:276] 0 containers: []
	W0918 21:08:30.790781   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:30.790787   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:30.790851   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:30.823633   62061 cri.go:89] found id: ""
	I0918 21:08:30.823670   62061 logs.go:276] 0 containers: []
	W0918 21:08:30.823682   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:30.823689   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:30.823754   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:30.861912   62061 cri.go:89] found id: ""
	I0918 21:08:30.861936   62061 logs.go:276] 0 containers: []
	W0918 21:08:30.861943   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:30.861949   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:30.862000   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:30.899452   62061 cri.go:89] found id: ""
	I0918 21:08:30.899481   62061 logs.go:276] 0 containers: []
	W0918 21:08:30.899489   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:30.899495   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:30.899562   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:30.935871   62061 cri.go:89] found id: ""
	I0918 21:08:30.935898   62061 logs.go:276] 0 containers: []
	W0918 21:08:30.935906   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:30.935912   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:30.935969   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:30.974608   62061 cri.go:89] found id: ""
	I0918 21:08:30.974643   62061 logs.go:276] 0 containers: []
	W0918 21:08:30.974655   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:30.974663   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:30.974722   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:31.008249   62061 cri.go:89] found id: ""
	I0918 21:08:31.008279   62061 logs.go:276] 0 containers: []
	W0918 21:08:31.008290   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:31.008297   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:31.008366   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:31.047042   62061 cri.go:89] found id: ""
	I0918 21:08:31.047075   62061 logs.go:276] 0 containers: []
	W0918 21:08:31.047083   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:31.047093   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:31.047107   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:31.098961   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:31.099001   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:31.113116   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:31.113147   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:31.179609   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:31.179650   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:31.179664   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:31.258299   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:31.258335   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:33.798360   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:33.812105   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:33.812172   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:33.848120   62061 cri.go:89] found id: ""
	I0918 21:08:33.848149   62061 logs.go:276] 0 containers: []
	W0918 21:08:33.848160   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:33.848169   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:33.848231   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:33.884974   62061 cri.go:89] found id: ""
	I0918 21:08:33.885007   62061 logs.go:276] 0 containers: []
	W0918 21:08:33.885019   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:33.885028   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:33.885104   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:33.919613   62061 cri.go:89] found id: ""
	I0918 21:08:33.919658   62061 logs.go:276] 0 containers: []
	W0918 21:08:33.919667   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:33.919673   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:33.919721   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:33.953074   62061 cri.go:89] found id: ""
	I0918 21:08:33.953112   62061 logs.go:276] 0 containers: []
	W0918 21:08:33.953120   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:33.953125   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:33.953185   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:33.985493   62061 cri.go:89] found id: ""
	I0918 21:08:33.985521   62061 logs.go:276] 0 containers: []
	W0918 21:08:33.985532   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:33.985539   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:33.985630   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:34.020938   62061 cri.go:89] found id: ""
	I0918 21:08:34.020962   62061 logs.go:276] 0 containers: []
	W0918 21:08:34.020972   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:34.020981   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:34.021047   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:34.053947   62061 cri.go:89] found id: ""
	I0918 21:08:34.053977   62061 logs.go:276] 0 containers: []
	W0918 21:08:34.053988   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:34.053996   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:34.054060   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:34.090072   62061 cri.go:89] found id: ""
	I0918 21:08:34.090110   62061 logs.go:276] 0 containers: []
	W0918 21:08:34.090123   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:34.090133   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:34.090145   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:34.142069   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:34.142107   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:34.156709   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:34.156740   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:34.232644   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:34.232672   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:34.232685   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:34.311833   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:34.311878   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:31.859360   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:34.357056   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:33.133212   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:35.632885   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:33.888487   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:36.386571   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:36.850811   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:36.864479   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:36.864558   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:36.902595   62061 cri.go:89] found id: ""
	I0918 21:08:36.902628   62061 logs.go:276] 0 containers: []
	W0918 21:08:36.902640   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:36.902649   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:36.902706   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:36.940336   62061 cri.go:89] found id: ""
	I0918 21:08:36.940385   62061 logs.go:276] 0 containers: []
	W0918 21:08:36.940394   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:36.940400   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:36.940465   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:36.973909   62061 cri.go:89] found id: ""
	I0918 21:08:36.973942   62061 logs.go:276] 0 containers: []
	W0918 21:08:36.973952   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:36.973958   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:36.974013   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:37.008766   62061 cri.go:89] found id: ""
	I0918 21:08:37.008791   62061 logs.go:276] 0 containers: []
	W0918 21:08:37.008799   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:37.008805   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:37.008852   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:37.041633   62061 cri.go:89] found id: ""
	I0918 21:08:37.041669   62061 logs.go:276] 0 containers: []
	W0918 21:08:37.041681   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:37.041688   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:37.041750   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:37.075154   62061 cri.go:89] found id: ""
	I0918 21:08:37.075188   62061 logs.go:276] 0 containers: []
	W0918 21:08:37.075197   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:37.075204   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:37.075268   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:37.111083   62061 cri.go:89] found id: ""
	I0918 21:08:37.111119   62061 logs.go:276] 0 containers: []
	W0918 21:08:37.111130   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:37.111138   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:37.111192   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:37.147894   62061 cri.go:89] found id: ""
	I0918 21:08:37.147925   62061 logs.go:276] 0 containers: []
	W0918 21:08:37.147936   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:37.147948   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:37.147962   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:37.200102   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:37.200141   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:37.213511   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:37.213537   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:37.286978   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:37.287012   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:37.287027   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:37.368153   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:37.368191   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:36.857508   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:39.357177   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:41.357329   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:38.134332   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:40.633274   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:38.387121   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:40.387310   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:42.887614   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:39.907600   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:39.922325   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:39.922395   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:39.958150   62061 cri.go:89] found id: ""
	I0918 21:08:39.958175   62061 logs.go:276] 0 containers: []
	W0918 21:08:39.958183   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:39.958189   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:39.958254   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:39.993916   62061 cri.go:89] found id: ""
	I0918 21:08:39.993945   62061 logs.go:276] 0 containers: []
	W0918 21:08:39.993956   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:39.993963   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:39.994026   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:40.031078   62061 cri.go:89] found id: ""
	I0918 21:08:40.031114   62061 logs.go:276] 0 containers: []
	W0918 21:08:40.031126   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:40.031133   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:40.031194   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:40.065028   62061 cri.go:89] found id: ""
	I0918 21:08:40.065054   62061 logs.go:276] 0 containers: []
	W0918 21:08:40.065065   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:40.065072   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:40.065129   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:40.099437   62061 cri.go:89] found id: ""
	I0918 21:08:40.099466   62061 logs.go:276] 0 containers: []
	W0918 21:08:40.099474   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:40.099480   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:40.099544   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:40.139841   62061 cri.go:89] found id: ""
	I0918 21:08:40.139866   62061 logs.go:276] 0 containers: []
	W0918 21:08:40.139874   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:40.139880   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:40.139936   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:40.175287   62061 cri.go:89] found id: ""
	I0918 21:08:40.175316   62061 logs.go:276] 0 containers: []
	W0918 21:08:40.175327   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:40.175334   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:40.175397   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:40.208646   62061 cri.go:89] found id: ""
	I0918 21:08:40.208677   62061 logs.go:276] 0 containers: []
	W0918 21:08:40.208690   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:40.208701   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:40.208712   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:40.262944   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:40.262982   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:40.276192   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:40.276222   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:40.346393   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:40.346414   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:40.346426   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:40.433797   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:40.433848   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:42.976574   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:42.990198   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:42.990263   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:43.031744   62061 cri.go:89] found id: ""
	I0918 21:08:43.031774   62061 logs.go:276] 0 containers: []
	W0918 21:08:43.031784   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:43.031791   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:43.031854   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:43.065198   62061 cri.go:89] found id: ""
	I0918 21:08:43.065240   62061 logs.go:276] 0 containers: []
	W0918 21:08:43.065248   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:43.065261   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:43.065319   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:43.099292   62061 cri.go:89] found id: ""
	I0918 21:08:43.099320   62061 logs.go:276] 0 containers: []
	W0918 21:08:43.099328   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:43.099333   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:43.099388   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:43.135085   62061 cri.go:89] found id: ""
	I0918 21:08:43.135110   62061 logs.go:276] 0 containers: []
	W0918 21:08:43.135119   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:43.135131   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:43.135190   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:43.173255   62061 cri.go:89] found id: ""
	I0918 21:08:43.173312   62061 logs.go:276] 0 containers: []
	W0918 21:08:43.173326   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:43.173335   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:43.173433   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:43.209249   62061 cri.go:89] found id: ""
	I0918 21:08:43.209282   62061 logs.go:276] 0 containers: []
	W0918 21:08:43.209294   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:43.209303   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:43.209377   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:43.242333   62061 cri.go:89] found id: ""
	I0918 21:08:43.242366   62061 logs.go:276] 0 containers: []
	W0918 21:08:43.242376   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:43.242383   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:43.242449   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:43.278099   62061 cri.go:89] found id: ""
	I0918 21:08:43.278128   62061 logs.go:276] 0 containers: []
	W0918 21:08:43.278136   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:43.278146   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:43.278164   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:43.329621   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:43.329661   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:43.343357   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:43.343402   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:43.415392   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:43.415419   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:43.415435   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:43.501634   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:43.501670   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:43.357675   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:45.857212   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:43.133389   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:45.134057   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:44.887763   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:47.387221   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:46.043925   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:46.057457   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:46.057540   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:46.091987   62061 cri.go:89] found id: ""
	I0918 21:08:46.092036   62061 logs.go:276] 0 containers: []
	W0918 21:08:46.092047   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:46.092055   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:46.092118   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:46.131171   62061 cri.go:89] found id: ""
	I0918 21:08:46.131195   62061 logs.go:276] 0 containers: []
	W0918 21:08:46.131203   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:46.131209   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:46.131266   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:46.167324   62061 cri.go:89] found id: ""
	I0918 21:08:46.167360   62061 logs.go:276] 0 containers: []
	W0918 21:08:46.167369   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:46.167375   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:46.167433   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:46.200940   62061 cri.go:89] found id: ""
	I0918 21:08:46.200969   62061 logs.go:276] 0 containers: []
	W0918 21:08:46.200978   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:46.200983   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:46.201042   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:46.233736   62061 cri.go:89] found id: ""
	I0918 21:08:46.233765   62061 logs.go:276] 0 containers: []
	W0918 21:08:46.233773   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:46.233779   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:46.233841   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:46.267223   62061 cri.go:89] found id: ""
	I0918 21:08:46.267250   62061 logs.go:276] 0 containers: []
	W0918 21:08:46.267261   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:46.267268   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:46.267331   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:46.302218   62061 cri.go:89] found id: ""
	I0918 21:08:46.302245   62061 logs.go:276] 0 containers: []
	W0918 21:08:46.302255   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:46.302262   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:46.302326   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:46.334979   62061 cri.go:89] found id: ""
	I0918 21:08:46.335005   62061 logs.go:276] 0 containers: []
	W0918 21:08:46.335015   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:46.335024   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:46.335038   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:46.422905   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:46.422929   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:46.422948   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:46.504831   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:46.504884   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:46.543954   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:46.543981   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:46.595817   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:46.595854   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:49.109984   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:49.124966   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:49.125052   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:49.163272   62061 cri.go:89] found id: ""
	I0918 21:08:49.163302   62061 logs.go:276] 0 containers: []
	W0918 21:08:49.163334   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:49.163342   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:49.163411   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:49.199973   62061 cri.go:89] found id: ""
	I0918 21:08:49.200003   62061 logs.go:276] 0 containers: []
	W0918 21:08:49.200037   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:49.200046   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:49.200111   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:49.235980   62061 cri.go:89] found id: ""
	I0918 21:08:49.236036   62061 logs.go:276] 0 containers: []
	W0918 21:08:49.236050   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:49.236061   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:49.236123   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:49.271162   62061 cri.go:89] found id: ""
	I0918 21:08:49.271386   62061 logs.go:276] 0 containers: []
	W0918 21:08:49.271404   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:49.271413   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:49.271483   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:49.305799   62061 cri.go:89] found id: ""
	I0918 21:08:49.305831   62061 logs.go:276] 0 containers: []
	W0918 21:08:49.305842   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:49.305848   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:49.305899   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:49.342170   62061 cri.go:89] found id: ""
	I0918 21:08:49.342195   62061 logs.go:276] 0 containers: []
	W0918 21:08:49.342202   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:49.342208   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:49.342265   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:49.374071   62061 cri.go:89] found id: ""
	I0918 21:08:49.374098   62061 logs.go:276] 0 containers: []
	W0918 21:08:49.374108   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:49.374116   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:49.374186   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:49.407614   62061 cri.go:89] found id: ""
	I0918 21:08:49.407645   62061 logs.go:276] 0 containers: []
	W0918 21:08:49.407657   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:49.407669   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:49.407685   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:49.458433   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:49.458473   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:49.471490   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:49.471519   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 21:08:47.857798   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:50.355748   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:47.634158   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:49.627085   61659 pod_ready.go:82] duration metric: took 4m0.000936582s for pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace to be "Ready" ...
	E0918 21:08:49.627133   61659 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace to be "Ready" (will not retry!)
	I0918 21:08:49.627156   61659 pod_ready.go:39] duration metric: took 4m7.542795536s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0918 21:08:49.627192   61659 kubeadm.go:597] duration metric: took 4m15.452827752s to restartPrimaryControlPlane
	W0918 21:08:49.627251   61659 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0918 21:08:49.627290   61659 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0918 21:08:49.387560   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:51.887591   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	W0918 21:08:49.533874   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:49.533901   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:49.533916   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:49.611711   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:49.611744   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:52.157839   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:52.170690   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:52.170770   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:52.208737   62061 cri.go:89] found id: ""
	I0918 21:08:52.208767   62061 logs.go:276] 0 containers: []
	W0918 21:08:52.208779   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:52.208787   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:52.208846   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:52.242624   62061 cri.go:89] found id: ""
	I0918 21:08:52.242658   62061 logs.go:276] 0 containers: []
	W0918 21:08:52.242669   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:52.242677   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:52.242742   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:52.280686   62061 cri.go:89] found id: ""
	I0918 21:08:52.280717   62061 logs.go:276] 0 containers: []
	W0918 21:08:52.280728   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:52.280736   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:52.280798   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:52.313748   62061 cri.go:89] found id: ""
	I0918 21:08:52.313777   62061 logs.go:276] 0 containers: []
	W0918 21:08:52.313785   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:52.313791   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:52.313840   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:52.353072   62061 cri.go:89] found id: ""
	I0918 21:08:52.353102   62061 logs.go:276] 0 containers: []
	W0918 21:08:52.353124   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:52.353132   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:52.353195   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:52.390358   62061 cri.go:89] found id: ""
	I0918 21:08:52.390384   62061 logs.go:276] 0 containers: []
	W0918 21:08:52.390392   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:52.390398   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:52.390448   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:52.430039   62061 cri.go:89] found id: ""
	I0918 21:08:52.430068   62061 logs.go:276] 0 containers: []
	W0918 21:08:52.430081   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:52.430088   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:52.430146   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:52.466096   62061 cri.go:89] found id: ""
	I0918 21:08:52.466125   62061 logs.go:276] 0 containers: []
	W0918 21:08:52.466137   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:52.466149   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:52.466162   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:52.518643   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:52.518678   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:52.531768   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:52.531797   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:52.605130   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:52.605163   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:52.605181   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:52.686510   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:52.686553   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:52.356535   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:54.356671   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:54.387306   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:56.887745   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:55.225867   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:55.240537   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:55.240618   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:55.276461   62061 cri.go:89] found id: ""
	I0918 21:08:55.276490   62061 logs.go:276] 0 containers: []
	W0918 21:08:55.276498   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:55.276504   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:55.276564   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:55.313449   62061 cri.go:89] found id: ""
	I0918 21:08:55.313482   62061 logs.go:276] 0 containers: []
	W0918 21:08:55.313493   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:55.313499   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:55.313551   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:55.352436   62061 cri.go:89] found id: ""
	I0918 21:08:55.352475   62061 logs.go:276] 0 containers: []
	W0918 21:08:55.352485   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:55.352492   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:55.352560   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:55.390433   62061 cri.go:89] found id: ""
	I0918 21:08:55.390458   62061 logs.go:276] 0 containers: []
	W0918 21:08:55.390466   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:55.390472   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:55.390529   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:55.428426   62061 cri.go:89] found id: ""
	I0918 21:08:55.428455   62061 logs.go:276] 0 containers: []
	W0918 21:08:55.428465   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:55.428473   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:55.428540   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:55.465587   62061 cri.go:89] found id: ""
	I0918 21:08:55.465622   62061 logs.go:276] 0 containers: []
	W0918 21:08:55.465633   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:55.465641   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:55.465710   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:55.502137   62061 cri.go:89] found id: ""
	I0918 21:08:55.502185   62061 logs.go:276] 0 containers: []
	W0918 21:08:55.502196   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:55.502203   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:55.502266   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:55.535992   62061 cri.go:89] found id: ""
	I0918 21:08:55.536037   62061 logs.go:276] 0 containers: []
	W0918 21:08:55.536050   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:55.536060   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:55.536078   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:55.549267   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:55.549296   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:55.616522   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:55.616556   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:55.616572   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:55.698822   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:55.698874   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:55.740234   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:55.740264   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:58.294876   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:58.307543   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:58.307630   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:58.340435   62061 cri.go:89] found id: ""
	I0918 21:08:58.340467   62061 logs.go:276] 0 containers: []
	W0918 21:08:58.340479   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:58.340486   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:58.340545   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:58.374759   62061 cri.go:89] found id: ""
	I0918 21:08:58.374792   62061 logs.go:276] 0 containers: []
	W0918 21:08:58.374804   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:58.374810   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:58.374863   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:58.411756   62061 cri.go:89] found id: ""
	I0918 21:08:58.411787   62061 logs.go:276] 0 containers: []
	W0918 21:08:58.411797   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:58.411804   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:58.411867   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:58.449787   62061 cri.go:89] found id: ""
	I0918 21:08:58.449820   62061 logs.go:276] 0 containers: []
	W0918 21:08:58.449832   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:58.449839   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:58.449903   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:58.485212   62061 cri.go:89] found id: ""
	I0918 21:08:58.485243   62061 logs.go:276] 0 containers: []
	W0918 21:08:58.485254   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:58.485261   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:58.485331   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:58.528667   62061 cri.go:89] found id: ""
	I0918 21:08:58.528696   62061 logs.go:276] 0 containers: []
	W0918 21:08:58.528706   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:58.528714   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:58.528775   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:58.568201   62061 cri.go:89] found id: ""
	I0918 21:08:58.568231   62061 logs.go:276] 0 containers: []
	W0918 21:08:58.568241   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:58.568247   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:58.568305   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:58.623952   62061 cri.go:89] found id: ""
	I0918 21:08:58.623982   62061 logs.go:276] 0 containers: []
	W0918 21:08:58.623991   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:58.624000   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:58.624011   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:58.665418   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:58.665457   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:58.713464   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:58.713504   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:58.727511   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:58.727552   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:58.799004   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:58.799035   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:58.799050   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:56.856428   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:58.856632   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:00.857301   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:59.386076   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:01.387016   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:01.389457   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:09:01.404658   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:09:01.404734   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:09:01.439139   62061 cri.go:89] found id: ""
	I0918 21:09:01.439170   62061 logs.go:276] 0 containers: []
	W0918 21:09:01.439180   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:09:01.439187   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:09:01.439251   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:09:01.473866   62061 cri.go:89] found id: ""
	I0918 21:09:01.473896   62061 logs.go:276] 0 containers: []
	W0918 21:09:01.473907   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:09:01.473915   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:09:01.473978   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:09:01.506734   62061 cri.go:89] found id: ""
	I0918 21:09:01.506767   62061 logs.go:276] 0 containers: []
	W0918 21:09:01.506777   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:09:01.506783   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:09:01.506836   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:09:01.540123   62061 cri.go:89] found id: ""
	I0918 21:09:01.540152   62061 logs.go:276] 0 containers: []
	W0918 21:09:01.540162   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:09:01.540169   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:09:01.540236   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:09:01.575002   62061 cri.go:89] found id: ""
	I0918 21:09:01.575037   62061 logs.go:276] 0 containers: []
	W0918 21:09:01.575048   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:09:01.575071   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:09:01.575159   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:09:01.612285   62061 cri.go:89] found id: ""
	I0918 21:09:01.612316   62061 logs.go:276] 0 containers: []
	W0918 21:09:01.612327   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:09:01.612335   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:09:01.612399   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:09:01.648276   62061 cri.go:89] found id: ""
	I0918 21:09:01.648304   62061 logs.go:276] 0 containers: []
	W0918 21:09:01.648318   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:09:01.648325   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:09:01.648397   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:09:01.686192   62061 cri.go:89] found id: ""
	I0918 21:09:01.686220   62061 logs.go:276] 0 containers: []
	W0918 21:09:01.686228   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:09:01.686236   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:09:01.686247   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:09:01.738366   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:09:01.738408   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:09:01.752650   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:09:01.752683   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:09:01.825010   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:09:01.825114   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:09:01.825156   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:09:01.907401   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:09:01.907448   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:09:04.448316   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:09:04.461242   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:09:04.461304   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:09:03.357089   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:05.856126   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:03.387563   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:05.389665   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:07.886523   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:04.495951   62061 cri.go:89] found id: ""
	I0918 21:09:04.495984   62061 logs.go:276] 0 containers: []
	W0918 21:09:04.495997   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:09:04.496006   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:09:04.496090   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:09:04.536874   62061 cri.go:89] found id: ""
	I0918 21:09:04.536906   62061 logs.go:276] 0 containers: []
	W0918 21:09:04.536917   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:09:04.536935   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:09:04.537010   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:09:04.572601   62061 cri.go:89] found id: ""
	I0918 21:09:04.572634   62061 logs.go:276] 0 containers: []
	W0918 21:09:04.572646   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:09:04.572653   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:09:04.572716   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:09:04.609783   62061 cri.go:89] found id: ""
	I0918 21:09:04.609817   62061 logs.go:276] 0 containers: []
	W0918 21:09:04.609826   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:09:04.609832   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:09:04.609891   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:09:04.645124   62061 cri.go:89] found id: ""
	I0918 21:09:04.645156   62061 logs.go:276] 0 containers: []
	W0918 21:09:04.645167   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:09:04.645175   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:09:04.645241   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:09:04.680927   62061 cri.go:89] found id: ""
	I0918 21:09:04.680959   62061 logs.go:276] 0 containers: []
	W0918 21:09:04.680971   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:09:04.680978   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:09:04.681038   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:09:04.718920   62061 cri.go:89] found id: ""
	I0918 21:09:04.718954   62061 logs.go:276] 0 containers: []
	W0918 21:09:04.718972   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:09:04.718979   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:09:04.719039   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:09:04.751450   62061 cri.go:89] found id: ""
	I0918 21:09:04.751488   62061 logs.go:276] 0 containers: []
	W0918 21:09:04.751500   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:09:04.751511   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:09:04.751529   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:09:04.788969   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:09:04.789001   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:09:04.837638   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:09:04.837673   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:09:04.853673   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:09:04.853706   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:09:04.923670   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:09:04.923703   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:09:04.923717   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:09:07.507979   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:09:07.521581   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:09:07.521656   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:09:07.556280   62061 cri.go:89] found id: ""
	I0918 21:09:07.556310   62061 logs.go:276] 0 containers: []
	W0918 21:09:07.556321   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:09:07.556329   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:09:07.556397   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:09:07.590775   62061 cri.go:89] found id: ""
	I0918 21:09:07.590802   62061 logs.go:276] 0 containers: []
	W0918 21:09:07.590810   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:09:07.590815   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:09:07.590862   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:09:07.625971   62061 cri.go:89] found id: ""
	I0918 21:09:07.626000   62061 logs.go:276] 0 containers: []
	W0918 21:09:07.626010   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:09:07.626018   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:09:07.626080   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:09:07.660083   62061 cri.go:89] found id: ""
	I0918 21:09:07.660116   62061 logs.go:276] 0 containers: []
	W0918 21:09:07.660128   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:09:07.660136   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:09:07.660201   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:09:07.694165   62061 cri.go:89] found id: ""
	I0918 21:09:07.694195   62061 logs.go:276] 0 containers: []
	W0918 21:09:07.694204   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:09:07.694211   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:09:07.694269   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:09:07.728298   62061 cri.go:89] found id: ""
	I0918 21:09:07.728328   62061 logs.go:276] 0 containers: []
	W0918 21:09:07.728338   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:09:07.728349   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:09:07.728409   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:09:07.762507   62061 cri.go:89] found id: ""
	I0918 21:09:07.762546   62061 logs.go:276] 0 containers: []
	W0918 21:09:07.762555   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:09:07.762565   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:09:07.762745   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:09:07.798006   62061 cri.go:89] found id: ""
	I0918 21:09:07.798038   62061 logs.go:276] 0 containers: []
	W0918 21:09:07.798049   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:09:07.798059   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:09:07.798074   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:09:07.849222   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:09:07.849267   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:09:07.865023   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:09:07.865055   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:09:07.938810   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:09:07.938830   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:09:07.938842   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:09:08.018885   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:09:08.018924   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:09:07.856987   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:10.356244   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:09.886563   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:12.386922   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:10.562565   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:09:10.575854   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:09:10.575941   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:09:10.612853   62061 cri.go:89] found id: ""
	I0918 21:09:10.612884   62061 logs.go:276] 0 containers: []
	W0918 21:09:10.612896   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:09:10.612906   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:09:10.612966   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:09:10.648672   62061 cri.go:89] found id: ""
	I0918 21:09:10.648703   62061 logs.go:276] 0 containers: []
	W0918 21:09:10.648713   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:09:10.648720   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:09:10.648780   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:09:10.682337   62061 cri.go:89] found id: ""
	I0918 21:09:10.682370   62061 logs.go:276] 0 containers: []
	W0918 21:09:10.682388   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:09:10.682394   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:09:10.682445   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:09:10.718221   62061 cri.go:89] found id: ""
	I0918 21:09:10.718257   62061 logs.go:276] 0 containers: []
	W0918 21:09:10.718269   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:09:10.718277   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:09:10.718345   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:09:10.751570   62061 cri.go:89] found id: ""
	I0918 21:09:10.751600   62061 logs.go:276] 0 containers: []
	W0918 21:09:10.751609   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:09:10.751615   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:09:10.751706   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:09:10.792125   62061 cri.go:89] found id: ""
	I0918 21:09:10.792158   62061 logs.go:276] 0 containers: []
	W0918 21:09:10.792170   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:09:10.792178   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:09:10.792245   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:09:10.830699   62061 cri.go:89] found id: ""
	I0918 21:09:10.830733   62061 logs.go:276] 0 containers: []
	W0918 21:09:10.830742   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:09:10.830748   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:09:10.830820   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:09:10.869625   62061 cri.go:89] found id: ""
	I0918 21:09:10.869655   62061 logs.go:276] 0 containers: []
	W0918 21:09:10.869663   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:09:10.869672   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:09:10.869684   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:09:10.921340   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:09:10.921378   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:09:10.937032   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:09:10.937071   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:09:11.006248   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:09:11.006276   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:09:11.006291   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:09:11.086458   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:09:11.086496   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:09:13.628824   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:09:13.642499   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:09:13.642578   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:09:13.677337   62061 cri.go:89] found id: ""
	I0918 21:09:13.677368   62061 logs.go:276] 0 containers: []
	W0918 21:09:13.677378   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:09:13.677385   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:09:13.677449   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:09:13.717317   62061 cri.go:89] found id: ""
	I0918 21:09:13.717341   62061 logs.go:276] 0 containers: []
	W0918 21:09:13.717353   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:09:13.717358   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:09:13.717419   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:09:13.752151   62061 cri.go:89] found id: ""
	I0918 21:09:13.752181   62061 logs.go:276] 0 containers: []
	W0918 21:09:13.752189   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:09:13.752195   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:09:13.752253   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:09:13.791946   62061 cri.go:89] found id: ""
	I0918 21:09:13.791975   62061 logs.go:276] 0 containers: []
	W0918 21:09:13.791983   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:09:13.791989   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:09:13.792069   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:09:13.825173   62061 cri.go:89] found id: ""
	I0918 21:09:13.825198   62061 logs.go:276] 0 containers: []
	W0918 21:09:13.825209   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:09:13.825216   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:09:13.825276   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:09:13.859801   62061 cri.go:89] found id: ""
	I0918 21:09:13.859834   62061 logs.go:276] 0 containers: []
	W0918 21:09:13.859846   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:09:13.859853   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:09:13.859907   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:09:13.895413   62061 cri.go:89] found id: ""
	I0918 21:09:13.895445   62061 logs.go:276] 0 containers: []
	W0918 21:09:13.895456   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:09:13.895463   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:09:13.895515   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:09:13.929048   62061 cri.go:89] found id: ""
	I0918 21:09:13.929075   62061 logs.go:276] 0 containers: []
	W0918 21:09:13.929083   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:09:13.929092   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:09:13.929104   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:09:13.981579   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:09:13.981613   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:09:13.995642   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:09:13.995679   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:09:14.061762   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:09:14.061782   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:09:14.061793   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:09:14.139623   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:09:14.139659   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:09:16.001617   61659 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.374302262s)
	I0918 21:09:16.001692   61659 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0918 21:09:16.019307   61659 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0918 21:09:16.029547   61659 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0918 21:09:16.039132   61659 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0918 21:09:16.039154   61659 kubeadm.go:157] found existing configuration files:
	
	I0918 21:09:16.039196   61659 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0918 21:09:16.048506   61659 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0918 21:09:16.048567   61659 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0918 21:09:16.058120   61659 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0918 21:09:16.067686   61659 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0918 21:09:16.067746   61659 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0918 21:09:16.077707   61659 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0918 21:09:16.087089   61659 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0918 21:09:16.087149   61659 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0918 21:09:16.097040   61659 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0918 21:09:16.106448   61659 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0918 21:09:16.106514   61659 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0918 21:09:16.116060   61659 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0918 21:09:16.159721   61659 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0918 21:09:16.159797   61659 kubeadm.go:310] [preflight] Running pre-flight checks
	I0918 21:09:16.266821   61659 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0918 21:09:16.266968   61659 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0918 21:09:16.267122   61659 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0918 21:09:16.275249   61659 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0918 21:09:12.855996   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:14.857296   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:16.277228   61659 out.go:235]   - Generating certificates and keys ...
	I0918 21:09:16.277333   61659 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0918 21:09:16.277419   61659 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0918 21:09:16.277534   61659 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0918 21:09:16.277617   61659 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0918 21:09:16.277709   61659 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0918 21:09:16.277790   61659 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0918 21:09:16.277904   61659 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0918 21:09:16.278013   61659 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0918 21:09:16.278131   61659 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0918 21:09:16.278265   61659 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0918 21:09:16.278331   61659 kubeadm.go:310] [certs] Using the existing "sa" key
	I0918 21:09:16.278401   61659 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0918 21:09:16.516263   61659 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0918 21:09:16.708220   61659 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0918 21:09:17.009820   61659 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0918 21:09:17.108871   61659 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0918 21:09:17.211014   61659 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0918 21:09:17.211658   61659 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0918 21:09:17.216626   61659 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0918 21:09:14.887068   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:16.888350   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:16.686071   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:09:16.699769   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:09:16.699844   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:09:16.735242   62061 cri.go:89] found id: ""
	I0918 21:09:16.735277   62061 logs.go:276] 0 containers: []
	W0918 21:09:16.735288   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:09:16.735298   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:09:16.735371   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:09:16.770027   62061 cri.go:89] found id: ""
	I0918 21:09:16.770052   62061 logs.go:276] 0 containers: []
	W0918 21:09:16.770060   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:09:16.770066   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:09:16.770114   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:09:16.806525   62061 cri.go:89] found id: ""
	I0918 21:09:16.806555   62061 logs.go:276] 0 containers: []
	W0918 21:09:16.806563   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:09:16.806569   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:09:16.806636   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:09:16.851146   62061 cri.go:89] found id: ""
	I0918 21:09:16.851183   62061 logs.go:276] 0 containers: []
	W0918 21:09:16.851194   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:09:16.851204   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:09:16.851271   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:09:16.890718   62061 cri.go:89] found id: ""
	I0918 21:09:16.890748   62061 logs.go:276] 0 containers: []
	W0918 21:09:16.890760   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:09:16.890767   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:09:16.890824   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:09:16.928971   62061 cri.go:89] found id: ""
	I0918 21:09:16.929002   62061 logs.go:276] 0 containers: []
	W0918 21:09:16.929012   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:09:16.929020   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:09:16.929079   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:09:16.970965   62061 cri.go:89] found id: ""
	I0918 21:09:16.970999   62061 logs.go:276] 0 containers: []
	W0918 21:09:16.971011   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:09:16.971019   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:09:16.971089   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:09:17.006427   62061 cri.go:89] found id: ""
	I0918 21:09:17.006453   62061 logs.go:276] 0 containers: []
	W0918 21:09:17.006461   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:09:17.006468   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:09:17.006480   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:09:17.058690   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:09:17.058733   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:09:17.072593   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:09:17.072623   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:09:17.143046   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:09:17.143071   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:09:17.143082   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:09:17.236943   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:09:17.236989   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:09:17.357978   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:19.858268   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:17.218406   61659 out.go:235]   - Booting up control plane ...
	I0918 21:09:17.218544   61659 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0918 21:09:17.218662   61659 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0918 21:09:17.218765   61659 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0918 21:09:17.238076   61659 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0918 21:09:17.248123   61659 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0918 21:09:17.248226   61659 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0918 21:09:17.379685   61659 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0918 21:09:17.379840   61659 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0918 21:09:18.380791   61659 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001279947s
	I0918 21:09:18.380906   61659 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0918 21:09:18.380783   61740 pod_ready.go:82] duration metric: took 4m0.000205104s for pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace to be "Ready" ...
	E0918 21:09:18.380812   61740 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace to be "Ready" (will not retry!)
	I0918 21:09:18.380832   61740 pod_ready.go:39] duration metric: took 4m15.618837854s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0918 21:09:18.380875   61740 kubeadm.go:597] duration metric: took 4m23.646410044s to restartPrimaryControlPlane
	W0918 21:09:18.380936   61740 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0918 21:09:18.380966   61740 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0918 21:09:23.386705   61659 kubeadm.go:310] [api-check] The API server is healthy after 5.005706581s
	I0918 21:09:23.402316   61659 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0918 21:09:23.422786   61659 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0918 21:09:23.462099   61659 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0918 21:09:23.462373   61659 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-828868 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0918 21:09:23.484276   61659 kubeadm.go:310] [bootstrap-token] Using token: 2vcil8.e13zhc1806da8knq
	I0918 21:09:19.782266   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:09:19.799433   62061 kubeadm.go:597] duration metric: took 4m2.126311085s to restartPrimaryControlPlane
	W0918 21:09:19.799513   62061 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0918 21:09:19.799543   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0918 21:09:20.910192   62061 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.110625463s)
	I0918 21:09:20.910273   62061 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0918 21:09:20.925992   62061 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0918 21:09:20.936876   62061 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0918 21:09:20.947170   62061 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0918 21:09:20.947199   62061 kubeadm.go:157] found existing configuration files:
	
	I0918 21:09:20.947255   62061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0918 21:09:20.958140   62061 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0918 21:09:20.958240   62061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0918 21:09:20.968351   62061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0918 21:09:20.978669   62061 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0918 21:09:20.978735   62061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0918 21:09:20.989765   62061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0918 21:09:20.999842   62061 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0918 21:09:20.999903   62061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0918 21:09:21.009945   62061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0918 21:09:21.020229   62061 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0918 21:09:21.020289   62061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0918 21:09:21.030583   62061 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0918 21:09:21.271399   62061 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0918 21:09:23.485978   61659 out.go:235]   - Configuring RBAC rules ...
	I0918 21:09:23.486112   61659 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0918 21:09:23.499163   61659 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0918 21:09:23.510754   61659 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0918 21:09:23.514794   61659 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0918 21:09:23.519247   61659 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0918 21:09:23.530424   61659 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0918 21:09:23.799778   61659 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0918 21:09:24.223469   61659 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0918 21:09:24.794852   61659 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0918 21:09:24.794886   61659 kubeadm.go:310] 
	I0918 21:09:24.794951   61659 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0918 21:09:24.794963   61659 kubeadm.go:310] 
	I0918 21:09:24.795058   61659 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0918 21:09:24.795073   61659 kubeadm.go:310] 
	I0918 21:09:24.795105   61659 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0918 21:09:24.795192   61659 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0918 21:09:24.795255   61659 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0918 21:09:24.795285   61659 kubeadm.go:310] 
	I0918 21:09:24.795366   61659 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0918 21:09:24.795376   61659 kubeadm.go:310] 
	I0918 21:09:24.795416   61659 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0918 21:09:24.795425   61659 kubeadm.go:310] 
	I0918 21:09:24.795497   61659 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0918 21:09:24.795580   61659 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0918 21:09:24.795678   61659 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0918 21:09:24.795692   61659 kubeadm.go:310] 
	I0918 21:09:24.795779   61659 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0918 21:09:24.795891   61659 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0918 21:09:24.795901   61659 kubeadm.go:310] 
	I0918 21:09:24.796174   61659 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token 2vcil8.e13zhc1806da8knq \
	I0918 21:09:24.796299   61659 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:8ad46bf288e8cc2226f5a929a768e6c95cddecc97ffc4016be27ef0987b1e36f \
	I0918 21:09:24.796350   61659 kubeadm.go:310] 	--control-plane 
	I0918 21:09:24.796367   61659 kubeadm.go:310] 
	I0918 21:09:24.796479   61659 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0918 21:09:24.796487   61659 kubeadm.go:310] 
	I0918 21:09:24.796594   61659 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token 2vcil8.e13zhc1806da8knq \
	I0918 21:09:24.796738   61659 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:8ad46bf288e8cc2226f5a929a768e6c95cddecc97ffc4016be27ef0987b1e36f 
	I0918 21:09:24.797359   61659 kubeadm.go:310] W0918 21:09:16.134048    2561 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0918 21:09:24.797679   61659 kubeadm.go:310] W0918 21:09:16.134873    2561 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0918 21:09:24.797832   61659 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0918 21:09:24.797858   61659 cni.go:84] Creating CNI manager for ""
	I0918 21:09:24.797872   61659 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0918 21:09:24.799953   61659 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0918 21:09:22.357582   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:24.857037   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:24.801259   61659 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0918 21:09:24.812277   61659 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0918 21:09:24.834749   61659 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0918 21:09:24.834855   61659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 21:09:24.834871   61659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-828868 minikube.k8s.io/updated_at=2024_09_18T21_09_24_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=85073601a832bd4bbda5d11fa91feafff6ec6b91 minikube.k8s.io/name=default-k8s-diff-port-828868 minikube.k8s.io/primary=true
	I0918 21:09:25.022861   61659 ops.go:34] apiserver oom_adj: -16
	I0918 21:09:25.022930   61659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 21:09:25.523400   61659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 21:09:26.023075   61659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 21:09:26.523330   61659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 21:09:27.023179   61659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 21:09:27.523363   61659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 21:09:28.023150   61659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 21:09:28.523941   61659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 21:09:29.023542   61659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 21:09:29.143581   61659 kubeadm.go:1113] duration metric: took 4.308796493s to wait for elevateKubeSystemPrivileges
	I0918 21:09:29.143614   61659 kubeadm.go:394] duration metric: took 4m55.024616229s to StartCluster
	I0918 21:09:29.143632   61659 settings.go:142] acquiring lock: {Name:mk6ae95a69dcbe00c28846409e9f46945a88de2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 21:09:29.143727   61659 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19667-7671/kubeconfig
	I0918 21:09:29.145397   61659 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/kubeconfig: {Name:mk3a3eecadb0164dfdd5d3c4392b5b473a2a9bb0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 21:09:29.145680   61659 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.109 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0918 21:09:29.145767   61659 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0918 21:09:29.145851   61659 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-828868"
	I0918 21:09:29.145869   61659 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-828868"
	I0918 21:09:29.145877   61659 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-828868"
	W0918 21:09:29.145885   61659 addons.go:243] addon storage-provisioner should already be in state true
	I0918 21:09:29.145896   61659 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-828868"
	I0918 21:09:29.145898   61659 config.go:182] Loaded profile config "default-k8s-diff-port-828868": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0918 21:09:29.145900   61659 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-828868"
	I0918 21:09:29.145920   61659 host.go:66] Checking if "default-k8s-diff-port-828868" exists ...
	I0918 21:09:29.145932   61659 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-828868"
	W0918 21:09:29.145946   61659 addons.go:243] addon metrics-server should already be in state true
	I0918 21:09:29.145980   61659 host.go:66] Checking if "default-k8s-diff-port-828868" exists ...
	I0918 21:09:29.146234   61659 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:09:29.146238   61659 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:09:29.146282   61659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:09:29.146297   61659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:09:29.146372   61659 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:09:29.146389   61659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:09:29.147645   61659 out.go:177] * Verifying Kubernetes components...
	I0918 21:09:29.149574   61659 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 21:09:29.164779   61659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43359
	I0918 21:09:29.165002   61659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37857
	I0918 21:09:29.165390   61659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44573
	I0918 21:09:29.165682   61659 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:09:29.165749   61659 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:09:29.166233   61659 main.go:141] libmachine: Using API Version  1
	I0918 21:09:29.166254   61659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:09:29.166270   61659 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:09:29.166388   61659 main.go:141] libmachine: Using API Version  1
	I0918 21:09:29.166414   61659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:09:29.166544   61659 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:09:29.166711   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetState
	I0918 21:09:29.166730   61659 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:09:29.166894   61659 main.go:141] libmachine: Using API Version  1
	I0918 21:09:29.166918   61659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:09:29.167381   61659 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:09:29.167425   61659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:09:29.168144   61659 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:09:29.168578   61659 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:09:29.168614   61659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:09:29.171072   61659 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-828868"
	W0918 21:09:29.171101   61659 addons.go:243] addon default-storageclass should already be in state true
	I0918 21:09:29.171133   61659 host.go:66] Checking if "default-k8s-diff-port-828868" exists ...
	I0918 21:09:29.171534   61659 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:09:29.171597   61659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:09:29.186305   61659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42211
	I0918 21:09:29.186318   61659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45153
	I0918 21:09:29.186838   61659 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:09:29.186847   61659 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:09:29.187353   61659 main.go:141] libmachine: Using API Version  1
	I0918 21:09:29.187367   61659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:09:29.187373   61659 main.go:141] libmachine: Using API Version  1
	I0918 21:09:29.187403   61659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:09:29.187840   61659 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:09:29.187855   61659 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:09:29.188085   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetState
	I0918 21:09:29.188106   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetState
	I0918 21:09:29.193453   61659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33471
	I0918 21:09:29.193905   61659 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:09:29.194477   61659 main.go:141] libmachine: Using API Version  1
	I0918 21:09:29.194513   61659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:09:29.194981   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .DriverName
	I0918 21:09:29.195155   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .DriverName
	I0918 21:09:29.195254   61659 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:09:29.195807   61659 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:09:29.195839   61659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:09:29.197102   61659 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0918 21:09:29.197111   61659 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 21:09:29.198425   61659 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0918 21:09:29.198458   61659 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0918 21:09:29.198486   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHHostname
	I0918 21:09:29.198589   61659 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0918 21:09:29.198605   61659 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0918 21:09:29.198622   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHHostname
	I0918 21:09:29.202110   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:09:29.202236   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:09:29.202634   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:39:06", ip: ""} in network mk-default-k8s-diff-port-828868: {Iface:virbr4 ExpiryTime:2024-09-18 22:04:19 +0000 UTC Type:0 Mac:52:54:00:c0:39:06 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:default-k8s-diff-port-828868 Clientid:01:52:54:00:c0:39:06}
	I0918 21:09:29.202656   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:39:06", ip: ""} in network mk-default-k8s-diff-port-828868: {Iface:virbr4 ExpiryTime:2024-09-18 22:04:19 +0000 UTC Type:0 Mac:52:54:00:c0:39:06 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:default-k8s-diff-port-828868 Clientid:01:52:54:00:c0:39:06}
	I0918 21:09:29.202661   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined IP address 192.168.50.109 and MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:09:29.202677   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined IP address 192.168.50.109 and MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:09:29.202895   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHPort
	I0918 21:09:29.202942   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHPort
	I0918 21:09:29.203084   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHKeyPath
	I0918 21:09:29.203129   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHKeyPath
	I0918 21:09:29.203268   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHUsername
	I0918 21:09:29.203275   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHUsername
	I0918 21:09:29.203393   61659 sshutil.go:53] new ssh client: &{IP:192.168.50.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/default-k8s-diff-port-828868/id_rsa Username:docker}
	I0918 21:09:29.203407   61659 sshutil.go:53] new ssh client: &{IP:192.168.50.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/default-k8s-diff-port-828868/id_rsa Username:docker}
	I0918 21:09:29.215178   61659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41565
	I0918 21:09:29.215727   61659 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:09:29.216301   61659 main.go:141] libmachine: Using API Version  1
	I0918 21:09:29.216325   61659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:09:29.216669   61659 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:09:29.216873   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetState
	I0918 21:09:29.218689   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .DriverName
	I0918 21:09:29.218980   61659 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0918 21:09:29.218994   61659 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0918 21:09:29.219009   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHHostname
	I0918 21:09:29.222542   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:09:29.222963   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:39:06", ip: ""} in network mk-default-k8s-diff-port-828868: {Iface:virbr4 ExpiryTime:2024-09-18 22:04:19 +0000 UTC Type:0 Mac:52:54:00:c0:39:06 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:default-k8s-diff-port-828868 Clientid:01:52:54:00:c0:39:06}
	I0918 21:09:29.222985   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined IP address 192.168.50.109 and MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:09:29.223398   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHPort
	I0918 21:09:29.223632   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHKeyPath
	I0918 21:09:29.223820   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHUsername
	I0918 21:09:29.224004   61659 sshutil.go:53] new ssh client: &{IP:192.168.50.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/default-k8s-diff-port-828868/id_rsa Username:docker}
	I0918 21:09:29.360595   61659 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0918 21:09:29.381254   61659 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-828868" to be "Ready" ...
	I0918 21:09:29.390526   61659 node_ready.go:49] node "default-k8s-diff-port-828868" has status "Ready":"True"
	I0918 21:09:29.390554   61659 node_ready.go:38] duration metric: took 9.264338ms for node "default-k8s-diff-port-828868" to be "Ready" ...
	I0918 21:09:29.390565   61659 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0918 21:09:29.395433   61659 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-828868" in "kube-system" namespace to be "Ready" ...
	I0918 21:09:29.468492   61659 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0918 21:09:29.526515   61659 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0918 21:09:29.527137   61659 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0918 21:09:29.527162   61659 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0918 21:09:29.570619   61659 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0918 21:09:29.570651   61659 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0918 21:09:29.631944   61659 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0918 21:09:29.631975   61659 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0918 21:09:29.653905   61659 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0918 21:09:30.402107   61659 main.go:141] libmachine: Making call to close driver server
	I0918 21:09:30.402145   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .Close
	I0918 21:09:30.402142   61659 main.go:141] libmachine: Making call to close driver server
	I0918 21:09:30.402167   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .Close
	I0918 21:09:30.402466   61659 main.go:141] libmachine: Successfully made call to close driver server
	I0918 21:09:30.402480   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | Closing plugin on server side
	I0918 21:09:30.402493   61659 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 21:09:30.402503   61659 main.go:141] libmachine: Making call to close driver server
	I0918 21:09:30.402512   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .Close
	I0918 21:09:30.402537   61659 main.go:141] libmachine: Successfully made call to close driver server
	I0918 21:09:30.402546   61659 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 21:09:30.402555   61659 main.go:141] libmachine: Making call to close driver server
	I0918 21:09:30.402571   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .Close
	I0918 21:09:30.402733   61659 main.go:141] libmachine: Successfully made call to close driver server
	I0918 21:09:30.402773   61659 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 21:09:30.402921   61659 main.go:141] libmachine: Successfully made call to close driver server
	I0918 21:09:30.402941   61659 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 21:09:30.435323   61659 main.go:141] libmachine: Making call to close driver server
	I0918 21:09:30.435366   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .Close
	I0918 21:09:30.435659   61659 main.go:141] libmachine: Successfully made call to close driver server
	I0918 21:09:30.435683   61659 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 21:09:30.975630   61659 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.321677798s)
	I0918 21:09:30.975716   61659 main.go:141] libmachine: Making call to close driver server
	I0918 21:09:30.975733   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .Close
	I0918 21:09:30.976074   61659 main.go:141] libmachine: Successfully made call to close driver server
	I0918 21:09:30.976094   61659 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 21:09:30.976105   61659 main.go:141] libmachine: Making call to close driver server
	I0918 21:09:30.976113   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .Close
	I0918 21:09:30.976369   61659 main.go:141] libmachine: Successfully made call to close driver server
	I0918 21:09:30.976395   61659 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 21:09:30.976406   61659 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-828868"
	I0918 21:09:30.978345   61659 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0918 21:09:26.857486   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:29.356533   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:31.358269   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:30.979731   61659 addons.go:510] duration metric: took 1.833970994s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0918 21:09:31.403620   61659 pod_ready.go:103] pod "etcd-default-k8s-diff-port-828868" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:33.857960   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:36.357454   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:33.902436   61659 pod_ready.go:103] pod "etcd-default-k8s-diff-port-828868" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:36.401889   61659 pod_ready.go:103] pod "etcd-default-k8s-diff-port-828868" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:36.902002   61659 pod_ready.go:93] pod "etcd-default-k8s-diff-port-828868" in "kube-system" namespace has status "Ready":"True"
	I0918 21:09:36.902026   61659 pod_ready.go:82] duration metric: took 7.506563242s for pod "etcd-default-k8s-diff-port-828868" in "kube-system" namespace to be "Ready" ...
	I0918 21:09:36.902035   61659 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-828868" in "kube-system" namespace to be "Ready" ...
	I0918 21:09:36.907689   61659 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-828868" in "kube-system" namespace has status "Ready":"True"
	I0918 21:09:36.907713   61659 pod_ready.go:82] duration metric: took 5.672631ms for pod "kube-apiserver-default-k8s-diff-port-828868" in "kube-system" namespace to be "Ready" ...
	I0918 21:09:36.907722   61659 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-828868" in "kube-system" namespace to be "Ready" ...
	I0918 21:09:38.914521   61659 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-828868" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:39.414168   61659 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-828868" in "kube-system" namespace has status "Ready":"True"
	I0918 21:09:39.414196   61659 pod_ready.go:82] duration metric: took 2.506467297s for pod "kube-controller-manager-default-k8s-diff-port-828868" in "kube-system" namespace to be "Ready" ...
	I0918 21:09:39.414207   61659 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-hf5mm" in "kube-system" namespace to be "Ready" ...
	I0918 21:09:39.419030   61659 pod_ready.go:93] pod "kube-proxy-hf5mm" in "kube-system" namespace has status "Ready":"True"
	I0918 21:09:39.419053   61659 pod_ready.go:82] duration metric: took 4.838856ms for pod "kube-proxy-hf5mm" in "kube-system" namespace to be "Ready" ...
	I0918 21:09:39.419061   61659 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-828868" in "kube-system" namespace to be "Ready" ...
	I0918 21:09:39.423321   61659 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-828868" in "kube-system" namespace has status "Ready":"True"
	I0918 21:09:39.423341   61659 pod_ready.go:82] duration metric: took 4.274601ms for pod "kube-scheduler-default-k8s-diff-port-828868" in "kube-system" namespace to be "Ready" ...
	I0918 21:09:39.423348   61659 pod_ready.go:39] duration metric: took 10.03277208s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0918 21:09:39.423360   61659 api_server.go:52] waiting for apiserver process to appear ...
	I0918 21:09:39.423407   61659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:09:39.438272   61659 api_server.go:72] duration metric: took 10.292559807s to wait for apiserver process to appear ...
	I0918 21:09:39.438297   61659 api_server.go:88] waiting for apiserver healthz status ...
	I0918 21:09:39.438315   61659 api_server.go:253] Checking apiserver healthz at https://192.168.50.109:8444/healthz ...
	I0918 21:09:39.443342   61659 api_server.go:279] https://192.168.50.109:8444/healthz returned 200:
	ok
	I0918 21:09:39.444238   61659 api_server.go:141] control plane version: v1.31.1
	I0918 21:09:39.444262   61659 api_server.go:131] duration metric: took 5.958748ms to wait for apiserver health ...
	I0918 21:09:39.444270   61659 system_pods.go:43] waiting for kube-system pods to appear ...
	I0918 21:09:39.449914   61659 system_pods.go:59] 9 kube-system pods found
	I0918 21:09:39.449938   61659 system_pods.go:61] "coredns-7c65d6cfc9-8gz5v" [bfcbe99d-9a3d-4da8-a976-58cb9d9bf7ac] Running
	I0918 21:09:39.449942   61659 system_pods.go:61] "coredns-7c65d6cfc9-shx5p" [2d6d25ab-9a90-490a-911b-bf396605fa88] Running
	I0918 21:09:39.449947   61659 system_pods.go:61] "etcd-default-k8s-diff-port-828868" [d9772e35-df33-4b90-af88-7479a81776a2] Running
	I0918 21:09:39.449950   61659 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-828868" [9ca05dc8-1e15-4f6e-9855-1e1b6376fbfc] Running
	I0918 21:09:39.449954   61659 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-828868" [b4196fd3-4882-4e31-b418-2f5f24d2610d] Running
	I0918 21:09:39.449957   61659 system_pods.go:61] "kube-proxy-hf5mm" [3fb0a166-a925-4486-9695-6db05ae704b8] Running
	I0918 21:09:39.449962   61659 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-828868" [161b8693-3400-43a4-b361-cce5749dd2eb] Running
	I0918 21:09:39.449969   61659 system_pods.go:61] "metrics-server-6867b74b74-hdt52" [bf3aa7d0-a121-4a2c-92dd-2c79c6cbaa4a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0918 21:09:39.449976   61659 system_pods.go:61] "storage-provisioner" [8b4e1077-e23f-4262-b83e-989506798531] Running
	I0918 21:09:39.449983   61659 system_pods.go:74] duration metric: took 5.708013ms to wait for pod list to return data ...
	I0918 21:09:39.449992   61659 default_sa.go:34] waiting for default service account to be created ...
	I0918 21:09:39.453256   61659 default_sa.go:45] found service account: "default"
	I0918 21:09:39.453278   61659 default_sa.go:55] duration metric: took 3.281012ms for default service account to be created ...
	I0918 21:09:39.453287   61659 system_pods.go:116] waiting for k8s-apps to be running ...
	I0918 21:09:39.502200   61659 system_pods.go:86] 9 kube-system pods found
	I0918 21:09:39.502231   61659 system_pods.go:89] "coredns-7c65d6cfc9-8gz5v" [bfcbe99d-9a3d-4da8-a976-58cb9d9bf7ac] Running
	I0918 21:09:39.502237   61659 system_pods.go:89] "coredns-7c65d6cfc9-shx5p" [2d6d25ab-9a90-490a-911b-bf396605fa88] Running
	I0918 21:09:39.502241   61659 system_pods.go:89] "etcd-default-k8s-diff-port-828868" [d9772e35-df33-4b90-af88-7479a81776a2] Running
	I0918 21:09:39.502246   61659 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-828868" [9ca05dc8-1e15-4f6e-9855-1e1b6376fbfc] Running
	I0918 21:09:39.502250   61659 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-828868" [b4196fd3-4882-4e31-b418-2f5f24d2610d] Running
	I0918 21:09:39.502253   61659 system_pods.go:89] "kube-proxy-hf5mm" [3fb0a166-a925-4486-9695-6db05ae704b8] Running
	I0918 21:09:39.502256   61659 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-828868" [161b8693-3400-43a4-b361-cce5749dd2eb] Running
	I0918 21:09:39.502262   61659 system_pods.go:89] "metrics-server-6867b74b74-hdt52" [bf3aa7d0-a121-4a2c-92dd-2c79c6cbaa4a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0918 21:09:39.502266   61659 system_pods.go:89] "storage-provisioner" [8b4e1077-e23f-4262-b83e-989506798531] Running
	I0918 21:09:39.502276   61659 system_pods.go:126] duration metric: took 48.981872ms to wait for k8s-apps to be running ...
	I0918 21:09:39.502291   61659 system_svc.go:44] waiting for kubelet service to be running ....
	I0918 21:09:39.502367   61659 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0918 21:09:39.517514   61659 system_svc.go:56] duration metric: took 15.213443ms WaitForService to wait for kubelet
	I0918 21:09:39.517549   61659 kubeadm.go:582] duration metric: took 10.37183977s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0918 21:09:39.517573   61659 node_conditions.go:102] verifying NodePressure condition ...
	I0918 21:09:39.700593   61659 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0918 21:09:39.700616   61659 node_conditions.go:123] node cpu capacity is 2
	I0918 21:09:39.700626   61659 node_conditions.go:105] duration metric: took 183.048537ms to run NodePressure ...
	I0918 21:09:39.700637   61659 start.go:241] waiting for startup goroutines ...
	I0918 21:09:39.700643   61659 start.go:246] waiting for cluster config update ...
	I0918 21:09:39.700653   61659 start.go:255] writing updated cluster config ...
	I0918 21:09:39.700899   61659 ssh_runner.go:195] Run: rm -f paused
	I0918 21:09:39.750890   61659 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0918 21:09:39.753015   61659 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-828868" cluster and "default" namespace by default
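
The start-up sequence above only declares the cluster usable after the apiserver's /healthz endpoint answers HTTP 200 with the body "ok" (api_server.go:253-279 in the log). A minimal sketch of that probe, assuming a self-signed apiserver certificate (hence the skipped TLS verification) and the endpoint URL taken from the log; this is an illustration of the check, not minikube's own implementation:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    // probeHealthz polls the apiserver /healthz endpoint until it answers
    // HTTP 200 with the body "ok", or the timeout expires.
    func probeHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		// The apiserver certificate is signed by the cluster's own CA,
    		// so verification is skipped in this sketch.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
    				return nil
    			}
    		}
    		time.Sleep(2 * time.Second)
    	}
    	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
    }

    func main() {
    	// Endpoint as logged above; adjust for another cluster.
    	if err := probeHealthz("https://192.168.50.109:8444/healthz", 4*time.Minute); err != nil {
    		panic(err)
    	}
    	fmt.Println("ok")
    }
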
	I0918 21:09:38.857481   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:41.356307   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:44.581125   61740 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.200138695s)
	I0918 21:09:44.581198   61740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0918 21:09:44.597051   61740 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0918 21:09:44.607195   61740 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0918 21:09:44.617135   61740 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0918 21:09:44.617160   61740 kubeadm.go:157] found existing configuration files:
	
	I0918 21:09:44.617203   61740 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0918 21:09:44.626216   61740 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0918 21:09:44.626278   61740 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0918 21:09:44.635161   61740 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0918 21:09:44.643767   61740 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0918 21:09:44.643828   61740 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0918 21:09:44.652663   61740 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0918 21:09:44.662045   61740 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0918 21:09:44.662107   61740 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0918 21:09:44.671165   61740 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0918 21:09:44.680397   61740 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0918 21:09:44.680469   61740 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
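
The four grep/rm pairs above are a stale-config sweep: each kubeconfig under /etc/kubernetes is kept only if it already references the expected control-plane endpoint, otherwise it is removed so that kubeadm regenerates it. A minimal sketch of the same sweep under the endpoint and file list shown in the log (an illustration of the pattern, not the exact kubeadm.go code):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	endpoint := "https://control-plane.minikube.internal:8443"
    	files := []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	}
    	for _, f := range files {
    		data, err := os.ReadFile(f)
    		if err != nil || !strings.Contains(string(data), endpoint) {
    			// Missing or pointing at a different endpoint: remove it so
    			// `kubeadm init` writes a fresh copy.
    			os.Remove(f)
    			fmt.Println("removed stale config:", f)
    		}
    	}
    }
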
	I0918 21:09:44.689168   61740 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0918 21:09:44.733425   61740 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0918 21:09:44.733528   61740 kubeadm.go:310] [preflight] Running pre-flight checks
	I0918 21:09:44.846369   61740 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0918 21:09:44.846477   61740 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0918 21:09:44.846612   61740 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0918 21:09:44.855581   61740 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0918 21:09:44.857599   61740 out.go:235]   - Generating certificates and keys ...
	I0918 21:09:44.857709   61740 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0918 21:09:44.857777   61740 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0918 21:09:44.857851   61740 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0918 21:09:44.857942   61740 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0918 21:09:44.858061   61740 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0918 21:09:44.858137   61740 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0918 21:09:44.858243   61740 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0918 21:09:44.858339   61740 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0918 21:09:44.858409   61740 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0918 21:09:44.858509   61740 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0918 21:09:44.858547   61740 kubeadm.go:310] [certs] Using the existing "sa" key
	I0918 21:09:44.858615   61740 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0918 21:09:45.048967   61740 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0918 21:09:45.229640   61740 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0918 21:09:45.397078   61740 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0918 21:09:45.722116   61740 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0918 21:09:45.850285   61740 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0918 21:09:45.850902   61740 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0918 21:09:45.853909   61740 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0918 21:09:43.357136   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:45.858056   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:45.855803   61740 out.go:235]   - Booting up control plane ...
	I0918 21:09:45.855931   61740 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0918 21:09:45.857227   61740 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0918 21:09:45.858855   61740 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0918 21:09:45.877299   61740 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0918 21:09:45.883953   61740 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0918 21:09:45.884043   61740 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0918 21:09:46.015368   61740 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0918 21:09:46.015509   61740 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0918 21:09:47.016371   61740 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001062473s
	I0918 21:09:47.016465   61740 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0918 21:09:48.357057   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:50.856124   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:51.518808   61740 kubeadm.go:310] [api-check] The API server is healthy after 4.502250914s
	I0918 21:09:51.532148   61740 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0918 21:09:51.549560   61740 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0918 21:09:51.579801   61740 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0918 21:09:51.580053   61740 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-255556 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0918 21:09:51.598605   61740 kubeadm.go:310] [bootstrap-token] Using token: iilbxo.n0c6mbjmeqehlfso
	I0918 21:09:51.600035   61740 out.go:235]   - Configuring RBAC rules ...
	I0918 21:09:51.600200   61740 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0918 21:09:51.614672   61740 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0918 21:09:51.626186   61740 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0918 21:09:51.629722   61740 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0918 21:09:51.634757   61740 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0918 21:09:51.642778   61740 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0918 21:09:51.931051   61740 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0918 21:09:52.359085   61740 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0918 21:09:52.930191   61740 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0918 21:09:52.931033   61740 kubeadm.go:310] 
	I0918 21:09:52.931100   61740 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0918 21:09:52.931108   61740 kubeadm.go:310] 
	I0918 21:09:52.931178   61740 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0918 21:09:52.931186   61740 kubeadm.go:310] 
	I0918 21:09:52.931208   61740 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0918 21:09:52.931313   61740 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0918 21:09:52.931400   61740 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0918 21:09:52.931435   61740 kubeadm.go:310] 
	I0918 21:09:52.931524   61740 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0918 21:09:52.931537   61740 kubeadm.go:310] 
	I0918 21:09:52.931601   61740 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0918 21:09:52.931627   61740 kubeadm.go:310] 
	I0918 21:09:52.931721   61740 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0918 21:09:52.931825   61740 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0918 21:09:52.931896   61740 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0918 21:09:52.931903   61740 kubeadm.go:310] 
	I0918 21:09:52.931974   61740 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0918 21:09:52.932073   61740 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0918 21:09:52.932081   61740 kubeadm.go:310] 
	I0918 21:09:52.932154   61740 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token iilbxo.n0c6mbjmeqehlfso \
	I0918 21:09:52.932243   61740 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:8ad46bf288e8cc2226f5a929a768e6c95cddecc97ffc4016be27ef0987b1e36f \
	I0918 21:09:52.932289   61740 kubeadm.go:310] 	--control-plane 
	I0918 21:09:52.932296   61740 kubeadm.go:310] 
	I0918 21:09:52.932365   61740 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0918 21:09:52.932372   61740 kubeadm.go:310] 
	I0918 21:09:52.932438   61740 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token iilbxo.n0c6mbjmeqehlfso \
	I0918 21:09:52.932568   61740 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:8ad46bf288e8cc2226f5a929a768e6c95cddecc97ffc4016be27ef0987b1e36f 
	I0918 21:09:52.934280   61740 kubeadm.go:310] W0918 21:09:44.705437    2512 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0918 21:09:52.934656   61740 kubeadm.go:310] W0918 21:09:44.706219    2512 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0918 21:09:52.934841   61740 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
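
The --discovery-token-ca-cert-hash printed in the join commands above is the SHA-256 digest of the cluster CA certificate's Subject Public Key Info. A short sketch of how that pin can be recomputed from the CA certificate, assuming the default /etc/kubernetes/pki/ca.crt path that kubeadm uses on the control-plane node:

    package main

    import (
    	"crypto/sha256"
    	"crypto/x509"
    	"encoding/hex"
    	"encoding/pem"
    	"fmt"
    	"log"
    	"os"
    )

    func main() {
    	// Default location of the cluster CA written by kubeadm (assumed here).
    	data, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
    	if err != nil {
    		log.Fatal(err)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		log.Fatal("no PEM block found in ca.crt")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		log.Fatal(err)
    	}
    	// The pin is the SHA-256 of the DER-encoded Subject Public Key Info.
    	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
    	fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
    }
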
	I0918 21:09:52.934861   61740 cni.go:84] Creating CNI manager for ""
	I0918 21:09:52.934871   61740 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0918 21:09:52.937656   61740 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0918 21:09:52.939150   61740 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0918 21:09:52.950774   61740 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0918 21:09:52.973081   61740 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0918 21:09:52.973161   61740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 21:09:52.973210   61740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-255556 minikube.k8s.io/updated_at=2024_09_18T21_09_52_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=85073601a832bd4bbda5d11fa91feafff6ec6b91 minikube.k8s.io/name=embed-certs-255556 minikube.k8s.io/primary=true
	I0918 21:09:53.012402   61740 ops.go:34] apiserver oom_adj: -16
	I0918 21:09:53.180983   61740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 21:09:52.857161   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:55.357515   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:53.681852   61740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 21:09:54.181892   61740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 21:09:54.681768   61740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 21:09:55.181353   61740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 21:09:55.681336   61740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 21:09:56.181389   61740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 21:09:56.681574   61740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 21:09:57.181050   61740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 21:09:57.258766   61740 kubeadm.go:1113] duration metric: took 4.285672952s to wait for elevateKubeSystemPrivileges
	I0918 21:09:57.258809   61740 kubeadm.go:394] duration metric: took 5m2.572577294s to StartCluster
	I0918 21:09:57.258831   61740 settings.go:142] acquiring lock: {Name:mk6ae95a69dcbe00c28846409e9f46945a88de2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 21:09:57.258925   61740 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19667-7671/kubeconfig
	I0918 21:09:57.260757   61740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/kubeconfig: {Name:mk3a3eecadb0164dfdd5d3c4392b5b473a2a9bb0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 21:09:57.261072   61740 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.21 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0918 21:09:57.261168   61740 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0918 21:09:57.261275   61740 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-255556"
	I0918 21:09:57.261302   61740 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-255556"
	W0918 21:09:57.261314   61740 addons.go:243] addon storage-provisioner should already be in state true
	I0918 21:09:57.261344   61740 host.go:66] Checking if "embed-certs-255556" exists ...
	I0918 21:09:57.261337   61740 addons.go:69] Setting default-storageclass=true in profile "embed-certs-255556"
	I0918 21:09:57.261366   61740 config.go:182] Loaded profile config "embed-certs-255556": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0918 21:09:57.261363   61740 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-255556"
	I0918 21:09:57.261354   61740 addons.go:69] Setting metrics-server=true in profile "embed-certs-255556"
	I0918 21:09:57.261413   61740 addons.go:234] Setting addon metrics-server=true in "embed-certs-255556"
	W0918 21:09:57.261423   61740 addons.go:243] addon metrics-server should already be in state true
	I0918 21:09:57.261450   61740 host.go:66] Checking if "embed-certs-255556" exists ...
	I0918 21:09:57.261751   61740 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:09:57.261773   61740 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:09:57.261797   61740 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:09:57.261805   61740 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:09:57.261827   61740 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:09:57.261913   61740 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:09:57.263016   61740 out.go:177] * Verifying Kubernetes components...
	I0918 21:09:57.264732   61740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 21:09:57.279143   61740 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41499
	I0918 21:09:57.279741   61740 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38525
	I0918 21:09:57.279948   61740 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:09:57.280150   61740 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:09:57.280518   61740 main.go:141] libmachine: Using API Version  1
	I0918 21:09:57.280536   61740 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:09:57.280662   61740 main.go:141] libmachine: Using API Version  1
	I0918 21:09:57.280699   61740 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:09:57.280899   61740 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:09:57.281014   61740 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:09:57.281224   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetState
	I0918 21:09:57.281401   61740 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45081
	I0918 21:09:57.281609   61740 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:09:57.281669   61740 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:09:57.281824   61740 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:09:57.282291   61740 main.go:141] libmachine: Using API Version  1
	I0918 21:09:57.282316   61740 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:09:57.282655   61740 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:09:57.283166   61740 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:09:57.283198   61740 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:09:57.284993   61740 addons.go:234] Setting addon default-storageclass=true in "embed-certs-255556"
	W0918 21:09:57.285013   61740 addons.go:243] addon default-storageclass should already be in state true
	I0918 21:09:57.285042   61740 host.go:66] Checking if "embed-certs-255556" exists ...
	I0918 21:09:57.285400   61740 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:09:57.285441   61740 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:09:57.298996   61740 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39709
	I0918 21:09:57.299572   61740 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:09:57.300427   61740 main.go:141] libmachine: Using API Version  1
	I0918 21:09:57.300453   61740 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:09:57.300865   61740 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:09:57.301062   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetState
	I0918 21:09:57.301827   61740 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40441
	I0918 21:09:57.302410   61740 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:09:57.302948   61740 main.go:141] libmachine: Using API Version  1
	I0918 21:09:57.302968   61740 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:09:57.303284   61740 main.go:141] libmachine: (embed-certs-255556) Calling .DriverName
	I0918 21:09:57.303333   61740 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:09:57.303512   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetState
	I0918 21:09:57.304409   61740 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43125
	I0918 21:09:57.304836   61740 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:09:57.305379   61740 main.go:141] libmachine: Using API Version  1
	I0918 21:09:57.305393   61740 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:09:57.305423   61740 main.go:141] libmachine: (embed-certs-255556) Calling .DriverName
	I0918 21:09:57.305449   61740 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 21:09:57.305705   61740 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:09:57.306221   61740 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:09:57.306270   61740 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:09:57.306972   61740 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0918 21:09:57.307226   61740 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0918 21:09:57.307247   61740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0918 21:09:57.307261   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHHostname
	I0918 21:09:57.308757   61740 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0918 21:09:57.308778   61740 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0918 21:09:57.308798   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHHostname
	I0918 21:09:57.311608   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:09:57.312311   61740 main.go:141] libmachine: (embed-certs-255556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:c2:b7", ip: ""} in network mk-embed-certs-255556: {Iface:virbr1 ExpiryTime:2024-09-18 22:04:39 +0000 UTC Type:0 Mac:52:54:00:e8:c2:b7 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:embed-certs-255556 Clientid:01:52:54:00:e8:c2:b7}
	I0918 21:09:57.312346   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined IP address 192.168.39.21 and MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:09:57.312529   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHPort
	I0918 21:09:57.313308   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:09:57.313344   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHKeyPath
	I0918 21:09:57.313533   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHUsername
	I0918 21:09:57.313707   61740 sshutil.go:53] new ssh client: &{IP:192.168.39.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/embed-certs-255556/id_rsa Username:docker}
	I0918 21:09:57.313964   61740 main.go:141] libmachine: (embed-certs-255556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:c2:b7", ip: ""} in network mk-embed-certs-255556: {Iface:virbr1 ExpiryTime:2024-09-18 22:04:39 +0000 UTC Type:0 Mac:52:54:00:e8:c2:b7 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:embed-certs-255556 Clientid:01:52:54:00:e8:c2:b7}
	I0918 21:09:57.313991   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined IP address 192.168.39.21 and MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:09:57.314181   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHPort
	I0918 21:09:57.314357   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHKeyPath
	I0918 21:09:57.314517   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHUsername
	I0918 21:09:57.314644   61740 sshutil.go:53] new ssh client: &{IP:192.168.39.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/embed-certs-255556/id_rsa Username:docker}
	I0918 21:09:57.325307   61740 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45641
	I0918 21:09:57.325800   61740 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:09:57.326390   61740 main.go:141] libmachine: Using API Version  1
	I0918 21:09:57.326416   61740 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:09:57.326850   61740 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:09:57.327116   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetState
	I0918 21:09:57.328954   61740 main.go:141] libmachine: (embed-certs-255556) Calling .DriverName
	I0918 21:09:57.329179   61740 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0918 21:09:57.329197   61740 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0918 21:09:57.329216   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHHostname
	I0918 21:09:57.332176   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:09:57.332608   61740 main.go:141] libmachine: (embed-certs-255556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:c2:b7", ip: ""} in network mk-embed-certs-255556: {Iface:virbr1 ExpiryTime:2024-09-18 22:04:39 +0000 UTC Type:0 Mac:52:54:00:e8:c2:b7 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:embed-certs-255556 Clientid:01:52:54:00:e8:c2:b7}
	I0918 21:09:57.332633   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined IP address 192.168.39.21 and MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:09:57.332803   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHPort
	I0918 21:09:57.332991   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHKeyPath
	I0918 21:09:57.333132   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHUsername
	I0918 21:09:57.333254   61740 sshutil.go:53] new ssh client: &{IP:192.168.39.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/embed-certs-255556/id_rsa Username:docker}
	I0918 21:09:57.463767   61740 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0918 21:09:57.480852   61740 node_ready.go:35] waiting up to 6m0s for node "embed-certs-255556" to be "Ready" ...
	I0918 21:09:57.492198   61740 node_ready.go:49] node "embed-certs-255556" has status "Ready":"True"
	I0918 21:09:57.492221   61740 node_ready.go:38] duration metric: took 11.335784ms for node "embed-certs-255556" to be "Ready" ...
	I0918 21:09:57.492229   61740 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0918 21:09:57.496607   61740 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-255556" in "kube-system" namespace to be "Ready" ...
	I0918 21:09:57.627581   61740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0918 21:09:57.631704   61740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0918 21:09:57.647778   61740 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0918 21:09:57.647799   61740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0918 21:09:57.686558   61740 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0918 21:09:57.686589   61740 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0918 21:09:57.726206   61740 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0918 21:09:57.726230   61740 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0918 21:09:57.831932   61740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0918 21:09:58.026530   61740 main.go:141] libmachine: Making call to close driver server
	I0918 21:09:58.026554   61740 main.go:141] libmachine: (embed-certs-255556) Calling .Close
	I0918 21:09:58.026862   61740 main.go:141] libmachine: Successfully made call to close driver server
	I0918 21:09:58.026885   61740 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 21:09:58.026895   61740 main.go:141] libmachine: Making call to close driver server
	I0918 21:09:58.026903   61740 main.go:141] libmachine: (embed-certs-255556) Calling .Close
	I0918 21:09:58.027205   61740 main.go:141] libmachine: Successfully made call to close driver server
	I0918 21:09:58.027260   61740 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 21:09:58.027269   61740 main.go:141] libmachine: (embed-certs-255556) DBG | Closing plugin on server side
	I0918 21:09:58.038140   61740 main.go:141] libmachine: Making call to close driver server
	I0918 21:09:58.038172   61740 main.go:141] libmachine: (embed-certs-255556) Calling .Close
	I0918 21:09:58.038506   61740 main.go:141] libmachine: Successfully made call to close driver server
	I0918 21:09:58.038555   61740 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 21:09:58.038512   61740 main.go:141] libmachine: (embed-certs-255556) DBG | Closing plugin on server side
	I0918 21:09:58.551479   61740 main.go:141] libmachine: Making call to close driver server
	I0918 21:09:58.551518   61740 main.go:141] libmachine: (embed-certs-255556) Calling .Close
	I0918 21:09:58.551851   61740 main.go:141] libmachine: Successfully made call to close driver server
	I0918 21:09:58.551870   61740 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 21:09:58.551885   61740 main.go:141] libmachine: Making call to close driver server
	I0918 21:09:58.551893   61740 main.go:141] libmachine: (embed-certs-255556) Calling .Close
	I0918 21:09:58.552242   61740 main.go:141] libmachine: Successfully made call to close driver server
	I0918 21:09:58.552307   61740 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 21:09:58.552326   61740 main.go:141] libmachine: (embed-certs-255556) DBG | Closing plugin on server side
	I0918 21:09:59.078469   61740 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.246485041s)
	I0918 21:09:59.078532   61740 main.go:141] libmachine: Making call to close driver server
	I0918 21:09:59.078550   61740 main.go:141] libmachine: (embed-certs-255556) Calling .Close
	I0918 21:09:59.078883   61740 main.go:141] libmachine: Successfully made call to close driver server
	I0918 21:09:59.078906   61740 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 21:09:59.078917   61740 main.go:141] libmachine: Making call to close driver server
	I0918 21:09:59.078924   61740 main.go:141] libmachine: (embed-certs-255556) Calling .Close
	I0918 21:09:59.079143   61740 main.go:141] libmachine: Successfully made call to close driver server
	I0918 21:09:59.079157   61740 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 21:09:59.079168   61740 addons.go:475] Verifying addon metrics-server=true in "embed-certs-255556"
	I0918 21:09:59.080861   61740 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0918 21:09:57.357619   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:59.357838   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:59.082145   61740 addons.go:510] duration metric: took 1.82098849s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0918 21:09:59.526424   61740 pod_ready.go:93] pod "etcd-embed-certs-255556" in "kube-system" namespace has status "Ready":"True"
	I0918 21:09:59.526445   61740 pod_ready.go:82] duration metric: took 2.02981732s for pod "etcd-embed-certs-255556" in "kube-system" namespace to be "Ready" ...
	I0918 21:09:59.526455   61740 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-255556" in "kube-system" namespace to be "Ready" ...
	I0918 21:10:00.033589   61740 pod_ready.go:93] pod "kube-apiserver-embed-certs-255556" in "kube-system" namespace has status "Ready":"True"
	I0918 21:10:00.033616   61740 pod_ready.go:82] duration metric: took 507.155125ms for pod "kube-apiserver-embed-certs-255556" in "kube-system" namespace to be "Ready" ...
	I0918 21:10:00.033630   61740 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-255556" in "kube-system" namespace to be "Ready" ...
	I0918 21:10:02.039884   61740 pod_ready.go:103] pod "kube-controller-manager-embed-certs-255556" in "kube-system" namespace has status "Ready":"False"
	I0918 21:10:04.040760   61740 pod_ready.go:103] pod "kube-controller-manager-embed-certs-255556" in "kube-system" namespace has status "Ready":"False"
	I0918 21:10:04.541799   61740 pod_ready.go:93] pod "kube-controller-manager-embed-certs-255556" in "kube-system" namespace has status "Ready":"True"
	I0918 21:10:04.541821   61740 pod_ready.go:82] duration metric: took 4.508184279s for pod "kube-controller-manager-embed-certs-255556" in "kube-system" namespace to be "Ready" ...
	I0918 21:10:04.541830   61740 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-255556" in "kube-system" namespace to be "Ready" ...
	I0918 21:10:04.550008   61740 pod_ready.go:93] pod "kube-scheduler-embed-certs-255556" in "kube-system" namespace has status "Ready":"True"
	I0918 21:10:04.550038   61740 pod_ready.go:82] duration metric: took 8.201765ms for pod "kube-scheduler-embed-certs-255556" in "kube-system" namespace to be "Ready" ...
	I0918 21:10:04.550046   61740 pod_ready.go:39] duration metric: took 7.057808243s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0918 21:10:04.550060   61740 api_server.go:52] waiting for apiserver process to appear ...
	I0918 21:10:04.550110   61740 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:10:04.566882   61740 api_server.go:72] duration metric: took 7.305767858s to wait for apiserver process to appear ...
	I0918 21:10:04.566914   61740 api_server.go:88] waiting for apiserver healthz status ...
	I0918 21:10:04.566937   61740 api_server.go:253] Checking apiserver healthz at https://192.168.39.21:8443/healthz ...
	I0918 21:10:04.571495   61740 api_server.go:279] https://192.168.39.21:8443/healthz returned 200:
	ok
	I0918 21:10:04.572590   61740 api_server.go:141] control plane version: v1.31.1
	I0918 21:10:04.572618   61740 api_server.go:131] duration metric: took 5.69747ms to wait for apiserver health ...
	I0918 21:10:04.572625   61740 system_pods.go:43] waiting for kube-system pods to appear ...
	I0918 21:10:04.578979   61740 system_pods.go:59] 9 kube-system pods found
	I0918 21:10:04.579019   61740 system_pods.go:61] "coredns-7c65d6cfc9-ptxbt" [798665e6-6f4a-4ba5-b4f9-3192d3f76f03] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0918 21:10:04.579030   61740 system_pods.go:61] "coredns-7c65d6cfc9-vgmtd" [1224ebf9-1b24-413a-b779-093acfcfb61e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0918 21:10:04.579039   61740 system_pods.go:61] "etcd-embed-certs-255556" [a796cd6d-5b0c-4f73-9dfe-23c552cb2a43] Running
	I0918 21:10:04.579046   61740 system_pods.go:61] "kube-apiserver-embed-certs-255556" [f1926542-940b-4ba9-94cf-23265d42874c] Running
	I0918 21:10:04.579051   61740 system_pods.go:61] "kube-controller-manager-embed-certs-255556" [cd0bc9bb-ca2c-435f-81d5-2d4ea9f32d85] Running
	I0918 21:10:04.579057   61740 system_pods.go:61] "kube-proxy-m7gxh" [47d72a32-7efc-4155-a890-0ddc620af6e0] Running
	I0918 21:10:04.579067   61740 system_pods.go:61] "kube-scheduler-embed-certs-255556" [091c79cd-bc76-42b3-9635-96fdbe3ecfff] Running
	I0918 21:10:04.579076   61740 system_pods.go:61] "metrics-server-6867b74b74-sr6hq" [8867f8fa-687b-4105-8ace-18af50195726] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0918 21:10:04.579085   61740 system_pods.go:61] "storage-provisioner" [dcc9b789-237c-4d92-96c6-2c23d2c401c0] Running
	I0918 21:10:04.579095   61740 system_pods.go:74] duration metric: took 6.462809ms to wait for pod list to return data ...
	I0918 21:10:04.579106   61740 default_sa.go:34] waiting for default service account to be created ...
	I0918 21:10:04.583020   61740 default_sa.go:45] found service account: "default"
	I0918 21:10:04.583059   61740 default_sa.go:55] duration metric: took 3.946388ms for default service account to be created ...
	I0918 21:10:04.583072   61740 system_pods.go:116] waiting for k8s-apps to be running ...
	I0918 21:10:04.589946   61740 system_pods.go:86] 9 kube-system pods found
	I0918 21:10:04.589991   61740 system_pods.go:89] "coredns-7c65d6cfc9-ptxbt" [798665e6-6f4a-4ba5-b4f9-3192d3f76f03] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0918 21:10:04.590004   61740 system_pods.go:89] "coredns-7c65d6cfc9-vgmtd" [1224ebf9-1b24-413a-b779-093acfcfb61e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0918 21:10:04.590012   61740 system_pods.go:89] "etcd-embed-certs-255556" [a796cd6d-5b0c-4f73-9dfe-23c552cb2a43] Running
	I0918 21:10:04.590019   61740 system_pods.go:89] "kube-apiserver-embed-certs-255556" [f1926542-940b-4ba9-94cf-23265d42874c] Running
	I0918 21:10:04.590025   61740 system_pods.go:89] "kube-controller-manager-embed-certs-255556" [cd0bc9bb-ca2c-435f-81d5-2d4ea9f32d85] Running
	I0918 21:10:04.590030   61740 system_pods.go:89] "kube-proxy-m7gxh" [47d72a32-7efc-4155-a890-0ddc620af6e0] Running
	I0918 21:10:04.590035   61740 system_pods.go:89] "kube-scheduler-embed-certs-255556" [091c79cd-bc76-42b3-9635-96fdbe3ecfff] Running
	I0918 21:10:04.590044   61740 system_pods.go:89] "metrics-server-6867b74b74-sr6hq" [8867f8fa-687b-4105-8ace-18af50195726] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0918 21:10:04.590051   61740 system_pods.go:89] "storage-provisioner" [dcc9b789-237c-4d92-96c6-2c23d2c401c0] Running
	I0918 21:10:04.590061   61740 system_pods.go:126] duration metric: took 6.981726ms to wait for k8s-apps to be running ...
	I0918 21:10:04.590070   61740 system_svc.go:44] waiting for kubelet service to be running ....
	I0918 21:10:04.590127   61740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0918 21:10:04.605893   61740 system_svc.go:56] duration metric: took 15.815591ms WaitForService to wait for kubelet
	I0918 21:10:04.605921   61740 kubeadm.go:582] duration metric: took 7.344815015s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0918 21:10:04.605939   61740 node_conditions.go:102] verifying NodePressure condition ...
	I0918 21:10:04.609551   61740 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0918 21:10:04.609577   61740 node_conditions.go:123] node cpu capacity is 2
	I0918 21:10:04.609588   61740 node_conditions.go:105] duration metric: took 3.645116ms to run NodePressure ...
	I0918 21:10:04.609598   61740 start.go:241] waiting for startup goroutines ...
	I0918 21:10:04.609605   61740 start.go:246] waiting for cluster config update ...
	I0918 21:10:04.609614   61740 start.go:255] writing updated cluster config ...
	I0918 21:10:04.609870   61740 ssh_runner.go:195] Run: rm -f paused
	I0918 21:10:04.664479   61740 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0918 21:10:04.666589   61740 out.go:177] * Done! kubectl is now configured to use "embed-certs-255556" cluster and "default" namespace by default
	I0918 21:10:01.858109   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:10:03.356912   61273 pod_ready.go:82] duration metric: took 4m0.006778464s for pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace to be "Ready" ...
	E0918 21:10:03.356944   61273 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0918 21:10:03.356952   61273 pod_ready.go:39] duration metric: took 4m0.807781101s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0918 21:10:03.356967   61273 api_server.go:52] waiting for apiserver process to appear ...
	I0918 21:10:03.356994   61273 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:10:03.357047   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:10:03.410066   61273 cri.go:89] found id: "a70652dce4d80c6fac020d63cff7054a835d183ac4b910157a5bf4eb8cabcaa1"
	I0918 21:10:03.410096   61273 cri.go:89] found id: ""
	I0918 21:10:03.410104   61273 logs.go:276] 1 containers: [a70652dce4d80c6fac020d63cff7054a835d183ac4b910157a5bf4eb8cabcaa1]
	I0918 21:10:03.410168   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:03.414236   61273 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:10:03.414309   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:10:03.449405   61273 cri.go:89] found id: "a913074a00723bc43f97ba625ebbdfded7dca1ce664ace9a13173adb96d2068f"
	I0918 21:10:03.449426   61273 cri.go:89] found id: ""
	I0918 21:10:03.449434   61273 logs.go:276] 1 containers: [a913074a00723bc43f97ba625ebbdfded7dca1ce664ace9a13173adb96d2068f]
	I0918 21:10:03.449492   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:03.453335   61273 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:10:03.453403   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:10:03.487057   61273 cri.go:89] found id: "76b9e08a213468abdd23d7b3998b55d6544e2fbae9205ec6076ef0cd7a80e7be"
	I0918 21:10:03.487081   61273 cri.go:89] found id: ""
	I0918 21:10:03.487089   61273 logs.go:276] 1 containers: [76b9e08a213468abdd23d7b3998b55d6544e2fbae9205ec6076ef0cd7a80e7be]
	I0918 21:10:03.487137   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:03.491027   61273 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:10:03.491101   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:10:03.529636   61273 cri.go:89] found id: "c372970fdf265433e1668377d0214de6608cb39ebfd706cfad9624cba2f9b481"
	I0918 21:10:03.529665   61273 cri.go:89] found id: ""
	I0918 21:10:03.529675   61273 logs.go:276] 1 containers: [c372970fdf265433e1668377d0214de6608cb39ebfd706cfad9624cba2f9b481]
	I0918 21:10:03.529738   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:03.535042   61273 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:10:03.535121   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:10:03.572913   61273 cri.go:89] found id: "0257280a0d21d52e2a996c2f717880bdb9d9f6fe90a4b82cc62222c941ba7d84"
	I0918 21:10:03.572942   61273 cri.go:89] found id: ""
	I0918 21:10:03.572952   61273 logs.go:276] 1 containers: [0257280a0d21d52e2a996c2f717880bdb9d9f6fe90a4b82cc62222c941ba7d84]
	I0918 21:10:03.573012   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:03.576945   61273 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:10:03.577021   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:10:03.612785   61273 cri.go:89] found id: "785dc83056153e5eeb22216ac7520f529b71f1b3ae42e9540a5c8930f1fe62c2"
	I0918 21:10:03.612805   61273 cri.go:89] found id: ""
	I0918 21:10:03.612812   61273 logs.go:276] 1 containers: [785dc83056153e5eeb22216ac7520f529b71f1b3ae42e9540a5c8930f1fe62c2]
	I0918 21:10:03.612868   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:03.616855   61273 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:10:03.616924   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:10:03.650330   61273 cri.go:89] found id: ""
	I0918 21:10:03.650359   61273 logs.go:276] 0 containers: []
	W0918 21:10:03.650370   61273 logs.go:278] No container was found matching "kindnet"
	I0918 21:10:03.650378   61273 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0918 21:10:03.650446   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0918 21:10:03.698078   61273 cri.go:89] found id: "b44d6f4b44928aa498c4375f783c165714b4a5959a1fa709712ad39a412d6d9f"
	I0918 21:10:03.698106   61273 cri.go:89] found id: "38c14df0554157969a97390590d6e9c46765fd6fc3e286f7d2e3094838e0aff5"
	I0918 21:10:03.698113   61273 cri.go:89] found id: ""
	I0918 21:10:03.698122   61273 logs.go:276] 2 containers: [b44d6f4b44928aa498c4375f783c165714b4a5959a1fa709712ad39a412d6d9f 38c14df0554157969a97390590d6e9c46765fd6fc3e286f7d2e3094838e0aff5]
	I0918 21:10:03.698184   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:03.702311   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:03.705974   61273 logs.go:123] Gathering logs for kubelet ...
	I0918 21:10:03.705996   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:10:03.771043   61273 logs.go:123] Gathering logs for kube-scheduler [c372970fdf265433e1668377d0214de6608cb39ebfd706cfad9624cba2f9b481] ...
	I0918 21:10:03.771097   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c372970fdf265433e1668377d0214de6608cb39ebfd706cfad9624cba2f9b481"
	I0918 21:10:03.813148   61273 logs.go:123] Gathering logs for storage-provisioner [b44d6f4b44928aa498c4375f783c165714b4a5959a1fa709712ad39a412d6d9f] ...
	I0918 21:10:03.813175   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b44d6f4b44928aa498c4375f783c165714b4a5959a1fa709712ad39a412d6d9f"
	I0918 21:10:03.864553   61273 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:10:03.864580   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:10:04.345484   61273 logs.go:123] Gathering logs for container status ...
	I0918 21:10:04.345531   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:10:04.390777   61273 logs.go:123] Gathering logs for dmesg ...
	I0918 21:10:04.390818   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:10:04.409877   61273 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:10:04.409918   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 21:10:04.536579   61273 logs.go:123] Gathering logs for kube-apiserver [a70652dce4d80c6fac020d63cff7054a835d183ac4b910157a5bf4eb8cabcaa1] ...
	I0918 21:10:04.536609   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a70652dce4d80c6fac020d63cff7054a835d183ac4b910157a5bf4eb8cabcaa1"
	I0918 21:10:04.595640   61273 logs.go:123] Gathering logs for etcd [a913074a00723bc43f97ba625ebbdfded7dca1ce664ace9a13173adb96d2068f] ...
	I0918 21:10:04.595680   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a913074a00723bc43f97ba625ebbdfded7dca1ce664ace9a13173adb96d2068f"
	I0918 21:10:04.642332   61273 logs.go:123] Gathering logs for coredns [76b9e08a213468abdd23d7b3998b55d6544e2fbae9205ec6076ef0cd7a80e7be] ...
	I0918 21:10:04.642377   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 76b9e08a213468abdd23d7b3998b55d6544e2fbae9205ec6076ef0cd7a80e7be"
	I0918 21:10:04.679525   61273 logs.go:123] Gathering logs for kube-proxy [0257280a0d21d52e2a996c2f717880bdb9d9f6fe90a4b82cc62222c941ba7d84] ...
	I0918 21:10:04.679551   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0257280a0d21d52e2a996c2f717880bdb9d9f6fe90a4b82cc62222c941ba7d84"
	I0918 21:10:04.721130   61273 logs.go:123] Gathering logs for kube-controller-manager [785dc83056153e5eeb22216ac7520f529b71f1b3ae42e9540a5c8930f1fe62c2] ...
	I0918 21:10:04.721164   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 785dc83056153e5eeb22216ac7520f529b71f1b3ae42e9540a5c8930f1fe62c2"
	I0918 21:10:04.789527   61273 logs.go:123] Gathering logs for storage-provisioner [38c14df0554157969a97390590d6e9c46765fd6fc3e286f7d2e3094838e0aff5] ...
	I0918 21:10:04.789558   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38c14df0554157969a97390590d6e9c46765fd6fc3e286f7d2e3094838e0aff5"
	I0918 21:10:07.334989   61273 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:10:07.352382   61273 api_server.go:72] duration metric: took 4m12.031791528s to wait for apiserver process to appear ...
	I0918 21:10:07.352411   61273 api_server.go:88] waiting for apiserver healthz status ...
	I0918 21:10:07.352446   61273 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:10:07.352494   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:10:07.404709   61273 cri.go:89] found id: "a70652dce4d80c6fac020d63cff7054a835d183ac4b910157a5bf4eb8cabcaa1"
	I0918 21:10:07.404739   61273 cri.go:89] found id: ""
	I0918 21:10:07.404748   61273 logs.go:276] 1 containers: [a70652dce4d80c6fac020d63cff7054a835d183ac4b910157a5bf4eb8cabcaa1]
	I0918 21:10:07.404815   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:07.409205   61273 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:10:07.409273   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:10:07.450409   61273 cri.go:89] found id: "a913074a00723bc43f97ba625ebbdfded7dca1ce664ace9a13173adb96d2068f"
	I0918 21:10:07.450429   61273 cri.go:89] found id: ""
	I0918 21:10:07.450438   61273 logs.go:276] 1 containers: [a913074a00723bc43f97ba625ebbdfded7dca1ce664ace9a13173adb96d2068f]
	I0918 21:10:07.450498   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:07.454623   61273 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:10:07.454692   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:10:07.498344   61273 cri.go:89] found id: "76b9e08a213468abdd23d7b3998b55d6544e2fbae9205ec6076ef0cd7a80e7be"
	I0918 21:10:07.498370   61273 cri.go:89] found id: ""
	I0918 21:10:07.498379   61273 logs.go:276] 1 containers: [76b9e08a213468abdd23d7b3998b55d6544e2fbae9205ec6076ef0cd7a80e7be]
	I0918 21:10:07.498443   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:07.503900   61273 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:10:07.503986   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:10:07.543438   61273 cri.go:89] found id: "c372970fdf265433e1668377d0214de6608cb39ebfd706cfad9624cba2f9b481"
	I0918 21:10:07.543469   61273 cri.go:89] found id: ""
	I0918 21:10:07.543478   61273 logs.go:276] 1 containers: [c372970fdf265433e1668377d0214de6608cb39ebfd706cfad9624cba2f9b481]
	I0918 21:10:07.543538   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:07.548439   61273 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:10:07.548518   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:10:07.592109   61273 cri.go:89] found id: "0257280a0d21d52e2a996c2f717880bdb9d9f6fe90a4b82cc62222c941ba7d84"
	I0918 21:10:07.592140   61273 cri.go:89] found id: ""
	I0918 21:10:07.592150   61273 logs.go:276] 1 containers: [0257280a0d21d52e2a996c2f717880bdb9d9f6fe90a4b82cc62222c941ba7d84]
	I0918 21:10:07.592202   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:07.596127   61273 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:10:07.596200   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:10:07.630588   61273 cri.go:89] found id: "785dc83056153e5eeb22216ac7520f529b71f1b3ae42e9540a5c8930f1fe62c2"
	I0918 21:10:07.630623   61273 cri.go:89] found id: ""
	I0918 21:10:07.630633   61273 logs.go:276] 1 containers: [785dc83056153e5eeb22216ac7520f529b71f1b3ae42e9540a5c8930f1fe62c2]
	I0918 21:10:07.630699   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:07.635130   61273 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:10:07.635214   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:10:07.672446   61273 cri.go:89] found id: ""
	I0918 21:10:07.672475   61273 logs.go:276] 0 containers: []
	W0918 21:10:07.672487   61273 logs.go:278] No container was found matching "kindnet"
	I0918 21:10:07.672494   61273 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0918 21:10:07.672554   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0918 21:10:07.710660   61273 cri.go:89] found id: "b44d6f4b44928aa498c4375f783c165714b4a5959a1fa709712ad39a412d6d9f"
	I0918 21:10:07.710693   61273 cri.go:89] found id: "38c14df0554157969a97390590d6e9c46765fd6fc3e286f7d2e3094838e0aff5"
	I0918 21:10:07.710700   61273 cri.go:89] found id: ""
	I0918 21:10:07.710709   61273 logs.go:276] 2 containers: [b44d6f4b44928aa498c4375f783c165714b4a5959a1fa709712ad39a412d6d9f 38c14df0554157969a97390590d6e9c46765fd6fc3e286f7d2e3094838e0aff5]
	I0918 21:10:07.710761   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:07.714772   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:07.718402   61273 logs.go:123] Gathering logs for etcd [a913074a00723bc43f97ba625ebbdfded7dca1ce664ace9a13173adb96d2068f] ...
	I0918 21:10:07.718423   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a913074a00723bc43f97ba625ebbdfded7dca1ce664ace9a13173adb96d2068f"
	I0918 21:10:07.756682   61273 logs.go:123] Gathering logs for coredns [76b9e08a213468abdd23d7b3998b55d6544e2fbae9205ec6076ef0cd7a80e7be] ...
	I0918 21:10:07.756717   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 76b9e08a213468abdd23d7b3998b55d6544e2fbae9205ec6076ef0cd7a80e7be"
	I0918 21:10:07.792784   61273 logs.go:123] Gathering logs for kube-proxy [0257280a0d21d52e2a996c2f717880bdb9d9f6fe90a4b82cc62222c941ba7d84] ...
	I0918 21:10:07.792813   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0257280a0d21d52e2a996c2f717880bdb9d9f6fe90a4b82cc62222c941ba7d84"
	I0918 21:10:07.829746   61273 logs.go:123] Gathering logs for kube-controller-manager [785dc83056153e5eeb22216ac7520f529b71f1b3ae42e9540a5c8930f1fe62c2] ...
	I0918 21:10:07.829779   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 785dc83056153e5eeb22216ac7520f529b71f1b3ae42e9540a5c8930f1fe62c2"
	I0918 21:10:07.882151   61273 logs.go:123] Gathering logs for storage-provisioner [38c14df0554157969a97390590d6e9c46765fd6fc3e286f7d2e3094838e0aff5] ...
	I0918 21:10:07.882190   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38c14df0554157969a97390590d6e9c46765fd6fc3e286f7d2e3094838e0aff5"
	I0918 21:10:07.921948   61273 logs.go:123] Gathering logs for container status ...
	I0918 21:10:07.921973   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:10:07.969080   61273 logs.go:123] Gathering logs for kubelet ...
	I0918 21:10:07.969110   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:10:08.036341   61273 logs.go:123] Gathering logs for dmesg ...
	I0918 21:10:08.036376   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:10:08.050690   61273 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:10:08.050722   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 21:10:08.177111   61273 logs.go:123] Gathering logs for kube-apiserver [a70652dce4d80c6fac020d63cff7054a835d183ac4b910157a5bf4eb8cabcaa1] ...
	I0918 21:10:08.177154   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a70652dce4d80c6fac020d63cff7054a835d183ac4b910157a5bf4eb8cabcaa1"
	I0918 21:10:08.224169   61273 logs.go:123] Gathering logs for kube-scheduler [c372970fdf265433e1668377d0214de6608cb39ebfd706cfad9624cba2f9b481] ...
	I0918 21:10:08.224203   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c372970fdf265433e1668377d0214de6608cb39ebfd706cfad9624cba2f9b481"
	I0918 21:10:08.264412   61273 logs.go:123] Gathering logs for storage-provisioner [b44d6f4b44928aa498c4375f783c165714b4a5959a1fa709712ad39a412d6d9f] ...
	I0918 21:10:08.264437   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b44d6f4b44928aa498c4375f783c165714b4a5959a1fa709712ad39a412d6d9f"
	I0918 21:10:08.309190   61273 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:10:08.309215   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:10:11.209439   61273 api_server.go:253] Checking apiserver healthz at https://192.168.61.31:8443/healthz ...
	I0918 21:10:11.214345   61273 api_server.go:279] https://192.168.61.31:8443/healthz returned 200:
	ok
	I0918 21:10:11.215424   61273 api_server.go:141] control plane version: v1.31.1
	I0918 21:10:11.215446   61273 api_server.go:131] duration metric: took 3.863027585s to wait for apiserver health ...
	I0918 21:10:11.215456   61273 system_pods.go:43] waiting for kube-system pods to appear ...
	I0918 21:10:11.215485   61273 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:10:11.215545   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:10:11.251158   61273 cri.go:89] found id: "a70652dce4d80c6fac020d63cff7054a835d183ac4b910157a5bf4eb8cabcaa1"
	I0918 21:10:11.251182   61273 cri.go:89] found id: ""
	I0918 21:10:11.251190   61273 logs.go:276] 1 containers: [a70652dce4d80c6fac020d63cff7054a835d183ac4b910157a5bf4eb8cabcaa1]
	I0918 21:10:11.251246   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:11.255090   61273 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:10:11.255177   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:10:11.290504   61273 cri.go:89] found id: "a913074a00723bc43f97ba625ebbdfded7dca1ce664ace9a13173adb96d2068f"
	I0918 21:10:11.290526   61273 cri.go:89] found id: ""
	I0918 21:10:11.290534   61273 logs.go:276] 1 containers: [a913074a00723bc43f97ba625ebbdfded7dca1ce664ace9a13173adb96d2068f]
	I0918 21:10:11.290593   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:11.295141   61273 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:10:11.295224   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:10:11.340273   61273 cri.go:89] found id: "76b9e08a213468abdd23d7b3998b55d6544e2fbae9205ec6076ef0cd7a80e7be"
	I0918 21:10:11.340300   61273 cri.go:89] found id: ""
	I0918 21:10:11.340310   61273 logs.go:276] 1 containers: [76b9e08a213468abdd23d7b3998b55d6544e2fbae9205ec6076ef0cd7a80e7be]
	I0918 21:10:11.340362   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:11.344823   61273 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:10:11.344903   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:10:11.384145   61273 cri.go:89] found id: "c372970fdf265433e1668377d0214de6608cb39ebfd706cfad9624cba2f9b481"
	I0918 21:10:11.384172   61273 cri.go:89] found id: ""
	I0918 21:10:11.384187   61273 logs.go:276] 1 containers: [c372970fdf265433e1668377d0214de6608cb39ebfd706cfad9624cba2f9b481]
	I0918 21:10:11.384251   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:11.388594   61273 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:10:11.388673   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:10:11.434881   61273 cri.go:89] found id: "0257280a0d21d52e2a996c2f717880bdb9d9f6fe90a4b82cc62222c941ba7d84"
	I0918 21:10:11.434915   61273 cri.go:89] found id: ""
	I0918 21:10:11.434925   61273 logs.go:276] 1 containers: [0257280a0d21d52e2a996c2f717880bdb9d9f6fe90a4b82cc62222c941ba7d84]
	I0918 21:10:11.434984   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:11.439048   61273 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:10:11.439124   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:10:11.474786   61273 cri.go:89] found id: "785dc83056153e5eeb22216ac7520f529b71f1b3ae42e9540a5c8930f1fe62c2"
	I0918 21:10:11.474812   61273 cri.go:89] found id: ""
	I0918 21:10:11.474820   61273 logs.go:276] 1 containers: [785dc83056153e5eeb22216ac7520f529b71f1b3ae42e9540a5c8930f1fe62c2]
	I0918 21:10:11.474871   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:11.478907   61273 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:10:11.478961   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:10:11.521522   61273 cri.go:89] found id: ""
	I0918 21:10:11.521550   61273 logs.go:276] 0 containers: []
	W0918 21:10:11.521561   61273 logs.go:278] No container was found matching "kindnet"
	I0918 21:10:11.521568   61273 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0918 21:10:11.521642   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0918 21:10:11.560406   61273 cri.go:89] found id: "b44d6f4b44928aa498c4375f783c165714b4a5959a1fa709712ad39a412d6d9f"
	I0918 21:10:11.560428   61273 cri.go:89] found id: "38c14df0554157969a97390590d6e9c46765fd6fc3e286f7d2e3094838e0aff5"
	I0918 21:10:11.560432   61273 cri.go:89] found id: ""
	I0918 21:10:11.560439   61273 logs.go:276] 2 containers: [b44d6f4b44928aa498c4375f783c165714b4a5959a1fa709712ad39a412d6d9f 38c14df0554157969a97390590d6e9c46765fd6fc3e286f7d2e3094838e0aff5]
	I0918 21:10:11.560489   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:11.564559   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:11.568380   61273 logs.go:123] Gathering logs for kube-proxy [0257280a0d21d52e2a996c2f717880bdb9d9f6fe90a4b82cc62222c941ba7d84] ...
	I0918 21:10:11.568405   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0257280a0d21d52e2a996c2f717880bdb9d9f6fe90a4b82cc62222c941ba7d84"
	I0918 21:10:11.614927   61273 logs.go:123] Gathering logs for kube-controller-manager [785dc83056153e5eeb22216ac7520f529b71f1b3ae42e9540a5c8930f1fe62c2] ...
	I0918 21:10:11.614959   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 785dc83056153e5eeb22216ac7520f529b71f1b3ae42e9540a5c8930f1fe62c2"
	I0918 21:10:11.668337   61273 logs.go:123] Gathering logs for storage-provisioner [b44d6f4b44928aa498c4375f783c165714b4a5959a1fa709712ad39a412d6d9f] ...
	I0918 21:10:11.668372   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b44d6f4b44928aa498c4375f783c165714b4a5959a1fa709712ad39a412d6d9f"
	I0918 21:10:11.705574   61273 logs.go:123] Gathering logs for kubelet ...
	I0918 21:10:11.705604   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:10:11.772691   61273 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:10:11.772731   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 21:10:11.885001   61273 logs.go:123] Gathering logs for etcd [a913074a00723bc43f97ba625ebbdfded7dca1ce664ace9a13173adb96d2068f] ...
	I0918 21:10:11.885043   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a913074a00723bc43f97ba625ebbdfded7dca1ce664ace9a13173adb96d2068f"
	I0918 21:10:11.929585   61273 logs.go:123] Gathering logs for coredns [76b9e08a213468abdd23d7b3998b55d6544e2fbae9205ec6076ef0cd7a80e7be] ...
	I0918 21:10:11.929623   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 76b9e08a213468abdd23d7b3998b55d6544e2fbae9205ec6076ef0cd7a80e7be"
	I0918 21:10:11.967540   61273 logs.go:123] Gathering logs for kube-scheduler [c372970fdf265433e1668377d0214de6608cb39ebfd706cfad9624cba2f9b481] ...
	I0918 21:10:11.967566   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c372970fdf265433e1668377d0214de6608cb39ebfd706cfad9624cba2f9b481"
	I0918 21:10:12.007037   61273 logs.go:123] Gathering logs for storage-provisioner [38c14df0554157969a97390590d6e9c46765fd6fc3e286f7d2e3094838e0aff5] ...
	I0918 21:10:12.007076   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38c14df0554157969a97390590d6e9c46765fd6fc3e286f7d2e3094838e0aff5"
	I0918 21:10:12.045764   61273 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:10:12.045805   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:10:12.434993   61273 logs.go:123] Gathering logs for dmesg ...
	I0918 21:10:12.435042   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:10:12.449422   61273 logs.go:123] Gathering logs for kube-apiserver [a70652dce4d80c6fac020d63cff7054a835d183ac4b910157a5bf4eb8cabcaa1] ...
	I0918 21:10:12.449453   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a70652dce4d80c6fac020d63cff7054a835d183ac4b910157a5bf4eb8cabcaa1"
	I0918 21:10:12.500491   61273 logs.go:123] Gathering logs for container status ...
	I0918 21:10:12.500522   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:10:15.053164   61273 system_pods.go:59] 8 kube-system pods found
	I0918 21:10:15.053203   61273 system_pods.go:61] "coredns-7c65d6cfc9-dgnw2" [085d5a98-0a61-4678-830f-384780a0d7ef] Running
	I0918 21:10:15.053211   61273 system_pods.go:61] "etcd-no-preload-331658" [d6a53ff4-fcf0-46c0-9e77-9af6d0363e08] Running
	I0918 21:10:15.053218   61273 system_pods.go:61] "kube-apiserver-no-preload-331658" [1953598f-4c3f-4a98-ab75-b8ca1b8093ad] Running
	I0918 21:10:15.053223   61273 system_pods.go:61] "kube-controller-manager-no-preload-331658" [8fc655be-a192-4250-a31d-2e533a2b5e41] Running
	I0918 21:10:15.053228   61273 system_pods.go:61] "kube-proxy-hx25w" [a26512ff-f695-4452-8974-577479257160] Running
	I0918 21:10:15.053232   61273 system_pods.go:61] "kube-scheduler-no-preload-331658" [64e5347e-e7fd-4a0d-ae41-539c3989c29e] Running
	I0918 21:10:15.053243   61273 system_pods.go:61] "metrics-server-6867b74b74-n27vc" [b1de76ec-8987-49ce-ae66-eedda2705cde] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0918 21:10:15.053254   61273 system_pods.go:61] "storage-provisioner" [e110aeb3-e9bc-4bb9-9b49-5579558bdda2] Running
	I0918 21:10:15.053264   61273 system_pods.go:74] duration metric: took 3.837800115s to wait for pod list to return data ...
	I0918 21:10:15.053273   61273 default_sa.go:34] waiting for default service account to be created ...
	I0918 21:10:15.056865   61273 default_sa.go:45] found service account: "default"
	I0918 21:10:15.056900   61273 default_sa.go:55] duration metric: took 3.619144ms for default service account to be created ...
	I0918 21:10:15.056912   61273 system_pods.go:116] waiting for k8s-apps to be running ...
	I0918 21:10:15.061835   61273 system_pods.go:86] 8 kube-system pods found
	I0918 21:10:15.061864   61273 system_pods.go:89] "coredns-7c65d6cfc9-dgnw2" [085d5a98-0a61-4678-830f-384780a0d7ef] Running
	I0918 21:10:15.061870   61273 system_pods.go:89] "etcd-no-preload-331658" [d6a53ff4-fcf0-46c0-9e77-9af6d0363e08] Running
	I0918 21:10:15.061875   61273 system_pods.go:89] "kube-apiserver-no-preload-331658" [1953598f-4c3f-4a98-ab75-b8ca1b8093ad] Running
	I0918 21:10:15.061880   61273 system_pods.go:89] "kube-controller-manager-no-preload-331658" [8fc655be-a192-4250-a31d-2e533a2b5e41] Running
	I0918 21:10:15.061884   61273 system_pods.go:89] "kube-proxy-hx25w" [a26512ff-f695-4452-8974-577479257160] Running
	I0918 21:10:15.061888   61273 system_pods.go:89] "kube-scheduler-no-preload-331658" [64e5347e-e7fd-4a0d-ae41-539c3989c29e] Running
	I0918 21:10:15.061894   61273 system_pods.go:89] "metrics-server-6867b74b74-n27vc" [b1de76ec-8987-49ce-ae66-eedda2705cde] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0918 21:10:15.061898   61273 system_pods.go:89] "storage-provisioner" [e110aeb3-e9bc-4bb9-9b49-5579558bdda2] Running
	I0918 21:10:15.061906   61273 system_pods.go:126] duration metric: took 4.987508ms to wait for k8s-apps to be running ...
	I0918 21:10:15.061912   61273 system_svc.go:44] waiting for kubelet service to be running ....
	I0918 21:10:15.061966   61273 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0918 21:10:15.079834   61273 system_svc.go:56] duration metric: took 17.908997ms WaitForService to wait for kubelet
	I0918 21:10:15.079875   61273 kubeadm.go:582] duration metric: took 4m19.759287892s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0918 21:10:15.079897   61273 node_conditions.go:102] verifying NodePressure condition ...
	I0918 21:10:15.083307   61273 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0918 21:10:15.083390   61273 node_conditions.go:123] node cpu capacity is 2
	I0918 21:10:15.083407   61273 node_conditions.go:105] duration metric: took 3.503352ms to run NodePressure ...
	I0918 21:10:15.083421   61273 start.go:241] waiting for startup goroutines ...
	I0918 21:10:15.083431   61273 start.go:246] waiting for cluster config update ...
	I0918 21:10:15.083444   61273 start.go:255] writing updated cluster config ...
	I0918 21:10:15.083788   61273 ssh_runner.go:195] Run: rm -f paused
	I0918 21:10:15.139144   61273 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0918 21:10:15.141198   61273 out.go:177] * Done! kubectl is now configured to use "no-preload-331658" cluster and "default" namespace by default
	I0918 21:11:17.441368   62061 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0918 21:11:17.441607   62061 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0918 21:11:17.442921   62061 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0918 21:11:17.443036   62061 kubeadm.go:310] [preflight] Running pre-flight checks
	I0918 21:11:17.443221   62061 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0918 21:11:17.443500   62061 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0918 21:11:17.443818   62061 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0918 21:11:17.444099   62061 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0918 21:11:17.446858   62061 out.go:235]   - Generating certificates and keys ...
	I0918 21:11:17.446965   62061 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0918 21:11:17.447048   62061 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0918 21:11:17.447160   62061 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0918 21:11:17.447248   62061 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0918 21:11:17.447349   62061 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0918 21:11:17.447425   62061 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0918 21:11:17.447507   62061 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0918 21:11:17.447587   62061 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0918 21:11:17.447742   62061 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0918 21:11:17.447847   62061 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0918 21:11:17.447911   62061 kubeadm.go:310] [certs] Using the existing "sa" key
	I0918 21:11:17.447984   62061 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0918 21:11:17.448085   62061 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0918 21:11:17.448163   62061 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0918 21:11:17.448255   62061 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0918 21:11:17.448339   62061 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0918 21:11:17.448486   62061 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0918 21:11:17.448590   62061 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0918 21:11:17.448645   62061 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0918 21:11:17.448733   62061 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0918 21:11:17.450203   62061 out.go:235]   - Booting up control plane ...
	I0918 21:11:17.450299   62061 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0918 21:11:17.450434   62061 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0918 21:11:17.450506   62061 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0918 21:11:17.450578   62061 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0918 21:11:17.450752   62061 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0918 21:11:17.450805   62061 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0918 21:11:17.450863   62061 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0918 21:11:17.451034   62061 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0918 21:11:17.451090   62061 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0918 21:11:17.451259   62061 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0918 21:11:17.451325   62061 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0918 21:11:17.451498   62061 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0918 21:11:17.451562   62061 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0918 21:11:17.451765   62061 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0918 21:11:17.451845   62061 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0918 21:11:17.452027   62061 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0918 21:11:17.452041   62061 kubeadm.go:310] 
	I0918 21:11:17.452083   62061 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0918 21:11:17.452118   62061 kubeadm.go:310] 		timed out waiting for the condition
	I0918 21:11:17.452127   62061 kubeadm.go:310] 
	I0918 21:11:17.452160   62061 kubeadm.go:310] 	This error is likely caused by:
	I0918 21:11:17.452189   62061 kubeadm.go:310] 		- The kubelet is not running
	I0918 21:11:17.452282   62061 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0918 21:11:17.452289   62061 kubeadm.go:310] 
	I0918 21:11:17.452385   62061 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0918 21:11:17.452425   62061 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0918 21:11:17.452473   62061 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0918 21:11:17.452482   62061 kubeadm.go:310] 
	I0918 21:11:17.452578   62061 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0918 21:11:17.452649   62061 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0918 21:11:17.452656   62061 kubeadm.go:310] 
	I0918 21:11:17.452770   62061 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0918 21:11:17.452849   62061 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0918 21:11:17.452920   62061 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0918 21:11:17.452983   62061 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0918 21:11:17.453029   62061 kubeadm.go:310] 
	W0918 21:11:17.453100   62061 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0918 21:11:17.453139   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0918 21:11:17.917211   62061 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0918 21:11:17.933265   62061 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0918 21:11:17.943909   62061 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0918 21:11:17.943937   62061 kubeadm.go:157] found existing configuration files:
	
	I0918 21:11:17.943980   62061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0918 21:11:17.953745   62061 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0918 21:11:17.953817   62061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0918 21:11:17.964411   62061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0918 21:11:17.974601   62061 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0918 21:11:17.974661   62061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0918 21:11:17.984631   62061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0918 21:11:17.994280   62061 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0918 21:11:17.994341   62061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0918 21:11:18.004341   62061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0918 21:11:18.014066   62061 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0918 21:11:18.014127   62061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0918 21:11:18.023592   62061 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0918 21:11:18.098831   62061 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0918 21:11:18.098908   62061 kubeadm.go:310] [preflight] Running pre-flight checks
	I0918 21:11:18.254904   62061 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0918 21:11:18.255012   62061 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0918 21:11:18.255102   62061 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0918 21:11:18.444152   62061 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0918 21:11:18.445967   62061 out.go:235]   - Generating certificates and keys ...
	I0918 21:11:18.446082   62061 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0918 21:11:18.446185   62061 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0918 21:11:18.446316   62061 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0918 21:11:18.446403   62061 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0918 21:11:18.446508   62061 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0918 21:11:18.446600   62061 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0918 21:11:18.446706   62061 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0918 21:11:18.446767   62061 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0918 21:11:18.446852   62061 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0918 21:11:18.446936   62061 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0918 21:11:18.446971   62061 kubeadm.go:310] [certs] Using the existing "sa" key
	I0918 21:11:18.447025   62061 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0918 21:11:18.621414   62061 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0918 21:11:18.973691   62061 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0918 21:11:19.177522   62061 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0918 21:11:19.351312   62061 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0918 21:11:19.375169   62061 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0918 21:11:19.376274   62061 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0918 21:11:19.376351   62061 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0918 21:11:19.505030   62061 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0918 21:11:19.507065   62061 out.go:235]   - Booting up control plane ...
	I0918 21:11:19.507197   62061 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0918 21:11:19.519523   62061 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0918 21:11:19.520747   62061 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0918 21:11:19.522723   62061 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0918 21:11:19.524851   62061 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0918 21:11:59.527084   62061 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0918 21:11:59.527421   62061 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0918 21:11:59.527674   62061 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0918 21:12:04.528288   62061 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0918 21:12:04.528530   62061 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0918 21:12:14.529180   62061 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0918 21:12:14.529367   62061 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0918 21:12:34.530066   62061 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0918 21:12:34.530335   62061 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0918 21:13:14.529030   62061 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0918 21:13:14.529305   62061 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0918 21:13:14.529326   62061 kubeadm.go:310] 
	I0918 21:13:14.529374   62061 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0918 21:13:14.529447   62061 kubeadm.go:310] 		timed out waiting for the condition
	I0918 21:13:14.529466   62061 kubeadm.go:310] 
	I0918 21:13:14.529521   62061 kubeadm.go:310] 	This error is likely caused by:
	I0918 21:13:14.529569   62061 kubeadm.go:310] 		- The kubelet is not running
	I0918 21:13:14.529716   62061 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0918 21:13:14.529726   62061 kubeadm.go:310] 
	I0918 21:13:14.529866   62061 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0918 21:13:14.529917   62061 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0918 21:13:14.529967   62061 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0918 21:13:14.529976   62061 kubeadm.go:310] 
	I0918 21:13:14.530120   62061 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0918 21:13:14.530220   62061 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0918 21:13:14.530230   62061 kubeadm.go:310] 
	I0918 21:13:14.530352   62061 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0918 21:13:14.530480   62061 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0918 21:13:14.530579   62061 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0918 21:13:14.530678   62061 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0918 21:13:14.530688   62061 kubeadm.go:310] 
	I0918 21:13:14.531356   62061 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0918 21:13:14.531463   62061 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0918 21:13:14.531520   62061 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0918 21:13:14.531592   62061 kubeadm.go:394] duration metric: took 7m56.917405378s to StartCluster
	I0918 21:13:14.531633   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:13:14.531689   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:13:14.578851   62061 cri.go:89] found id: ""
	I0918 21:13:14.578883   62061 logs.go:276] 0 containers: []
	W0918 21:13:14.578893   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:13:14.578901   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:13:14.578960   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:13:14.627137   62061 cri.go:89] found id: ""
	I0918 21:13:14.627168   62061 logs.go:276] 0 containers: []
	W0918 21:13:14.627179   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:13:14.627187   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:13:14.627245   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:13:14.670678   62061 cri.go:89] found id: ""
	I0918 21:13:14.670707   62061 logs.go:276] 0 containers: []
	W0918 21:13:14.670717   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:13:14.670724   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:13:14.670788   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:13:14.709610   62061 cri.go:89] found id: ""
	I0918 21:13:14.709641   62061 logs.go:276] 0 containers: []
	W0918 21:13:14.709651   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:13:14.709658   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:13:14.709722   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:13:14.743492   62061 cri.go:89] found id: ""
	I0918 21:13:14.743522   62061 logs.go:276] 0 containers: []
	W0918 21:13:14.743534   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:13:14.743540   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:13:14.743601   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:13:14.777577   62061 cri.go:89] found id: ""
	I0918 21:13:14.777602   62061 logs.go:276] 0 containers: []
	W0918 21:13:14.777610   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:13:14.777616   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:13:14.777679   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:13:14.815884   62061 cri.go:89] found id: ""
	I0918 21:13:14.815913   62061 logs.go:276] 0 containers: []
	W0918 21:13:14.815922   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:13:14.815937   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:13:14.815997   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:13:14.855426   62061 cri.go:89] found id: ""
	I0918 21:13:14.855456   62061 logs.go:276] 0 containers: []
	W0918 21:13:14.855464   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:13:14.855473   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:13:14.855484   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:13:14.868812   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:13:14.868843   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:13:14.955093   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:13:14.955120   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:13:14.955151   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:13:15.064722   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:13:15.064760   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:13:15.105430   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:13:15.105466   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0918 21:13:15.157878   62061 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0918 21:13:15.157956   62061 out.go:270] * 
	W0918 21:13:15.158036   62061 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0918 21:13:15.158052   62061 out.go:270] * 
	W0918 21:13:15.158934   62061 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0918 21:13:15.161604   62061 out.go:201] 
	W0918 21:13:15.162606   62061 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0918 21:13:15.162664   62061 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0918 21:13:15.162685   62061 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0918 21:13:15.163831   62061 out.go:201] 
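	The kubelet's health endpoint on 127.0.0.1:10248 never answered, so kubeadm timed out in the wait-control-plane phase and minikube exited with K8S_KUBELET_NOT_RUNNING. A minimal troubleshooting sketch on the affected guest, using only the commands the output above already suggests (the profile name is a placeholder, and the systemd cgroup-driver value is the suggestion quoted from the log, not a confirmed fix):
	
		# Check whether the kubelet service started and why it may have died
		systemctl status kubelet
		journalctl -xeu kubelet
		# List any control-plane containers CRI-O managed to create
		sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
		# Retry the start with the suggested kubelet cgroup driver
		minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd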
	
	
	==> CRI-O <==
	Sep 18 21:19:06 embed-certs-255556 crio[682]: time="2024-09-18 21:19:06.780822399Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726694346780798145,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b41e447f-c052-4a89-950f-a59f2c6e59e8 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 21:19:06 embed-certs-255556 crio[682]: time="2024-09-18 21:19:06.781368654Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bd4d9863-b3ca-4c65-8763-7cea2f0f8cec name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 21:19:06 embed-certs-255556 crio[682]: time="2024-09-18 21:19:06.781422662Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bd4d9863-b3ca-4c65-8763-7cea2f0f8cec name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 21:19:06 embed-certs-255556 crio[682]: time="2024-09-18 21:19:06.781763593Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f4a5989dc9c66d57fbb948e86861e58fa98697a7dfb22e5dfd9be743a4f8f076,PodSandboxId:d60dc3b49edf0e2886ba25401f793cd90a13b06324b690a2e2e2aed54e6b5cec,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726693799224422399,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcc9b789-237c-4d92-96c6-2c23d2c401c0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41327fabd1f80a15235c7deb09523f5617af36f4d3f3fc07f8cf501beb3facde,PodSandboxId:58b4788d8b4af932a32bd7be33feaf6c4fbe17d6110be02d3528f03f2a14d2a4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726693798933407165,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ptxbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 798665e6-6f4a-4ba5-b4f9-3192d3f76f03,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cea70894e040287c08794805d7f3b8f7182a6f7a180df6de048b480f4d186f9e,PodSandboxId:6b5065325583ec41ef1067801916683a5bd9ba68c372857f3344cb58db4fe6d0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726693798724374308,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-vgmtd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1
224ebf9-1b24-413a-b779-093acfcfb61e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dd648996f6325d0d5aa564f4218f56772a3e2ce9abf525f3d27ab7a53b0bd87,PodSandboxId:6edde6859292b134b7892c92af3999f270c2df6230afae26048c490f07950493,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1726693798145760092,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m7gxh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47d72a32-7efc-4155-a890-0ddc620af6e0,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5822532d2a32a540040588fade5c42910d6050af9d0a0d6c110881be16e83ef7,PodSandboxId:29d74ececeb949d47a28def66a76fe67d1cb7e0490c5b6d18e58ee0b0009870b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726693787175401666,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-255556,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 441c7024775a4b212d948f8f8dc32239,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:311a22617dfc469fc0e4ac6731dca5c3b663f607c8d60f0a3ef05695fd7a02da,PodSandboxId:67d2b94e52293070e6401e1ffca87a1d51f4cf2051c0956233ac7506a048606a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726693787177233492,Labels:map[string]string{io.ku
bernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-255556,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11063eb2ec944bfb7d80ace46d649f35,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d38c7d6a99952a1fc0f1eacff46e5c50f67b25237192b834534c0fe3c5e5f6d,PodSandboxId:b356a54fa925344999343b4bfd13fc882c4542d02d0fed730e4d5b9ef41e5d31,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726693787108149405,Labels:map[string]string{io.kubernetes.container.name: kube-ap
iserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-255556,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd8fa81c77941b65929f2e8b548a3480,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c563aafe653941fe1518a56e35605bb08416c5d02c816472ced216f51d39ac60,PodSandboxId:c9bbb1b207ff87ed413bf6bb1739fb694e2236ba0bed6dc47adb43cf49cf3f4a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726693787069099051,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-255556,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e2f00147e16e007afce11a22718d8af,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:929b63815a268640f858a8357b598df21c4a0591ea8c6bc078fb967444ebab6e,PodSandboxId:a4721dc78266322e264b86fe3d6e211aae634cb921e4545fc6cdfcf3f36bd494,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726693498045783113,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-255556,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd8fa81c77941b65929f2e8b548a3480,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bd4d9863-b3ca-4c65-8763-7cea2f0f8cec name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 21:19:06 embed-certs-255556 crio[682]: time="2024-09-18 21:19:06.818059025Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b28ff53c-d374-470d-a1d6-c9e09851e2e0 name=/runtime.v1.RuntimeService/Version
	Sep 18 21:19:06 embed-certs-255556 crio[682]: time="2024-09-18 21:19:06.818134526Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b28ff53c-d374-470d-a1d6-c9e09851e2e0 name=/runtime.v1.RuntimeService/Version
	Sep 18 21:19:06 embed-certs-255556 crio[682]: time="2024-09-18 21:19:06.819864668Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=eab82c27-7266-463f-b4eb-6d2bcf0e401d name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 21:19:06 embed-certs-255556 crio[682]: time="2024-09-18 21:19:06.820289766Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726694346820264398,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=eab82c27-7266-463f-b4eb-6d2bcf0e401d name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 21:19:06 embed-certs-255556 crio[682]: time="2024-09-18 21:19:06.821509664Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=232d508e-aea2-4e5c-89f4-b53094bbda99 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 21:19:06 embed-certs-255556 crio[682]: time="2024-09-18 21:19:06.821609569Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=232d508e-aea2-4e5c-89f4-b53094bbda99 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 21:19:06 embed-certs-255556 crio[682]: time="2024-09-18 21:19:06.821850935Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f4a5989dc9c66d57fbb948e86861e58fa98697a7dfb22e5dfd9be743a4f8f076,PodSandboxId:d60dc3b49edf0e2886ba25401f793cd90a13b06324b690a2e2e2aed54e6b5cec,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726693799224422399,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcc9b789-237c-4d92-96c6-2c23d2c401c0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41327fabd1f80a15235c7deb09523f5617af36f4d3f3fc07f8cf501beb3facde,PodSandboxId:58b4788d8b4af932a32bd7be33feaf6c4fbe17d6110be02d3528f03f2a14d2a4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726693798933407165,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ptxbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 798665e6-6f4a-4ba5-b4f9-3192d3f76f03,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cea70894e040287c08794805d7f3b8f7182a6f7a180df6de048b480f4d186f9e,PodSandboxId:6b5065325583ec41ef1067801916683a5bd9ba68c372857f3344cb58db4fe6d0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726693798724374308,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-vgmtd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1
224ebf9-1b24-413a-b779-093acfcfb61e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dd648996f6325d0d5aa564f4218f56772a3e2ce9abf525f3d27ab7a53b0bd87,PodSandboxId:6edde6859292b134b7892c92af3999f270c2df6230afae26048c490f07950493,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1726693798145760092,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m7gxh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47d72a32-7efc-4155-a890-0ddc620af6e0,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5822532d2a32a540040588fade5c42910d6050af9d0a0d6c110881be16e83ef7,PodSandboxId:29d74ececeb949d47a28def66a76fe67d1cb7e0490c5b6d18e58ee0b0009870b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726693787175401666,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-255556,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 441c7024775a4b212d948f8f8dc32239,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:311a22617dfc469fc0e4ac6731dca5c3b663f607c8d60f0a3ef05695fd7a02da,PodSandboxId:67d2b94e52293070e6401e1ffca87a1d51f4cf2051c0956233ac7506a048606a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726693787177233492,Labels:map[string]string{io.ku
bernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-255556,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11063eb2ec944bfb7d80ace46d649f35,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d38c7d6a99952a1fc0f1eacff46e5c50f67b25237192b834534c0fe3c5e5f6d,PodSandboxId:b356a54fa925344999343b4bfd13fc882c4542d02d0fed730e4d5b9ef41e5d31,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726693787108149405,Labels:map[string]string{io.kubernetes.container.name: kube-ap
iserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-255556,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd8fa81c77941b65929f2e8b548a3480,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c563aafe653941fe1518a56e35605bb08416c5d02c816472ced216f51d39ac60,PodSandboxId:c9bbb1b207ff87ed413bf6bb1739fb694e2236ba0bed6dc47adb43cf49cf3f4a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726693787069099051,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-255556,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e2f00147e16e007afce11a22718d8af,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:929b63815a268640f858a8357b598df21c4a0591ea8c6bc078fb967444ebab6e,PodSandboxId:a4721dc78266322e264b86fe3d6e211aae634cb921e4545fc6cdfcf3f36bd494,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726693498045783113,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-255556,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd8fa81c77941b65929f2e8b548a3480,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=232d508e-aea2-4e5c-89f4-b53094bbda99 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 21:19:06 embed-certs-255556 crio[682]: time="2024-09-18 21:19:06.857669661Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8a20e4d7-0073-4ea6-997d-4e8041e164ff name=/runtime.v1.RuntimeService/Version
	Sep 18 21:19:06 embed-certs-255556 crio[682]: time="2024-09-18 21:19:06.857779270Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8a20e4d7-0073-4ea6-997d-4e8041e164ff name=/runtime.v1.RuntimeService/Version
	Sep 18 21:19:06 embed-certs-255556 crio[682]: time="2024-09-18 21:19:06.858812412Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7cde910f-045b-4d9d-97b2-b693a97caa4f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 21:19:06 embed-certs-255556 crio[682]: time="2024-09-18 21:19:06.859347031Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726694346859318433,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7cde910f-045b-4d9d-97b2-b693a97caa4f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 21:19:06 embed-certs-255556 crio[682]: time="2024-09-18 21:19:06.859918819Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3726b126-3945-4763-aced-cdaaf8ab7d91 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 21:19:06 embed-certs-255556 crio[682]: time="2024-09-18 21:19:06.859990364Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3726b126-3945-4763-aced-cdaaf8ab7d91 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 21:19:06 embed-certs-255556 crio[682]: time="2024-09-18 21:19:06.860241748Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f4a5989dc9c66d57fbb948e86861e58fa98697a7dfb22e5dfd9be743a4f8f076,PodSandboxId:d60dc3b49edf0e2886ba25401f793cd90a13b06324b690a2e2e2aed54e6b5cec,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726693799224422399,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcc9b789-237c-4d92-96c6-2c23d2c401c0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41327fabd1f80a15235c7deb09523f5617af36f4d3f3fc07f8cf501beb3facde,PodSandboxId:58b4788d8b4af932a32bd7be33feaf6c4fbe17d6110be02d3528f03f2a14d2a4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726693798933407165,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ptxbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 798665e6-6f4a-4ba5-b4f9-3192d3f76f03,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cea70894e040287c08794805d7f3b8f7182a6f7a180df6de048b480f4d186f9e,PodSandboxId:6b5065325583ec41ef1067801916683a5bd9ba68c372857f3344cb58db4fe6d0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726693798724374308,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-vgmtd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1
224ebf9-1b24-413a-b779-093acfcfb61e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dd648996f6325d0d5aa564f4218f56772a3e2ce9abf525f3d27ab7a53b0bd87,PodSandboxId:6edde6859292b134b7892c92af3999f270c2df6230afae26048c490f07950493,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1726693798145760092,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m7gxh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47d72a32-7efc-4155-a890-0ddc620af6e0,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5822532d2a32a540040588fade5c42910d6050af9d0a0d6c110881be16e83ef7,PodSandboxId:29d74ececeb949d47a28def66a76fe67d1cb7e0490c5b6d18e58ee0b0009870b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726693787175401666,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-255556,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 441c7024775a4b212d948f8f8dc32239,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:311a22617dfc469fc0e4ac6731dca5c3b663f607c8d60f0a3ef05695fd7a02da,PodSandboxId:67d2b94e52293070e6401e1ffca87a1d51f4cf2051c0956233ac7506a048606a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726693787177233492,Labels:map[string]string{io.ku
bernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-255556,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11063eb2ec944bfb7d80ace46d649f35,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d38c7d6a99952a1fc0f1eacff46e5c50f67b25237192b834534c0fe3c5e5f6d,PodSandboxId:b356a54fa925344999343b4bfd13fc882c4542d02d0fed730e4d5b9ef41e5d31,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726693787108149405,Labels:map[string]string{io.kubernetes.container.name: kube-ap
iserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-255556,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd8fa81c77941b65929f2e8b548a3480,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c563aafe653941fe1518a56e35605bb08416c5d02c816472ced216f51d39ac60,PodSandboxId:c9bbb1b207ff87ed413bf6bb1739fb694e2236ba0bed6dc47adb43cf49cf3f4a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726693787069099051,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-255556,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e2f00147e16e007afce11a22718d8af,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:929b63815a268640f858a8357b598df21c4a0591ea8c6bc078fb967444ebab6e,PodSandboxId:a4721dc78266322e264b86fe3d6e211aae634cb921e4545fc6cdfcf3f36bd494,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726693498045783113,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-255556,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd8fa81c77941b65929f2e8b548a3480,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3726b126-3945-4763-aced-cdaaf8ab7d91 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 21:19:06 embed-certs-255556 crio[682]: time="2024-09-18 21:19:06.896362750Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e144608c-4b96-4375-8f61-93445e17919b name=/runtime.v1.RuntimeService/Version
	Sep 18 21:19:06 embed-certs-255556 crio[682]: time="2024-09-18 21:19:06.896455097Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e144608c-4b96-4375-8f61-93445e17919b name=/runtime.v1.RuntimeService/Version
	Sep 18 21:19:06 embed-certs-255556 crio[682]: time="2024-09-18 21:19:06.897930264Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=82a84330-9751-46a0-9a51-14144ece6b7e name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 21:19:06 embed-certs-255556 crio[682]: time="2024-09-18 21:19:06.898364232Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726694346898338675,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=82a84330-9751-46a0-9a51-14144ece6b7e name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 21:19:06 embed-certs-255556 crio[682]: time="2024-09-18 21:19:06.898982041Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4369b2eb-1d6b-4c8b-b4b5-99a13d29b182 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 21:19:06 embed-certs-255556 crio[682]: time="2024-09-18 21:19:06.899050405Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4369b2eb-1d6b-4c8b-b4b5-99a13d29b182 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 21:19:06 embed-certs-255556 crio[682]: time="2024-09-18 21:19:06.899259194Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f4a5989dc9c66d57fbb948e86861e58fa98697a7dfb22e5dfd9be743a4f8f076,PodSandboxId:d60dc3b49edf0e2886ba25401f793cd90a13b06324b690a2e2e2aed54e6b5cec,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726693799224422399,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcc9b789-237c-4d92-96c6-2c23d2c401c0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41327fabd1f80a15235c7deb09523f5617af36f4d3f3fc07f8cf501beb3facde,PodSandboxId:58b4788d8b4af932a32bd7be33feaf6c4fbe17d6110be02d3528f03f2a14d2a4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726693798933407165,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ptxbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 798665e6-6f4a-4ba5-b4f9-3192d3f76f03,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cea70894e040287c08794805d7f3b8f7182a6f7a180df6de048b480f4d186f9e,PodSandboxId:6b5065325583ec41ef1067801916683a5bd9ba68c372857f3344cb58db4fe6d0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726693798724374308,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-vgmtd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1
224ebf9-1b24-413a-b779-093acfcfb61e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dd648996f6325d0d5aa564f4218f56772a3e2ce9abf525f3d27ab7a53b0bd87,PodSandboxId:6edde6859292b134b7892c92af3999f270c2df6230afae26048c490f07950493,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1726693798145760092,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m7gxh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47d72a32-7efc-4155-a890-0ddc620af6e0,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5822532d2a32a540040588fade5c42910d6050af9d0a0d6c110881be16e83ef7,PodSandboxId:29d74ececeb949d47a28def66a76fe67d1cb7e0490c5b6d18e58ee0b0009870b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726693787175401666,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-255556,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 441c7024775a4b212d948f8f8dc32239,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:311a22617dfc469fc0e4ac6731dca5c3b663f607c8d60f0a3ef05695fd7a02da,PodSandboxId:67d2b94e52293070e6401e1ffca87a1d51f4cf2051c0956233ac7506a048606a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726693787177233492,Labels:map[string]string{io.ku
bernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-255556,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11063eb2ec944bfb7d80ace46d649f35,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d38c7d6a99952a1fc0f1eacff46e5c50f67b25237192b834534c0fe3c5e5f6d,PodSandboxId:b356a54fa925344999343b4bfd13fc882c4542d02d0fed730e4d5b9ef41e5d31,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726693787108149405,Labels:map[string]string{io.kubernetes.container.name: kube-ap
iserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-255556,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd8fa81c77941b65929f2e8b548a3480,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c563aafe653941fe1518a56e35605bb08416c5d02c816472ced216f51d39ac60,PodSandboxId:c9bbb1b207ff87ed413bf6bb1739fb694e2236ba0bed6dc47adb43cf49cf3f4a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726693787069099051,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-255556,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e2f00147e16e007afce11a22718d8af,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:929b63815a268640f858a8357b598df21c4a0591ea8c6bc078fb967444ebab6e,PodSandboxId:a4721dc78266322e264b86fe3d6e211aae634cb921e4545fc6cdfcf3f36bd494,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726693498045783113,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-255556,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd8fa81c77941b65929f2e8b548a3480,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4369b2eb-1d6b-4c8b-b4b5-99a13d29b182 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f4a5989dc9c66       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   d60dc3b49edf0       storage-provisioner
	41327fabd1f80       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   58b4788d8b4af       coredns-7c65d6cfc9-ptxbt
	cea70894e0402       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   6b5065325583e       coredns-7c65d6cfc9-vgmtd
	5dd648996f632       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   9 minutes ago       Running             kube-proxy                0                   6edde6859292b       kube-proxy-m7gxh
	311a22617dfc4       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   9 minutes ago       Running             etcd                      2                   67d2b94e52293       etcd-embed-certs-255556
	5822532d2a32a       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   9 minutes ago       Running             kube-scheduler            2                   29d74ececeb94       kube-scheduler-embed-certs-255556
	7d38c7d6a9995       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   9 minutes ago       Running             kube-apiserver            2                   b356a54fa9253       kube-apiserver-embed-certs-255556
	c563aafe65394       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   9 minutes ago       Running             kube-controller-manager   2                   c9bbb1b207ff8       kube-controller-manager-embed-certs-255556
	929b63815a268       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   14 minutes ago      Exited              kube-apiserver            1                   a4721dc782663       kube-apiserver-embed-certs-255556
	
	
	==> coredns [41327fabd1f80a15235c7deb09523f5617af36f4d3f3fc07f8cf501beb3facde] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [cea70894e040287c08794805d7f3b8f7182a6f7a180df6de048b480f4d186f9e] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               embed-certs-255556
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-255556
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=85073601a832bd4bbda5d11fa91feafff6ec6b91
	                    minikube.k8s.io/name=embed-certs-255556
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_18T21_09_52_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 18 Sep 2024 21:09:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-255556
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 18 Sep 2024 21:19:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 18 Sep 2024 21:15:08 +0000   Wed, 18 Sep 2024 21:09:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 18 Sep 2024 21:15:08 +0000   Wed, 18 Sep 2024 21:09:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 18 Sep 2024 21:15:08 +0000   Wed, 18 Sep 2024 21:09:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 18 Sep 2024 21:15:08 +0000   Wed, 18 Sep 2024 21:09:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.21
	  Hostname:    embed-certs-255556
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0c6567145a664a07ac62659c94c4c9a6
	  System UUID:                0c656714-5a66-4a07-ac62-659c94c4c9a6
	  Boot ID:                    3a64d178-a667-4d3a-89d7-15de20adee8f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-ptxbt                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m10s
	  kube-system                 coredns-7c65d6cfc9-vgmtd                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m10s
	  kube-system                 etcd-embed-certs-255556                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m15s
	  kube-system                 kube-apiserver-embed-certs-255556             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m15s
	  kube-system                 kube-controller-manager-embed-certs-255556    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m16s
	  kube-system                 kube-proxy-m7gxh                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m10s
	  kube-system                 kube-scheduler-embed-certs-255556             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m15s
	  kube-system                 metrics-server-6867b74b74-sr6hq               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m9s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m8s                   kube-proxy       
	  Normal  Starting                 9m21s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m21s (x8 over 9m21s)  kubelet          Node embed-certs-255556 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m21s (x8 over 9m21s)  kubelet          Node embed-certs-255556 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m21s (x7 over 9m21s)  kubelet          Node embed-certs-255556 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 9m15s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m15s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m15s                  kubelet          Node embed-certs-255556 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m15s                  kubelet          Node embed-certs-255556 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m15s                  kubelet          Node embed-certs-255556 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m11s                  node-controller  Node embed-certs-255556 event: Registered Node embed-certs-255556 in Controller
	
	
	==> dmesg <==
	[  +0.051417] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040003] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.847762] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.960067] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.341489] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.967624] systemd-fstab-generator[605]: Ignoring "noauto" option for root device
	[  +0.079504] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.060715] systemd-fstab-generator[617]: Ignoring "noauto" option for root device
	[  +0.199631] systemd-fstab-generator[631]: Ignoring "noauto" option for root device
	[  +0.144016] systemd-fstab-generator[643]: Ignoring "noauto" option for root device
	[  +0.312970] systemd-fstab-generator[674]: Ignoring "noauto" option for root device
	[  +4.115484] systemd-fstab-generator[763]: Ignoring "noauto" option for root device
	[  +2.015406] systemd-fstab-generator[883]: Ignoring "noauto" option for root device
	[  +0.074405] kauditd_printk_skb: 158 callbacks suppressed
	[Sep18 21:05] kauditd_printk_skb: 69 callbacks suppressed
	[  +7.498684] kauditd_printk_skb: 85 callbacks suppressed
	[Sep18 21:09] kauditd_printk_skb: 7 callbacks suppressed
	[  +1.368389] systemd-fstab-generator[2538]: Ignoring "noauto" option for root device
	[  +4.632361] kauditd_printk_skb: 54 callbacks suppressed
	[  +1.411286] systemd-fstab-generator[2864]: Ignoring "noauto" option for root device
	[  +5.379266] systemd-fstab-generator[2988]: Ignoring "noauto" option for root device
	[  +0.096244] kauditd_printk_skb: 14 callbacks suppressed
	[Sep18 21:10] kauditd_printk_skb: 84 callbacks suppressed
	
	
	==> etcd [311a22617dfc469fc0e4ac6731dca5c3b663f607c8d60f0a3ef05695fd7a02da] <==
	{"level":"info","ts":"2024-09-18T21:09:47.514272Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-18T21:09:47.515098Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.21:2380"}
	{"level":"info","ts":"2024-09-18T21:09:47.522595Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.21:2380"}
	{"level":"info","ts":"2024-09-18T21:09:47.521315Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"3c2bdad7569acae7","initial-advertise-peer-urls":["https://192.168.39.21:2380"],"listen-peer-urls":["https://192.168.39.21:2380"],"advertise-client-urls":["https://192.168.39.21:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.21:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-18T21:09:47.521396Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-18T21:09:48.258651Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3c2bdad7569acae7 is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-18T21:09:48.258729Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3c2bdad7569acae7 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-18T21:09:48.258766Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3c2bdad7569acae7 received MsgPreVoteResp from 3c2bdad7569acae7 at term 1"}
	{"level":"info","ts":"2024-09-18T21:09:48.258781Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3c2bdad7569acae7 became candidate at term 2"}
	{"level":"info","ts":"2024-09-18T21:09:48.258787Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3c2bdad7569acae7 received MsgVoteResp from 3c2bdad7569acae7 at term 2"}
	{"level":"info","ts":"2024-09-18T21:09:48.258796Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3c2bdad7569acae7 became leader at term 2"}
	{"level":"info","ts":"2024-09-18T21:09:48.258803Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 3c2bdad7569acae7 elected leader 3c2bdad7569acae7 at term 2"}
	{"level":"info","ts":"2024-09-18T21:09:48.262183Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"3c2bdad7569acae7","local-member-attributes":"{Name:embed-certs-255556 ClientURLs:[https://192.168.39.21:2379]}","request-path":"/0/members/3c2bdad7569acae7/attributes","cluster-id":"f019a0e2d3e7d785","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-18T21:09:48.262313Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-18T21:09:48.262767Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-18T21:09:48.263023Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-18T21:09:48.264091Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-18T21:09:48.264930Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-18T21:09:48.265533Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-18T21:09:48.265632Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-18T21:09:48.266738Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-18T21:09:48.267673Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.21:2379"}
	{"level":"info","ts":"2024-09-18T21:09:48.275037Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"f019a0e2d3e7d785","local-member-id":"3c2bdad7569acae7","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-18T21:09:48.275147Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-18T21:09:48.275183Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> kernel <==
	 21:19:07 up 14 min,  0 users,  load average: 0.11, 0.18, 0.17
	Linux embed-certs-255556 5.10.207 #1 SMP Mon Sep 16 15:00:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [7d38c7d6a99952a1fc0f1eacff46e5c50f67b25237192b834534c0fe3c5e5f6d] <==
	E0918 21:14:50.791681       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0918 21:14:50.791721       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0918 21:14:50.792814       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0918 21:14:50.792877       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0918 21:15:50.793053       1 handler_proxy.go:99] no RequestInfo found in the context
	E0918 21:15:50.793153       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0918 21:15:50.793229       1 handler_proxy.go:99] no RequestInfo found in the context
	E0918 21:15:50.793248       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0918 21:15:50.794276       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0918 21:15:50.794372       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0918 21:17:50.795593       1 handler_proxy.go:99] no RequestInfo found in the context
	E0918 21:17:50.795748       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0918 21:17:50.795594       1 handler_proxy.go:99] no RequestInfo found in the context
	E0918 21:17:50.795833       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0918 21:17:50.797030       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0918 21:17:50.797066       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [929b63815a268640f858a8357b598df21c4a0591ea8c6bc078fb967444ebab6e] <==
	W0918 21:09:42.556237       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0918 21:09:42.598071       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0918 21:09:42.651823       1 logging.go:55] [core] [Channel #208 SubChannel #209]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0918 21:09:43.003229       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0918 21:09:43.069606       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0918 21:09:43.090989       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0918 21:09:43.196634       1 logging.go:55] [core] [Channel #10 SubChannel #11]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0918 21:09:43.196634       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0918 21:09:43.287062       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0918 21:09:43.381834       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0918 21:09:43.492947       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0918 21:09:43.509427       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0918 21:09:43.603296       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0918 21:09:43.603408       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0918 21:09:43.837782       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0918 21:09:43.881910       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0918 21:09:43.967190       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0918 21:09:44.017876       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0918 21:09:44.059947       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0918 21:09:44.071840       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0918 21:09:44.120081       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0918 21:09:44.185540       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0918 21:09:44.191174       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0918 21:09:44.204160       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0918 21:09:44.349637       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [c563aafe653941fe1518a56e35605bb08416c5d02c816472ced216f51d39ac60] <==
	E0918 21:13:56.757262       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0918 21:13:57.200815       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0918 21:14:26.764152       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0918 21:14:27.208687       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0918 21:14:56.771088       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0918 21:14:57.217157       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0918 21:15:08.224037       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="embed-certs-255556"
	E0918 21:15:26.777280       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0918 21:15:27.225388       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0918 21:15:56.264370       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="205.606µs"
	E0918 21:15:56.784281       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0918 21:15:57.233829       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0918 21:16:08.262184       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="80.643µs"
	E0918 21:16:26.790246       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0918 21:16:27.242454       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0918 21:16:56.797096       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0918 21:16:57.250045       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0918 21:17:26.804159       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0918 21:17:27.257318       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0918 21:17:56.810629       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0918 21:17:57.268866       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0918 21:18:26.817077       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0918 21:18:27.276536       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0918 21:18:56.825287       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0918 21:18:57.284641       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [5dd648996f6325d0d5aa564f4218f56772a3e2ce9abf525f3d27ab7a53b0bd87] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0918 21:09:58.657292       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0918 21:09:58.729941       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.21"]
	E0918 21:09:58.730021       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0918 21:09:58.953284       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0918 21:09:58.953315       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0918 21:09:58.953337       1 server_linux.go:169] "Using iptables Proxier"
	I0918 21:09:59.044362       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0918 21:09:59.044719       1 server.go:483] "Version info" version="v1.31.1"
	I0918 21:09:59.044733       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0918 21:09:59.107804       1 config.go:199] "Starting service config controller"
	I0918 21:09:59.107862       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0918 21:09:59.107908       1 config.go:105] "Starting endpoint slice config controller"
	I0918 21:09:59.107924       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0918 21:09:59.108982       1 config.go:328] "Starting node config controller"
	I0918 21:09:59.109011       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0918 21:09:59.208681       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0918 21:09:59.208821       1 shared_informer.go:320] Caches are synced for service config
	I0918 21:09:59.215216       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [5822532d2a32a540040588fade5c42910d6050af9d0a0d6c110881be16e83ef7] <==
	W0918 21:09:49.805597       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0918 21:09:49.805635       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0918 21:09:49.805701       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0918 21:09:49.805728       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0918 21:09:49.805797       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0918 21:09:49.805822       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0918 21:09:49.805918       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0918 21:09:49.805956       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0918 21:09:49.807958       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0918 21:09:49.807992       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0918 21:09:49.808055       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0918 21:09:49.808101       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0918 21:09:49.808240       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0918 21:09:49.808277       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0918 21:09:50.624720       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0918 21:09:50.624761       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0918 21:09:50.753190       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0918 21:09:50.753366       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0918 21:09:50.785940       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0918 21:09:50.786073       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0918 21:09:51.032872       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0918 21:09:51.032985       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0918 21:09:51.035430       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0918 21:09:51.035473       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0918 21:09:52.395062       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 18 21:17:52 embed-certs-255556 kubelet[2871]: E0918 21:17:52.410157    2871 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726694272409648260,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 21:18:00 embed-certs-255556 kubelet[2871]: E0918 21:18:00.244638    2871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-sr6hq" podUID="8867f8fa-687b-4105-8ace-18af50195726"
	Sep 18 21:18:02 embed-certs-255556 kubelet[2871]: E0918 21:18:02.412367    2871 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726694282411905951,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 21:18:02 embed-certs-255556 kubelet[2871]: E0918 21:18:02.412706    2871 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726694282411905951,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 21:18:12 embed-certs-255556 kubelet[2871]: E0918 21:18:12.414659    2871 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726694292414264526,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 21:18:12 embed-certs-255556 kubelet[2871]: E0918 21:18:12.415099    2871 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726694292414264526,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 21:18:14 embed-certs-255556 kubelet[2871]: E0918 21:18:14.244766    2871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-sr6hq" podUID="8867f8fa-687b-4105-8ace-18af50195726"
	Sep 18 21:18:22 embed-certs-255556 kubelet[2871]: E0918 21:18:22.418298    2871 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726694302417913994,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 21:18:22 embed-certs-255556 kubelet[2871]: E0918 21:18:22.418668    2871 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726694302417913994,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 21:18:25 embed-certs-255556 kubelet[2871]: E0918 21:18:25.245007    2871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-sr6hq" podUID="8867f8fa-687b-4105-8ace-18af50195726"
	Sep 18 21:18:32 embed-certs-255556 kubelet[2871]: E0918 21:18:32.420640    2871 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726694312419972936,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 21:18:32 embed-certs-255556 kubelet[2871]: E0918 21:18:32.420996    2871 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726694312419972936,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 21:18:39 embed-certs-255556 kubelet[2871]: E0918 21:18:39.245176    2871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-sr6hq" podUID="8867f8fa-687b-4105-8ace-18af50195726"
	Sep 18 21:18:42 embed-certs-255556 kubelet[2871]: E0918 21:18:42.423639    2871 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726694322423128430,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 21:18:42 embed-certs-255556 kubelet[2871]: E0918 21:18:42.423724    2871 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726694322423128430,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 21:18:52 embed-certs-255556 kubelet[2871]: E0918 21:18:52.264351    2871 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 18 21:18:52 embed-certs-255556 kubelet[2871]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 18 21:18:52 embed-certs-255556 kubelet[2871]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 18 21:18:52 embed-certs-255556 kubelet[2871]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 18 21:18:52 embed-certs-255556 kubelet[2871]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 18 21:18:52 embed-certs-255556 kubelet[2871]: E0918 21:18:52.425465    2871 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726694332425026816,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 21:18:52 embed-certs-255556 kubelet[2871]: E0918 21:18:52.425604    2871 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726694332425026816,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 21:18:53 embed-certs-255556 kubelet[2871]: E0918 21:18:53.245145    2871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-sr6hq" podUID="8867f8fa-687b-4105-8ace-18af50195726"
	Sep 18 21:19:02 embed-certs-255556 kubelet[2871]: E0918 21:19:02.427290    2871 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726694342426892679,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 21:19:02 embed-certs-255556 kubelet[2871]: E0918 21:19:02.427323    2871 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726694342426892679,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [f4a5989dc9c66d57fbb948e86861e58fa98697a7dfb22e5dfd9be743a4f8f076] <==
	I0918 21:09:59.391359       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0918 21:09:59.408021       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0918 21:09:59.409228       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0918 21:09:59.426866       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0918 21:09:59.449079       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-255556_372ad411-ea08-4670-bb13-bfc2f465df48!
	I0918 21:09:59.443973       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3d7ac12d-e6c5-470b-9559-125d6ebd6917", APIVersion:"v1", ResourceVersion:"430", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-255556_372ad411-ea08-4670-bb13-bfc2f465df48 became leader
	I0918 21:09:59.550313       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-255556_372ad411-ea08-4670-bb13-bfc2f465df48!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-255556 -n embed-certs-255556
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-255556 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-sr6hq
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-255556 describe pod metrics-server-6867b74b74-sr6hq
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-255556 describe pod metrics-server-6867b74b74-sr6hq: exit status 1 (62.001508ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-sr6hq" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-255556 describe pod metrics-server-6867b74b74-sr6hq: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.24s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.13s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0918 21:11:12.175349   14878 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/addons-815929/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:11:24.360155   14878 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/functional-790989/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-331658 -n no-preload-331658
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-09-18 21:19:15.663118976 +0000 UTC m=+6078.433460212
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-331658 -n no-preload-331658
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-331658 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-331658 logs -n 25: (2.073285368s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p cert-options-347585                                 | cert-options-347585          | jenkins | v1.34.0 | 18 Sep 24 20:55 UTC | 18 Sep 24 20:55 UTC |
	| start   | -p no-preload-331658                                   | no-preload-331658            | jenkins | v1.34.0 | 18 Sep 24 20:55 UTC | 18 Sep 24 20:56 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-878094                           | kubernetes-upgrade-878094    | jenkins | v1.34.0 | 18 Sep 24 20:55 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-878094                           | kubernetes-upgrade-878094    | jenkins | v1.34.0 | 18 Sep 24 20:55 UTC | 18 Sep 24 20:56 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p pause-543700                                        | pause-543700                 | jenkins | v1.34.0 | 18 Sep 24 20:56 UTC | 18 Sep 24 20:56 UTC |
	| start   | -p embed-certs-255556                                  | embed-certs-255556           | jenkins | v1.34.0 | 18 Sep 24 20:56 UTC | 18 Sep 24 20:57 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-878094                           | kubernetes-upgrade-878094    | jenkins | v1.34.0 | 18 Sep 24 20:56 UTC | 18 Sep 24 20:56 UTC |
	| delete  | -p                                                     | disable-driver-mounts-335923 | jenkins | v1.34.0 | 18 Sep 24 20:56 UTC | 18 Sep 24 20:56 UTC |
	|         | disable-driver-mounts-335923                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-828868 | jenkins | v1.34.0 | 18 Sep 24 20:56 UTC | 18 Sep 24 20:57 UTC |
	|         | default-k8s-diff-port-828868                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-331658             | no-preload-331658            | jenkins | v1.34.0 | 18 Sep 24 20:56 UTC | 18 Sep 24 20:57 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-331658                                   | no-preload-331658            | jenkins | v1.34.0 | 18 Sep 24 20:57 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-828868  | default-k8s-diff-port-828868 | jenkins | v1.34.0 | 18 Sep 24 20:57 UTC | 18 Sep 24 20:57 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-828868 | jenkins | v1.34.0 | 18 Sep 24 20:57 UTC |                     |
	|         | default-k8s-diff-port-828868                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-255556            | embed-certs-255556           | jenkins | v1.34.0 | 18 Sep 24 20:57 UTC | 18 Sep 24 20:57 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-255556                                  | embed-certs-255556           | jenkins | v1.34.0 | 18 Sep 24 20:57 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-740194        | old-k8s-version-740194       | jenkins | v1.34.0 | 18 Sep 24 20:59 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-331658                  | no-preload-331658            | jenkins | v1.34.0 | 18 Sep 24 20:59 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-331658                                   | no-preload-331658            | jenkins | v1.34.0 | 18 Sep 24 20:59 UTC | 18 Sep 24 21:10 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-828868       | default-k8s-diff-port-828868 | jenkins | v1.34.0 | 18 Sep 24 21:00 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-255556                 | embed-certs-255556           | jenkins | v1.34.0 | 18 Sep 24 21:00 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-828868 | jenkins | v1.34.0 | 18 Sep 24 21:00 UTC | 18 Sep 24 21:09 UTC |
	|         | default-k8s-diff-port-828868                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-255556                                  | embed-certs-255556           | jenkins | v1.34.0 | 18 Sep 24 21:00 UTC | 18 Sep 24 21:10 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-740194                              | old-k8s-version-740194       | jenkins | v1.34.0 | 18 Sep 24 21:00 UTC | 18 Sep 24 21:00 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-740194             | old-k8s-version-740194       | jenkins | v1.34.0 | 18 Sep 24 21:00 UTC | 18 Sep 24 21:00 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-740194                              | old-k8s-version-740194       | jenkins | v1.34.0 | 18 Sep 24 21:00 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/18 21:00:59
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0918 21:00:59.486726   62061 out.go:345] Setting OutFile to fd 1 ...
	I0918 21:00:59.486839   62061 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 21:00:59.486848   62061 out.go:358] Setting ErrFile to fd 2...
	I0918 21:00:59.486854   62061 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 21:00:59.487063   62061 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19667-7671/.minikube/bin
	I0918 21:00:59.487631   62061 out.go:352] Setting JSON to false
	I0918 21:00:59.488570   62061 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6203,"bootTime":1726687056,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0918 21:00:59.488665   62061 start.go:139] virtualization: kvm guest
	I0918 21:00:59.490927   62061 out.go:177] * [old-k8s-version-740194] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0918 21:00:59.492124   62061 out.go:177]   - MINIKUBE_LOCATION=19667
	I0918 21:00:59.492192   62061 notify.go:220] Checking for updates...
	I0918 21:00:59.494436   62061 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 21:00:59.495887   62061 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19667-7671/kubeconfig
	I0918 21:00:59.496929   62061 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19667-7671/.minikube
	I0918 21:00:59.498172   62061 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0918 21:00:59.499384   62061 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0918 21:00:59.500900   62061 config.go:182] Loaded profile config "old-k8s-version-740194": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0918 21:00:59.501295   62061 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:00:59.501375   62061 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:00:59.516448   62061 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45397
	I0918 21:00:59.516855   62061 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:00:59.517347   62061 main.go:141] libmachine: Using API Version  1
	I0918 21:00:59.517362   62061 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:00:59.517673   62061 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:00:59.517848   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .DriverName
	I0918 21:00:59.519750   62061 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0918 21:00:59.521126   62061 driver.go:394] Setting default libvirt URI to qemu:///system
	I0918 21:00:59.521433   62061 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:00:59.521468   62061 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:00:59.537580   62061 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39433
	I0918 21:00:59.537973   62061 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:00:59.538452   62061 main.go:141] libmachine: Using API Version  1
	I0918 21:00:59.538471   62061 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:00:59.538852   62061 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:00:59.539083   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .DriverName
	I0918 21:00:59.575494   62061 out.go:177] * Using the kvm2 driver based on existing profile
	I0918 21:00:59.576555   62061 start.go:297] selected driver: kvm2
	I0918 21:00:59.576572   62061 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-740194 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-740194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.53 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 21:00:59.576682   62061 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 21:00:59.577339   62061 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 21:00:59.577400   62061 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19667-7671/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0918 21:00:59.593287   62061 install.go:137] /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I0918 21:00:59.593810   62061 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0918 21:00:59.593855   62061 cni.go:84] Creating CNI manager for ""
	I0918 21:00:59.593911   62061 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0918 21:00:59.593972   62061 start.go:340] cluster config:
	{Name:old-k8s-version-740194 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-740194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.53 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 21:00:59.594104   62061 iso.go:125] acquiring lock: {Name:mkbcf4d84f3091567a39802cd5e773cb10b8c545 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 21:00:59.595936   62061 out.go:177] * Starting "old-k8s-version-740194" primary control-plane node in "old-k8s-version-740194" cluster
	I0918 21:00:59.932315   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:00:59.597210   62061 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0918 21:00:59.597243   62061 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0918 21:00:59.597250   62061 cache.go:56] Caching tarball of preloaded images
	I0918 21:00:59.597325   62061 preload.go:172] Found /home/jenkins/minikube-integration/19667-7671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0918 21:00:59.597338   62061 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0918 21:00:59.597439   62061 profile.go:143] Saving config to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/old-k8s-version-740194/config.json ...
	I0918 21:00:59.597658   62061 start.go:360] acquireMachinesLock for old-k8s-version-740194: {Name:mk1b0d68a0b7d9d4bc8204d5d81cb9eb77222526 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 21:01:03.004316   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:01:09.084327   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:01:12.156358   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:01:18.236353   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:01:21.308245   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:01:27.388302   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:01:30.460341   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:01:36.540285   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:01:39.612345   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:01:45.692338   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:01:48.764308   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:01:54.844344   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:01:57.916346   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:02:03.996351   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:02:07.068377   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:02:13.148269   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:02:16.220321   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:02:22.300282   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:02:25.372352   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:02:31.452275   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:02:34.524362   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:02:40.604332   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:02:43.676372   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:02:49.756305   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:02:52.828321   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:02:58.908358   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:03:01.980309   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:03:08.060301   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:03:11.132322   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:03:17.212232   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:03:20.284342   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:03:26.364312   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:03:29.436328   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:03:35.516323   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:03:38.588372   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:03:44.668300   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:03:47.740379   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:03:53.820363   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:03:56.892355   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:04:02.972312   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:04:06.044373   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:04:09.048392   61659 start.go:364] duration metric: took 3m56.738592157s to acquireMachinesLock for "default-k8s-diff-port-828868"
	I0918 21:04:09.048461   61659 start.go:96] Skipping create...Using existing machine configuration
	I0918 21:04:09.048469   61659 fix.go:54] fixHost starting: 
	I0918 21:04:09.048788   61659 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:04:09.048827   61659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:04:09.064428   61659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41181
	I0918 21:04:09.064856   61659 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:04:09.065395   61659 main.go:141] libmachine: Using API Version  1
	I0918 21:04:09.065421   61659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:04:09.065751   61659 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:04:09.065961   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .DriverName
	I0918 21:04:09.066108   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetState
	I0918 21:04:09.067874   61659 fix.go:112] recreateIfNeeded on default-k8s-diff-port-828868: state=Stopped err=<nil>
	I0918 21:04:09.067915   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .DriverName
	W0918 21:04:09.068096   61659 fix.go:138] unexpected machine state, will restart: <nil>
	I0918 21:04:09.069985   61659 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-828868" ...
	I0918 21:04:09.045944   61273 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0918 21:04:09.045978   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetMachineName
	I0918 21:04:09.046314   61273 buildroot.go:166] provisioning hostname "no-preload-331658"
	I0918 21:04:09.046350   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetMachineName
	I0918 21:04:09.046602   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHHostname
	I0918 21:04:09.048253   61273 machine.go:96] duration metric: took 4m37.423609251s to provisionDockerMachine
	I0918 21:04:09.048293   61273 fix.go:56] duration metric: took 4m37.446130108s for fixHost
	I0918 21:04:09.048301   61273 start.go:83] releasing machines lock for "no-preload-331658", held for 4m37.44629145s
	W0918 21:04:09.048329   61273 start.go:714] error starting host: provision: host is not running
	W0918 21:04:09.048451   61273 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0918 21:04:09.048465   61273 start.go:729] Will try again in 5 seconds ...
	I0918 21:04:09.071488   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .Start
	I0918 21:04:09.071699   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Ensuring networks are active...
	I0918 21:04:09.072473   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Ensuring network default is active
	I0918 21:04:09.072816   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Ensuring network mk-default-k8s-diff-port-828868 is active
	I0918 21:04:09.073204   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Getting domain xml...
	I0918 21:04:09.073977   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Creating domain...
	I0918 21:04:10.321507   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Waiting to get IP...
	I0918 21:04:10.322390   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:10.322863   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | unable to find current IP address of domain default-k8s-diff-port-828868 in network mk-default-k8s-diff-port-828868
	I0918 21:04:10.322907   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | I0918 21:04:10.322821   62722 retry.go:31] will retry after 272.805092ms: waiting for machine to come up
	I0918 21:04:10.597434   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:10.597861   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | unable to find current IP address of domain default-k8s-diff-port-828868 in network mk-default-k8s-diff-port-828868
	I0918 21:04:10.597888   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | I0918 21:04:10.597825   62722 retry.go:31] will retry after 302.631333ms: waiting for machine to come up
	I0918 21:04:10.902544   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:10.903002   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | unable to find current IP address of domain default-k8s-diff-port-828868 in network mk-default-k8s-diff-port-828868
	I0918 21:04:10.903030   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | I0918 21:04:10.902943   62722 retry.go:31] will retry after 325.769954ms: waiting for machine to come up
	I0918 21:04:11.230182   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:11.230602   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | unable to find current IP address of domain default-k8s-diff-port-828868 in network mk-default-k8s-diff-port-828868
	I0918 21:04:11.230652   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | I0918 21:04:11.230557   62722 retry.go:31] will retry after 396.395153ms: waiting for machine to come up
	I0918 21:04:11.628135   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:11.628520   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | unable to find current IP address of domain default-k8s-diff-port-828868 in network mk-default-k8s-diff-port-828868
	I0918 21:04:11.628544   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | I0918 21:04:11.628495   62722 retry.go:31] will retry after 578.74167ms: waiting for machine to come up
	I0918 21:04:14.050009   61273 start.go:360] acquireMachinesLock for no-preload-331658: {Name:mk1b0d68a0b7d9d4bc8204d5d81cb9eb77222526 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 21:04:12.209844   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:12.209911   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | unable to find current IP address of domain default-k8s-diff-port-828868 in network mk-default-k8s-diff-port-828868
	I0918 21:04:12.209937   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | I0918 21:04:12.209841   62722 retry.go:31] will retry after 779.0434ms: waiting for machine to come up
	I0918 21:04:12.990688   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:12.991106   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | unable to find current IP address of domain default-k8s-diff-port-828868 in network mk-default-k8s-diff-port-828868
	I0918 21:04:12.991141   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | I0918 21:04:12.991045   62722 retry.go:31] will retry after 772.165771ms: waiting for machine to come up
	I0918 21:04:13.764946   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:13.765460   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | unable to find current IP address of domain default-k8s-diff-port-828868 in network mk-default-k8s-diff-port-828868
	I0918 21:04:13.765493   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | I0918 21:04:13.765404   62722 retry.go:31] will retry after 1.017078101s: waiting for machine to come up
	I0918 21:04:14.783920   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:14.784320   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | unable to find current IP address of domain default-k8s-diff-port-828868 in network mk-default-k8s-diff-port-828868
	I0918 21:04:14.784348   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | I0918 21:04:14.784276   62722 retry.go:31] will retry after 1.775982574s: waiting for machine to come up
	I0918 21:04:16.562037   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:16.562413   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | unable to find current IP address of domain default-k8s-diff-port-828868 in network mk-default-k8s-diff-port-828868
	I0918 21:04:16.562451   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | I0918 21:04:16.562369   62722 retry.go:31] will retry after 1.609664062s: waiting for machine to come up
	I0918 21:04:18.174149   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:18.174759   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | unable to find current IP address of domain default-k8s-diff-port-828868 in network mk-default-k8s-diff-port-828868
	I0918 21:04:18.174788   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | I0918 21:04:18.174710   62722 retry.go:31] will retry after 2.26359536s: waiting for machine to come up
	I0918 21:04:20.440599   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:20.441000   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | unable to find current IP address of domain default-k8s-diff-port-828868 in network mk-default-k8s-diff-port-828868
	I0918 21:04:20.441027   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | I0918 21:04:20.440955   62722 retry.go:31] will retry after 3.387446315s: waiting for machine to come up
	I0918 21:04:23.832623   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:23.833134   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | unable to find current IP address of domain default-k8s-diff-port-828868 in network mk-default-k8s-diff-port-828868
	I0918 21:04:23.833162   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | I0918 21:04:23.833097   62722 retry.go:31] will retry after 3.312983418s: waiting for machine to come up
	I0918 21:04:27.150091   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:27.150658   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Found IP for machine: 192.168.50.109
	I0918 21:04:27.150682   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has current primary IP address 192.168.50.109 and MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:27.150703   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Reserving static IP address...
	I0918 21:04:27.151248   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-828868", mac: "52:54:00:c0:39:06", ip: "192.168.50.109"} in network mk-default-k8s-diff-port-828868: {Iface:virbr4 ExpiryTime:2024-09-18 22:04:19 +0000 UTC Type:0 Mac:52:54:00:c0:39:06 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:default-k8s-diff-port-828868 Clientid:01:52:54:00:c0:39:06}
	I0918 21:04:27.151276   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Reserved static IP address: 192.168.50.109
	I0918 21:04:27.151297   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | skip adding static IP to network mk-default-k8s-diff-port-828868 - found existing host DHCP lease matching {name: "default-k8s-diff-port-828868", mac: "52:54:00:c0:39:06", ip: "192.168.50.109"}
	I0918 21:04:27.151317   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | Getting to WaitForSSH function...
	I0918 21:04:27.151330   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Waiting for SSH to be available...
	I0918 21:04:27.153633   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:27.154006   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:39:06", ip: ""} in network mk-default-k8s-diff-port-828868: {Iface:virbr4 ExpiryTime:2024-09-18 22:04:19 +0000 UTC Type:0 Mac:52:54:00:c0:39:06 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:default-k8s-diff-port-828868 Clientid:01:52:54:00:c0:39:06}
	I0918 21:04:27.154036   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined IP address 192.168.50.109 and MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:27.154127   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | Using SSH client type: external
	I0918 21:04:27.154153   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | Using SSH private key: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/default-k8s-diff-port-828868/id_rsa (-rw-------)
	I0918 21:04:27.154196   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.109 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19667-7671/.minikube/machines/default-k8s-diff-port-828868/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0918 21:04:27.154211   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | About to run SSH command:
	I0918 21:04:27.154225   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | exit 0
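	The probe just logged ("About to run SSH command: exit 0") is how the driver decides the guest is reachable: keep dialing port 22 with the machine's private key and run a no-op command until one attempt succeeds. A minimal standalone sketch of that wait loop in Go (using golang.org/x/crypto/ssh) follows; the address and key path are copied from the log above, while the 60-attempt, 2-second cadence is an assumption for the sketch, not minikube's actual schedule.

package main

import (
	"fmt"
	"log"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// waitForSSH keeps running a no-op command over SSH until the guest answers,
// mirroring the "exit 0" probe in the log. Hypothetical helper for this sketch.
func waitForSSH(addr, keyPath string) error {
	pemBytes, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(pemBytes)
	if err != nil {
		return err
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // same spirit as StrictHostKeyChecking=no above
		Timeout:         10 * time.Second,
	}
	for attempt := 0; attempt < 60; attempt++ {
		if client, err := ssh.Dial("tcp", addr, cfg); err == nil {
			sess, serr := client.NewSession()
			if serr == nil {
				runErr := sess.Run("exit 0")
				sess.Close()
				client.Close()
				if runErr == nil {
					return nil // guest is up and accepting commands
				}
			} else {
				client.Close()
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("SSH never became available on %s", addr)
}

func main() {
	key := "/home/jenkins/minikube-integration/19667-7671/.minikube/machines/default-k8s-diff-port-828868/id_rsa"
	if err := waitForSSH("192.168.50.109:22", key); err != nil {
		log.Fatal(err)
	}
}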
	I0918 21:04:28.308967   61740 start.go:364] duration metric: took 4m9.856658805s to acquireMachinesLock for "embed-certs-255556"
	I0918 21:04:28.309052   61740 start.go:96] Skipping create...Using existing machine configuration
	I0918 21:04:28.309066   61740 fix.go:54] fixHost starting: 
	I0918 21:04:28.309548   61740 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:04:28.309609   61740 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:04:28.326972   61740 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41115
	I0918 21:04:28.327375   61740 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:04:28.327941   61740 main.go:141] libmachine: Using API Version  1
	I0918 21:04:28.327974   61740 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:04:28.328300   61740 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:04:28.328538   61740 main.go:141] libmachine: (embed-certs-255556) Calling .DriverName
	I0918 21:04:28.328676   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetState
	I0918 21:04:28.330265   61740 fix.go:112] recreateIfNeeded on embed-certs-255556: state=Stopped err=<nil>
	I0918 21:04:28.330312   61740 main.go:141] libmachine: (embed-certs-255556) Calling .DriverName
	W0918 21:04:28.330482   61740 fix.go:138] unexpected machine state, will restart: <nil>
	I0918 21:04:28.332680   61740 out.go:177] * Restarting existing kvm2 VM for "embed-certs-255556" ...
	I0918 21:04:28.333692   61740 main.go:141] libmachine: (embed-certs-255556) Calling .Start
	I0918 21:04:28.333865   61740 main.go:141] libmachine: (embed-certs-255556) Ensuring networks are active...
	I0918 21:04:28.334536   61740 main.go:141] libmachine: (embed-certs-255556) Ensuring network default is active
	I0918 21:04:28.334987   61740 main.go:141] libmachine: (embed-certs-255556) Ensuring network mk-embed-certs-255556 is active
	I0918 21:04:28.335491   61740 main.go:141] libmachine: (embed-certs-255556) Getting domain xml...
	I0918 21:04:28.336206   61740 main.go:141] libmachine: (embed-certs-255556) Creating domain...
	I0918 21:04:27.280056   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | SSH cmd err, output: <nil>: 
	I0918 21:04:27.280448   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetConfigRaw
	I0918 21:04:27.281097   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetIP
	I0918 21:04:27.283491   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:27.283933   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:39:06", ip: ""} in network mk-default-k8s-diff-port-828868: {Iface:virbr4 ExpiryTime:2024-09-18 22:04:19 +0000 UTC Type:0 Mac:52:54:00:c0:39:06 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:default-k8s-diff-port-828868 Clientid:01:52:54:00:c0:39:06}
	I0918 21:04:27.283968   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined IP address 192.168.50.109 and MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:27.284242   61659 profile.go:143] Saving config to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/default-k8s-diff-port-828868/config.json ...
	I0918 21:04:27.284483   61659 machine.go:93] provisionDockerMachine start ...
	I0918 21:04:27.284527   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .DriverName
	I0918 21:04:27.284740   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHHostname
	I0918 21:04:27.287263   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:27.287640   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:39:06", ip: ""} in network mk-default-k8s-diff-port-828868: {Iface:virbr4 ExpiryTime:2024-09-18 22:04:19 +0000 UTC Type:0 Mac:52:54:00:c0:39:06 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:default-k8s-diff-port-828868 Clientid:01:52:54:00:c0:39:06}
	I0918 21:04:27.287671   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined IP address 192.168.50.109 and MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:27.287831   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHPort
	I0918 21:04:27.288053   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHKeyPath
	I0918 21:04:27.288230   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHKeyPath
	I0918 21:04:27.288348   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHUsername
	I0918 21:04:27.288497   61659 main.go:141] libmachine: Using SSH client type: native
	I0918 21:04:27.288759   61659 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.109 22 <nil> <nil>}
	I0918 21:04:27.288774   61659 main.go:141] libmachine: About to run SSH command:
	hostname
	I0918 21:04:27.396110   61659 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0918 21:04:27.396140   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetMachineName
	I0918 21:04:27.396439   61659 buildroot.go:166] provisioning hostname "default-k8s-diff-port-828868"
	I0918 21:04:27.396472   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetMachineName
	I0918 21:04:27.396655   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHHostname
	I0918 21:04:27.399285   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:27.399629   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:39:06", ip: ""} in network mk-default-k8s-diff-port-828868: {Iface:virbr4 ExpiryTime:2024-09-18 22:04:19 +0000 UTC Type:0 Mac:52:54:00:c0:39:06 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:default-k8s-diff-port-828868 Clientid:01:52:54:00:c0:39:06}
	I0918 21:04:27.399670   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined IP address 192.168.50.109 and MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:27.399746   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHPort
	I0918 21:04:27.399947   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHKeyPath
	I0918 21:04:27.400158   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHKeyPath
	I0918 21:04:27.400295   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHUsername
	I0918 21:04:27.400476   61659 main.go:141] libmachine: Using SSH client type: native
	I0918 21:04:27.400701   61659 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.109 22 <nil> <nil>}
	I0918 21:04:27.400714   61659 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-828868 && echo "default-k8s-diff-port-828868" | sudo tee /etc/hostname
	I0918 21:04:27.518553   61659 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-828868
	
	I0918 21:04:27.518579   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHHostname
	I0918 21:04:27.521274   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:27.521714   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:39:06", ip: ""} in network mk-default-k8s-diff-port-828868: {Iface:virbr4 ExpiryTime:2024-09-18 22:04:19 +0000 UTC Type:0 Mac:52:54:00:c0:39:06 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:default-k8s-diff-port-828868 Clientid:01:52:54:00:c0:39:06}
	I0918 21:04:27.521746   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined IP address 192.168.50.109 and MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:27.521918   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHPort
	I0918 21:04:27.522085   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHKeyPath
	I0918 21:04:27.522298   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHKeyPath
	I0918 21:04:27.522469   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHUsername
	I0918 21:04:27.522689   61659 main.go:141] libmachine: Using SSH client type: native
	I0918 21:04:27.522867   61659 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.109 22 <nil> <nil>}
	I0918 21:04:27.522885   61659 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-828868' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-828868/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-828868' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0918 21:04:27.636264   61659 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0918 21:04:27.636296   61659 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19667-7671/.minikube CaCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19667-7671/.minikube}
	I0918 21:04:27.636325   61659 buildroot.go:174] setting up certificates
	I0918 21:04:27.636335   61659 provision.go:84] configureAuth start
	I0918 21:04:27.636343   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetMachineName
	I0918 21:04:27.636629   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetIP
	I0918 21:04:27.639186   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:27.639622   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:39:06", ip: ""} in network mk-default-k8s-diff-port-828868: {Iface:virbr4 ExpiryTime:2024-09-18 22:04:19 +0000 UTC Type:0 Mac:52:54:00:c0:39:06 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:default-k8s-diff-port-828868 Clientid:01:52:54:00:c0:39:06}
	I0918 21:04:27.639646   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined IP address 192.168.50.109 and MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:27.639858   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHHostname
	I0918 21:04:27.642158   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:27.642421   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:39:06", ip: ""} in network mk-default-k8s-diff-port-828868: {Iface:virbr4 ExpiryTime:2024-09-18 22:04:19 +0000 UTC Type:0 Mac:52:54:00:c0:39:06 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:default-k8s-diff-port-828868 Clientid:01:52:54:00:c0:39:06}
	I0918 21:04:27.642448   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined IP address 192.168.50.109 and MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:27.642626   61659 provision.go:143] copyHostCerts
	I0918 21:04:27.642706   61659 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem, removing ...
	I0918 21:04:27.642869   61659 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem
	I0918 21:04:27.642966   61659 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem (1082 bytes)
	I0918 21:04:27.643099   61659 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem, removing ...
	I0918 21:04:27.643111   61659 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem
	I0918 21:04:27.643150   61659 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem (1123 bytes)
	I0918 21:04:27.643270   61659 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem, removing ...
	I0918 21:04:27.643280   61659 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem
	I0918 21:04:27.643320   61659 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem (1679 bytes)
	I0918 21:04:27.643387   61659 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-828868 san=[127.0.0.1 192.168.50.109 default-k8s-diff-port-828868 localhost minikube]
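	provision.go above generates the machine's server certificate with a fixed SAN set (127.0.0.1, 192.168.50.109, the machine name, localhost, minikube) signed by the ca.pem/ca-key.pem pair. A rough Go sketch of issuing a certificate with that SAN set is below; it self-signs for brevity (minikube signs with its CA instead), and the 26280h validity simply mirrors the CertExpiration value from the cluster config.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-828868"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"default-k8s-diff-port-828868", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.109")},
	}
	// Self-signed here; the real flow uses the minikube CA as the parent certificate.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}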
	I0918 21:04:27.693367   61659 provision.go:177] copyRemoteCerts
	I0918 21:04:27.693426   61659 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0918 21:04:27.693463   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHHostname
	I0918 21:04:27.696331   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:27.696661   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:39:06", ip: ""} in network mk-default-k8s-diff-port-828868: {Iface:virbr4 ExpiryTime:2024-09-18 22:04:19 +0000 UTC Type:0 Mac:52:54:00:c0:39:06 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:default-k8s-diff-port-828868 Clientid:01:52:54:00:c0:39:06}
	I0918 21:04:27.696693   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined IP address 192.168.50.109 and MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:27.696835   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHPort
	I0918 21:04:27.697028   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHKeyPath
	I0918 21:04:27.697212   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHUsername
	I0918 21:04:27.697317   61659 sshutil.go:53] new ssh client: &{IP:192.168.50.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/default-k8s-diff-port-828868/id_rsa Username:docker}
	I0918 21:04:27.777944   61659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0918 21:04:27.801476   61659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0918 21:04:27.825025   61659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0918 21:04:27.848244   61659 provision.go:87] duration metric: took 211.897185ms to configureAuth
	I0918 21:04:27.848274   61659 buildroot.go:189] setting minikube options for container-runtime
	I0918 21:04:27.848434   61659 config.go:182] Loaded profile config "default-k8s-diff-port-828868": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0918 21:04:27.848513   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHHostname
	I0918 21:04:27.851119   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:27.851472   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:39:06", ip: ""} in network mk-default-k8s-diff-port-828868: {Iface:virbr4 ExpiryTime:2024-09-18 22:04:19 +0000 UTC Type:0 Mac:52:54:00:c0:39:06 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:default-k8s-diff-port-828868 Clientid:01:52:54:00:c0:39:06}
	I0918 21:04:27.851509   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined IP address 192.168.50.109 and MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:27.851788   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHPort
	I0918 21:04:27.852007   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHKeyPath
	I0918 21:04:27.852216   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHKeyPath
	I0918 21:04:27.852420   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHUsername
	I0918 21:04:27.852670   61659 main.go:141] libmachine: Using SSH client type: native
	I0918 21:04:27.852852   61659 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.109 22 <nil> <nil>}
	I0918 21:04:27.852870   61659 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0918 21:04:28.072808   61659 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0918 21:04:28.072843   61659 machine.go:96] duration metric: took 788.346091ms to provisionDockerMachine
	I0918 21:04:28.072858   61659 start.go:293] postStartSetup for "default-k8s-diff-port-828868" (driver="kvm2")
	I0918 21:04:28.072874   61659 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0918 21:04:28.072898   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .DriverName
	I0918 21:04:28.073246   61659 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0918 21:04:28.073287   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHHostname
	I0918 21:04:28.075998   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:28.076389   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:39:06", ip: ""} in network mk-default-k8s-diff-port-828868: {Iface:virbr4 ExpiryTime:2024-09-18 22:04:19 +0000 UTC Type:0 Mac:52:54:00:c0:39:06 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:default-k8s-diff-port-828868 Clientid:01:52:54:00:c0:39:06}
	I0918 21:04:28.076416   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined IP address 192.168.50.109 and MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:28.076561   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHPort
	I0918 21:04:28.076780   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHKeyPath
	I0918 21:04:28.076939   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHUsername
	I0918 21:04:28.077063   61659 sshutil.go:53] new ssh client: &{IP:192.168.50.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/default-k8s-diff-port-828868/id_rsa Username:docker}
	I0918 21:04:28.158946   61659 ssh_runner.go:195] Run: cat /etc/os-release
	I0918 21:04:28.163200   61659 info.go:137] Remote host: Buildroot 2023.02.9
	I0918 21:04:28.163231   61659 filesync.go:126] Scanning /home/jenkins/minikube-integration/19667-7671/.minikube/addons for local assets ...
	I0918 21:04:28.163290   61659 filesync.go:126] Scanning /home/jenkins/minikube-integration/19667-7671/.minikube/files for local assets ...
	I0918 21:04:28.163368   61659 filesync.go:149] local asset: /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem -> 148782.pem in /etc/ssl/certs
	I0918 21:04:28.163464   61659 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0918 21:04:28.172987   61659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem --> /etc/ssl/certs/148782.pem (1708 bytes)
	I0918 21:04:28.198647   61659 start.go:296] duration metric: took 125.77566ms for postStartSetup
	I0918 21:04:28.198685   61659 fix.go:56] duration metric: took 19.150217303s for fixHost
	I0918 21:04:28.198704   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHHostname
	I0918 21:04:28.201549   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:28.201904   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:39:06", ip: ""} in network mk-default-k8s-diff-port-828868: {Iface:virbr4 ExpiryTime:2024-09-18 22:04:19 +0000 UTC Type:0 Mac:52:54:00:c0:39:06 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:default-k8s-diff-port-828868 Clientid:01:52:54:00:c0:39:06}
	I0918 21:04:28.201934   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined IP address 192.168.50.109 and MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:28.202093   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHPort
	I0918 21:04:28.202278   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHKeyPath
	I0918 21:04:28.202435   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHKeyPath
	I0918 21:04:28.202588   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHUsername
	I0918 21:04:28.202714   61659 main.go:141] libmachine: Using SSH client type: native
	I0918 21:04:28.202871   61659 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.109 22 <nil> <nil>}
	I0918 21:04:28.202879   61659 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0918 21:04:28.308752   61659 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726693468.285343658
	
	I0918 21:04:28.308778   61659 fix.go:216] guest clock: 1726693468.285343658
	I0918 21:04:28.308786   61659 fix.go:229] Guest: 2024-09-18 21:04:28.285343658 +0000 UTC Remote: 2024-09-18 21:04:28.198688962 +0000 UTC m=+256.035220061 (delta=86.654696ms)
	I0918 21:04:28.308821   61659 fix.go:200] guest clock delta is within tolerance: 86.654696ms
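	The fix.go lines above compare the guest's "date +%s.%N" reading against the host clock and accept the roughly 86ms drift. The check reduces to the short calculation below, using the exact timestamps from the log; the 2-second tolerance is an assumption for illustration, not necessarily minikube's threshold.

package main

import (
	"fmt"
	"time"
)

func main() {
	// Guest clock as reported by "date +%s.%N" and host clock at the same moment,
	// both copied from the log lines above.
	guest := time.Unix(1726693468, 285343658)
	host := time.Date(2024, time.September, 18, 21, 4, 28, 198688962, time.UTC)

	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	fmt.Printf("guest clock delta: %v (within 2s tolerance: %v)\n", delta, delta < 2*time.Second)
}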
	I0918 21:04:28.308829   61659 start.go:83] releasing machines lock for "default-k8s-diff-port-828868", held for 19.260404228s
	I0918 21:04:28.308857   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .DriverName
	I0918 21:04:28.309175   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetIP
	I0918 21:04:28.312346   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:28.312725   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:39:06", ip: ""} in network mk-default-k8s-diff-port-828868: {Iface:virbr4 ExpiryTime:2024-09-18 22:04:19 +0000 UTC Type:0 Mac:52:54:00:c0:39:06 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:default-k8s-diff-port-828868 Clientid:01:52:54:00:c0:39:06}
	I0918 21:04:28.312753   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined IP address 192.168.50.109 and MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:28.312951   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .DriverName
	I0918 21:04:28.313506   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .DriverName
	I0918 21:04:28.313702   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .DriverName
	I0918 21:04:28.313792   61659 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0918 21:04:28.313849   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHHostname
	I0918 21:04:28.313966   61659 ssh_runner.go:195] Run: cat /version.json
	I0918 21:04:28.314001   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHHostname
	I0918 21:04:28.316698   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:28.316882   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:28.317016   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:39:06", ip: ""} in network mk-default-k8s-diff-port-828868: {Iface:virbr4 ExpiryTime:2024-09-18 22:04:19 +0000 UTC Type:0 Mac:52:54:00:c0:39:06 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:default-k8s-diff-port-828868 Clientid:01:52:54:00:c0:39:06}
	I0918 21:04:28.317038   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined IP address 192.168.50.109 and MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:28.317239   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHPort
	I0918 21:04:28.317357   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:39:06", ip: ""} in network mk-default-k8s-diff-port-828868: {Iface:virbr4 ExpiryTime:2024-09-18 22:04:19 +0000 UTC Type:0 Mac:52:54:00:c0:39:06 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:default-k8s-diff-port-828868 Clientid:01:52:54:00:c0:39:06}
	I0918 21:04:28.317408   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined IP address 192.168.50.109 and MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:28.317410   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHKeyPath
	I0918 21:04:28.317596   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHPort
	I0918 21:04:28.317598   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHUsername
	I0918 21:04:28.317743   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHKeyPath
	I0918 21:04:28.317783   61659 sshutil.go:53] new ssh client: &{IP:192.168.50.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/default-k8s-diff-port-828868/id_rsa Username:docker}
	I0918 21:04:28.317905   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHUsername
	I0918 21:04:28.318060   61659 sshutil.go:53] new ssh client: &{IP:192.168.50.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/default-k8s-diff-port-828868/id_rsa Username:docker}
	I0918 21:04:28.439960   61659 ssh_runner.go:195] Run: systemctl --version
	I0918 21:04:28.446111   61659 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0918 21:04:28.593574   61659 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0918 21:04:28.599542   61659 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0918 21:04:28.599623   61659 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0918 21:04:28.615775   61659 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0918 21:04:28.615802   61659 start.go:495] detecting cgroup driver to use...
	I0918 21:04:28.615965   61659 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0918 21:04:28.636924   61659 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0918 21:04:28.655681   61659 docker.go:217] disabling cri-docker service (if available) ...
	I0918 21:04:28.655775   61659 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0918 21:04:28.670090   61659 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0918 21:04:28.684780   61659 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0918 21:04:28.807355   61659 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0918 21:04:28.941753   61659 docker.go:233] disabling docker service ...
	I0918 21:04:28.941836   61659 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0918 21:04:28.956786   61659 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0918 21:04:28.970301   61659 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0918 21:04:29.119605   61659 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0918 21:04:29.245330   61659 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0918 21:04:29.259626   61659 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0918 21:04:29.278104   61659 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0918 21:04:29.278162   61659 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:04:29.288761   61659 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0918 21:04:29.288837   61659 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:04:29.299631   61659 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:04:29.310244   61659 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:04:29.321220   61659 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0918 21:04:29.332722   61659 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:04:29.343590   61659 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:04:29.366099   61659 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:04:29.381180   61659 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0918 21:04:29.394427   61659 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0918 21:04:29.394494   61659 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0918 21:04:29.410069   61659 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
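	The three runs above are the usual bridge-netfilter sequence: the sysctl read fails while br_netfilter is not loaded, modprobe makes /proc/sys/net/bridge/bridge-nf-call-iptables appear, and ip_forward is switched on. A small Go sketch that re-reads both knobs (hypothetical helper, same paths as in the log) follows.

package main

import (
	"fmt"
	"os"
	"strings"
)

// readKnob reads a kernel setting straight from /proc/sys, the same values the
// provisioning step above verifies. Hypothetical helper for this sketch.
func readKnob(path string) (string, error) {
	b, err := os.ReadFile(path)
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(b)), nil
}

func main() {
	for _, p := range []string{
		"/proc/sys/net/bridge/bridge-nf-call-iptables", // only present after modprobe br_netfilter
		"/proc/sys/net/ipv4/ip_forward",                // expected to read 1 after the echo above
	} {
		if v, err := readKnob(p); err != nil {
			fmt.Printf("%s: not available (%v)\n", p, err)
		} else {
			fmt.Printf("%s = %s\n", p, v)
		}
	}
}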
	I0918 21:04:29.421207   61659 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 21:04:29.543870   61659 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0918 21:04:29.642149   61659 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0918 21:04:29.642205   61659 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0918 21:04:29.647336   61659 start.go:563] Will wait 60s for crictl version
	I0918 21:04:29.647400   61659 ssh_runner.go:195] Run: which crictl
	I0918 21:04:29.651148   61659 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0918 21:04:29.690903   61659 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0918 21:04:29.690992   61659 ssh_runner.go:195] Run: crio --version
	I0918 21:04:29.717176   61659 ssh_runner.go:195] Run: crio --version
	I0918 21:04:29.747416   61659 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0918 21:04:29.748825   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetIP
	I0918 21:04:29.751828   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:29.752238   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:39:06", ip: ""} in network mk-default-k8s-diff-port-828868: {Iface:virbr4 ExpiryTime:2024-09-18 22:04:19 +0000 UTC Type:0 Mac:52:54:00:c0:39:06 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:default-k8s-diff-port-828868 Clientid:01:52:54:00:c0:39:06}
	I0918 21:04:29.752288   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined IP address 192.168.50.109 and MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:29.752533   61659 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0918 21:04:29.756672   61659 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0918 21:04:29.768691   61659 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-828868 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-828868 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.109 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0918 21:04:29.768822   61659 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0918 21:04:29.768867   61659 ssh_runner.go:195] Run: sudo crictl images --output json
	I0918 21:04:29.803885   61659 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0918 21:04:29.803964   61659 ssh_runner.go:195] Run: which lz4
	I0918 21:04:29.808051   61659 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0918 21:04:29.812324   61659 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0918 21:04:29.812363   61659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0918 21:04:31.172721   61659 crio.go:462] duration metric: took 1.364736071s to copy over tarball
	I0918 21:04:31.172837   61659 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0918 21:04:29.637411   61740 main.go:141] libmachine: (embed-certs-255556) Waiting to get IP...
	I0918 21:04:29.638427   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:29.638877   61740 main.go:141] libmachine: (embed-certs-255556) DBG | unable to find current IP address of domain embed-certs-255556 in network mk-embed-certs-255556
	I0918 21:04:29.638973   61740 main.go:141] libmachine: (embed-certs-255556) DBG | I0918 21:04:29.638868   62857 retry.go:31] will retry after 298.087525ms: waiting for machine to come up
	I0918 21:04:29.938543   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:29.938923   61740 main.go:141] libmachine: (embed-certs-255556) DBG | unable to find current IP address of domain embed-certs-255556 in network mk-embed-certs-255556
	I0918 21:04:29.938946   61740 main.go:141] libmachine: (embed-certs-255556) DBG | I0918 21:04:29.938889   62857 retry.go:31] will retry after 362.887862ms: waiting for machine to come up
	I0918 21:04:30.303379   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:30.303867   61740 main.go:141] libmachine: (embed-certs-255556) DBG | unable to find current IP address of domain embed-certs-255556 in network mk-embed-certs-255556
	I0918 21:04:30.303898   61740 main.go:141] libmachine: (embed-certs-255556) DBG | I0918 21:04:30.303820   62857 retry.go:31] will retry after 452.771021ms: waiting for machine to come up
	I0918 21:04:30.758353   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:30.758897   61740 main.go:141] libmachine: (embed-certs-255556) DBG | unable to find current IP address of domain embed-certs-255556 in network mk-embed-certs-255556
	I0918 21:04:30.758928   61740 main.go:141] libmachine: (embed-certs-255556) DBG | I0918 21:04:30.758856   62857 retry.go:31] will retry after 506.010985ms: waiting for machine to come up
	I0918 21:04:31.266443   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:31.266934   61740 main.go:141] libmachine: (embed-certs-255556) DBG | unable to find current IP address of domain embed-certs-255556 in network mk-embed-certs-255556
	I0918 21:04:31.266964   61740 main.go:141] libmachine: (embed-certs-255556) DBG | I0918 21:04:31.266893   62857 retry.go:31] will retry after 584.679329ms: waiting for machine to come up
	I0918 21:04:31.853811   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:31.854371   61740 main.go:141] libmachine: (embed-certs-255556) DBG | unable to find current IP address of domain embed-certs-255556 in network mk-embed-certs-255556
	I0918 21:04:31.854402   61740 main.go:141] libmachine: (embed-certs-255556) DBG | I0918 21:04:31.854309   62857 retry.go:31] will retry after 786.010743ms: waiting for machine to come up
	I0918 21:04:32.642494   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:32.643068   61740 main.go:141] libmachine: (embed-certs-255556) DBG | unable to find current IP address of domain embed-certs-255556 in network mk-embed-certs-255556
	I0918 21:04:32.643100   61740 main.go:141] libmachine: (embed-certs-255556) DBG | I0918 21:04:32.643013   62857 retry.go:31] will retry after 1.010762944s: waiting for machine to come up
	I0918 21:04:33.299563   61659 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.126697598s)
	I0918 21:04:33.299596   61659 crio.go:469] duration metric: took 2.126840983s to extract the tarball
	I0918 21:04:33.299602   61659 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0918 21:04:33.336428   61659 ssh_runner.go:195] Run: sudo crictl images --output json
	I0918 21:04:33.377303   61659 crio.go:514] all images are preloaded for cri-o runtime.
	I0918 21:04:33.377342   61659 cache_images.go:84] Images are preloaded, skipping loading
	I0918 21:04:33.377352   61659 kubeadm.go:934] updating node { 192.168.50.109 8444 v1.31.1 crio true true} ...
	I0918 21:04:33.377490   61659 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-828868 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.109
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-828868 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0918 21:04:33.377574   61659 ssh_runner.go:195] Run: crio config
	I0918 21:04:33.423773   61659 cni.go:84] Creating CNI manager for ""
	I0918 21:04:33.423800   61659 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0918 21:04:33.423816   61659 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0918 21:04:33.423835   61659 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.109 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-828868 NodeName:default-k8s-diff-port-828868 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.109"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.109 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0918 21:04:33.423976   61659 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.109
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-828868"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.109
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.109"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
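
	The block above is the multi-document kubeadm config minikube renders into one file (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration, separated by "---"). As a minimal sketch of how such a file can be inspected, the following Go program lists the kind of each document; it assumes gopkg.in/yaml.v3 and a local kubeadm.yaml path and is not minikube's own code.

	    // kinds.go - list the "kind" of each document in a multi-document kubeadm YAML.
	    // Minimal sketch; not minikube's code. Assumes gopkg.in/yaml.v3 is available.
	    package main

	    import (
	    	"errors"
	    	"fmt"
	    	"io"
	    	"os"

	    	"gopkg.in/yaml.v3"
	    )

	    func main() {
	    	f, err := os.Open("kubeadm.yaml") // path is an assumption for the example
	    	if err != nil {
	    		panic(err)
	    	}
	    	defer f.Close()

	    	dec := yaml.NewDecoder(f)
	    	for {
	    		var doc struct {
	    			APIVersion string `yaml:"apiVersion"`
	    			Kind       string `yaml:"kind"`
	    		}
	    		if err := dec.Decode(&doc); err != nil {
	    			if errors.Is(err, io.EOF) {
	    				break // no more documents in the stream
	    			}
	    			panic(err)
	    		}
	    		fmt.Printf("%s (%s)\n", doc.Kind, doc.APIVersion)
	    	}
	    }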
	
	I0918 21:04:33.424058   61659 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0918 21:04:33.434047   61659 binaries.go:44] Found k8s binaries, skipping transfer
	I0918 21:04:33.434119   61659 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0918 21:04:33.443535   61659 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0918 21:04:33.460116   61659 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0918 21:04:33.475883   61659 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0918 21:04:33.492311   61659 ssh_runner.go:195] Run: grep 192.168.50.109	control-plane.minikube.internal$ /etc/hosts
	I0918 21:04:33.495940   61659 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.109	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0918 21:04:33.507411   61659 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 21:04:33.625104   61659 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0918 21:04:33.641530   61659 certs.go:68] Setting up /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/default-k8s-diff-port-828868 for IP: 192.168.50.109
	I0918 21:04:33.641556   61659 certs.go:194] generating shared ca certs ...
	I0918 21:04:33.641572   61659 certs.go:226] acquiring lock for ca certs: {Name:mkf54e0e71ed884c4372f9d3d4cb308ea4600185 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 21:04:33.641757   61659 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.key
	I0918 21:04:33.641804   61659 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.key
	I0918 21:04:33.641822   61659 certs.go:256] generating profile certs ...
	I0918 21:04:33.641944   61659 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/default-k8s-diff-port-828868/client.key
	I0918 21:04:33.642036   61659 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/default-k8s-diff-port-828868/apiserver.key.df92be3a
	I0918 21:04:33.642087   61659 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/default-k8s-diff-port-828868/proxy-client.key
	I0918 21:04:33.642255   61659 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878.pem (1338 bytes)
	W0918 21:04:33.642297   61659 certs.go:480] ignoring /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878_empty.pem, impossibly tiny 0 bytes
	I0918 21:04:33.642306   61659 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem (1675 bytes)
	I0918 21:04:33.642337   61659 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem (1082 bytes)
	I0918 21:04:33.642370   61659 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem (1123 bytes)
	I0918 21:04:33.642404   61659 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem (1679 bytes)
	I0918 21:04:33.642454   61659 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem (1708 bytes)
	I0918 21:04:33.643116   61659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0918 21:04:33.682428   61659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0918 21:04:33.710444   61659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0918 21:04:33.759078   61659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0918 21:04:33.797727   61659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/default-k8s-diff-port-828868/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0918 21:04:33.821989   61659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/default-k8s-diff-port-828868/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0918 21:04:33.844210   61659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/default-k8s-diff-port-828868/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0918 21:04:33.866843   61659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/default-k8s-diff-port-828868/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0918 21:04:33.896125   61659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem --> /usr/share/ca-certificates/148782.pem (1708 bytes)
	I0918 21:04:33.918667   61659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0918 21:04:33.940790   61659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878.pem --> /usr/share/ca-certificates/14878.pem (1338 bytes)
	I0918 21:04:33.963660   61659 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0918 21:04:33.980348   61659 ssh_runner.go:195] Run: openssl version
	I0918 21:04:33.985856   61659 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148782.pem && ln -fs /usr/share/ca-certificates/148782.pem /etc/ssl/certs/148782.pem"
	I0918 21:04:33.996472   61659 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148782.pem
	I0918 21:04:34.000732   61659 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 18 19:57 /usr/share/ca-certificates/148782.pem
	I0918 21:04:34.000788   61659 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148782.pem
	I0918 21:04:34.006282   61659 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148782.pem /etc/ssl/certs/3ec20f2e.0"
	I0918 21:04:34.016612   61659 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0918 21:04:34.026689   61659 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0918 21:04:34.030650   61659 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 18 19:39 /usr/share/ca-certificates/minikubeCA.pem
	I0918 21:04:34.030705   61659 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0918 21:04:34.035940   61659 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0918 21:04:34.046516   61659 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14878.pem && ln -fs /usr/share/ca-certificates/14878.pem /etc/ssl/certs/14878.pem"
	I0918 21:04:34.056755   61659 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14878.pem
	I0918 21:04:34.061189   61659 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 18 19:57 /usr/share/ca-certificates/14878.pem
	I0918 21:04:34.061264   61659 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14878.pem
	I0918 21:04:34.066973   61659 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14878.pem /etc/ssl/certs/51391683.0"
	I0918 21:04:34.078781   61659 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0918 21:04:34.083129   61659 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0918 21:04:34.089249   61659 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0918 21:04:34.095211   61659 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0918 21:04:34.101350   61659 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0918 21:04:34.107269   61659 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0918 21:04:34.113177   61659 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
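
	The six commands above are `openssl x509 -noout -checkend 86400` runs, i.e. "fail if this certificate expires within the next 24 hours". A rough Go equivalent using only the standard library is sketched below; it takes the certificate path on the command line and is illustrative, not the tooling the test uses.

	    // checkend.go - rough Go equivalent of `openssl x509 -checkend 86400`:
	    // exit non-zero if the certificate expires within the next 24 hours.
	    package main

	    import (
	    	"crypto/x509"
	    	"encoding/pem"
	    	"fmt"
	    	"os"
	    	"time"
	    )

	    func main() {
	    	data, err := os.ReadFile(os.Args[1])
	    	if err != nil {
	    		fmt.Fprintln(os.Stderr, err)
	    		os.Exit(1)
	    	}
	    	block, _ := pem.Decode(data)
	    	if block == nil {
	    		fmt.Fprintln(os.Stderr, "no PEM block found")
	    		os.Exit(1)
	    	}
	    	cert, err := x509.ParseCertificate(block.Bytes)
	    	if err != nil {
	    		fmt.Fprintln(os.Stderr, err)
	    		os.Exit(1)
	    	}
	    	// Same window the log uses: 86400 seconds from now.
	    	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
	    		fmt.Println("certificate will expire within 24h")
	    		os.Exit(1)
	    	}
	    	fmt.Println("certificate is valid for at least another 24h")
	    }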
	I0918 21:04:34.119005   61659 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-828868 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-828868 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.109 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 21:04:34.119093   61659 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0918 21:04:34.119147   61659 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0918 21:04:34.162792   61659 cri.go:89] found id: ""
	I0918 21:04:34.162895   61659 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0918 21:04:34.174325   61659 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0918 21:04:34.174358   61659 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0918 21:04:34.174420   61659 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0918 21:04:34.183708   61659 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0918 21:04:34.184680   61659 kubeconfig.go:125] found "default-k8s-diff-port-828868" server: "https://192.168.50.109:8444"
	I0918 21:04:34.186781   61659 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0918 21:04:34.195823   61659 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.109
	I0918 21:04:34.195856   61659 kubeadm.go:1160] stopping kube-system containers ...
	I0918 21:04:34.195866   61659 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0918 21:04:34.195907   61659 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0918 21:04:34.235799   61659 cri.go:89] found id: ""
	I0918 21:04:34.235882   61659 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0918 21:04:34.251412   61659 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0918 21:04:34.261361   61659 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0918 21:04:34.261390   61659 kubeadm.go:157] found existing configuration files:
	
	I0918 21:04:34.261435   61659 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0918 21:04:34.272201   61659 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0918 21:04:34.272272   61659 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0918 21:04:34.283030   61659 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0918 21:04:34.293227   61659 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0918 21:04:34.293321   61659 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0918 21:04:34.303749   61659 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0918 21:04:34.314027   61659 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0918 21:04:34.314116   61659 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0918 21:04:34.324585   61659 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0918 21:04:34.334524   61659 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0918 21:04:34.334594   61659 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0918 21:04:34.344923   61659 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0918 21:04:34.355422   61659 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:04:34.480395   61659 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:04:35.320827   61659 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:04:35.542013   61659 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:04:35.610886   61659 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:04:35.694501   61659 api_server.go:52] waiting for apiserver process to appear ...
	I0918 21:04:35.694610   61659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:04:36.195441   61659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:04:36.694978   61659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:04:37.195220   61659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:04:33.655864   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:33.656375   61740 main.go:141] libmachine: (embed-certs-255556) DBG | unable to find current IP address of domain embed-certs-255556 in network mk-embed-certs-255556
	I0918 21:04:33.656407   61740 main.go:141] libmachine: (embed-certs-255556) DBG | I0918 21:04:33.656347   62857 retry.go:31] will retry after 1.375317123s: waiting for machine to come up
	I0918 21:04:35.033882   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:35.034266   61740 main.go:141] libmachine: (embed-certs-255556) DBG | unable to find current IP address of domain embed-certs-255556 in network mk-embed-certs-255556
	I0918 21:04:35.034293   61740 main.go:141] libmachine: (embed-certs-255556) DBG | I0918 21:04:35.034232   62857 retry.go:31] will retry after 1.142237895s: waiting for machine to come up
	I0918 21:04:36.178371   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:36.178837   61740 main.go:141] libmachine: (embed-certs-255556) DBG | unable to find current IP address of domain embed-certs-255556 in network mk-embed-certs-255556
	I0918 21:04:36.178865   61740 main.go:141] libmachine: (embed-certs-255556) DBG | I0918 21:04:36.178804   62857 retry.go:31] will retry after 1.983853904s: waiting for machine to come up
	I0918 21:04:38.165113   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:38.165662   61740 main.go:141] libmachine: (embed-certs-255556) DBG | unable to find current IP address of domain embed-certs-255556 in network mk-embed-certs-255556
	I0918 21:04:38.165697   61740 main.go:141] libmachine: (embed-certs-255556) DBG | I0918 21:04:38.165601   62857 retry.go:31] will retry after 2.407286782s: waiting for machine to come up
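
	The retry.go lines above wait for the VM to obtain an IP, retrying with growing, slightly randomized delays. A minimal, generic sketch of that backoff-with-jitter pattern (not minikube's retry helper; the intervals are illustrative only) is:

	    // retry.go - retry an operation with a growing, jittered delay.
	    package main

	    import (
	    	"errors"
	    	"fmt"
	    	"math/rand"
	    	"time"
	    )

	    func retry(attempts int, base time.Duration, op func() error) error {
	    	var err error
	    	for i := 0; i < attempts; i++ {
	    		if err = op(); err == nil {
	    			return nil
	    		}
	    		// Grow the delay each attempt and add up to 50% jitter.
	    		delay := base * time.Duration(i+1)
	    		delay += time.Duration(rand.Int63n(int64(delay / 2)))
	    		fmt.Printf("will retry after %s: %v\n", delay, err)
	    		time.Sleep(delay)
	    	}
	    	return err
	    }

	    func main() {
	    	i := 0
	    	err := retry(5, time.Second, func() error {
	    		i++
	    		if i < 3 {
	    			return errors.New("waiting for machine to come up")
	    		}
	    		return nil
	    	})
	    	fmt.Println("done:", err)
	    }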
	I0918 21:04:37.694916   61659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:04:37.713724   61659 api_server.go:72] duration metric: took 2.019221095s to wait for apiserver process to appear ...
	I0918 21:04:37.713756   61659 api_server.go:88] waiting for apiserver healthz status ...
	I0918 21:04:37.713782   61659 api_server.go:253] Checking apiserver healthz at https://192.168.50.109:8444/healthz ...
	I0918 21:04:37.714297   61659 api_server.go:269] stopped: https://192.168.50.109:8444/healthz: Get "https://192.168.50.109:8444/healthz": dial tcp 192.168.50.109:8444: connect: connection refused
	I0918 21:04:38.213883   61659 api_server.go:253] Checking apiserver healthz at https://192.168.50.109:8444/healthz ...
	I0918 21:04:40.396513   61659 api_server.go:279] https://192.168.50.109:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0918 21:04:40.396564   61659 api_server.go:103] status: https://192.168.50.109:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0918 21:04:40.396584   61659 api_server.go:253] Checking apiserver healthz at https://192.168.50.109:8444/healthz ...
	I0918 21:04:40.409718   61659 api_server.go:279] https://192.168.50.109:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0918 21:04:40.409750   61659 api_server.go:103] status: https://192.168.50.109:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0918 21:04:40.714176   61659 api_server.go:253] Checking apiserver healthz at https://192.168.50.109:8444/healthz ...
	I0918 21:04:40.719353   61659 api_server.go:279] https://192.168.50.109:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0918 21:04:40.719391   61659 api_server.go:103] status: https://192.168.50.109:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0918 21:04:41.214596   61659 api_server.go:253] Checking apiserver healthz at https://192.168.50.109:8444/healthz ...
	I0918 21:04:41.219579   61659 api_server.go:279] https://192.168.50.109:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0918 21:04:41.219608   61659 api_server.go:103] status: https://192.168.50.109:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0918 21:04:41.713951   61659 api_server.go:253] Checking apiserver healthz at https://192.168.50.109:8444/healthz ...
	I0918 21:04:41.719212   61659 api_server.go:279] https://192.168.50.109:8444/healthz returned 200:
	ok
	I0918 21:04:41.726647   61659 api_server.go:141] control plane version: v1.31.1
	I0918 21:04:41.726679   61659 api_server.go:131] duration metric: took 4.012914861s to wait for apiserver health ...
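
	The sequence above polls https://192.168.50.109:8444/healthz roughly every 500ms, tolerating 403 and 500 responses until the endpoint returns 200 "ok". A minimal sketch of that polling pattern follows; the URL and timeouts are assumptions, and certificate verification is skipped because the probe is anonymous against a self-signed CA, as in the log.

	    // healthz.go - poll an apiserver-style /healthz endpoint until it returns 200 OK.
	    package main

	    import (
	    	"crypto/tls"
	    	"fmt"
	    	"net/http"
	    	"time"
	    )

	    func waitHealthz(url string, timeout time.Duration) error {
	    	client := &http.Client{
	    		Timeout: 5 * time.Second,
	    		// Self-signed CA, anonymous probe: skip verification for the smoke test.
	    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	    	}
	    	deadline := time.Now().Add(timeout)
	    	for time.Now().Before(deadline) {
	    		resp, err := client.Get(url)
	    		if err == nil {
	    			resp.Body.Close()
	    			if resp.StatusCode == http.StatusOK {
	    				return nil // "ok"
	    			}
	    			// 403/500 mean the server is up but not ready yet; keep polling.
	    		}
	    		time.Sleep(500 * time.Millisecond)
	    	}
	    	return fmt.Errorf("healthz at %s not ready within %s", url, timeout)
	    }

	    func main() {
	    	if err := waitHealthz("https://192.168.50.109:8444/healthz", 2*time.Minute); err != nil {
	    		panic(err)
	    	}
	    	fmt.Println("apiserver healthy")
	    }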
	I0918 21:04:41.726689   61659 cni.go:84] Creating CNI manager for ""
	I0918 21:04:41.726707   61659 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0918 21:04:41.728312   61659 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0918 21:04:41.729613   61659 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0918 21:04:41.741932   61659 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0918 21:04:41.763195   61659 system_pods.go:43] waiting for kube-system pods to appear ...
	I0918 21:04:41.775167   61659 system_pods.go:59] 8 kube-system pods found
	I0918 21:04:41.775210   61659 system_pods.go:61] "coredns-7c65d6cfc9-xzjd7" [bd8252df-707c-41e6-84b7-cc74480177a8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0918 21:04:41.775219   61659 system_pods.go:61] "etcd-default-k8s-diff-port-828868" [aa8e221d-abba-48a5-8814-246df0776408] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0918 21:04:41.775227   61659 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-828868" [b44966ac-3478-40c4-b67f-1824bff2bec7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0918 21:04:41.775233   61659 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-828868" [7af8fbad-3aa2-497e-90df-33facaee6b17] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0918 21:04:41.775239   61659 system_pods.go:61] "kube-proxy-jz7ls" [f931ae9a-0b9c-4754-8b7b-d52c267b018c] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0918 21:04:41.775247   61659 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-828868" [ee89c713-c689-4de3-b1a5-4e08470ff6fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0918 21:04:41.775252   61659 system_pods.go:61] "metrics-server-6867b74b74-cqp47" [1ccf8c85-183a-4bea-abbc-eb7bcedca7f0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0918 21:04:41.775257   61659 system_pods.go:61] "storage-provisioner" [9744cbfa-6b9a-42f0-aa80-0821b87a33d4] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0918 21:04:41.775270   61659 system_pods.go:74] duration metric: took 12.058758ms to wait for pod list to return data ...
	I0918 21:04:41.775280   61659 node_conditions.go:102] verifying NodePressure condition ...
	I0918 21:04:41.779525   61659 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0918 21:04:41.779559   61659 node_conditions.go:123] node cpu capacity is 2
	I0918 21:04:41.779580   61659 node_conditions.go:105] duration metric: took 4.292138ms to run NodePressure ...
	I0918 21:04:41.779615   61659 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:04:42.079279   61659 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0918 21:04:42.084311   61659 kubeadm.go:739] kubelet initialised
	I0918 21:04:42.084338   61659 kubeadm.go:740] duration metric: took 5.024999ms waiting for restarted kubelet to initialise ...
	I0918 21:04:42.084351   61659 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0918 21:04:42.089113   61659 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-xzjd7" in "kube-system" namespace to be "Ready" ...
	I0918 21:04:42.095539   61659 pod_ready.go:98] node "default-k8s-diff-port-828868" hosting pod "coredns-7c65d6cfc9-xzjd7" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-828868" has status "Ready":"False"
	I0918 21:04:42.095565   61659 pod_ready.go:82] duration metric: took 6.405251ms for pod "coredns-7c65d6cfc9-xzjd7" in "kube-system" namespace to be "Ready" ...
	E0918 21:04:42.095575   61659 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-828868" hosting pod "coredns-7c65d6cfc9-xzjd7" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-828868" has status "Ready":"False"
	I0918 21:04:42.095581   61659 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-828868" in "kube-system" namespace to be "Ready" ...
	I0918 21:04:42.100447   61659 pod_ready.go:98] node "default-k8s-diff-port-828868" hosting pod "etcd-default-k8s-diff-port-828868" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-828868" has status "Ready":"False"
	I0918 21:04:42.100469   61659 pod_ready.go:82] duration metric: took 4.879955ms for pod "etcd-default-k8s-diff-port-828868" in "kube-system" namespace to be "Ready" ...
	E0918 21:04:42.100480   61659 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-828868" hosting pod "etcd-default-k8s-diff-port-828868" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-828868" has status "Ready":"False"
	I0918 21:04:42.100485   61659 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-828868" in "kube-system" namespace to be "Ready" ...
	I0918 21:04:42.104889   61659 pod_ready.go:98] node "default-k8s-diff-port-828868" hosting pod "kube-apiserver-default-k8s-diff-port-828868" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-828868" has status "Ready":"False"
	I0918 21:04:42.104914   61659 pod_ready.go:82] duration metric: took 4.421708ms for pod "kube-apiserver-default-k8s-diff-port-828868" in "kube-system" namespace to be "Ready" ...
	E0918 21:04:42.104926   61659 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-828868" hosting pod "kube-apiserver-default-k8s-diff-port-828868" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-828868" has status "Ready":"False"
	I0918 21:04:42.104934   61659 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-828868" in "kube-system" namespace to be "Ready" ...
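
	pod_ready.go above waits for each system-critical pod to report the Ready condition, skipping the wait while the node itself is not Ready. A minimal client-go sketch of the underlying per-pod check is shown below; the kubeconfig path and pod name are assumptions, and this is illustrative only, not minikube's waiter.

	    // podready.go - check whether a kube-system pod reports the Ready condition.
	    package main

	    import (
	    	"context"
	    	"fmt"

	    	corev1 "k8s.io/api/core/v1"
	    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	    	"k8s.io/client-go/kubernetes"
	    	"k8s.io/client-go/tools/clientcmd"
	    )

	    func main() {
	    	// Assumed kubeconfig path for the example.
	    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	    	if err != nil {
	    		panic(err)
	    	}
	    	clientset, err := kubernetes.NewForConfig(cfg)
	    	if err != nil {
	    		panic(err)
	    	}
	    	pod, err := clientset.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-7c65d6cfc9-xzjd7", metav1.GetOptions{})
	    	if err != nil {
	    		panic(err)
	    	}
	    	ready := false
	    	for _, cond := range pod.Status.Conditions {
	    		if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
	    			ready = true
	    		}
	    	}
	    	fmt.Printf("pod %s Ready=%v\n", pod.Name, ready)
	    }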
	I0918 21:04:40.574813   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:40.575265   61740 main.go:141] libmachine: (embed-certs-255556) DBG | unable to find current IP address of domain embed-certs-255556 in network mk-embed-certs-255556
	I0918 21:04:40.575295   61740 main.go:141] libmachine: (embed-certs-255556) DBG | I0918 21:04:40.575215   62857 retry.go:31] will retry after 2.249084169s: waiting for machine to come up
	I0918 21:04:42.827547   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:42.827966   61740 main.go:141] libmachine: (embed-certs-255556) DBG | unable to find current IP address of domain embed-certs-255556 in network mk-embed-certs-255556
	I0918 21:04:42.828028   61740 main.go:141] libmachine: (embed-certs-255556) DBG | I0918 21:04:42.827923   62857 retry.go:31] will retry after 4.512161859s: waiting for machine to come up
	I0918 21:04:44.113739   61659 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-828868" in "kube-system" namespace has status "Ready":"False"
	I0918 21:04:46.611013   61659 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-828868" in "kube-system" namespace has status "Ready":"False"
	I0918 21:04:47.345046   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:47.345426   61740 main.go:141] libmachine: (embed-certs-255556) Found IP for machine: 192.168.39.21
	I0918 21:04:47.345444   61740 main.go:141] libmachine: (embed-certs-255556) Reserving static IP address...
	I0918 21:04:47.345457   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has current primary IP address 192.168.39.21 and MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:47.345824   61740 main.go:141] libmachine: (embed-certs-255556) DBG | found host DHCP lease matching {name: "embed-certs-255556", mac: "52:54:00:e8:c2:b7", ip: "192.168.39.21"} in network mk-embed-certs-255556: {Iface:virbr1 ExpiryTime:2024-09-18 22:04:39 +0000 UTC Type:0 Mac:52:54:00:e8:c2:b7 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:embed-certs-255556 Clientid:01:52:54:00:e8:c2:b7}
	I0918 21:04:47.345846   61740 main.go:141] libmachine: (embed-certs-255556) DBG | skip adding static IP to network mk-embed-certs-255556 - found existing host DHCP lease matching {name: "embed-certs-255556", mac: "52:54:00:e8:c2:b7", ip: "192.168.39.21"}
	I0918 21:04:47.345856   61740 main.go:141] libmachine: (embed-certs-255556) Reserved static IP address: 192.168.39.21
	I0918 21:04:47.345866   61740 main.go:141] libmachine: (embed-certs-255556) Waiting for SSH to be available...
	I0918 21:04:47.345874   61740 main.go:141] libmachine: (embed-certs-255556) DBG | Getting to WaitForSSH function...
	I0918 21:04:47.347972   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:47.348327   61740 main.go:141] libmachine: (embed-certs-255556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:c2:b7", ip: ""} in network mk-embed-certs-255556: {Iface:virbr1 ExpiryTime:2024-09-18 22:04:39 +0000 UTC Type:0 Mac:52:54:00:e8:c2:b7 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:embed-certs-255556 Clientid:01:52:54:00:e8:c2:b7}
	I0918 21:04:47.348367   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined IP address 192.168.39.21 and MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:47.348437   61740 main.go:141] libmachine: (embed-certs-255556) DBG | Using SSH client type: external
	I0918 21:04:47.348469   61740 main.go:141] libmachine: (embed-certs-255556) DBG | Using SSH private key: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/embed-certs-255556/id_rsa (-rw-------)
	I0918 21:04:47.348511   61740 main.go:141] libmachine: (embed-certs-255556) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.21 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19667-7671/.minikube/machines/embed-certs-255556/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0918 21:04:47.348526   61740 main.go:141] libmachine: (embed-certs-255556) DBG | About to run SSH command:
	I0918 21:04:47.348554   61740 main.go:141] libmachine: (embed-certs-255556) DBG | exit 0
	I0918 21:04:47.476457   61740 main.go:141] libmachine: (embed-certs-255556) DBG | SSH cmd err, output: <nil>: 
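
	The "Waiting for SSH to be available..." step above boils down to repeatedly probing port 22 on the machine's newly leased IP until a connection succeeds. A minimal sketch of such a wait loop (address and timeout are assumptions; the real driver also runs `exit 0` over SSH) is:

	    // waitssh.go - wait until a TCP connection to port 22 succeeds.
	    package main

	    import (
	    	"fmt"
	    	"net"
	    	"time"
	    )

	    func waitForSSH(addr string, timeout time.Duration) error {
	    	deadline := time.Now().Add(timeout)
	    	for time.Now().Before(deadline) {
	    		conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
	    		if err == nil {
	    			conn.Close()
	    			return nil // sshd is accepting connections
	    		}
	    		time.Sleep(2 * time.Second)
	    	}
	    	return fmt.Errorf("ssh on %s not reachable within %s", addr, timeout)
	    }

	    func main() {
	    	if err := waitForSSH("192.168.39.21:22", 2*time.Minute); err != nil {
	    		panic(err)
	    	}
	    	fmt.Println("SSH is available")
	    }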
	I0918 21:04:47.476858   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetConfigRaw
	I0918 21:04:47.477533   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetIP
	I0918 21:04:47.480221   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:47.480601   61740 main.go:141] libmachine: (embed-certs-255556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:c2:b7", ip: ""} in network mk-embed-certs-255556: {Iface:virbr1 ExpiryTime:2024-09-18 22:04:39 +0000 UTC Type:0 Mac:52:54:00:e8:c2:b7 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:embed-certs-255556 Clientid:01:52:54:00:e8:c2:b7}
	I0918 21:04:47.480644   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined IP address 192.168.39.21 and MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:47.480966   61740 profile.go:143] Saving config to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/embed-certs-255556/config.json ...
	I0918 21:04:47.481172   61740 machine.go:93] provisionDockerMachine start ...
	I0918 21:04:47.481189   61740 main.go:141] libmachine: (embed-certs-255556) Calling .DriverName
	I0918 21:04:47.481440   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHHostname
	I0918 21:04:47.483916   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:47.484299   61740 main.go:141] libmachine: (embed-certs-255556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:c2:b7", ip: ""} in network mk-embed-certs-255556: {Iface:virbr1 ExpiryTime:2024-09-18 22:04:39 +0000 UTC Type:0 Mac:52:54:00:e8:c2:b7 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:embed-certs-255556 Clientid:01:52:54:00:e8:c2:b7}
	I0918 21:04:47.484328   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined IP address 192.168.39.21 and MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:47.484467   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHPort
	I0918 21:04:47.484703   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHKeyPath
	I0918 21:04:47.484898   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHKeyPath
	I0918 21:04:47.485043   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHUsername
	I0918 21:04:47.485185   61740 main.go:141] libmachine: Using SSH client type: native
	I0918 21:04:47.485386   61740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.21 22 <nil> <nil>}
	I0918 21:04:47.485399   61740 main.go:141] libmachine: About to run SSH command:
	hostname
	I0918 21:04:47.596243   61740 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0918 21:04:47.596272   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetMachineName
	I0918 21:04:47.596531   61740 buildroot.go:166] provisioning hostname "embed-certs-255556"
	I0918 21:04:47.596560   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetMachineName
	I0918 21:04:47.596775   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHHostname
	I0918 21:04:47.599159   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:47.599508   61740 main.go:141] libmachine: (embed-certs-255556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:c2:b7", ip: ""} in network mk-embed-certs-255556: {Iface:virbr1 ExpiryTime:2024-09-18 22:04:39 +0000 UTC Type:0 Mac:52:54:00:e8:c2:b7 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:embed-certs-255556 Clientid:01:52:54:00:e8:c2:b7}
	I0918 21:04:47.599532   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined IP address 192.168.39.21 and MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:47.599706   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHPort
	I0918 21:04:47.599888   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHKeyPath
	I0918 21:04:47.600090   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHKeyPath
	I0918 21:04:47.600229   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHUsername
	I0918 21:04:47.600406   61740 main.go:141] libmachine: Using SSH client type: native
	I0918 21:04:47.600589   61740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.21 22 <nil> <nil>}
	I0918 21:04:47.600602   61740 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-255556 && echo "embed-certs-255556" | sudo tee /etc/hostname
	I0918 21:04:47.726173   61740 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-255556
	
	I0918 21:04:47.726213   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHHostname
	I0918 21:04:47.729209   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:47.729575   61740 main.go:141] libmachine: (embed-certs-255556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:c2:b7", ip: ""} in network mk-embed-certs-255556: {Iface:virbr1 ExpiryTime:2024-09-18 22:04:39 +0000 UTC Type:0 Mac:52:54:00:e8:c2:b7 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:embed-certs-255556 Clientid:01:52:54:00:e8:c2:b7}
	I0918 21:04:47.729609   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined IP address 192.168.39.21 and MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:47.729775   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHPort
	I0918 21:04:47.729952   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHKeyPath
	I0918 21:04:47.730212   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHKeyPath
	I0918 21:04:47.730386   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHUsername
	I0918 21:04:47.730583   61740 main.go:141] libmachine: Using SSH client type: native
	I0918 21:04:47.730755   61740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.21 22 <nil> <nil>}
	I0918 21:04:47.730771   61740 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-255556' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-255556/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-255556' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0918 21:04:47.849894   61740 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0918 21:04:47.849928   61740 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19667-7671/.minikube CaCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19667-7671/.minikube}
	I0918 21:04:47.849954   61740 buildroot.go:174] setting up certificates
	I0918 21:04:47.849961   61740 provision.go:84] configureAuth start
	I0918 21:04:47.849971   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetMachineName
	I0918 21:04:47.850307   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetIP
	I0918 21:04:47.852989   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:47.853389   61740 main.go:141] libmachine: (embed-certs-255556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:c2:b7", ip: ""} in network mk-embed-certs-255556: {Iface:virbr1 ExpiryTime:2024-09-18 22:04:39 +0000 UTC Type:0 Mac:52:54:00:e8:c2:b7 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:embed-certs-255556 Clientid:01:52:54:00:e8:c2:b7}
	I0918 21:04:47.853423   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined IP address 192.168.39.21 and MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:47.853555   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHHostname
	I0918 21:04:47.856032   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:47.856389   61740 main.go:141] libmachine: (embed-certs-255556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:c2:b7", ip: ""} in network mk-embed-certs-255556: {Iface:virbr1 ExpiryTime:2024-09-18 22:04:39 +0000 UTC Type:0 Mac:52:54:00:e8:c2:b7 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:embed-certs-255556 Clientid:01:52:54:00:e8:c2:b7}
	I0918 21:04:47.856410   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined IP address 192.168.39.21 and MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:47.856556   61740 provision.go:143] copyHostCerts
	I0918 21:04:47.856617   61740 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem, removing ...
	I0918 21:04:47.856627   61740 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem
	I0918 21:04:47.856686   61740 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem (1082 bytes)
	I0918 21:04:47.856778   61740 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem, removing ...
	I0918 21:04:47.856786   61740 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem
	I0918 21:04:47.856805   61740 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem (1123 bytes)
	I0918 21:04:47.856855   61740 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem, removing ...
	I0918 21:04:47.856862   61740 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem
	I0918 21:04:47.856881   61740 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem (1679 bytes)
	I0918 21:04:47.856929   61740 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem org=jenkins.embed-certs-255556 san=[127.0.0.1 192.168.39.21 embed-certs-255556 localhost minikube]
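
	provision.go above generates a server certificate whose SANs cover 127.0.0.1, 192.168.39.21, the hostname, localhost and minikube, signed against the minikube CA. The sketch below creates a certificate with the same SANs but self-signs it for brevity; it is not the provisioning code itself, and real provisioning would pass the CA cert and key as the parent.

	    // servercert.go - generate a certificate with the SANs shown in the log above.
	    package main

	    import (
	    	"crypto/rand"
	    	"crypto/rsa"
	    	"crypto/x509"
	    	"crypto/x509/pkix"
	    	"encoding/pem"
	    	"math/big"
	    	"net"
	    	"os"
	    	"time"
	    )

	    func main() {
	    	key, err := rsa.GenerateKey(rand.Reader, 2048)
	    	if err != nil {
	    		panic(err)
	    	}
	    	tmpl := &x509.Certificate{
	    		SerialNumber: big.NewInt(1),
	    		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-255556"}},
	    		NotBefore:    time.Now(),
	    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
	    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
	    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	    		DNSNames:     []string{"embed-certs-255556", "localhost", "minikube"},
	    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.21")},
	    	}
	    	// Self-signed for brevity; provisioning would use the CA cert/key as the parent.
	    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	    	if err != nil {
	    		panic(err)
	    	}
	    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	    }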
	I0918 21:04:48.145689   61740 provision.go:177] copyRemoteCerts
	I0918 21:04:48.145750   61740 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0918 21:04:48.145779   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHHostname
	I0918 21:04:48.148420   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:48.148785   61740 main.go:141] libmachine: (embed-certs-255556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:c2:b7", ip: ""} in network mk-embed-certs-255556: {Iface:virbr1 ExpiryTime:2024-09-18 22:04:39 +0000 UTC Type:0 Mac:52:54:00:e8:c2:b7 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:embed-certs-255556 Clientid:01:52:54:00:e8:c2:b7}
	I0918 21:04:48.148812   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined IP address 192.168.39.21 and MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:48.148983   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHPort
	I0918 21:04:48.149194   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHKeyPath
	I0918 21:04:48.149371   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHUsername
	I0918 21:04:48.149486   61740 sshutil.go:53] new ssh client: &{IP:192.168.39.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/embed-certs-255556/id_rsa Username:docker}
	I0918 21:04:48.234451   61740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0918 21:04:48.260660   61740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0918 21:04:48.283305   61740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0918 21:04:48.305919   61740 provision.go:87] duration metric: took 455.946794ms to configureAuth
	I0918 21:04:48.305954   61740 buildroot.go:189] setting minikube options for container-runtime
	I0918 21:04:48.306183   61740 config.go:182] Loaded profile config "embed-certs-255556": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0918 21:04:48.306284   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHHostname
	I0918 21:04:48.308853   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:48.309319   61740 main.go:141] libmachine: (embed-certs-255556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:c2:b7", ip: ""} in network mk-embed-certs-255556: {Iface:virbr1 ExpiryTime:2024-09-18 22:04:39 +0000 UTC Type:0 Mac:52:54:00:e8:c2:b7 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:embed-certs-255556 Clientid:01:52:54:00:e8:c2:b7}
	I0918 21:04:48.309359   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined IP address 192.168.39.21 and MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:48.309488   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHPort
	I0918 21:04:48.309706   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHKeyPath
	I0918 21:04:48.309860   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHKeyPath
	I0918 21:04:48.309976   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHUsername
	I0918 21:04:48.310134   61740 main.go:141] libmachine: Using SSH client type: native
	I0918 21:04:48.310349   61740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.21 22 <nil> <nil>}
	I0918 21:04:48.310372   61740 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0918 21:04:48.782438   62061 start.go:364] duration metric: took 3m49.184727821s to acquireMachinesLock for "old-k8s-version-740194"
	I0918 21:04:48.782503   62061 start.go:96] Skipping create...Using existing machine configuration
	I0918 21:04:48.782514   62061 fix.go:54] fixHost starting: 
	I0918 21:04:48.782993   62061 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:04:48.783053   62061 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:04:48.802299   62061 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44463
	I0918 21:04:48.802787   62061 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:04:48.803286   62061 main.go:141] libmachine: Using API Version  1
	I0918 21:04:48.803313   62061 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:04:48.803681   62061 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:04:48.803873   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .DriverName
	I0918 21:04:48.804007   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetState
	I0918 21:04:48.805714   62061 fix.go:112] recreateIfNeeded on old-k8s-version-740194: state=Stopped err=<nil>
	I0918 21:04:48.805744   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .DriverName
	W0918 21:04:48.805910   62061 fix.go:138] unexpected machine state, will restart: <nil>
	I0918 21:04:48.835402   62061 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-740194" ...
	I0918 21:04:48.836753   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .Start
	I0918 21:04:48.837090   62061 main.go:141] libmachine: (old-k8s-version-740194) Ensuring networks are active...
	I0918 21:04:48.838014   62061 main.go:141] libmachine: (old-k8s-version-740194) Ensuring network default is active
	I0918 21:04:48.838375   62061 main.go:141] libmachine: (old-k8s-version-740194) Ensuring network mk-old-k8s-version-740194 is active
	I0918 21:04:48.839035   62061 main.go:141] libmachine: (old-k8s-version-740194) Getting domain xml...
	I0918 21:04:48.839832   62061 main.go:141] libmachine: (old-k8s-version-740194) Creating domain...
	I0918 21:04:48.532928   61740 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0918 21:04:48.532952   61740 machine.go:96] duration metric: took 1.051769616s to provisionDockerMachine
	I0918 21:04:48.532962   61740 start.go:293] postStartSetup for "embed-certs-255556" (driver="kvm2")
	I0918 21:04:48.532973   61740 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0918 21:04:48.532991   61740 main.go:141] libmachine: (embed-certs-255556) Calling .DriverName
	I0918 21:04:48.533310   61740 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0918 21:04:48.533342   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHHostname
	I0918 21:04:48.536039   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:48.536529   61740 main.go:141] libmachine: (embed-certs-255556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:c2:b7", ip: ""} in network mk-embed-certs-255556: {Iface:virbr1 ExpiryTime:2024-09-18 22:04:39 +0000 UTC Type:0 Mac:52:54:00:e8:c2:b7 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:embed-certs-255556 Clientid:01:52:54:00:e8:c2:b7}
	I0918 21:04:48.536558   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined IP address 192.168.39.21 and MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:48.536631   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHPort
	I0918 21:04:48.536806   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHKeyPath
	I0918 21:04:48.536971   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHUsername
	I0918 21:04:48.537148   61740 sshutil.go:53] new ssh client: &{IP:192.168.39.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/embed-certs-255556/id_rsa Username:docker}
	I0918 21:04:48.623154   61740 ssh_runner.go:195] Run: cat /etc/os-release
	I0918 21:04:48.627520   61740 info.go:137] Remote host: Buildroot 2023.02.9
	I0918 21:04:48.627544   61740 filesync.go:126] Scanning /home/jenkins/minikube-integration/19667-7671/.minikube/addons for local assets ...
	I0918 21:04:48.627617   61740 filesync.go:126] Scanning /home/jenkins/minikube-integration/19667-7671/.minikube/files for local assets ...
	I0918 21:04:48.627711   61740 filesync.go:149] local asset: /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem -> 148782.pem in /etc/ssl/certs
	I0918 21:04:48.627827   61740 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0918 21:04:48.637145   61740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem --> /etc/ssl/certs/148782.pem (1708 bytes)
	I0918 21:04:48.661971   61740 start.go:296] duration metric: took 128.997987ms for postStartSetup
	I0918 21:04:48.662012   61740 fix.go:56] duration metric: took 20.352947161s for fixHost
	I0918 21:04:48.662034   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHHostname
	I0918 21:04:48.665153   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:48.665637   61740 main.go:141] libmachine: (embed-certs-255556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:c2:b7", ip: ""} in network mk-embed-certs-255556: {Iface:virbr1 ExpiryTime:2024-09-18 22:04:39 +0000 UTC Type:0 Mac:52:54:00:e8:c2:b7 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:embed-certs-255556 Clientid:01:52:54:00:e8:c2:b7}
	I0918 21:04:48.665668   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined IP address 192.168.39.21 and MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:48.665853   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHPort
	I0918 21:04:48.666090   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHKeyPath
	I0918 21:04:48.666289   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHKeyPath
	I0918 21:04:48.666607   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHUsername
	I0918 21:04:48.666784   61740 main.go:141] libmachine: Using SSH client type: native
	I0918 21:04:48.667024   61740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.21 22 <nil> <nil>}
	I0918 21:04:48.667040   61740 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0918 21:04:48.782245   61740 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726693488.758182538
	
	I0918 21:04:48.782286   61740 fix.go:216] guest clock: 1726693488.758182538
	I0918 21:04:48.782297   61740 fix.go:229] Guest: 2024-09-18 21:04:48.758182538 +0000 UTC Remote: 2024-09-18 21:04:48.662016609 +0000 UTC m=+270.354724953 (delta=96.165929ms)
	I0918 21:04:48.782322   61740 fix.go:200] guest clock delta is within tolerance: 96.165929ms
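The fix.go lines above compare the guest clock (read over SSH with date +%s.%N) against the host clock and accept the drift when it stays within a tolerance. A small illustrative Go sketch of that comparison, with the 2s tolerance being an assumed value rather than minikube's constant:

    package main

    import (
        "fmt"
        "time"
    )

    // withinTolerance reports whether the absolute guest/host clock delta is
    // small enough to leave the guest clock untouched.
    func withinTolerance(guest, host time.Time, tol time.Duration) (time.Duration, bool) {
        delta := guest.Sub(host)
        if delta < 0 {
            delta = -delta
        }
        return delta, delta <= tol
    }

    func main() {
        host := time.Now()
        guest := host.Add(96 * time.Millisecond) // a delta comparable to the log above
        delta, ok := withinTolerance(guest, host, 2*time.Second)
        fmt.Printf("delta=%v within tolerance: %v\n", delta, ok)
    }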
	I0918 21:04:48.782329   61740 start.go:83] releasing machines lock for "embed-certs-255556", held for 20.47331123s
	I0918 21:04:48.782358   61740 main.go:141] libmachine: (embed-certs-255556) Calling .DriverName
	I0918 21:04:48.782655   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetIP
	I0918 21:04:48.785572   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:48.785959   61740 main.go:141] libmachine: (embed-certs-255556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:c2:b7", ip: ""} in network mk-embed-certs-255556: {Iface:virbr1 ExpiryTime:2024-09-18 22:04:39 +0000 UTC Type:0 Mac:52:54:00:e8:c2:b7 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:embed-certs-255556 Clientid:01:52:54:00:e8:c2:b7}
	I0918 21:04:48.785988   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined IP address 192.168.39.21 and MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:48.786181   61740 main.go:141] libmachine: (embed-certs-255556) Calling .DriverName
	I0918 21:04:48.786653   61740 main.go:141] libmachine: (embed-certs-255556) Calling .DriverName
	I0918 21:04:48.786859   61740 main.go:141] libmachine: (embed-certs-255556) Calling .DriverName
	I0918 21:04:48.787019   61740 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0918 21:04:48.787083   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHHostname
	I0918 21:04:48.787118   61740 ssh_runner.go:195] Run: cat /version.json
	I0918 21:04:48.787142   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHHostname
	I0918 21:04:48.789834   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:48.790239   61740 main.go:141] libmachine: (embed-certs-255556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:c2:b7", ip: ""} in network mk-embed-certs-255556: {Iface:virbr1 ExpiryTime:2024-09-18 22:04:39 +0000 UTC Type:0 Mac:52:54:00:e8:c2:b7 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:embed-certs-255556 Clientid:01:52:54:00:e8:c2:b7}
	I0918 21:04:48.790290   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined IP address 192.168.39.21 and MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:48.790326   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:48.790625   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHPort
	I0918 21:04:48.790805   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHKeyPath
	I0918 21:04:48.790828   61740 main.go:141] libmachine: (embed-certs-255556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:c2:b7", ip: ""} in network mk-embed-certs-255556: {Iface:virbr1 ExpiryTime:2024-09-18 22:04:39 +0000 UTC Type:0 Mac:52:54:00:e8:c2:b7 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:embed-certs-255556 Clientid:01:52:54:00:e8:c2:b7}
	I0918 21:04:48.790860   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined IP address 192.168.39.21 and MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:48.791012   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHUsername
	I0918 21:04:48.791035   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHPort
	I0918 21:04:48.791172   61740 sshutil.go:53] new ssh client: &{IP:192.168.39.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/embed-certs-255556/id_rsa Username:docker}
	I0918 21:04:48.791251   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHKeyPath
	I0918 21:04:48.791406   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHUsername
	I0918 21:04:48.791537   61740 sshutil.go:53] new ssh client: &{IP:192.168.39.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/embed-certs-255556/id_rsa Username:docker}
	I0918 21:04:48.911282   61740 ssh_runner.go:195] Run: systemctl --version
	I0918 21:04:48.917459   61740 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0918 21:04:49.062272   61740 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0918 21:04:49.068629   61740 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0918 21:04:49.068709   61740 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0918 21:04:49.085575   61740 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0918 21:04:49.085607   61740 start.go:495] detecting cgroup driver to use...
	I0918 21:04:49.085677   61740 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0918 21:04:49.102455   61740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0918 21:04:49.117869   61740 docker.go:217] disabling cri-docker service (if available) ...
	I0918 21:04:49.117958   61740 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0918 21:04:49.135361   61740 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0918 21:04:49.150861   61740 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0918 21:04:49.285901   61740 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0918 21:04:49.438312   61740 docker.go:233] disabling docker service ...
	I0918 21:04:49.438390   61740 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0918 21:04:49.454560   61740 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0918 21:04:49.471109   61740 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0918 21:04:49.631711   61740 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0918 21:04:49.760860   61740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0918 21:04:49.778574   61740 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0918 21:04:49.797293   61740 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0918 21:04:49.797365   61740 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:04:49.808796   61740 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0918 21:04:49.808872   61740 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:04:49.821451   61740 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:04:49.834678   61740 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:04:49.847521   61740 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0918 21:04:49.860918   61740 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:04:49.873942   61740 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:04:49.892983   61740 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
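Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf carrying roughly the following key/value lines; this is a reconstruction from the commands in the log rather than a dump of the actual file:

    pause_image = "registry.k8s.io/pause:3.10"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]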
	I0918 21:04:49.904925   61740 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0918 21:04:49.916195   61740 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0918 21:04:49.916310   61740 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0918 21:04:49.931084   61740 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0918 21:04:49.942692   61740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 21:04:50.065013   61740 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0918 21:04:50.168347   61740 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0918 21:04:50.168440   61740 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0918 21:04:50.174948   61740 start.go:563] Will wait 60s for crictl version
	I0918 21:04:50.175017   61740 ssh_runner.go:195] Run: which crictl
	I0918 21:04:50.180139   61740 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0918 21:04:50.221578   61740 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0918 21:04:50.221687   61740 ssh_runner.go:195] Run: crio --version
	I0918 21:04:50.251587   61740 ssh_runner.go:195] Run: crio --version
	I0918 21:04:50.282931   61740 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0918 21:04:48.112865   61659 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-828868" in "kube-system" namespace has status "Ready":"True"
	I0918 21:04:48.112895   61659 pod_ready.go:82] duration metric: took 6.007950768s for pod "kube-controller-manager-default-k8s-diff-port-828868" in "kube-system" namespace to be "Ready" ...
	I0918 21:04:48.112909   61659 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-jz7ls" in "kube-system" namespace to be "Ready" ...
	I0918 21:04:48.118606   61659 pod_ready.go:93] pod "kube-proxy-jz7ls" in "kube-system" namespace has status "Ready":"True"
	I0918 21:04:48.118628   61659 pod_ready.go:82] duration metric: took 5.710918ms for pod "kube-proxy-jz7ls" in "kube-system" namespace to be "Ready" ...
	I0918 21:04:48.118647   61659 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-828868" in "kube-system" namespace to be "Ready" ...
	I0918 21:04:49.626081   61659 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-828868" in "kube-system" namespace has status "Ready":"True"
	I0918 21:04:49.626116   61659 pod_ready.go:82] duration metric: took 1.507459822s for pod "kube-scheduler-default-k8s-diff-port-828868" in "kube-system" namespace to be "Ready" ...
	I0918 21:04:49.626130   61659 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace to be "Ready" ...
	I0918 21:04:51.635306   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:04:50.284258   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetIP
	I0918 21:04:50.287321   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:50.287754   61740 main.go:141] libmachine: (embed-certs-255556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:c2:b7", ip: ""} in network mk-embed-certs-255556: {Iface:virbr1 ExpiryTime:2024-09-18 22:04:39 +0000 UTC Type:0 Mac:52:54:00:e8:c2:b7 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:embed-certs-255556 Clientid:01:52:54:00:e8:c2:b7}
	I0918 21:04:50.287782   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined IP address 192.168.39.21 and MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:50.288116   61740 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0918 21:04:50.292221   61740 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0918 21:04:50.304472   61740 kubeadm.go:883] updating cluster {Name:embed-certs-255556 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-255556 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.21 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0918 21:04:50.304604   61740 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0918 21:04:50.304675   61740 ssh_runner.go:195] Run: sudo crictl images --output json
	I0918 21:04:50.343445   61740 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0918 21:04:50.343527   61740 ssh_runner.go:195] Run: which lz4
	I0918 21:04:50.347600   61740 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0918 21:04:50.351647   61740 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0918 21:04:50.351679   61740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0918 21:04:51.665892   61740 crio.go:462] duration metric: took 1.318339658s to copy over tarball
	I0918 21:04:51.665970   61740 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0918 21:04:50.157515   62061 main.go:141] libmachine: (old-k8s-version-740194) Waiting to get IP...
	I0918 21:04:50.158369   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:04:50.158796   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | unable to find current IP address of domain old-k8s-version-740194 in network mk-old-k8s-version-740194
	I0918 21:04:50.158931   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | I0918 21:04:50.158765   63026 retry.go:31] will retry after 214.617426ms: waiting for machine to come up
	I0918 21:04:50.375418   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:04:50.375882   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | unable to find current IP address of domain old-k8s-version-740194 in network mk-old-k8s-version-740194
	I0918 21:04:50.375910   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | I0918 21:04:50.375834   63026 retry.go:31] will retry after 311.080996ms: waiting for machine to come up
	I0918 21:04:50.688569   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:04:50.689151   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | unable to find current IP address of domain old-k8s-version-740194 in network mk-old-k8s-version-740194
	I0918 21:04:50.689182   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | I0918 21:04:50.689112   63026 retry.go:31] will retry after 386.384815ms: waiting for machine to come up
	I0918 21:04:51.076846   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:04:51.077336   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | unable to find current IP address of domain old-k8s-version-740194 in network mk-old-k8s-version-740194
	I0918 21:04:51.077359   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | I0918 21:04:51.077280   63026 retry.go:31] will retry after 556.210887ms: waiting for machine to come up
	I0918 21:04:51.634919   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:04:51.635423   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | unable to find current IP address of domain old-k8s-version-740194 in network mk-old-k8s-version-740194
	I0918 21:04:51.635462   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | I0918 21:04:51.635325   63026 retry.go:31] will retry after 607.960565ms: waiting for machine to come up
	I0918 21:04:52.245179   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:04:52.245741   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | unable to find current IP address of domain old-k8s-version-740194 in network mk-old-k8s-version-740194
	I0918 21:04:52.245774   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | I0918 21:04:52.245692   63026 retry.go:31] will retry after 620.243825ms: waiting for machine to come up
	I0918 21:04:52.867067   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:04:52.867658   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | unable to find current IP address of domain old-k8s-version-740194 in network mk-old-k8s-version-740194
	I0918 21:04:52.867680   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | I0918 21:04:52.867608   63026 retry.go:31] will retry after 1.091814923s: waiting for machine to come up
	I0918 21:04:53.961395   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:04:53.961819   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | unable to find current IP address of domain old-k8s-version-740194 in network mk-old-k8s-version-740194
	I0918 21:04:53.961842   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | I0918 21:04:53.961784   63026 retry.go:31] will retry after 1.130552716s: waiting for machine to come up
	I0918 21:04:54.133598   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:04:56.134938   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:04:53.837558   61740 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.171557505s)
	I0918 21:04:53.837589   61740 crio.go:469] duration metric: took 2.171667234s to extract the tarball
	I0918 21:04:53.837610   61740 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0918 21:04:53.876381   61740 ssh_runner.go:195] Run: sudo crictl images --output json
	I0918 21:04:53.924938   61740 crio.go:514] all images are preloaded for cri-o runtime.
	I0918 21:04:53.924968   61740 cache_images.go:84] Images are preloaded, skipping loading
	I0918 21:04:53.924979   61740 kubeadm.go:934] updating node { 192.168.39.21 8443 v1.31.1 crio true true} ...
	I0918 21:04:53.925115   61740 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-255556 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.21
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-255556 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0918 21:04:53.925203   61740 ssh_runner.go:195] Run: crio config
	I0918 21:04:53.969048   61740 cni.go:84] Creating CNI manager for ""
	I0918 21:04:53.969076   61740 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0918 21:04:53.969086   61740 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0918 21:04:53.969105   61740 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.21 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-255556 NodeName:embed-certs-255556 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.21"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.21 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0918 21:04:53.969240   61740 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.21
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-255556"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.21
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.21"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0918 21:04:53.969298   61740 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0918 21:04:53.978636   61740 binaries.go:44] Found k8s binaries, skipping transfer
	I0918 21:04:53.978702   61740 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0918 21:04:53.988580   61740 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0918 21:04:54.005819   61740 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0918 21:04:54.021564   61740 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0918 21:04:54.038702   61740 ssh_runner.go:195] Run: grep 192.168.39.21	control-plane.minikube.internal$ /etc/hosts
	I0918 21:04:54.042536   61740 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.21	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0918 21:04:54.053896   61740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 21:04:54.180842   61740 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0918 21:04:54.197701   61740 certs.go:68] Setting up /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/embed-certs-255556 for IP: 192.168.39.21
	I0918 21:04:54.197731   61740 certs.go:194] generating shared ca certs ...
	I0918 21:04:54.197754   61740 certs.go:226] acquiring lock for ca certs: {Name:mkf54e0e71ed884c4372f9d3d4cb308ea4600185 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 21:04:54.197953   61740 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.key
	I0918 21:04:54.198020   61740 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.key
	I0918 21:04:54.198034   61740 certs.go:256] generating profile certs ...
	I0918 21:04:54.198129   61740 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/embed-certs-255556/client.key
	I0918 21:04:54.198191   61740 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/embed-certs-255556/apiserver.key.4704fd19
	I0918 21:04:54.198225   61740 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/embed-certs-255556/proxy-client.key
	I0918 21:04:54.198326   61740 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878.pem (1338 bytes)
	W0918 21:04:54.198358   61740 certs.go:480] ignoring /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878_empty.pem, impossibly tiny 0 bytes
	I0918 21:04:54.198370   61740 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem (1675 bytes)
	I0918 21:04:54.198420   61740 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem (1082 bytes)
	I0918 21:04:54.198463   61740 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem (1123 bytes)
	I0918 21:04:54.198498   61740 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem (1679 bytes)
	I0918 21:04:54.198566   61740 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem (1708 bytes)
	I0918 21:04:54.199258   61740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0918 21:04:54.231688   61740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0918 21:04:54.276366   61740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0918 21:04:54.320929   61740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0918 21:04:54.348698   61740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/embed-certs-255556/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0918 21:04:54.375168   61740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/embed-certs-255556/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0918 21:04:54.399159   61740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/embed-certs-255556/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0918 21:04:54.427975   61740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/embed-certs-255556/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0918 21:04:54.454648   61740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878.pem --> /usr/share/ca-certificates/14878.pem (1338 bytes)
	I0918 21:04:54.477518   61740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem --> /usr/share/ca-certificates/148782.pem (1708 bytes)
	I0918 21:04:54.500703   61740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0918 21:04:54.523380   61740 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0918 21:04:54.540053   61740 ssh_runner.go:195] Run: openssl version
	I0918 21:04:54.545818   61740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14878.pem && ln -fs /usr/share/ca-certificates/14878.pem /etc/ssl/certs/14878.pem"
	I0918 21:04:54.557138   61740 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14878.pem
	I0918 21:04:54.561973   61740 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 18 19:57 /usr/share/ca-certificates/14878.pem
	I0918 21:04:54.562030   61740 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14878.pem
	I0918 21:04:54.568133   61740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14878.pem /etc/ssl/certs/51391683.0"
	I0918 21:04:54.578964   61740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148782.pem && ln -fs /usr/share/ca-certificates/148782.pem /etc/ssl/certs/148782.pem"
	I0918 21:04:54.590254   61740 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148782.pem
	I0918 21:04:54.594944   61740 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 18 19:57 /usr/share/ca-certificates/148782.pem
	I0918 21:04:54.595022   61740 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148782.pem
	I0918 21:04:54.600797   61740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148782.pem /etc/ssl/certs/3ec20f2e.0"
	I0918 21:04:54.612078   61740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0918 21:04:54.623280   61740 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0918 21:04:54.628636   61740 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 18 19:39 /usr/share/ca-certificates/minikubeCA.pem
	I0918 21:04:54.628711   61740 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0918 21:04:54.634847   61740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0918 21:04:54.645647   61740 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0918 21:04:54.650004   61740 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0918 21:04:54.656906   61740 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0918 21:04:54.662778   61740 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0918 21:04:54.668744   61740 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0918 21:04:54.674676   61740 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0918 21:04:54.680431   61740 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0918 21:04:54.686242   61740 kubeadm.go:392] StartCluster: {Name:embed-certs-255556 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-255556 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.21 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 21:04:54.686364   61740 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0918 21:04:54.686439   61740 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0918 21:04:54.724228   61740 cri.go:89] found id: ""
	I0918 21:04:54.724319   61740 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0918 21:04:54.734427   61740 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0918 21:04:54.734458   61740 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0918 21:04:54.734511   61740 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0918 21:04:54.747453   61740 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0918 21:04:54.748449   61740 kubeconfig.go:125] found "embed-certs-255556" server: "https://192.168.39.21:8443"
	I0918 21:04:54.750481   61740 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0918 21:04:54.760549   61740 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.21
	I0918 21:04:54.760585   61740 kubeadm.go:1160] stopping kube-system containers ...
	I0918 21:04:54.760599   61740 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0918 21:04:54.760659   61740 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0918 21:04:54.796334   61740 cri.go:89] found id: ""
	I0918 21:04:54.796426   61740 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0918 21:04:54.820854   61740 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0918 21:04:54.831959   61740 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0918 21:04:54.831982   61740 kubeadm.go:157] found existing configuration files:
	
	I0918 21:04:54.832075   61740 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0918 21:04:54.841872   61740 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0918 21:04:54.841952   61740 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0918 21:04:54.852032   61740 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0918 21:04:54.862101   61740 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0918 21:04:54.862176   61740 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0918 21:04:54.872575   61740 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0918 21:04:54.882283   61740 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0918 21:04:54.882386   61740 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0918 21:04:54.895907   61740 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0918 21:04:54.905410   61740 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0918 21:04:54.905484   61740 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0918 21:04:54.914938   61740 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0918 21:04:54.924536   61740 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:04:55.035830   61740 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:04:55.975305   61740 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:04:56.227988   61740 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:04:56.304760   61740 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:04:56.375088   61740 api_server.go:52] waiting for apiserver process to appear ...
	I0918 21:04:56.375185   61740 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:04:56.875319   61740 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:04:57.375240   61740 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:04:57.875532   61740 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
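Once the kube-apiserver process appears, the log switches from pgrep polling to probing https://192.168.39.21:8443/healthz until it stops returning connection refused, 403 or 500 (see the api_server.go lines further down). A hedged Go sketch of such a poll loop; the interval, timeout and skipped TLS verification are assumptions, not minikube's settings:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // waitForHealthz polls an HTTPS healthz endpoint until it answers 200 OK
    // or the deadline passes.
    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout:   2 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil // apiserver reports healthy
                }
                // 403/500 mean the server is up but not yet ready; keep polling.
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out waiting for %s", url)
    }

    func main() {
        if err := waitForHealthz("https://192.168.39.21:8443/healthz", 4*time.Minute); err != nil {
            fmt.Println(err)
        }
    }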
	I0918 21:04:55.093491   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:04:55.093956   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | unable to find current IP address of domain old-k8s-version-740194 in network mk-old-k8s-version-740194
	I0918 21:04:55.093982   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | I0918 21:04:55.093923   63026 retry.go:31] will retry after 1.824664154s: waiting for machine to come up
	I0918 21:04:56.920959   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:04:56.921371   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | unable to find current IP address of domain old-k8s-version-740194 in network mk-old-k8s-version-740194
	I0918 21:04:56.921422   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | I0918 21:04:56.921354   63026 retry.go:31] will retry after 1.591260677s: waiting for machine to come up
	I0918 21:04:58.514832   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:04:58.515294   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | unable to find current IP address of domain old-k8s-version-740194 in network mk-old-k8s-version-740194
	I0918 21:04:58.515322   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | I0918 21:04:58.515262   63026 retry.go:31] will retry after 1.868763497s: waiting for machine to come up
	I0918 21:04:58.135056   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:00.633540   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:04:58.375400   61740 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:04:58.392935   61740 api_server.go:72] duration metric: took 2.017847705s to wait for apiserver process to appear ...
	I0918 21:04:58.393110   61740 api_server.go:88] waiting for apiserver healthz status ...
	I0918 21:04:58.393152   61740 api_server.go:253] Checking apiserver healthz at https://192.168.39.21:8443/healthz ...
	I0918 21:04:58.393699   61740 api_server.go:269] stopped: https://192.168.39.21:8443/healthz: Get "https://192.168.39.21:8443/healthz": dial tcp 192.168.39.21:8443: connect: connection refused
	I0918 21:04:58.893291   61740 api_server.go:253] Checking apiserver healthz at https://192.168.39.21:8443/healthz ...
	I0918 21:05:01.124915   61740 api_server.go:279] https://192.168.39.21:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0918 21:05:01.124954   61740 api_server.go:103] status: https://192.168.39.21:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0918 21:05:01.124991   61740 api_server.go:253] Checking apiserver healthz at https://192.168.39.21:8443/healthz ...
	I0918 21:05:01.179199   61740 api_server.go:279] https://192.168.39.21:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0918 21:05:01.179225   61740 api_server.go:103] status: https://192.168.39.21:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0918 21:05:01.393537   61740 api_server.go:253] Checking apiserver healthz at https://192.168.39.21:8443/healthz ...
	I0918 21:05:01.399577   61740 api_server.go:279] https://192.168.39.21:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0918 21:05:01.399610   61740 api_server.go:103] status: https://192.168.39.21:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0918 21:05:01.894174   61740 api_server.go:253] Checking apiserver healthz at https://192.168.39.21:8443/healthz ...
	I0918 21:05:01.899086   61740 api_server.go:279] https://192.168.39.21:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0918 21:05:01.899110   61740 api_server.go:103] status: https://192.168.39.21:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0918 21:05:02.393672   61740 api_server.go:253] Checking apiserver healthz at https://192.168.39.21:8443/healthz ...
	I0918 21:05:02.401942   61740 api_server.go:279] https://192.168.39.21:8443/healthz returned 200:
	ok
	I0918 21:05:02.408523   61740 api_server.go:141] control plane version: v1.31.1
	I0918 21:05:02.408553   61740 api_server.go:131] duration metric: took 4.015427901s to wait for apiserver health ...
	I0918 21:05:02.408562   61740 cni.go:84] Creating CNI manager for ""
	I0918 21:05:02.408568   61740 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0918 21:05:02.410199   61740 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0918 21:05:02.411470   61740 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0918 21:05:02.424617   61740 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0918 21:05:02.443819   61740 system_pods.go:43] waiting for kube-system pods to appear ...
	I0918 21:05:02.458892   61740 system_pods.go:59] 8 kube-system pods found
	I0918 21:05:02.458939   61740 system_pods.go:61] "coredns-7c65d6cfc9-xwn8w" [773b9a83-bb43-40d3-b3a3-40603c3b22b2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0918 21:05:02.458949   61740 system_pods.go:61] "etcd-embed-certs-255556" [ee3e7dc9-fb5a-4faa-a0b5-b84b7cd506b6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0918 21:05:02.458961   61740 system_pods.go:61] "kube-apiserver-embed-certs-255556" [c60ce069-c7a0-42d7-a7de-ce3cf91a3d43] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0918 21:05:02.458970   61740 system_pods.go:61] "kube-controller-manager-embed-certs-255556" [ac8f6b42-caa3-4815-9a90-3f7bb1f0060b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0918 21:05:02.458980   61740 system_pods.go:61] "kube-proxy-v8szm" [367f743a-399b-4d04-8604-dcd441999581] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0918 21:05:02.458993   61740 system_pods.go:61] "kube-scheduler-embed-certs-255556" [b5dd211b-7963-41ac-8b43-0a5451e3e848] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0918 21:05:02.459001   61740 system_pods.go:61] "metrics-server-6867b74b74-z8rm7" [d1b6823e-4ac5-4ac6-88ae-7f8eac622fdf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0918 21:05:02.459009   61740 system_pods.go:61] "storage-provisioner" [1575f899-35a7-4eb2-ad5f-660183f75aa6] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0918 21:05:02.459015   61740 system_pods.go:74] duration metric: took 15.172393ms to wait for pod list to return data ...
	I0918 21:05:02.459025   61740 node_conditions.go:102] verifying NodePressure condition ...
	I0918 21:05:02.463140   61740 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0918 21:05:02.463177   61740 node_conditions.go:123] node cpu capacity is 2
	I0918 21:05:02.463192   61740 node_conditions.go:105] duration metric: took 4.162401ms to run NodePressure ...
	I0918 21:05:02.463214   61740 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:05:02.757153   61740 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0918 21:05:02.761949   61740 kubeadm.go:739] kubelet initialised
	I0918 21:05:02.761977   61740 kubeadm.go:740] duration metric: took 4.79396ms waiting for restarted kubelet to initialise ...
	I0918 21:05:02.761985   61740 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0918 21:05:02.767197   61740 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-xwn8w" in "kube-system" namespace to be "Ready" ...
	I0918 21:05:00.385891   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:00.386451   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | unable to find current IP address of domain old-k8s-version-740194 in network mk-old-k8s-version-740194
	I0918 21:05:00.386482   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | I0918 21:05:00.386384   63026 retry.go:31] will retry after 3.274467583s: waiting for machine to come up
	I0918 21:05:03.664788   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:03.665255   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | unable to find current IP address of domain old-k8s-version-740194 in network mk-old-k8s-version-740194
	I0918 21:05:03.665286   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | I0918 21:05:03.665210   63026 retry.go:31] will retry after 4.112908346s: waiting for machine to come up
	I0918 21:05:02.634177   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:05.133431   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:07.133941   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:04.774196   61740 pod_ready.go:103] pod "coredns-7c65d6cfc9-xwn8w" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:07.273045   61740 pod_ready.go:103] pod "coredns-7c65d6cfc9-xwn8w" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:09.245246   61273 start.go:364] duration metric: took 55.195169549s to acquireMachinesLock for "no-preload-331658"
	I0918 21:05:09.245300   61273 start.go:96] Skipping create...Using existing machine configuration
	I0918 21:05:09.245311   61273 fix.go:54] fixHost starting: 
	I0918 21:05:09.245741   61273 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:05:09.245778   61273 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:05:09.263998   61273 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37335
	I0918 21:05:09.264565   61273 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:05:09.265118   61273 main.go:141] libmachine: Using API Version  1
	I0918 21:05:09.265142   61273 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:05:09.265505   61273 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:05:09.265732   61273 main.go:141] libmachine: (no-preload-331658) Calling .DriverName
	I0918 21:05:09.265901   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetState
	I0918 21:05:09.269500   61273 fix.go:112] recreateIfNeeded on no-preload-331658: state=Stopped err=<nil>
	I0918 21:05:09.269525   61273 main.go:141] libmachine: (no-preload-331658) Calling .DriverName
	W0918 21:05:09.269730   61273 fix.go:138] unexpected machine state, will restart: <nil>
	I0918 21:05:09.271448   61273 out.go:177] * Restarting existing kvm2 VM for "no-preload-331658" ...
	I0918 21:05:07.782616   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:07.783108   62061 main.go:141] libmachine: (old-k8s-version-740194) Found IP for machine: 192.168.72.53
	I0918 21:05:07.783125   62061 main.go:141] libmachine: (old-k8s-version-740194) Reserving static IP address...
	I0918 21:05:07.783135   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has current primary IP address 192.168.72.53 and MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:07.783509   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | found host DHCP lease matching {name: "old-k8s-version-740194", mac: "52:54:00:3f:a7:11", ip: "192.168.72.53"} in network mk-old-k8s-version-740194: {Iface:virbr3 ExpiryTime:2024-09-18 22:05:00 +0000 UTC Type:0 Mac:52:54:00:3f:a7:11 Iaid: IPaddr:192.168.72.53 Prefix:24 Hostname:old-k8s-version-740194 Clientid:01:52:54:00:3f:a7:11}
	I0918 21:05:07.783542   62061 main.go:141] libmachine: (old-k8s-version-740194) Reserved static IP address: 192.168.72.53
	I0918 21:05:07.783565   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | skip adding static IP to network mk-old-k8s-version-740194 - found existing host DHCP lease matching {name: "old-k8s-version-740194", mac: "52:54:00:3f:a7:11", ip: "192.168.72.53"}
	I0918 21:05:07.783583   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | Getting to WaitForSSH function...
	I0918 21:05:07.783614   62061 main.go:141] libmachine: (old-k8s-version-740194) Waiting for SSH to be available...
	I0918 21:05:07.785492   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:07.785856   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a7:11", ip: ""} in network mk-old-k8s-version-740194: {Iface:virbr3 ExpiryTime:2024-09-18 22:05:00 +0000 UTC Type:0 Mac:52:54:00:3f:a7:11 Iaid: IPaddr:192.168.72.53 Prefix:24 Hostname:old-k8s-version-740194 Clientid:01:52:54:00:3f:a7:11}
	I0918 21:05:07.785885   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined IP address 192.168.72.53 and MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:07.786000   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | Using SSH client type: external
	I0918 21:05:07.786021   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | Using SSH private key: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/old-k8s-version-740194/id_rsa (-rw-------)
	I0918 21:05:07.786044   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.53 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19667-7671/.minikube/machines/old-k8s-version-740194/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0918 21:05:07.786055   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | About to run SSH command:
	I0918 21:05:07.786065   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | exit 0
	I0918 21:05:07.915953   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | SSH cmd err, output: <nil>: 
	I0918 21:05:07.916454   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetConfigRaw
	I0918 21:05:07.917059   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetIP
	I0918 21:05:07.919749   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:07.920244   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a7:11", ip: ""} in network mk-old-k8s-version-740194: {Iface:virbr3 ExpiryTime:2024-09-18 22:05:00 +0000 UTC Type:0 Mac:52:54:00:3f:a7:11 Iaid: IPaddr:192.168.72.53 Prefix:24 Hostname:old-k8s-version-740194 Clientid:01:52:54:00:3f:a7:11}
	I0918 21:05:07.920280   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined IP address 192.168.72.53 and MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:07.920639   62061 profile.go:143] Saving config to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/old-k8s-version-740194/config.json ...
	I0918 21:05:07.920871   62061 machine.go:93] provisionDockerMachine start ...
	I0918 21:05:07.920893   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .DriverName
	I0918 21:05:07.921100   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHHostname
	I0918 21:05:07.923129   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:07.923434   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a7:11", ip: ""} in network mk-old-k8s-version-740194: {Iface:virbr3 ExpiryTime:2024-09-18 22:05:00 +0000 UTC Type:0 Mac:52:54:00:3f:a7:11 Iaid: IPaddr:192.168.72.53 Prefix:24 Hostname:old-k8s-version-740194 Clientid:01:52:54:00:3f:a7:11}
	I0918 21:05:07.923458   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined IP address 192.168.72.53 and MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:07.923573   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHPort
	I0918 21:05:07.923727   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHKeyPath
	I0918 21:05:07.923889   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHKeyPath
	I0918 21:05:07.924036   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHUsername
	I0918 21:05:07.924185   62061 main.go:141] libmachine: Using SSH client type: native
	I0918 21:05:07.924368   62061 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.53 22 <nil> <nil>}
	I0918 21:05:07.924378   62061 main.go:141] libmachine: About to run SSH command:
	hostname
	I0918 21:05:08.035941   62061 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0918 21:05:08.035972   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetMachineName
	I0918 21:05:08.036242   62061 buildroot.go:166] provisioning hostname "old-k8s-version-740194"
	I0918 21:05:08.036273   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetMachineName
	I0918 21:05:08.036512   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHHostname
	I0918 21:05:08.038902   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:08.039239   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a7:11", ip: ""} in network mk-old-k8s-version-740194: {Iface:virbr3 ExpiryTime:2024-09-18 22:05:00 +0000 UTC Type:0 Mac:52:54:00:3f:a7:11 Iaid: IPaddr:192.168.72.53 Prefix:24 Hostname:old-k8s-version-740194 Clientid:01:52:54:00:3f:a7:11}
	I0918 21:05:08.039266   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined IP address 192.168.72.53 and MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:08.039438   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHPort
	I0918 21:05:08.039632   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHKeyPath
	I0918 21:05:08.039899   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHKeyPath
	I0918 21:05:08.040072   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHUsername
	I0918 21:05:08.040248   62061 main.go:141] libmachine: Using SSH client type: native
	I0918 21:05:08.040415   62061 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.53 22 <nil> <nil>}
	I0918 21:05:08.040428   62061 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-740194 && echo "old-k8s-version-740194" | sudo tee /etc/hostname
	I0918 21:05:08.165391   62061 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-740194
	
	I0918 21:05:08.165424   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHHostname
	I0918 21:05:08.168059   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:08.168396   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a7:11", ip: ""} in network mk-old-k8s-version-740194: {Iface:virbr3 ExpiryTime:2024-09-18 22:05:00 +0000 UTC Type:0 Mac:52:54:00:3f:a7:11 Iaid: IPaddr:192.168.72.53 Prefix:24 Hostname:old-k8s-version-740194 Clientid:01:52:54:00:3f:a7:11}
	I0918 21:05:08.168428   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined IP address 192.168.72.53 and MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:08.168621   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHPort
	I0918 21:05:08.168837   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHKeyPath
	I0918 21:05:08.168988   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHKeyPath
	I0918 21:05:08.169231   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHUsername
	I0918 21:05:08.169413   62061 main.go:141] libmachine: Using SSH client type: native
	I0918 21:05:08.169579   62061 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.53 22 <nil> <nil>}
	I0918 21:05:08.169594   62061 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-740194' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-740194/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-740194' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0918 21:05:08.288591   62061 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0918 21:05:08.288644   62061 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19667-7671/.minikube CaCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19667-7671/.minikube}
	I0918 21:05:08.288671   62061 buildroot.go:174] setting up certificates
	I0918 21:05:08.288682   62061 provision.go:84] configureAuth start
	I0918 21:05:08.288694   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetMachineName
	I0918 21:05:08.289017   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetIP
	I0918 21:05:08.291949   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:08.292358   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a7:11", ip: ""} in network mk-old-k8s-version-740194: {Iface:virbr3 ExpiryTime:2024-09-18 22:05:00 +0000 UTC Type:0 Mac:52:54:00:3f:a7:11 Iaid: IPaddr:192.168.72.53 Prefix:24 Hostname:old-k8s-version-740194 Clientid:01:52:54:00:3f:a7:11}
	I0918 21:05:08.292405   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined IP address 192.168.72.53 and MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:08.292526   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHHostname
	I0918 21:05:08.295013   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:08.295378   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a7:11", ip: ""} in network mk-old-k8s-version-740194: {Iface:virbr3 ExpiryTime:2024-09-18 22:05:00 +0000 UTC Type:0 Mac:52:54:00:3f:a7:11 Iaid: IPaddr:192.168.72.53 Prefix:24 Hostname:old-k8s-version-740194 Clientid:01:52:54:00:3f:a7:11}
	I0918 21:05:08.295399   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined IP address 192.168.72.53 and MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:08.295567   62061 provision.go:143] copyHostCerts
	I0918 21:05:08.295630   62061 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem, removing ...
	I0918 21:05:08.295640   62061 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem
	I0918 21:05:08.295692   62061 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem (1082 bytes)
	I0918 21:05:08.295783   62061 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem, removing ...
	I0918 21:05:08.295790   62061 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem
	I0918 21:05:08.295810   62061 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem (1123 bytes)
	I0918 21:05:08.295917   62061 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem, removing ...
	I0918 21:05:08.295926   62061 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem
	I0918 21:05:08.295949   62061 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem (1679 bytes)
	I0918 21:05:08.296009   62061 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-740194 san=[127.0.0.1 192.168.72.53 localhost minikube old-k8s-version-740194]
	I0918 21:05:08.560726   62061 provision.go:177] copyRemoteCerts
	I0918 21:05:08.560786   62061 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0918 21:05:08.560816   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHHostname
	I0918 21:05:08.563798   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:08.564231   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a7:11", ip: ""} in network mk-old-k8s-version-740194: {Iface:virbr3 ExpiryTime:2024-09-18 22:05:00 +0000 UTC Type:0 Mac:52:54:00:3f:a7:11 Iaid: IPaddr:192.168.72.53 Prefix:24 Hostname:old-k8s-version-740194 Clientid:01:52:54:00:3f:a7:11}
	I0918 21:05:08.564266   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined IP address 192.168.72.53 and MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:08.564473   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHPort
	I0918 21:05:08.564704   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHKeyPath
	I0918 21:05:08.564876   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHUsername
	I0918 21:05:08.565016   62061 sshutil.go:53] new ssh client: &{IP:192.168.72.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/old-k8s-version-740194/id_rsa Username:docker}
	I0918 21:05:08.654357   62061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0918 21:05:08.680551   62061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0918 21:05:08.704324   62061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0918 21:05:08.727966   62061 provision.go:87] duration metric: took 439.269312ms to configureAuth
	I0918 21:05:08.728003   62061 buildroot.go:189] setting minikube options for container-runtime
	I0918 21:05:08.728235   62061 config.go:182] Loaded profile config "old-k8s-version-740194": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0918 21:05:08.728305   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHHostname
	I0918 21:05:08.730895   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:08.731318   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a7:11", ip: ""} in network mk-old-k8s-version-740194: {Iface:virbr3 ExpiryTime:2024-09-18 22:05:00 +0000 UTC Type:0 Mac:52:54:00:3f:a7:11 Iaid: IPaddr:192.168.72.53 Prefix:24 Hostname:old-k8s-version-740194 Clientid:01:52:54:00:3f:a7:11}
	I0918 21:05:08.731348   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined IP address 192.168.72.53 and MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:08.731448   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHPort
	I0918 21:05:08.731668   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHKeyPath
	I0918 21:05:08.731884   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHKeyPath
	I0918 21:05:08.732057   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHUsername
	I0918 21:05:08.732223   62061 main.go:141] libmachine: Using SSH client type: native
	I0918 21:05:08.732396   62061 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.53 22 <nil> <nil>}
	I0918 21:05:08.732411   62061 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0918 21:05:08.978962   62061 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0918 21:05:08.978995   62061 machine.go:96] duration metric: took 1.05811075s to provisionDockerMachine
	I0918 21:05:08.979008   62061 start.go:293] postStartSetup for "old-k8s-version-740194" (driver="kvm2")
	I0918 21:05:08.979020   62061 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0918 21:05:08.979050   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .DriverName
	I0918 21:05:08.979409   62061 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0918 21:05:08.979436   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHHostname
	I0918 21:05:08.982472   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:08.982791   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a7:11", ip: ""} in network mk-old-k8s-version-740194: {Iface:virbr3 ExpiryTime:2024-09-18 22:05:00 +0000 UTC Type:0 Mac:52:54:00:3f:a7:11 Iaid: IPaddr:192.168.72.53 Prefix:24 Hostname:old-k8s-version-740194 Clientid:01:52:54:00:3f:a7:11}
	I0918 21:05:08.982830   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined IP address 192.168.72.53 and MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:08.982996   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHPort
	I0918 21:05:08.983192   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHKeyPath
	I0918 21:05:08.983341   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHUsername
	I0918 21:05:08.983510   62061 sshutil.go:53] new ssh client: &{IP:192.168.72.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/old-k8s-version-740194/id_rsa Username:docker}
	I0918 21:05:09.074995   62061 ssh_runner.go:195] Run: cat /etc/os-release
	I0918 21:05:09.080207   62061 info.go:137] Remote host: Buildroot 2023.02.9
	I0918 21:05:09.080240   62061 filesync.go:126] Scanning /home/jenkins/minikube-integration/19667-7671/.minikube/addons for local assets ...
	I0918 21:05:09.080327   62061 filesync.go:126] Scanning /home/jenkins/minikube-integration/19667-7671/.minikube/files for local assets ...
	I0918 21:05:09.080451   62061 filesync.go:149] local asset: /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem -> 148782.pem in /etc/ssl/certs
	I0918 21:05:09.080583   62061 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0918 21:05:09.091374   62061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem --> /etc/ssl/certs/148782.pem (1708 bytes)
	I0918 21:05:09.119546   62061 start.go:296] duration metric: took 140.521158ms for postStartSetup
	I0918 21:05:09.119593   62061 fix.go:56] duration metric: took 20.337078765s for fixHost
	I0918 21:05:09.119620   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHHostname
	I0918 21:05:09.122534   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:09.123157   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a7:11", ip: ""} in network mk-old-k8s-version-740194: {Iface:virbr3 ExpiryTime:2024-09-18 22:05:00 +0000 UTC Type:0 Mac:52:54:00:3f:a7:11 Iaid: IPaddr:192.168.72.53 Prefix:24 Hostname:old-k8s-version-740194 Clientid:01:52:54:00:3f:a7:11}
	I0918 21:05:09.123187   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined IP address 192.168.72.53 and MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:09.123447   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHPort
	I0918 21:05:09.123695   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHKeyPath
	I0918 21:05:09.123891   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHKeyPath
	I0918 21:05:09.124095   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHUsername
	I0918 21:05:09.124373   62061 main.go:141] libmachine: Using SSH client type: native
	I0918 21:05:09.124582   62061 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.53 22 <nil> <nil>}
	I0918 21:05:09.124595   62061 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0918 21:05:09.245082   62061 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726693509.219779478
	
	I0918 21:05:09.245109   62061 fix.go:216] guest clock: 1726693509.219779478
	I0918 21:05:09.245119   62061 fix.go:229] Guest: 2024-09-18 21:05:09.219779478 +0000 UTC Remote: 2024-09-18 21:05:09.11959842 +0000 UTC m=+249.666759777 (delta=100.181058ms)
	I0918 21:05:09.245139   62061 fix.go:200] guest clock delta is within tolerance: 100.181058ms
	I0918 21:05:09.245146   62061 start.go:83] releasing machines lock for "old-k8s-version-740194", held for 20.462669229s
	I0918 21:05:09.245176   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .DriverName
	I0918 21:05:09.245602   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetIP
	I0918 21:05:09.248653   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:09.249110   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a7:11", ip: ""} in network mk-old-k8s-version-740194: {Iface:virbr3 ExpiryTime:2024-09-18 22:05:00 +0000 UTC Type:0 Mac:52:54:00:3f:a7:11 Iaid: IPaddr:192.168.72.53 Prefix:24 Hostname:old-k8s-version-740194 Clientid:01:52:54:00:3f:a7:11}
	I0918 21:05:09.249156   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined IP address 192.168.72.53 and MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:09.249345   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .DriverName
	I0918 21:05:09.249838   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .DriverName
	I0918 21:05:09.250047   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .DriverName
	I0918 21:05:09.250167   62061 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0918 21:05:09.250230   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHHostname
	I0918 21:05:09.250286   62061 ssh_runner.go:195] Run: cat /version.json
	I0918 21:05:09.250312   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHHostname
	I0918 21:05:09.253242   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:09.253408   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:09.253687   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a7:11", ip: ""} in network mk-old-k8s-version-740194: {Iface:virbr3 ExpiryTime:2024-09-18 22:05:00 +0000 UTC Type:0 Mac:52:54:00:3f:a7:11 Iaid: IPaddr:192.168.72.53 Prefix:24 Hostname:old-k8s-version-740194 Clientid:01:52:54:00:3f:a7:11}
	I0918 21:05:09.253749   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined IP address 192.168.72.53 and MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:09.253882   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a7:11", ip: ""} in network mk-old-k8s-version-740194: {Iface:virbr3 ExpiryTime:2024-09-18 22:05:00 +0000 UTC Type:0 Mac:52:54:00:3f:a7:11 Iaid: IPaddr:192.168.72.53 Prefix:24 Hostname:old-k8s-version-740194 Clientid:01:52:54:00:3f:a7:11}
	I0918 21:05:09.253901   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined IP address 192.168.72.53 and MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:09.253944   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHPort
	I0918 21:05:09.254026   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHPort
	I0918 21:05:09.254221   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHKeyPath
	I0918 21:05:09.254243   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHKeyPath
	I0918 21:05:09.254372   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHUsername
	I0918 21:05:09.254426   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHUsername
	I0918 21:05:09.254533   62061 sshutil.go:53] new ssh client: &{IP:192.168.72.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/old-k8s-version-740194/id_rsa Username:docker}
	I0918 21:05:09.254695   62061 sshutil.go:53] new ssh client: &{IP:192.168.72.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/old-k8s-version-740194/id_rsa Username:docker}
	I0918 21:05:09.374728   62061 ssh_runner.go:195] Run: systemctl --version
	I0918 21:05:09.381117   62061 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0918 21:05:09.532059   62061 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0918 21:05:09.538041   62061 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0918 21:05:09.538124   62061 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0918 21:05:09.553871   62061 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0918 21:05:09.553907   62061 start.go:495] detecting cgroup driver to use...
	I0918 21:05:09.553982   62061 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0918 21:05:09.573554   62061 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0918 21:05:09.591963   62061 docker.go:217] disabling cri-docker service (if available) ...
	I0918 21:05:09.592057   62061 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0918 21:05:09.610813   62061 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0918 21:05:09.627153   62061 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0918 21:05:09.763978   62061 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0918 21:05:09.945185   62061 docker.go:233] disabling docker service ...
	I0918 21:05:09.945257   62061 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0918 21:05:09.961076   62061 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0918 21:05:09.974660   62061 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0918 21:05:10.111191   62061 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0918 21:05:10.235058   62061 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0918 21:05:10.251389   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0918 21:05:10.270949   62061 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0918 21:05:10.271047   62061 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:05:10.284743   62061 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0918 21:05:10.284811   62061 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:05:10.295221   62061 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:05:10.305897   62061 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:05:10.317726   62061 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0918 21:05:10.330136   62061 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0918 21:05:10.339480   62061 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0918 21:05:10.339543   62061 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0918 21:05:10.352954   62061 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0918 21:05:10.370764   62061 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 21:05:10.524315   62061 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0918 21:05:10.617374   62061 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0918 21:05:10.617466   62061 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0918 21:05:10.624164   62061 start.go:563] Will wait 60s for crictl version
	I0918 21:05:10.624222   62061 ssh_runner.go:195] Run: which crictl
	I0918 21:05:10.629583   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0918 21:05:10.673613   62061 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0918 21:05:10.673702   62061 ssh_runner.go:195] Run: crio --version
	I0918 21:05:10.703948   62061 ssh_runner.go:195] Run: crio --version
	I0918 21:05:10.733924   62061 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0918 21:05:09.272840   61273 main.go:141] libmachine: (no-preload-331658) Calling .Start
	I0918 21:05:09.273067   61273 main.go:141] libmachine: (no-preload-331658) Ensuring networks are active...
	I0918 21:05:09.274115   61273 main.go:141] libmachine: (no-preload-331658) Ensuring network default is active
	I0918 21:05:09.274576   61273 main.go:141] libmachine: (no-preload-331658) Ensuring network mk-no-preload-331658 is active
	I0918 21:05:09.275108   61273 main.go:141] libmachine: (no-preload-331658) Getting domain xml...
	I0918 21:05:09.276003   61273 main.go:141] libmachine: (no-preload-331658) Creating domain...
	I0918 21:05:10.665647   61273 main.go:141] libmachine: (no-preload-331658) Waiting to get IP...
	I0918 21:05:10.666710   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:10.667187   61273 main.go:141] libmachine: (no-preload-331658) DBG | unable to find current IP address of domain no-preload-331658 in network mk-no-preload-331658
	I0918 21:05:10.667261   61273 main.go:141] libmachine: (no-preload-331658) DBG | I0918 21:05:10.667162   63200 retry.go:31] will retry after 215.232953ms: waiting for machine to come up
	I0918 21:05:10.883691   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:10.884249   61273 main.go:141] libmachine: (no-preload-331658) DBG | unable to find current IP address of domain no-preload-331658 in network mk-no-preload-331658
	I0918 21:05:10.884283   61273 main.go:141] libmachine: (no-preload-331658) DBG | I0918 21:05:10.884185   63200 retry.go:31] will retry after 289.698979ms: waiting for machine to come up
	I0918 21:05:11.175936   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:11.176656   61273 main.go:141] libmachine: (no-preload-331658) DBG | unable to find current IP address of domain no-preload-331658 in network mk-no-preload-331658
	I0918 21:05:11.176680   61273 main.go:141] libmachine: (no-preload-331658) DBG | I0918 21:05:11.176553   63200 retry.go:31] will retry after 424.473311ms: waiting for machine to come up
	I0918 21:05:09.633671   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:11.634755   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:09.274214   61740 pod_ready.go:103] pod "coredns-7c65d6cfc9-xwn8w" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:11.275099   61740 pod_ready.go:103] pod "coredns-7c65d6cfc9-xwn8w" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:10.735500   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetIP
	I0918 21:05:10.738130   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:10.738488   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a7:11", ip: ""} in network mk-old-k8s-version-740194: {Iface:virbr3 ExpiryTime:2024-09-18 22:05:00 +0000 UTC Type:0 Mac:52:54:00:3f:a7:11 Iaid: IPaddr:192.168.72.53 Prefix:24 Hostname:old-k8s-version-740194 Clientid:01:52:54:00:3f:a7:11}
	I0918 21:05:10.738516   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined IP address 192.168.72.53 and MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:10.738831   62061 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0918 21:05:10.742780   62061 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0918 21:05:10.754785   62061 kubeadm.go:883] updating cluster {Name:old-k8s-version-740194 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-740194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.53 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0918 21:05:10.754939   62061 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0918 21:05:10.755002   62061 ssh_runner.go:195] Run: sudo crictl images --output json
	I0918 21:05:10.800452   62061 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0918 21:05:10.800526   62061 ssh_runner.go:195] Run: which lz4
	I0918 21:05:10.804522   62061 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0918 21:05:10.809179   62061 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0918 21:05:10.809214   62061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0918 21:05:12.306958   62061 crio.go:462] duration metric: took 1.502481615s to copy over tarball
	I0918 21:05:12.307049   62061 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
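The sequence above is the preload fallback path: `crictl images` shows none of the expected v1.20.0 images, the `stat` probe confirms the tarball is not already on the guest, so the ~450 MB preloaded-images tarball is copied over and unpacked into /var with lz4, preserving xattrs. A condensed sketch of that flow under the assumption that commands run locally; in minikube they go through an SSH runner on the VM, and the scp target here is purely illustrative:

```go
package main

import (
	"fmt"
	"os/exec"
)

// ensurePreload makes sure the preload tarball is on the guest (copying it
// only when the stat probe fails) and then unpacks it into /var, keeping
// security.capability xattrs as in the logged tar invocation.
func ensurePreload(run func(cmd string) error, localTarball string) error {
	const remote = "/preloaded.tar.lz4"
	if err := run(`stat -c "%s %y" ` + remote); err != nil {
		// Not there yet: transfer it (scp in the real flow; "host:" is a placeholder).
		if err := run(fmt.Sprintf("scp %s host:%s", localTarball, remote)); err != nil {
			return err
		}
	}
	return run("sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf " + remote)
}

func main() {
	run := func(cmd string) error {
		return exec.Command("/bin/sh", "-c", cmd).Run()
	}
	fmt.Println(ensurePreload(run, "preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4"))
}
```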
	I0918 21:05:11.603153   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:11.603791   61273 main.go:141] libmachine: (no-preload-331658) DBG | unable to find current IP address of domain no-preload-331658 in network mk-no-preload-331658
	I0918 21:05:11.603817   61273 main.go:141] libmachine: (no-preload-331658) DBG | I0918 21:05:11.603742   63200 retry.go:31] will retry after 425.818515ms: waiting for machine to come up
	I0918 21:05:12.031622   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:12.032425   61273 main.go:141] libmachine: (no-preload-331658) DBG | unable to find current IP address of domain no-preload-331658 in network mk-no-preload-331658
	I0918 21:05:12.032458   61273 main.go:141] libmachine: (no-preload-331658) DBG | I0918 21:05:12.032357   63200 retry.go:31] will retry after 701.564015ms: waiting for machine to come up
	I0918 21:05:12.735295   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:12.735852   61273 main.go:141] libmachine: (no-preload-331658) DBG | unable to find current IP address of domain no-preload-331658 in network mk-no-preload-331658
	I0918 21:05:12.735882   61273 main.go:141] libmachine: (no-preload-331658) DBG | I0918 21:05:12.735814   63200 retry.go:31] will retry after 904.737419ms: waiting for machine to come up
	I0918 21:05:13.642383   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:13.642913   61273 main.go:141] libmachine: (no-preload-331658) DBG | unable to find current IP address of domain no-preload-331658 in network mk-no-preload-331658
	I0918 21:05:13.642935   61273 main.go:141] libmachine: (no-preload-331658) DBG | I0918 21:05:13.642872   63200 retry.go:31] will retry after 891.091353ms: waiting for machine to come up
	I0918 21:05:14.536200   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:14.536797   61273 main.go:141] libmachine: (no-preload-331658) DBG | unable to find current IP address of domain no-preload-331658 in network mk-no-preload-331658
	I0918 21:05:14.536849   61273 main.go:141] libmachine: (no-preload-331658) DBG | I0918 21:05:14.536761   63200 retry.go:31] will retry after 1.01795417s: waiting for machine to come up
	I0918 21:05:15.555787   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:15.556287   61273 main.go:141] libmachine: (no-preload-331658) DBG | unable to find current IP address of domain no-preload-331658 in network mk-no-preload-331658
	I0918 21:05:15.556315   61273 main.go:141] libmachine: (no-preload-331658) DBG | I0918 21:05:15.556243   63200 retry.go:31] will retry after 1.598926126s: waiting for machine to come up
	I0918 21:05:14.132957   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:16.133323   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:13.778274   61740 pod_ready.go:93] pod "coredns-7c65d6cfc9-xwn8w" in "kube-system" namespace has status "Ready":"True"
	I0918 21:05:13.778310   61740 pod_ready.go:82] duration metric: took 11.011085965s for pod "coredns-7c65d6cfc9-xwn8w" in "kube-system" namespace to be "Ready" ...
	I0918 21:05:13.778325   61740 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-255556" in "kube-system" namespace to be "Ready" ...
	I0918 21:05:13.785089   61740 pod_ready.go:93] pod "etcd-embed-certs-255556" in "kube-system" namespace has status "Ready":"True"
	I0918 21:05:13.785121   61740 pod_ready.go:82] duration metric: took 6.787649ms for pod "etcd-embed-certs-255556" in "kube-system" namespace to be "Ready" ...
	I0918 21:05:13.785135   61740 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-255556" in "kube-system" namespace to be "Ready" ...
	I0918 21:05:15.793479   61740 pod_ready.go:103] pod "kube-apiserver-embed-certs-255556" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:15.357114   62061 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.050037005s)
	I0918 21:05:15.357141   62061 crio.go:469] duration metric: took 3.050151648s to extract the tarball
	I0918 21:05:15.357148   62061 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0918 21:05:15.399373   62061 ssh_runner.go:195] Run: sudo crictl images --output json
	I0918 21:05:15.434204   62061 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0918 21:05:15.434238   62061 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0918 21:05:15.434332   62061 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 21:05:15.434369   62061 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0918 21:05:15.434385   62061 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0918 21:05:15.434398   62061 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0918 21:05:15.434339   62061 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0918 21:05:15.434438   62061 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0918 21:05:15.434443   62061 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0918 21:05:15.434491   62061 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0918 21:05:15.436820   62061 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0918 21:05:15.436824   62061 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0918 21:05:15.436829   62061 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 21:05:15.436856   62061 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0918 21:05:15.436867   62061 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0918 21:05:15.436904   62061 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0918 21:05:15.436952   62061 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0918 21:05:15.437278   62061 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0918 21:05:15.747423   62061 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0918 21:05:15.747705   62061 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0918 21:05:15.748375   62061 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0918 21:05:15.750816   62061 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0918 21:05:15.752244   62061 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0918 21:05:15.754038   62061 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0918 21:05:15.770881   62061 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0918 21:05:15.911654   62061 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0918 21:05:15.911725   62061 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0918 21:05:15.911795   62061 ssh_runner.go:195] Run: which crictl
	I0918 21:05:15.938412   62061 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0918 21:05:15.938464   62061 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0918 21:05:15.938534   62061 ssh_runner.go:195] Run: which crictl
	I0918 21:05:15.944615   62061 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0918 21:05:15.944706   62061 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0918 21:05:15.944736   62061 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0918 21:05:15.944749   62061 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0918 21:05:15.944786   62061 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0918 21:05:15.944809   62061 ssh_runner.go:195] Run: which crictl
	I0918 21:05:15.944820   62061 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0918 21:05:15.944844   62061 ssh_runner.go:195] Run: which crictl
	I0918 21:05:15.944857   62061 ssh_runner.go:195] Run: which crictl
	I0918 21:05:15.944661   62061 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0918 21:05:15.944914   62061 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0918 21:05:15.944950   62061 ssh_runner.go:195] Run: which crictl
	I0918 21:05:15.965316   62061 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0918 21:05:15.965366   62061 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0918 21:05:15.965418   62061 ssh_runner.go:195] Run: which crictl
	I0918 21:05:15.965461   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0918 21:05:15.965419   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0918 21:05:15.965544   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0918 21:05:15.965558   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0918 21:05:15.965613   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0918 21:05:15.965621   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0918 21:05:16.101533   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0918 21:05:16.101536   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0918 21:05:16.105174   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0918 21:05:16.105198   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0918 21:05:16.109044   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0918 21:05:16.109048   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0918 21:05:16.109152   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0918 21:05:16.220653   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0918 21:05:16.255418   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0918 21:05:16.259931   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0918 21:05:16.259986   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0918 21:05:16.260039   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0918 21:05:16.260121   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0918 21:05:16.260337   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0918 21:05:16.352969   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0918 21:05:16.399180   62061 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0918 21:05:16.445003   62061 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0918 21:05:16.445078   62061 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0918 21:05:16.445163   62061 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0918 21:05:16.445266   62061 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0918 21:05:16.445387   62061 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0918 21:05:16.445398   62061 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0918 21:05:16.633221   62061 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 21:05:16.782331   62061 cache_images.go:92] duration metric: took 1.348045359s to LoadCachedImages
	W0918 21:05:16.782445   62061 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
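The LoadCachedImages phase above works per image: `podman image inspect --format {{.Id}}` on the guest is compared with the digest the cache expects, any stale copy is removed with `crictl rmi`, and the image is then loaded from the local cache directory (missing in this run, hence the warning). A simplified sketch of the needs-transfer decision, with hypothetical helper names and the kube-apiserver digest taken from the log:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// needsTransfer reports whether the image on the guest differs from the hash
// recorded for the cached copy, i.e. whether it must be removed and reloaded.
func needsTransfer(image, wantHash string) bool {
	out, err := exec.Command("sudo", "podman", "image", "inspect",
		"--format", "{{.Id}}", image).Output()
	if err != nil {
		return true // not present at all -> must transfer
	}
	return strings.TrimSpace(string(out)) != wantHash
}

func main() {
	img := "registry.k8s.io/kube-apiserver:v1.20.0"
	want := "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99"
	if needsTransfer(img, want) {
		fmt.Printf("%q needs transfer: removing with crictl rmi and loading from cache\n", img)
		_ = exec.Command("sudo", "/usr/bin/crictl", "rmi", img).Run()
		// Next step in the real flow: load the cached tarball from
		// .minikube/cache/images/amd64/... into the guest runtime.
	}
}
```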
	I0918 21:05:16.782464   62061 kubeadm.go:934] updating node { 192.168.72.53 8443 v1.20.0 crio true true} ...
	I0918 21:05:16.782608   62061 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-740194 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.53
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-740194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0918 21:05:16.782679   62061 ssh_runner.go:195] Run: crio config
	I0918 21:05:16.838946   62061 cni.go:84] Creating CNI manager for ""
	I0918 21:05:16.838978   62061 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0918 21:05:16.838990   62061 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0918 21:05:16.839008   62061 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.53 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-740194 NodeName:old-k8s-version-740194 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.53"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.53 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0918 21:05:16.839163   62061 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.53
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-740194"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.53
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.53"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0918 21:05:16.839232   62061 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0918 21:05:16.849868   62061 binaries.go:44] Found k8s binaries, skipping transfer
	I0918 21:05:16.849955   62061 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0918 21:05:16.859716   62061 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0918 21:05:16.877857   62061 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0918 21:05:16.895573   62061 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0918 21:05:16.913545   62061 ssh_runner.go:195] Run: grep 192.168.72.53	control-plane.minikube.internal$ /etc/hosts
	I0918 21:05:16.917476   62061 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.53	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0918 21:05:16.931318   62061 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 21:05:17.057636   62061 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0918 21:05:17.076534   62061 certs.go:68] Setting up /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/old-k8s-version-740194 for IP: 192.168.72.53
	I0918 21:05:17.076571   62061 certs.go:194] generating shared ca certs ...
	I0918 21:05:17.076594   62061 certs.go:226] acquiring lock for ca certs: {Name:mkf54e0e71ed884c4372f9d3d4cb308ea4600185 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 21:05:17.076796   62061 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.key
	I0918 21:05:17.076855   62061 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.key
	I0918 21:05:17.076871   62061 certs.go:256] generating profile certs ...
	I0918 21:05:17.076999   62061 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/old-k8s-version-740194/client.key
	I0918 21:05:17.077083   62061 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/old-k8s-version-740194/apiserver.key.424b07d9
	I0918 21:05:17.077149   62061 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/old-k8s-version-740194/proxy-client.key
	I0918 21:05:17.077321   62061 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878.pem (1338 bytes)
	W0918 21:05:17.077371   62061 certs.go:480] ignoring /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878_empty.pem, impossibly tiny 0 bytes
	I0918 21:05:17.077386   62061 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem (1675 bytes)
	I0918 21:05:17.077421   62061 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem (1082 bytes)
	I0918 21:05:17.077465   62061 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem (1123 bytes)
	I0918 21:05:17.077499   62061 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem (1679 bytes)
	I0918 21:05:17.077585   62061 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem (1708 bytes)
	I0918 21:05:17.078553   62061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0918 21:05:17.116000   62061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0918 21:05:17.150848   62061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0918 21:05:17.190616   62061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0918 21:05:17.222411   62061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/old-k8s-version-740194/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0918 21:05:17.266923   62061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/old-k8s-version-740194/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0918 21:05:17.303973   62061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/old-k8s-version-740194/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0918 21:05:17.334390   62061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/old-k8s-version-740194/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0918 21:05:17.358732   62061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0918 21:05:17.385693   62061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878.pem --> /usr/share/ca-certificates/14878.pem (1338 bytes)
	I0918 21:05:17.411396   62061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem --> /usr/share/ca-certificates/148782.pem (1708 bytes)
	I0918 21:05:17.440205   62061 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0918 21:05:17.459639   62061 ssh_runner.go:195] Run: openssl version
	I0918 21:05:17.465846   62061 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0918 21:05:17.477482   62061 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0918 21:05:17.482579   62061 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 18 19:39 /usr/share/ca-certificates/minikubeCA.pem
	I0918 21:05:17.482650   62061 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0918 21:05:17.488822   62061 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0918 21:05:17.499729   62061 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14878.pem && ln -fs /usr/share/ca-certificates/14878.pem /etc/ssl/certs/14878.pem"
	I0918 21:05:17.511718   62061 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14878.pem
	I0918 21:05:17.516361   62061 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 18 19:57 /usr/share/ca-certificates/14878.pem
	I0918 21:05:17.516438   62061 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14878.pem
	I0918 21:05:17.522542   62061 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14878.pem /etc/ssl/certs/51391683.0"
	I0918 21:05:17.534450   62061 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148782.pem && ln -fs /usr/share/ca-certificates/148782.pem /etc/ssl/certs/148782.pem"
	I0918 21:05:17.545916   62061 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148782.pem
	I0918 21:05:17.550672   62061 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 18 19:57 /usr/share/ca-certificates/148782.pem
	I0918 21:05:17.550737   62061 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148782.pem
	I0918 21:05:17.556802   62061 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148782.pem /etc/ssl/certs/3ec20f2e.0"
	I0918 21:05:17.568991   62061 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0918 21:05:17.574098   62061 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0918 21:05:17.581458   62061 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0918 21:05:17.588242   62061 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0918 21:05:17.594889   62061 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0918 21:05:17.601499   62061 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0918 21:05:17.608193   62061 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
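Two OpenSSL idioms drive the certificate handling above: `openssl x509 -hash -noout` yields the subject hash that names the `/etc/ssl/certs/<hash>.0` symlink (b5213941.0 for minikubeCA in this run), and `openssl x509 -checkend 86400` exits non-zero if the certificate expires within the next 24 hours. A small sketch of both checks, assuming openssl is on the PATH and using the cert path shown in the log:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// subjectHash returns the OpenSSL subject hash used to name the
// /etc/ssl/certs/<hash>.0 symlink for a trusted CA certificate.
func subjectHash(certPath string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	return strings.TrimSpace(string(out)), err
}

// validForADay reports whether the certificate is still valid 86400 seconds
// from now; -checkend makes openssl exit non-zero when it is about to expire.
func validForADay(certPath string) bool {
	return exec.Command("openssl", "x509", "-noout", "-in", certPath, "-checkend", "86400").Run() == nil
}

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem"
	if h, err := subjectHash(cert); err == nil {
		fmt.Printf("would link /etc/ssl/certs/%s.0 -> %s\n", h, cert)
	}
	fmt.Println("valid for another 24h:", validForADay(cert))
}
```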
	I0918 21:05:17.614196   62061 kubeadm.go:392] StartCluster: {Name:old-k8s-version-740194 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-740194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.53 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 21:05:17.614298   62061 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0918 21:05:17.614363   62061 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0918 21:05:17.661551   62061 cri.go:89] found id: ""
	I0918 21:05:17.661627   62061 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0918 21:05:17.673087   62061 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0918 21:05:17.673110   62061 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0918 21:05:17.673153   62061 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0918 21:05:17.683213   62061 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0918 21:05:17.684537   62061 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-740194" does not appear in /home/jenkins/minikube-integration/19667-7671/kubeconfig
	I0918 21:05:17.685136   62061 kubeconfig.go:62] /home/jenkins/minikube-integration/19667-7671/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-740194" cluster setting kubeconfig missing "old-k8s-version-740194" context setting]
	I0918 21:05:17.686023   62061 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/kubeconfig: {Name:mk3a3eecadb0164dfdd5d3c4392b5b473a2a9bb0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 21:05:17.711337   62061 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0918 21:05:17.723894   62061 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.53
	I0918 21:05:17.723936   62061 kubeadm.go:1160] stopping kube-system containers ...
	I0918 21:05:17.723949   62061 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0918 21:05:17.724005   62061 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0918 21:05:17.761087   62061 cri.go:89] found id: ""
	I0918 21:05:17.761166   62061 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0918 21:05:17.779689   62061 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0918 21:05:17.791699   62061 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0918 21:05:17.791723   62061 kubeadm.go:157] found existing configuration files:
	
	I0918 21:05:17.791773   62061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0918 21:05:17.801804   62061 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0918 21:05:17.801879   62061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0918 21:05:17.812806   62061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0918 21:05:17.822813   62061 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0918 21:05:17.822902   62061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0918 21:05:17.833267   62061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0918 21:05:17.845148   62061 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0918 21:05:17.845230   62061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0918 21:05:17.855974   62061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0918 21:05:17.866490   62061 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0918 21:05:17.866593   62061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
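The four grep/rm pairs above are one loop in disguise: each kubeconfig-style file under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443, and is otherwise deleted so the following `kubeadm init phase kubeconfig` can regenerate it. A compact sketch of that cleanup (run locally here instead of over SSH, names hypothetical):

```go
package main

import (
	"fmt"
	"os/exec"
)

// cleanStaleConfigs removes any kubeconfig under /etc/kubernetes that does not
// reference the expected control-plane endpoint, mirroring the logged
// grep-then-rm sequence.
func cleanStaleConfigs(endpoint string) {
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
			fmt.Printf("%q may not be in %s - will remove\n", endpoint, f)
			_ = exec.Command("sudo", "rm", "-f", f).Run()
		}
	}
}

func main() {
	cleanStaleConfigs("https://control-plane.minikube.internal:8443")
}
```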
	I0918 21:05:17.877339   62061 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0918 21:05:17.887825   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:05:18.021691   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:05:18.726750   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:05:18.970635   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:05:19.081869   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:05:19.203071   62061 api_server.go:52] waiting for apiserver process to appear ...
	I0918 21:05:19.203166   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:17.156934   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:17.157481   61273 main.go:141] libmachine: (no-preload-331658) DBG | unable to find current IP address of domain no-preload-331658 in network mk-no-preload-331658
	I0918 21:05:17.157509   61273 main.go:141] libmachine: (no-preload-331658) DBG | I0918 21:05:17.157429   63200 retry.go:31] will retry after 1.586399944s: waiting for machine to come up
	I0918 21:05:18.746155   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:18.746620   61273 main.go:141] libmachine: (no-preload-331658) DBG | unable to find current IP address of domain no-preload-331658 in network mk-no-preload-331658
	I0918 21:05:18.746650   61273 main.go:141] libmachine: (no-preload-331658) DBG | I0918 21:05:18.746571   63200 retry.go:31] will retry after 2.204220189s: waiting for machine to come up
	I0918 21:05:20.953669   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:20.954223   61273 main.go:141] libmachine: (no-preload-331658) DBG | unable to find current IP address of domain no-preload-331658 in network mk-no-preload-331658
	I0918 21:05:20.954287   61273 main.go:141] libmachine: (no-preload-331658) DBG | I0918 21:05:20.954209   63200 retry.go:31] will retry after 2.418479665s: waiting for machine to come up
	I0918 21:05:18.634113   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:21.133516   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:18.365915   61740 pod_ready.go:93] pod "kube-apiserver-embed-certs-255556" in "kube-system" namespace has status "Ready":"True"
	I0918 21:05:18.365943   61740 pod_ready.go:82] duration metric: took 4.580799395s for pod "kube-apiserver-embed-certs-255556" in "kube-system" namespace to be "Ready" ...
	I0918 21:05:18.365956   61740 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-255556" in "kube-system" namespace to be "Ready" ...
	I0918 21:05:18.371010   61740 pod_ready.go:93] pod "kube-controller-manager-embed-certs-255556" in "kube-system" namespace has status "Ready":"True"
	I0918 21:05:18.371035   61740 pod_ready.go:82] duration metric: took 5.070331ms for pod "kube-controller-manager-embed-certs-255556" in "kube-system" namespace to be "Ready" ...
	I0918 21:05:18.371046   61740 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-v8szm" in "kube-system" namespace to be "Ready" ...
	I0918 21:05:18.375632   61740 pod_ready.go:93] pod "kube-proxy-v8szm" in "kube-system" namespace has status "Ready":"True"
	I0918 21:05:18.375658   61740 pod_ready.go:82] duration metric: took 4.603787ms for pod "kube-proxy-v8szm" in "kube-system" namespace to be "Ready" ...
	I0918 21:05:18.375671   61740 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-255556" in "kube-system" namespace to be "Ready" ...
	I0918 21:05:18.380527   61740 pod_ready.go:93] pod "kube-scheduler-embed-certs-255556" in "kube-system" namespace has status "Ready":"True"
	I0918 21:05:18.380551   61740 pod_ready.go:82] duration metric: took 4.872699ms for pod "kube-scheduler-embed-certs-255556" in "kube-system" namespace to be "Ready" ...
	I0918 21:05:18.380563   61740 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace to be "Ready" ...
	I0918 21:05:20.388600   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:22.887122   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:19.704128   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:20.203595   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:20.703244   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:21.204288   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:21.703469   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:22.203224   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:22.703968   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:23.204097   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:23.703933   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:24.204272   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:23.375904   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:23.376450   61273 main.go:141] libmachine: (no-preload-331658) DBG | unable to find current IP address of domain no-preload-331658 in network mk-no-preload-331658
	I0918 21:05:23.376476   61273 main.go:141] libmachine: (no-preload-331658) DBG | I0918 21:05:23.376397   63200 retry.go:31] will retry after 4.431211335s: waiting for machine to come up
	I0918 21:05:23.633093   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:25.633913   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:24.887771   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:27.386891   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:24.704240   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:25.203880   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:25.703983   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:26.204273   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:26.703861   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:27.204064   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:27.703276   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:28.204289   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:28.703701   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:29.203604   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
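The wall of pgrep lines above is a roughly 500 ms polling loop: after the kubeadm init phases run, minikube waits for a kube-apiserver process whose command line matches the minikube pattern before moving on. A bare-bones version of that wait, with hypothetical names:

```go
package main

import (
	"errors"
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServerProcess polls pgrep until a kube-apiserver process matching
// the minikube pattern shows up, checking about twice a second as in the log.
func waitForAPIServerProcess(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			return nil // process exists
		}
		time.Sleep(500 * time.Millisecond)
	}
	return errors.New("timed out waiting for apiserver process to appear")
}

func main() {
	fmt.Println(waitForAPIServerProcess(2 * time.Minute))
}
```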
	I0918 21:05:27.811234   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:27.811698   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has current primary IP address 192.168.61.31 and MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:27.811719   61273 main.go:141] libmachine: (no-preload-331658) Found IP for machine: 192.168.61.31
	I0918 21:05:27.811729   61273 main.go:141] libmachine: (no-preload-331658) Reserving static IP address...
	I0918 21:05:27.812131   61273 main.go:141] libmachine: (no-preload-331658) DBG | found host DHCP lease matching {name: "no-preload-331658", mac: "52:54:00:2a:47:d0", ip: "192.168.61.31"} in network mk-no-preload-331658: {Iface:virbr2 ExpiryTime:2024-09-18 21:55:52 +0000 UTC Type:0 Mac:52:54:00:2a:47:d0 Iaid: IPaddr:192.168.61.31 Prefix:24 Hostname:no-preload-331658 Clientid:01:52:54:00:2a:47:d0}
	I0918 21:05:27.812150   61273 main.go:141] libmachine: (no-preload-331658) Reserved static IP address: 192.168.61.31
	I0918 21:05:27.812163   61273 main.go:141] libmachine: (no-preload-331658) DBG | skip adding static IP to network mk-no-preload-331658 - found existing host DHCP lease matching {name: "no-preload-331658", mac: "52:54:00:2a:47:d0", ip: "192.168.61.31"}
	I0918 21:05:27.812170   61273 main.go:141] libmachine: (no-preload-331658) Waiting for SSH to be available...
	I0918 21:05:27.812178   61273 main.go:141] libmachine: (no-preload-331658) DBG | Getting to WaitForSSH function...
	I0918 21:05:27.814300   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:27.814735   61273 main.go:141] libmachine: (no-preload-331658) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:47:d0", ip: ""} in network mk-no-preload-331658: {Iface:virbr2 ExpiryTime:2024-09-18 21:55:52 +0000 UTC Type:0 Mac:52:54:00:2a:47:d0 Iaid: IPaddr:192.168.61.31 Prefix:24 Hostname:no-preload-331658 Clientid:01:52:54:00:2a:47:d0}
	I0918 21:05:27.814767   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined IP address 192.168.61.31 and MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:27.814891   61273 main.go:141] libmachine: (no-preload-331658) DBG | Using SSH client type: external
	I0918 21:05:27.814922   61273 main.go:141] libmachine: (no-preload-331658) DBG | Using SSH private key: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/no-preload-331658/id_rsa (-rw-------)
	I0918 21:05:27.814945   61273 main.go:141] libmachine: (no-preload-331658) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.31 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19667-7671/.minikube/machines/no-preload-331658/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0918 21:05:27.814972   61273 main.go:141] libmachine: (no-preload-331658) DBG | About to run SSH command:
	I0918 21:05:27.814985   61273 main.go:141] libmachine: (no-preload-331658) DBG | exit 0
	I0918 21:05:27.939949   61273 main.go:141] libmachine: (no-preload-331658) DBG | SSH cmd err, output: <nil>: 
	I0918 21:05:27.940365   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetConfigRaw
	I0918 21:05:27.941187   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetIP
	I0918 21:05:27.943976   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:27.944375   61273 main.go:141] libmachine: (no-preload-331658) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:47:d0", ip: ""} in network mk-no-preload-331658: {Iface:virbr2 ExpiryTime:2024-09-18 21:55:52 +0000 UTC Type:0 Mac:52:54:00:2a:47:d0 Iaid: IPaddr:192.168.61.31 Prefix:24 Hostname:no-preload-331658 Clientid:01:52:54:00:2a:47:d0}
	I0918 21:05:27.944399   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined IP address 192.168.61.31 and MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:27.944670   61273 profile.go:143] Saving config to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/no-preload-331658/config.json ...
	I0918 21:05:27.944942   61273 machine.go:93] provisionDockerMachine start ...
	I0918 21:05:27.944963   61273 main.go:141] libmachine: (no-preload-331658) Calling .DriverName
	I0918 21:05:27.945228   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHHostname
	I0918 21:05:27.947444   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:27.947810   61273 main.go:141] libmachine: (no-preload-331658) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:47:d0", ip: ""} in network mk-no-preload-331658: {Iface:virbr2 ExpiryTime:2024-09-18 21:55:52 +0000 UTC Type:0 Mac:52:54:00:2a:47:d0 Iaid: IPaddr:192.168.61.31 Prefix:24 Hostname:no-preload-331658 Clientid:01:52:54:00:2a:47:d0}
	I0918 21:05:27.947843   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined IP address 192.168.61.31 and MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:27.947974   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHPort
	I0918 21:05:27.948196   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHKeyPath
	I0918 21:05:27.948404   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHKeyPath
	I0918 21:05:27.948664   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHUsername
	I0918 21:05:27.948845   61273 main.go:141] libmachine: Using SSH client type: native
	I0918 21:05:27.949078   61273 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.31 22 <nil> <nil>}
	I0918 21:05:27.949099   61273 main.go:141] libmachine: About to run SSH command:
	hostname
	I0918 21:05:28.052352   61273 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0918 21:05:28.052378   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetMachineName
	I0918 21:05:28.052638   61273 buildroot.go:166] provisioning hostname "no-preload-331658"
	I0918 21:05:28.052668   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetMachineName
	I0918 21:05:28.052923   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHHostname
	I0918 21:05:28.056168   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:28.056599   61273 main.go:141] libmachine: (no-preload-331658) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:47:d0", ip: ""} in network mk-no-preload-331658: {Iface:virbr2 ExpiryTime:2024-09-18 21:55:52 +0000 UTC Type:0 Mac:52:54:00:2a:47:d0 Iaid: IPaddr:192.168.61.31 Prefix:24 Hostname:no-preload-331658 Clientid:01:52:54:00:2a:47:d0}
	I0918 21:05:28.056631   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined IP address 192.168.61.31 and MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:28.056805   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHPort
	I0918 21:05:28.057009   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHKeyPath
	I0918 21:05:28.057168   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHKeyPath
	I0918 21:05:28.057305   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHUsername
	I0918 21:05:28.057478   61273 main.go:141] libmachine: Using SSH client type: native
	I0918 21:05:28.057652   61273 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.31 22 <nil> <nil>}
	I0918 21:05:28.057665   61273 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-331658 && echo "no-preload-331658" | sudo tee /etc/hostname
	I0918 21:05:28.174245   61273 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-331658
	
	I0918 21:05:28.174282   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHHostname
	I0918 21:05:28.177373   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:28.177753   61273 main.go:141] libmachine: (no-preload-331658) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:47:d0", ip: ""} in network mk-no-preload-331658: {Iface:virbr2 ExpiryTime:2024-09-18 21:55:52 +0000 UTC Type:0 Mac:52:54:00:2a:47:d0 Iaid: IPaddr:192.168.61.31 Prefix:24 Hostname:no-preload-331658 Clientid:01:52:54:00:2a:47:d0}
	I0918 21:05:28.177781   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined IP address 192.168.61.31 and MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:28.177981   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHPort
	I0918 21:05:28.178202   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHKeyPath
	I0918 21:05:28.178380   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHKeyPath
	I0918 21:05:28.178523   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHUsername
	I0918 21:05:28.178752   61273 main.go:141] libmachine: Using SSH client type: native
	I0918 21:05:28.178948   61273 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.31 22 <nil> <nil>}
	I0918 21:05:28.178965   61273 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-331658' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-331658/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-331658' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0918 21:05:28.292659   61273 main.go:141] libmachine: SSH cmd err, output: <nil>: 
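
The provisioning step above runs a guarded shell command over SSH so that 127.0.1.1 in the guest's /etc/hosts resolves to the new hostname without adding duplicate entries. Below is a small illustrative Go sketch that renders the same command for an arbitrary hostname; hostsFixCommand is a hypothetical helper for illustration, not minikube's own code.

package main

import "fmt"

// hostsFixCommand renders the same /etc/hosts guard seen in the log above:
// rewrite an existing 127.0.1.1 line, or append one, but only when the
// hostname is not already present. Illustrative only.
func hostsFixCommand(hostname string) string {
	return fmt.Sprintf(`
		if ! grep -xq '.*\s%s' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %s/g' /etc/hosts;
			else
				echo '127.0.1.1 %s' | sudo tee -a /etc/hosts;
			fi
		fi`, hostname, hostname, hostname)
}

func main() {
	fmt.Println(hostsFixCommand("no-preload-331658"))
}
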
	I0918 21:05:28.292691   61273 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19667-7671/.minikube CaCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19667-7671/.minikube}
	I0918 21:05:28.292714   61273 buildroot.go:174] setting up certificates
	I0918 21:05:28.292725   61273 provision.go:84] configureAuth start
	I0918 21:05:28.292734   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetMachineName
	I0918 21:05:28.293091   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetIP
	I0918 21:05:28.295792   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:28.296192   61273 main.go:141] libmachine: (no-preload-331658) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:47:d0", ip: ""} in network mk-no-preload-331658: {Iface:virbr2 ExpiryTime:2024-09-18 21:55:52 +0000 UTC Type:0 Mac:52:54:00:2a:47:d0 Iaid: IPaddr:192.168.61.31 Prefix:24 Hostname:no-preload-331658 Clientid:01:52:54:00:2a:47:d0}
	I0918 21:05:28.296219   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined IP address 192.168.61.31 and MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:28.296405   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHHostname
	I0918 21:05:28.298446   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:28.298788   61273 main.go:141] libmachine: (no-preload-331658) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:47:d0", ip: ""} in network mk-no-preload-331658: {Iface:virbr2 ExpiryTime:2024-09-18 21:55:52 +0000 UTC Type:0 Mac:52:54:00:2a:47:d0 Iaid: IPaddr:192.168.61.31 Prefix:24 Hostname:no-preload-331658 Clientid:01:52:54:00:2a:47:d0}
	I0918 21:05:28.298815   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined IP address 192.168.61.31 and MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:28.298938   61273 provision.go:143] copyHostCerts
	I0918 21:05:28.299013   61273 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem, removing ...
	I0918 21:05:28.299026   61273 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem
	I0918 21:05:28.299078   61273 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem (1082 bytes)
	I0918 21:05:28.299170   61273 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem, removing ...
	I0918 21:05:28.299178   61273 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem
	I0918 21:05:28.299199   61273 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem (1123 bytes)
	I0918 21:05:28.299252   61273 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem, removing ...
	I0918 21:05:28.299258   61273 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem
	I0918 21:05:28.299278   61273 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem (1679 bytes)
	I0918 21:05:28.299325   61273 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem org=jenkins.no-preload-331658 san=[127.0.0.1 192.168.61.31 localhost minikube no-preload-331658]
	I0918 21:05:28.606565   61273 provision.go:177] copyRemoteCerts
	I0918 21:05:28.606629   61273 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0918 21:05:28.606653   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHHostname
	I0918 21:05:28.609156   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:28.609533   61273 main.go:141] libmachine: (no-preload-331658) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:47:d0", ip: ""} in network mk-no-preload-331658: {Iface:virbr2 ExpiryTime:2024-09-18 21:55:52 +0000 UTC Type:0 Mac:52:54:00:2a:47:d0 Iaid: IPaddr:192.168.61.31 Prefix:24 Hostname:no-preload-331658 Clientid:01:52:54:00:2a:47:d0}
	I0918 21:05:28.609564   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined IP address 192.168.61.31 and MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:28.609690   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHPort
	I0918 21:05:28.609891   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHKeyPath
	I0918 21:05:28.610102   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHUsername
	I0918 21:05:28.610332   61273 sshutil.go:53] new ssh client: &{IP:192.168.61.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/no-preload-331658/id_rsa Username:docker}
	I0918 21:05:28.690571   61273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0918 21:05:28.719257   61273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0918 21:05:28.744119   61273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0918 21:05:28.768692   61273 provision.go:87] duration metric: took 475.955066ms to configureAuth
	I0918 21:05:28.768720   61273 buildroot.go:189] setting minikube options for container-runtime
	I0918 21:05:28.768941   61273 config.go:182] Loaded profile config "no-preload-331658": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0918 21:05:28.769031   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHHostname
	I0918 21:05:28.771437   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:28.771747   61273 main.go:141] libmachine: (no-preload-331658) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:47:d0", ip: ""} in network mk-no-preload-331658: {Iface:virbr2 ExpiryTime:2024-09-18 21:55:52 +0000 UTC Type:0 Mac:52:54:00:2a:47:d0 Iaid: IPaddr:192.168.61.31 Prefix:24 Hostname:no-preload-331658 Clientid:01:52:54:00:2a:47:d0}
	I0918 21:05:28.771786   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined IP address 192.168.61.31 and MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:28.771906   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHPort
	I0918 21:05:28.772127   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHKeyPath
	I0918 21:05:28.772330   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHKeyPath
	I0918 21:05:28.772496   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHUsername
	I0918 21:05:28.772717   61273 main.go:141] libmachine: Using SSH client type: native
	I0918 21:05:28.772886   61273 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.31 22 <nil> <nil>}
	I0918 21:05:28.772902   61273 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0918 21:05:29.001137   61273 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0918 21:05:29.001160   61273 machine.go:96] duration metric: took 1.056205004s to provisionDockerMachine
	I0918 21:05:29.001171   61273 start.go:293] postStartSetup for "no-preload-331658" (driver="kvm2")
	I0918 21:05:29.001181   61273 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0918 21:05:29.001194   61273 main.go:141] libmachine: (no-preload-331658) Calling .DriverName
	I0918 21:05:29.001531   61273 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0918 21:05:29.001556   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHHostname
	I0918 21:05:29.004307   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:29.004656   61273 main.go:141] libmachine: (no-preload-331658) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:47:d0", ip: ""} in network mk-no-preload-331658: {Iface:virbr2 ExpiryTime:2024-09-18 21:55:52 +0000 UTC Type:0 Mac:52:54:00:2a:47:d0 Iaid: IPaddr:192.168.61.31 Prefix:24 Hostname:no-preload-331658 Clientid:01:52:54:00:2a:47:d0}
	I0918 21:05:29.004686   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined IP address 192.168.61.31 and MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:29.004877   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHPort
	I0918 21:05:29.005128   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHKeyPath
	I0918 21:05:29.005379   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHUsername
	I0918 21:05:29.005556   61273 sshutil.go:53] new ssh client: &{IP:192.168.61.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/no-preload-331658/id_rsa Username:docker}
	I0918 21:05:29.087453   61273 ssh_runner.go:195] Run: cat /etc/os-release
	I0918 21:05:29.091329   61273 info.go:137] Remote host: Buildroot 2023.02.9
	I0918 21:05:29.091356   61273 filesync.go:126] Scanning /home/jenkins/minikube-integration/19667-7671/.minikube/addons for local assets ...
	I0918 21:05:29.091422   61273 filesync.go:126] Scanning /home/jenkins/minikube-integration/19667-7671/.minikube/files for local assets ...
	I0918 21:05:29.091493   61273 filesync.go:149] local asset: /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem -> 148782.pem in /etc/ssl/certs
	I0918 21:05:29.091578   61273 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0918 21:05:29.101039   61273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem --> /etc/ssl/certs/148782.pem (1708 bytes)
	I0918 21:05:29.125451   61273 start.go:296] duration metric: took 124.264463ms for postStartSetup
	I0918 21:05:29.125492   61273 fix.go:56] duration metric: took 19.880181743s for fixHost
	I0918 21:05:29.125514   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHHostname
	I0918 21:05:29.128543   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:29.128968   61273 main.go:141] libmachine: (no-preload-331658) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:47:d0", ip: ""} in network mk-no-preload-331658: {Iface:virbr2 ExpiryTime:2024-09-18 21:55:52 +0000 UTC Type:0 Mac:52:54:00:2a:47:d0 Iaid: IPaddr:192.168.61.31 Prefix:24 Hostname:no-preload-331658 Clientid:01:52:54:00:2a:47:d0}
	I0918 21:05:29.129022   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined IP address 192.168.61.31 and MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:29.129185   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHPort
	I0918 21:05:29.129385   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHKeyPath
	I0918 21:05:29.129580   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHKeyPath
	I0918 21:05:29.129739   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHUsername
	I0918 21:05:29.129919   61273 main.go:141] libmachine: Using SSH client type: native
	I0918 21:05:29.130155   61273 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.31 22 <nil> <nil>}
	I0918 21:05:29.130172   61273 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0918 21:05:29.240857   61273 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726693529.214864261
	
	I0918 21:05:29.240886   61273 fix.go:216] guest clock: 1726693529.214864261
	I0918 21:05:29.240897   61273 fix.go:229] Guest: 2024-09-18 21:05:29.214864261 +0000 UTC Remote: 2024-09-18 21:05:29.125495769 +0000 UTC m=+357.666326175 (delta=89.368492ms)
	I0918 21:05:29.240943   61273 fix.go:200] guest clock delta is within tolerance: 89.368492ms
	I0918 21:05:29.240949   61273 start.go:83] releasing machines lock for "no-preload-331658", held for 19.99567651s
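
The fixHost step above compares the guest's "date +%s.%N" output against the host clock and accepts the drift when it is small. The following standalone Go sketch reproduces that comparison using the two timestamps reported in the log; the 2-second tolerance is an assumption for illustration, not minikube's configured value.

package main

import (
	"fmt"
	"time"
)

func main() {
	// Guest timestamp reported by "date +%s.%N" in the log above.
	guest := time.Unix(1726693529, 214864261).UTC()
	// Host-side "Remote" timestamp from the same log line.
	host := time.Date(2024, 9, 18, 21, 5, 29, 125495769, time.UTC)
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	// The 2s tolerance here is assumed for illustration.
	fmt.Printf("guest clock delta: %v (within assumed tolerance: %v)\n", delta, delta < 2*time.Second)
}

Running this prints a delta of 89.368492ms, matching the value the log reports before it releases the machines lock.
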
	I0918 21:05:29.240969   61273 main.go:141] libmachine: (no-preload-331658) Calling .DriverName
	I0918 21:05:29.241256   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetIP
	I0918 21:05:29.243922   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:29.244347   61273 main.go:141] libmachine: (no-preload-331658) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:47:d0", ip: ""} in network mk-no-preload-331658: {Iface:virbr2 ExpiryTime:2024-09-18 21:55:52 +0000 UTC Type:0 Mac:52:54:00:2a:47:d0 Iaid: IPaddr:192.168.61.31 Prefix:24 Hostname:no-preload-331658 Clientid:01:52:54:00:2a:47:d0}
	I0918 21:05:29.244376   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined IP address 192.168.61.31 and MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:29.244575   61273 main.go:141] libmachine: (no-preload-331658) Calling .DriverName
	I0918 21:05:29.245157   61273 main.go:141] libmachine: (no-preload-331658) Calling .DriverName
	I0918 21:05:29.245380   61273 main.go:141] libmachine: (no-preload-331658) Calling .DriverName
	I0918 21:05:29.245492   61273 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0918 21:05:29.245548   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHHostname
	I0918 21:05:29.245640   61273 ssh_runner.go:195] Run: cat /version.json
	I0918 21:05:29.245665   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHHostname
	I0918 21:05:29.248511   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:29.248927   61273 main.go:141] libmachine: (no-preload-331658) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:47:d0", ip: ""} in network mk-no-preload-331658: {Iface:virbr2 ExpiryTime:2024-09-18 21:55:52 +0000 UTC Type:0 Mac:52:54:00:2a:47:d0 Iaid: IPaddr:192.168.61.31 Prefix:24 Hostname:no-preload-331658 Clientid:01:52:54:00:2a:47:d0}
	I0918 21:05:29.248954   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:29.248984   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined IP address 192.168.61.31 and MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:29.249198   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHPort
	I0918 21:05:29.249423   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHKeyPath
	I0918 21:05:29.249506   61273 main.go:141] libmachine: (no-preload-331658) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:47:d0", ip: ""} in network mk-no-preload-331658: {Iface:virbr2 ExpiryTime:2024-09-18 21:55:52 +0000 UTC Type:0 Mac:52:54:00:2a:47:d0 Iaid: IPaddr:192.168.61.31 Prefix:24 Hostname:no-preload-331658 Clientid:01:52:54:00:2a:47:d0}
	I0918 21:05:29.249538   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined IP address 192.168.61.31 and MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:29.249608   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHUsername
	I0918 21:05:29.249692   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHPort
	I0918 21:05:29.249791   61273 sshutil.go:53] new ssh client: &{IP:192.168.61.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/no-preload-331658/id_rsa Username:docker}
	I0918 21:05:29.249899   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHKeyPath
	I0918 21:05:29.250076   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHUsername
	I0918 21:05:29.250228   61273 sshutil.go:53] new ssh client: &{IP:192.168.61.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/no-preload-331658/id_rsa Username:docker}
	I0918 21:05:29.365104   61273 ssh_runner.go:195] Run: systemctl --version
	I0918 21:05:29.371202   61273 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0918 21:05:29.518067   61273 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0918 21:05:29.524126   61273 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0918 21:05:29.524207   61273 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0918 21:05:29.540977   61273 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0918 21:05:29.541007   61273 start.go:495] detecting cgroup driver to use...
	I0918 21:05:29.541072   61273 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0918 21:05:29.558893   61273 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0918 21:05:29.576084   61273 docker.go:217] disabling cri-docker service (if available) ...
	I0918 21:05:29.576161   61273 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0918 21:05:29.591212   61273 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0918 21:05:29.605765   61273 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0918 21:05:29.734291   61273 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0918 21:05:29.892707   61273 docker.go:233] disabling docker service ...
	I0918 21:05:29.892771   61273 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0918 21:05:29.907575   61273 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0918 21:05:29.920545   61273 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0918 21:05:30.058604   61273 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0918 21:05:30.196896   61273 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0918 21:05:30.211398   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0918 21:05:30.231791   61273 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0918 21:05:30.231917   61273 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:05:30.243369   61273 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0918 21:05:30.243465   61273 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:05:30.254911   61273 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:05:30.266839   61273 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:05:30.278532   61273 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0918 21:05:30.290173   61273 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:05:30.301068   61273 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:05:30.318589   61273 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:05:30.329022   61273 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0918 21:05:30.338645   61273 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0918 21:05:30.338720   61273 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0918 21:05:30.351797   61273 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0918 21:05:30.363412   61273 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 21:05:30.504035   61273 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0918 21:05:30.606470   61273 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0918 21:05:30.606547   61273 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0918 21:05:30.611499   61273 start.go:563] Will wait 60s for crictl version
	I0918 21:05:30.611559   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:05:30.615485   61273 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0918 21:05:30.659735   61273 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
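
After restarting CRI-O, the log waits up to 60s for the socket at /var/run/crio/crio.sock and then for a working crictl. A minimal Go sketch of such a wait loop is shown below; waitForSocket is a hypothetical helper, and minikube performs the equivalent stat check over SSH rather than locally.

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls for a path until it exists or the timeout elapses,
// mirroring the "Will wait 60s for socket path" step in the log above.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
	}
}
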
	I0918 21:05:30.659835   61273 ssh_runner.go:195] Run: crio --version
	I0918 21:05:30.690573   61273 ssh_runner.go:195] Run: crio --version
	I0918 21:05:30.723342   61273 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0918 21:05:30.724604   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetIP
	I0918 21:05:30.727445   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:30.727885   61273 main.go:141] libmachine: (no-preload-331658) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:47:d0", ip: ""} in network mk-no-preload-331658: {Iface:virbr2 ExpiryTime:2024-09-18 21:55:52 +0000 UTC Type:0 Mac:52:54:00:2a:47:d0 Iaid: IPaddr:192.168.61.31 Prefix:24 Hostname:no-preload-331658 Clientid:01:52:54:00:2a:47:d0}
	I0918 21:05:30.727919   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined IP address 192.168.61.31 and MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:30.728132   61273 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0918 21:05:30.732134   61273 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0918 21:05:30.745695   61273 kubeadm.go:883] updating cluster {Name:no-preload-331658 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-331658 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.31 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0918 21:05:30.745813   61273 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0918 21:05:30.745849   61273 ssh_runner.go:195] Run: sudo crictl images --output json
	I0918 21:05:30.788504   61273 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0918 21:05:30.788537   61273 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.1 registry.k8s.io/kube-controller-manager:v1.31.1 registry.k8s.io/kube-scheduler:v1.31.1 registry.k8s.io/kube-proxy:v1.31.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0918 21:05:30.788634   61273 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 21:05:30.788655   61273 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0918 21:05:30.788673   61273 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I0918 21:05:30.788685   61273 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0918 21:05:30.788655   61273 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I0918 21:05:30.788796   61273 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I0918 21:05:30.788804   61273 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0918 21:05:30.788655   61273 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0918 21:05:30.790173   61273 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 21:05:30.790181   61273 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I0918 21:05:30.790199   61273 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0918 21:05:30.790170   61273 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I0918 21:05:30.790222   61273 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0918 21:05:30.790237   61273 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0918 21:05:30.790268   61273 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0918 21:05:30.790542   61273 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I0918 21:05:31.049150   61273 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0918 21:05:31.052046   61273 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.1
	I0918 21:05:31.099660   61273 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I0918 21:05:31.099861   61273 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.1
	I0918 21:05:31.111308   61273 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.1
	I0918 21:05:31.111439   61273 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0918 21:05:31.112293   61273 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.1
	I0918 21:05:31.203873   61273 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.1" needs transfer: "registry.k8s.io/kube-proxy:v1.31.1" does not exist at hash "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561" in container runtime
	I0918 21:05:31.203934   61273 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.1
	I0918 21:05:31.204042   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:05:31.208912   61273 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I0918 21:05:31.208937   61273 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.1" does not exist at hash "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b" in container runtime
	I0918 21:05:31.208968   61273 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.1
	I0918 21:05:31.208960   61273 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I0918 21:05:31.209020   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:05:31.209029   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:05:31.249355   61273 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0918 21:05:31.249408   61273 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0918 21:05:31.249459   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:05:31.253214   61273 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.1" does not exist at hash "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1" in container runtime
	I0918 21:05:31.253244   61273 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.1" does not exist at hash "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee" in container runtime
	I0918 21:05:31.253286   61273 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.1
	I0918 21:05:31.253274   61273 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0918 21:05:31.253335   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:05:31.253339   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:05:31.253351   61273 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0918 21:05:31.253405   61273 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0918 21:05:31.253419   61273 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0918 21:05:31.255163   61273 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0918 21:05:31.330929   61273 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0918 21:05:31.330999   61273 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0918 21:05:31.349540   61273 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0918 21:05:31.349558   61273 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0918 21:05:31.350088   61273 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0918 21:05:31.353763   61273 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0918 21:05:31.447057   61273 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0918 21:05:31.457171   61273 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0918 21:05:31.457239   61273 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0918 21:05:31.483087   61273 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0918 21:05:31.483097   61273 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0918 21:05:31.483210   61273 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
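
The "needs transfer" lines above come from comparing the image ID stored in the container runtime against the ID expected for this Kubernetes release; when they do not match, the stale image is removed with crictl before the cached tarball is loaded. A hypothetical sketch of that check follows (imageExistsAtHash is illustrative, not minikube's function); the expected ID below is the one the log reports for kube-proxy.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// imageExistsAtHash asks the runtime's storage for the stored image ID and
// compares it with the expected one. An inspect error most commonly means
// the image is not present at all.
func imageExistsAtHash(image, wantID string) bool {
	out, err := exec.Command("sudo", "podman", "image", "inspect",
		"--format", "{{.Id}}", image).Output()
	if err != nil {
		return false
	}
	return strings.TrimSpace(string(out)) == wantID
}

func main() {
	// Expected ID taken from the "needs transfer" line in the log above.
	ok := imageExistsAtHash("registry.k8s.io/kube-proxy:v1.31.1",
		"60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561")
	fmt.Println("kube-proxy present at expected hash:", ok)
}
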
	I0918 21:05:28.131874   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:30.133067   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:32.134557   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:29.389052   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:31.887032   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:29.704274   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:30.203372   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:30.703751   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:31.203670   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:31.704097   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:32.203611   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:32.703968   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:33.203356   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:33.704260   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:34.204082   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:31.573784   61273 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I0918 21:05:31.573906   61273 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1
	I0918 21:05:31.573927   61273 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0918 21:05:31.573951   61273 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I0918 21:05:31.574038   61273 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0918 21:05:31.605972   61273 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I0918 21:05:31.606077   61273 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0918 21:05:31.606086   61273 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I0918 21:05:31.613640   61273 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0918 21:05:31.613769   61273 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0918 21:05:31.641105   61273 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I0918 21:05:31.641109   61273 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.1 (exists)
	I0918 21:05:31.641199   61273 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.1
	I0918 21:05:31.641223   61273 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0918 21:05:31.641244   61273 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1
	I0918 21:05:31.641175   61273 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.1 (exists)
	I0918 21:05:31.666586   61273 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I0918 21:05:31.666661   61273 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I0918 21:05:31.666792   61273 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0918 21:05:31.666821   61273 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.1 (exists)
	I0918 21:05:31.666795   61273 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0918 21:05:32.009797   61273 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 21:05:33.610028   61273 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1: (1.968756977s)
	I0918 21:05:33.610065   61273 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 from cache
	I0918 21:05:33.610080   61273 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1: (1.943261692s)
	I0918 21:05:33.610111   61273 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.1 (exists)
	I0918 21:05:33.610090   61273 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0918 21:05:33.610122   61273 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.600294362s)
	I0918 21:05:33.610161   61273 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0918 21:05:33.610174   61273 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0918 21:05:33.610193   61273 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 21:05:33.610242   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:05:35.571685   61273 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1: (1.96147024s)
	I0918 21:05:35.571722   61273 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 from cache
	I0918 21:05:35.571748   61273 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I0918 21:05:35.571802   61273 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I0918 21:05:35.571802   61273 ssh_runner.go:235] Completed: which crictl: (1.961540517s)
	I0918 21:05:35.571882   61273 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 21:05:34.632853   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:36.633341   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:33.887591   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:36.387534   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:34.703445   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:35.203660   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:35.703374   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:36.203304   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:36.704191   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:37.204129   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:37.703912   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:38.203310   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:38.703616   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:39.203258   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:37.536622   61273 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.96470192s)
	I0918 21:05:37.536666   61273 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (1.96484474s)
	I0918 21:05:37.536690   61273 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I0918 21:05:37.536713   61273 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 21:05:37.536721   61273 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0918 21:05:37.536766   61273 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0918 21:05:39.615751   61273 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1: (2.078954836s)
	I0918 21:05:39.615791   61273 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 from cache
	I0918 21:05:39.615823   61273 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.079084749s)
	I0918 21:05:39.615902   61273 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 21:05:39.615829   61273 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0918 21:05:39.615972   61273 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0918 21:05:39.676258   61273 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0918 21:05:39.676355   61273 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0918 21:05:38.634158   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:40.634292   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:38.888255   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:41.387766   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:39.704105   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:40.204102   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:40.704073   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:41.203654   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:41.703947   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:42.203722   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:42.703303   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:43.203847   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:43.704163   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:44.203216   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:42.909577   61273 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (3.233201912s)
	I0918 21:05:42.909617   61273 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0918 21:05:42.909722   61273 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.293701319s)
	I0918 21:05:42.909748   61273 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0918 21:05:42.909781   61273 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0918 21:05:42.909859   61273 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0918 21:05:44.767646   61273 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1: (1.857764218s)
	I0918 21:05:44.767673   61273 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 from cache
	I0918 21:05:44.767705   61273 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0918 21:05:44.767787   61273 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0918 21:05:45.419210   61273 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0918 21:05:45.419257   61273 cache_images.go:123] Successfully loaded all cached images
	I0918 21:05:45.419265   61273 cache_images.go:92] duration metric: took 14.630712818s to LoadCachedImages
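
Loading the cached images, as the completed runs above show, amounts to copying each tarball under /var/lib/minikube/images (skipped when it already exists) and running "sudo podman load -i" on it. A minimal illustrative sketch of that per-image load follows; loadCachedImage is hypothetical, and minikube drives these commands over SSH rather than locally.

package main

import (
	"fmt"
	"os/exec"
)

// loadCachedImage loads one cached image tarball into the runtime's storage,
// mirroring the "sudo podman load -i ..." runs in the log above.
func loadCachedImage(tarball string) error {
	out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput()
	if err != nil {
		return fmt.Errorf("podman load %s: %v\n%s", tarball, err, out)
	}
	return nil
}

func main() {
	for _, t := range []string{
		"/var/lib/minikube/images/kube-proxy_v1.31.1",
		"/var/lib/minikube/images/etcd_3.5.15-0",
	} {
		if err := loadCachedImage(t); err != nil {
			fmt.Println(err)
		}
	}
}
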
	I0918 21:05:45.419278   61273 kubeadm.go:934] updating node { 192.168.61.31 8443 v1.31.1 crio true true} ...
	I0918 21:05:45.419399   61273 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-331658 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.31
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-331658 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0918 21:05:45.419479   61273 ssh_runner.go:195] Run: crio config
	I0918 21:05:45.468525   61273 cni.go:84] Creating CNI manager for ""
	I0918 21:05:45.468549   61273 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0918 21:05:45.468558   61273 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0918 21:05:45.468579   61273 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.31 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-331658 NodeName:no-preload-331658 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.31"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.31 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0918 21:05:45.468706   61273 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.31
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-331658"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.31
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.31"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
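The rendered kubeadm config above is written to /var/tmp/minikube/kubeadm.yaml.new and, further down in the log, compared with the existing /var/tmp/minikube/kubeadm.yaml via `sudo diff -u` to decide whether the running cluster needs reconfiguration. A minimal Go sketch of that write-then-diff decision, assuming local file access instead of minikube's ssh_runner (this is an illustration, not minikube's own code); a diff exit status of 1 is taken to mean "config changed":

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// configChanged writes the freshly rendered kubeadm config next to the existing
// one and uses `diff -u` to decide whether a reconfiguration is required.
// Paths mirror the ones shown in the log above.
func configChanged(rendered []byte) (bool, error) {
	newPath := "/var/tmp/minikube/kubeadm.yaml.new"
	oldPath := "/var/tmp/minikube/kubeadm.yaml"
	if err := os.WriteFile(newPath, rendered, 0o644); err != nil {
		return false, err
	}
	err := exec.Command("sudo", "diff", "-u", oldPath, newPath).Run()
	if err == nil {
		return false, nil // identical: the running cluster does not require reconfiguration
	}
	if exitErr, ok := err.(*exec.ExitError); ok && exitErr.ExitCode() == 1 {
		return true, nil // files differ: reconfigure
	}
	return false, err // diff itself failed (e.g. the old file is missing)
}

func main() {
	changed, err := configChanged([]byte("# rendered kubeadm.yaml contents\n"))
	fmt.Println(changed, err)
}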
	I0918 21:05:45.468781   61273 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0918 21:05:45.479592   61273 binaries.go:44] Found k8s binaries, skipping transfer
	I0918 21:05:45.479662   61273 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0918 21:05:45.488586   61273 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0918 21:05:45.507027   61273 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0918 21:05:45.525430   61273 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0918 21:05:45.543854   61273 ssh_runner.go:195] Run: grep 192.168.61.31	control-plane.minikube.internal$ /etc/hosts
	I0918 21:05:45.547792   61273 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.31	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0918 21:05:45.559968   61273 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 21:05:45.686602   61273 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0918 21:05:45.702793   61273 certs.go:68] Setting up /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/no-preload-331658 for IP: 192.168.61.31
	I0918 21:05:45.702814   61273 certs.go:194] generating shared ca certs ...
	I0918 21:05:45.702829   61273 certs.go:226] acquiring lock for ca certs: {Name:mkf54e0e71ed884c4372f9d3d4cb308ea4600185 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 21:05:45.703005   61273 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.key
	I0918 21:05:45.703071   61273 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.key
	I0918 21:05:45.703085   61273 certs.go:256] generating profile certs ...
	I0918 21:05:45.703159   61273 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/no-preload-331658/client.key
	I0918 21:05:45.703228   61273 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/no-preload-331658/apiserver.key.1a336b78
	I0918 21:05:45.703263   61273 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/no-preload-331658/proxy-client.key
	I0918 21:05:45.703384   61273 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878.pem (1338 bytes)
	W0918 21:05:45.703417   61273 certs.go:480] ignoring /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878_empty.pem, impossibly tiny 0 bytes
	I0918 21:05:45.703430   61273 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem (1675 bytes)
	I0918 21:05:45.703463   61273 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem (1082 bytes)
	I0918 21:05:45.703493   61273 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem (1123 bytes)
	I0918 21:05:45.703521   61273 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem (1679 bytes)
	I0918 21:05:45.703582   61273 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem (1708 bytes)
	I0918 21:05:45.704338   61273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0918 21:05:45.757217   61273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0918 21:05:45.791588   61273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0918 21:05:45.825543   61273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0918 21:05:45.859322   61273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/no-preload-331658/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0918 21:05:45.892890   61273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/no-preload-331658/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0918 21:05:45.922841   61273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/no-preload-331658/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0918 21:05:45.947670   61273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/no-preload-331658/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0918 21:05:45.973315   61273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem --> /usr/share/ca-certificates/148782.pem (1708 bytes)
	I0918 21:05:45.997699   61273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0918 21:05:46.022802   61273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878.pem --> /usr/share/ca-certificates/14878.pem (1338 bytes)
	I0918 21:05:46.046646   61273 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0918 21:05:46.063329   61273 ssh_runner.go:195] Run: openssl version
	I0918 21:05:46.069432   61273 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148782.pem && ln -fs /usr/share/ca-certificates/148782.pem /etc/ssl/certs/148782.pem"
	I0918 21:05:46.081104   61273 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148782.pem
	I0918 21:05:46.086180   61273 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 18 19:57 /usr/share/ca-certificates/148782.pem
	I0918 21:05:46.086241   61273 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148782.pem
	I0918 21:05:46.092527   61273 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148782.pem /etc/ssl/certs/3ec20f2e.0"
	I0918 21:05:46.103601   61273 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0918 21:05:46.114656   61273 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0918 21:05:46.118788   61273 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 18 19:39 /usr/share/ca-certificates/minikubeCA.pem
	I0918 21:05:46.118855   61273 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0918 21:05:46.124094   61273 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0918 21:05:46.135442   61273 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14878.pem && ln -fs /usr/share/ca-certificates/14878.pem /etc/ssl/certs/14878.pem"
	I0918 21:05:46.146105   61273 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14878.pem
	I0918 21:05:46.150661   61273 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 18 19:57 /usr/share/ca-certificates/14878.pem
	I0918 21:05:46.150714   61273 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14878.pem
	I0918 21:05:46.156247   61273 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14878.pem /etc/ssl/certs/51391683.0"
	I0918 21:05:46.167475   61273 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0918 21:05:46.172172   61273 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0918 21:05:46.178638   61273 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0918 21:05:46.184644   61273 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0918 21:05:46.190704   61273 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0918 21:05:46.196414   61273 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0918 21:05:46.202467   61273 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
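The `openssl x509 -noout -in <cert> -checkend 86400` runs above verify that none of the control-plane certificates expire within the next 24 hours. A minimal Go equivalent using crypto/x509, assuming the certificate files under /var/lib/minikube/certs are readable by the caller (on the real VM they are root-owned); it reproduces the same "expires within d" test, not minikube's internal implementation:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// mirroring `openssl x509 -noout -in <path> -checkend <seconds>`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Until(cert.NotAfter) < d, nil
}

func main() {
	for _, p := range []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
		"/var/lib/minikube/certs/front-proxy-client.crt",
	} {
		soon, err := expiresWithin(p, 24*time.Hour)
		fmt.Println(p, soon, err)
	}
}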
	I0918 21:05:46.208306   61273 kubeadm.go:392] StartCluster: {Name:no-preload-331658 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-331658 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.31 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 21:05:46.208405   61273 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0918 21:05:46.208472   61273 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0918 21:05:46.247189   61273 cri.go:89] found id: ""
	I0918 21:05:46.247267   61273 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0918 21:05:46.258228   61273 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0918 21:05:46.258253   61273 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0918 21:05:46.258309   61273 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0918 21:05:46.268703   61273 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0918 21:05:46.269728   61273 kubeconfig.go:125] found "no-preload-331658" server: "https://192.168.61.31:8443"
	I0918 21:05:46.271749   61273 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0918 21:05:46.282051   61273 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.31
	I0918 21:05:46.282105   61273 kubeadm.go:1160] stopping kube-system containers ...
	I0918 21:05:46.282122   61273 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0918 21:05:46.282191   61273 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0918 21:05:46.319805   61273 cri.go:89] found id: ""
	I0918 21:05:46.319880   61273 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0918 21:05:46.336130   61273 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0918 21:05:46.345940   61273 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0918 21:05:46.345962   61273 kubeadm.go:157] found existing configuration files:
	
	I0918 21:05:46.346008   61273 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0918 21:05:46.355577   61273 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0918 21:05:46.355658   61273 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0918 21:05:46.367154   61273 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0918 21:05:46.377062   61273 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0918 21:05:46.377126   61273 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0918 21:05:46.387180   61273 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0918 21:05:46.396578   61273 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0918 21:05:46.396642   61273 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0918 21:05:46.406687   61273 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0918 21:05:46.416545   61273 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0918 21:05:46.416617   61273 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0918 21:05:46.426405   61273 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0918 21:05:46.436343   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:05:43.132484   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:45.132905   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:47.132942   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:43.890245   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:46.386955   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:44.703267   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:45.203924   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:45.703945   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:46.203386   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:46.703674   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:47.203387   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:47.704034   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:48.203348   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:48.703715   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:49.203984   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:46.563094   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:05:47.663823   61273 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.100694645s)
	I0918 21:05:47.663857   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:05:47.895962   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:05:47.978862   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:05:48.095438   61273 api_server.go:52] waiting for apiserver process to appear ...
	I0918 21:05:48.095530   61273 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:48.595581   61273 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:49.095761   61273 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:49.122304   61273 api_server.go:72] duration metric: took 1.026867171s to wait for apiserver process to appear ...
	I0918 21:05:49.122343   61273 api_server.go:88] waiting for apiserver healthz status ...
	I0918 21:05:49.122361   61273 api_server.go:253] Checking apiserver healthz at https://192.168.61.31:8443/healthz ...
	I0918 21:05:49.133503   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:51.133761   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:48.386996   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:50.387697   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:52.886989   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:52.253818   61273 api_server.go:279] https://192.168.61.31:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0918 21:05:52.253850   61273 api_server.go:103] status: https://192.168.61.31:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0918 21:05:52.253864   61273 api_server.go:253] Checking apiserver healthz at https://192.168.61.31:8443/healthz ...
	I0918 21:05:52.290586   61273 api_server.go:279] https://192.168.61.31:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0918 21:05:52.290617   61273 api_server.go:103] status: https://192.168.61.31:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0918 21:05:52.623078   61273 api_server.go:253] Checking apiserver healthz at https://192.168.61.31:8443/healthz ...
	I0918 21:05:52.631774   61273 api_server.go:279] https://192.168.61.31:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0918 21:05:52.631811   61273 api_server.go:103] status: https://192.168.61.31:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0918 21:05:53.123498   61273 api_server.go:253] Checking apiserver healthz at https://192.168.61.31:8443/healthz ...
	I0918 21:05:53.132091   61273 api_server.go:279] https://192.168.61.31:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0918 21:05:53.132120   61273 api_server.go:103] status: https://192.168.61.31:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0918 21:05:53.622597   61273 api_server.go:253] Checking apiserver healthz at https://192.168.61.31:8443/healthz ...
	I0918 21:05:53.628896   61273 api_server.go:279] https://192.168.61.31:8443/healthz returned 200:
	ok
	I0918 21:05:53.638315   61273 api_server.go:141] control plane version: v1.31.1
	I0918 21:05:53.638354   61273 api_server.go:131] duration metric: took 4.516002991s to wait for apiserver health ...
	I0918 21:05:53.638367   61273 cni.go:84] Creating CNI manager for ""
	I0918 21:05:53.638376   61273 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0918 21:05:53.639948   61273 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
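The healthz polling a few lines above is an anonymous GET against https://192.168.61.31:8443/healthz: it returns 403 for system:anonymous at first, then 500 while the rbac/bootstrap-roles and scheduling post-start hooks are still pending, and finally 200 "ok". A minimal Go sketch of such a poll loop, assuming the apiserver's cluster-internal serving certificate is skipped rather than verified (an illustration of the pattern, not minikube's api_server.go):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns
// HTTP 200 or the deadline passes, printing non-200 bodies as it goes.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
		Timeout: 5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.61.31:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}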
	I0918 21:05:49.703565   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:50.204192   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:50.704248   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:51.203335   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:51.703761   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:52.203474   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:52.703901   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:53.203856   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:53.704192   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:54.204243   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:53.641376   61273 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0918 21:05:53.667828   61273 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0918 21:05:53.701667   61273 system_pods.go:43] waiting for kube-system pods to appear ...
	I0918 21:05:53.714053   61273 system_pods.go:59] 8 kube-system pods found
	I0918 21:05:53.714101   61273 system_pods.go:61] "coredns-7c65d6cfc9-dgnw2" [085d5a98-0a61-4678-830f-384780a0d7ef] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0918 21:05:53.714113   61273 system_pods.go:61] "etcd-no-preload-331658" [d6a53ff4-fcf0-46c0-9e77-9af6d0363e08] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0918 21:05:53.714126   61273 system_pods.go:61] "kube-apiserver-no-preload-331658" [1953598f-4c3f-4a98-ab75-b8ca1b8093ad] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0918 21:05:53.714135   61273 system_pods.go:61] "kube-controller-manager-no-preload-331658" [8fc655be-a192-4250-a31d-2e533a2b5e41] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0918 21:05:53.714145   61273 system_pods.go:61] "kube-proxy-hx25w" [a26512ff-f695-4452-8974-577479257160] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0918 21:05:53.714157   61273 system_pods.go:61] "kube-scheduler-no-preload-331658" [64e5347e-e7fd-4a0d-ae41-539c3989c29e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0918 21:05:53.714169   61273 system_pods.go:61] "metrics-server-6867b74b74-n27vc" [b1de76ec-8987-49ce-ae66-eedda2705cde] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0918 21:05:53.714181   61273 system_pods.go:61] "storage-provisioner" [e110aeb3-e9bc-4bb9-9b49-5579558bdda2] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0918 21:05:53.714191   61273 system_pods.go:74] duration metric: took 12.499195ms to wait for pod list to return data ...
	I0918 21:05:53.714206   61273 node_conditions.go:102] verifying NodePressure condition ...
	I0918 21:05:53.720251   61273 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0918 21:05:53.720283   61273 node_conditions.go:123] node cpu capacity is 2
	I0918 21:05:53.720296   61273 node_conditions.go:105] duration metric: took 6.082637ms to run NodePressure ...
	I0918 21:05:53.720317   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:05:54.056981   61273 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0918 21:05:54.062413   61273 kubeadm.go:739] kubelet initialised
	I0918 21:05:54.062436   61273 kubeadm.go:740] duration metric: took 5.424693ms waiting for restarted kubelet to initialise ...
	I0918 21:05:54.062443   61273 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0918 21:05:54.069721   61273 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-dgnw2" in "kube-system" namespace to be "Ready" ...
	I0918 21:05:54.089970   61273 pod_ready.go:98] node "no-preload-331658" hosting pod "coredns-7c65d6cfc9-dgnw2" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-331658" has status "Ready":"False"
	I0918 21:05:54.090005   61273 pod_ready.go:82] duration metric: took 20.250586ms for pod "coredns-7c65d6cfc9-dgnw2" in "kube-system" namespace to be "Ready" ...
	E0918 21:05:54.090017   61273 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-331658" hosting pod "coredns-7c65d6cfc9-dgnw2" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-331658" has status "Ready":"False"
	I0918 21:05:54.090046   61273 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-331658" in "kube-system" namespace to be "Ready" ...
	I0918 21:05:54.105121   61273 pod_ready.go:98] node "no-preload-331658" hosting pod "etcd-no-preload-331658" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-331658" has status "Ready":"False"
	I0918 21:05:54.105156   61273 pod_ready.go:82] duration metric: took 15.097714ms for pod "etcd-no-preload-331658" in "kube-system" namespace to be "Ready" ...
	E0918 21:05:54.105170   61273 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-331658" hosting pod "etcd-no-preload-331658" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-331658" has status "Ready":"False"
	I0918 21:05:54.105180   61273 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-331658" in "kube-system" namespace to be "Ready" ...
	I0918 21:05:54.112687   61273 pod_ready.go:98] node "no-preload-331658" hosting pod "kube-apiserver-no-preload-331658" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-331658" has status "Ready":"False"
	I0918 21:05:54.112711   61273 pod_ready.go:82] duration metric: took 7.523191ms for pod "kube-apiserver-no-preload-331658" in "kube-system" namespace to be "Ready" ...
	E0918 21:05:54.112722   61273 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-331658" hosting pod "kube-apiserver-no-preload-331658" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-331658" has status "Ready":"False"
	I0918 21:05:54.112730   61273 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-331658" in "kube-system" namespace to be "Ready" ...
	I0918 21:05:54.119681   61273 pod_ready.go:98] node "no-preload-331658" hosting pod "kube-controller-manager-no-preload-331658" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-331658" has status "Ready":"False"
	I0918 21:05:54.119707   61273 pod_ready.go:82] duration metric: took 6.967275ms for pod "kube-controller-manager-no-preload-331658" in "kube-system" namespace to be "Ready" ...
	E0918 21:05:54.119716   61273 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-331658" hosting pod "kube-controller-manager-no-preload-331658" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-331658" has status "Ready":"False"
	I0918 21:05:54.119723   61273 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-hx25w" in "kube-system" namespace to be "Ready" ...
	I0918 21:05:54.505099   61273 pod_ready.go:98] node "no-preload-331658" hosting pod "kube-proxy-hx25w" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-331658" has status "Ready":"False"
	I0918 21:05:54.505127   61273 pod_ready.go:82] duration metric: took 385.395528ms for pod "kube-proxy-hx25w" in "kube-system" namespace to be "Ready" ...
	E0918 21:05:54.505140   61273 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-331658" hosting pod "kube-proxy-hx25w" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-331658" has status "Ready":"False"
	I0918 21:05:54.505147   61273 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-331658" in "kube-system" namespace to be "Ready" ...
	I0918 21:05:54.905748   61273 pod_ready.go:98] node "no-preload-331658" hosting pod "kube-scheduler-no-preload-331658" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-331658" has status "Ready":"False"
	I0918 21:05:54.905774   61273 pod_ready.go:82] duration metric: took 400.618175ms for pod "kube-scheduler-no-preload-331658" in "kube-system" namespace to be "Ready" ...
	E0918 21:05:54.905785   61273 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-331658" hosting pod "kube-scheduler-no-preload-331658" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-331658" has status "Ready":"False"
	I0918 21:05:54.905794   61273 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace to be "Ready" ...
	I0918 21:05:55.305077   61273 pod_ready.go:98] node "no-preload-331658" hosting pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-331658" has status "Ready":"False"
	I0918 21:05:55.305106   61273 pod_ready.go:82] duration metric: took 399.301293ms for pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace to be "Ready" ...
	E0918 21:05:55.305118   61273 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-331658" hosting pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-331658" has status "Ready":"False"
	I0918 21:05:55.305126   61273 pod_ready.go:39] duration metric: took 1.242662699s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
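Every per-pod wait above is skipped because the node itself still reports Ready:"False" right after the kubelet restart; the underlying per-pod test is the PodReady condition. A minimal client-go sketch of that condition check, using the kubeconfig path and a pod name from the log purely for illustration (this is not minikube's pod_ready.go):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podIsReady returns true when the pod's PodReady condition is True.
func podIsReady(clientset kubernetes.Interface, namespace, name string) (bool, error) {
	pod, err := clientset.CoreV1().Pods(namespace).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	// Kubeconfig path and pod name taken from the log above; adjust as needed.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ready, err := podIsReady(clientset, "kube-system", "coredns-7c65d6cfc9-dgnw2")
	fmt.Println(ready, err)
}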
	I0918 21:05:55.305150   61273 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0918 21:05:55.317568   61273 ops.go:34] apiserver oom_adj: -16
	I0918 21:05:55.317597   61273 kubeadm.go:597] duration metric: took 9.0593375s to restartPrimaryControlPlane
	I0918 21:05:55.317616   61273 kubeadm.go:394] duration metric: took 9.109322119s to StartCluster
	I0918 21:05:55.317643   61273 settings.go:142] acquiring lock: {Name:mk6ae95a69dcbe00c28846409e9f46945a88de2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 21:05:55.317720   61273 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19667-7671/kubeconfig
	I0918 21:05:55.320228   61273 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/kubeconfig: {Name:mk3a3eecadb0164dfdd5d3c4392b5b473a2a9bb0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 21:05:55.320552   61273 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.31 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0918 21:05:55.320609   61273 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0918 21:05:55.320716   61273 addons.go:69] Setting storage-provisioner=true in profile "no-preload-331658"
	I0918 21:05:55.320725   61273 addons.go:69] Setting default-storageclass=true in profile "no-preload-331658"
	I0918 21:05:55.320739   61273 addons.go:234] Setting addon storage-provisioner=true in "no-preload-331658"
	W0918 21:05:55.320747   61273 addons.go:243] addon storage-provisioner should already be in state true
	I0918 21:05:55.320765   61273 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-331658"
	I0918 21:05:55.320785   61273 host.go:66] Checking if "no-preload-331658" exists ...
	I0918 21:05:55.320769   61273 addons.go:69] Setting metrics-server=true in profile "no-preload-331658"
	I0918 21:05:55.320799   61273 config.go:182] Loaded profile config "no-preload-331658": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0918 21:05:55.320808   61273 addons.go:234] Setting addon metrics-server=true in "no-preload-331658"
	W0918 21:05:55.320863   61273 addons.go:243] addon metrics-server should already be in state true
	I0918 21:05:55.320889   61273 host.go:66] Checking if "no-preload-331658" exists ...
	I0918 21:05:55.321208   61273 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:05:55.321228   61273 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:05:55.321208   61273 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:05:55.321262   61273 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:05:55.321282   61273 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:05:55.321357   61273 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:05:55.323762   61273 out.go:177] * Verifying Kubernetes components...
	I0918 21:05:55.325718   61273 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 21:05:55.348485   61273 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43725
	I0918 21:05:55.349072   61273 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:05:55.349611   61273 main.go:141] libmachine: Using API Version  1
	I0918 21:05:55.349641   61273 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:05:55.349978   61273 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:05:55.350556   61273 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:05:55.350606   61273 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:05:55.368807   61273 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34285
	I0918 21:05:55.369340   61273 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:05:55.369826   61273 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35389
	I0918 21:05:55.369908   61273 main.go:141] libmachine: Using API Version  1
	I0918 21:05:55.369928   61273 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:05:55.369949   61273 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42891
	I0918 21:05:55.370195   61273 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:05:55.370303   61273 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:05:55.370408   61273 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:05:55.370494   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetState
	I0918 21:05:55.370772   61273 main.go:141] libmachine: Using API Version  1
	I0918 21:05:55.370797   61273 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:05:55.370908   61273 main.go:141] libmachine: Using API Version  1
	I0918 21:05:55.370929   61273 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:05:55.371790   61273 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:05:55.371833   61273 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:05:55.371996   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetState
	I0918 21:05:55.372415   61273 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:05:55.372470   61273 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:05:55.372532   61273 main.go:141] libmachine: (no-preload-331658) Calling .DriverName
	I0918 21:05:55.375524   61273 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 21:05:55.375574   61273 addons.go:234] Setting addon default-storageclass=true in "no-preload-331658"
	W0918 21:05:55.375593   61273 addons.go:243] addon default-storageclass should already be in state true
	I0918 21:05:55.375626   61273 host.go:66] Checking if "no-preload-331658" exists ...
	I0918 21:05:55.376008   61273 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:05:55.376097   61273 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:05:55.377828   61273 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0918 21:05:55.377848   61273 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0918 21:05:55.377864   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHHostname
	I0918 21:05:55.381877   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:55.382379   61273 main.go:141] libmachine: (no-preload-331658) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:47:d0", ip: ""} in network mk-no-preload-331658: {Iface:virbr2 ExpiryTime:2024-09-18 21:55:52 +0000 UTC Type:0 Mac:52:54:00:2a:47:d0 Iaid: IPaddr:192.168.61.31 Prefix:24 Hostname:no-preload-331658 Clientid:01:52:54:00:2a:47:d0}
	I0918 21:05:55.382438   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined IP address 192.168.61.31 and MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:55.382767   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHPort
	I0918 21:05:55.384470   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHKeyPath
	I0918 21:05:55.384700   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHUsername
	I0918 21:05:55.384863   61273 sshutil.go:53] new ssh client: &{IP:192.168.61.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/no-preload-331658/id_rsa Username:docker}
	I0918 21:05:55.399531   61273 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38229
	I0918 21:05:55.400009   61273 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:05:55.400532   61273 main.go:141] libmachine: Using API Version  1
	I0918 21:05:55.400552   61273 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:05:55.400918   61273 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:05:55.401097   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetState
	I0918 21:05:55.403124   61273 main.go:141] libmachine: (no-preload-331658) Calling .DriverName
	I0918 21:05:55.404237   61273 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45209
	I0918 21:05:55.404637   61273 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:05:55.405088   61273 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0918 21:05:55.405422   61273 main.go:141] libmachine: Using API Version  1
	I0918 21:05:55.405443   61273 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:05:55.405906   61273 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:05:55.406570   61273 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:05:55.406620   61273 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:05:55.406959   61273 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0918 21:05:55.406973   61273 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0918 21:05:55.407380   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHHostname
	I0918 21:05:55.411410   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:55.411430   61273 main.go:141] libmachine: (no-preload-331658) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:47:d0", ip: ""} in network mk-no-preload-331658: {Iface:virbr2 ExpiryTime:2024-09-18 21:55:52 +0000 UTC Type:0 Mac:52:54:00:2a:47:d0 Iaid: IPaddr:192.168.61.31 Prefix:24 Hostname:no-preload-331658 Clientid:01:52:54:00:2a:47:d0}
	I0918 21:05:55.411440   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined IP address 192.168.61.31 and MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:55.411727   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHPort
	I0918 21:05:55.411965   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHKeyPath
	I0918 21:05:55.412171   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHUsername
	I0918 21:05:55.412377   61273 sshutil.go:53] new ssh client: &{IP:192.168.61.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/no-preload-331658/id_rsa Username:docker}
	I0918 21:05:55.426166   61273 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38277
	I0918 21:05:55.426704   61273 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:05:55.427211   61273 main.go:141] libmachine: Using API Version  1
	I0918 21:05:55.427232   61273 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:05:55.427610   61273 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:05:55.427805   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetState
	I0918 21:05:55.429864   61273 main.go:141] libmachine: (no-preload-331658) Calling .DriverName
	I0918 21:05:55.430238   61273 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0918 21:05:55.430256   61273 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0918 21:05:55.430278   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHHostname
	I0918 21:05:55.433576   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:55.433894   61273 main.go:141] libmachine: (no-preload-331658) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:47:d0", ip: ""} in network mk-no-preload-331658: {Iface:virbr2 ExpiryTime:2024-09-18 21:55:52 +0000 UTC Type:0 Mac:52:54:00:2a:47:d0 Iaid: IPaddr:192.168.61.31 Prefix:24 Hostname:no-preload-331658 Clientid:01:52:54:00:2a:47:d0}
	I0918 21:05:55.433918   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined IP address 192.168.61.31 and MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:55.434411   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHPort
	I0918 21:05:55.434650   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHKeyPath
	I0918 21:05:55.434798   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHUsername
	I0918 21:05:55.434942   61273 sshutil.go:53] new ssh client: &{IP:192.168.61.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/no-preload-331658/id_rsa Username:docker}
	I0918 21:05:55.528033   61273 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0918 21:05:55.545524   61273 node_ready.go:35] waiting up to 6m0s for node "no-preload-331658" to be "Ready" ...
	I0918 21:05:55.606477   61273 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0918 21:05:55.606498   61273 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0918 21:05:55.628256   61273 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0918 21:05:55.636122   61273 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0918 21:05:55.636154   61273 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0918 21:05:55.663081   61273 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0918 21:05:55.663108   61273 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0918 21:05:55.715011   61273 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0918 21:05:55.738192   61273 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0918 21:05:56.247539   61273 main.go:141] libmachine: Making call to close driver server
	I0918 21:05:56.247568   61273 main.go:141] libmachine: (no-preload-331658) Calling .Close
	I0918 21:05:56.247900   61273 main.go:141] libmachine: (no-preload-331658) DBG | Closing plugin on server side
	I0918 21:05:56.247922   61273 main.go:141] libmachine: Successfully made call to close driver server
	I0918 21:05:56.247937   61273 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 21:05:56.247948   61273 main.go:141] libmachine: Making call to close driver server
	I0918 21:05:56.247960   61273 main.go:141] libmachine: (no-preload-331658) Calling .Close
	I0918 21:05:56.248225   61273 main.go:141] libmachine: Successfully made call to close driver server
	I0918 21:05:56.248240   61273 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 21:05:56.248273   61273 main.go:141] libmachine: (no-preload-331658) DBG | Closing plugin on server side
	I0918 21:05:56.261942   61273 main.go:141] libmachine: Making call to close driver server
	I0918 21:05:56.261972   61273 main.go:141] libmachine: (no-preload-331658) Calling .Close
	I0918 21:05:56.262269   61273 main.go:141] libmachine: (no-preload-331658) DBG | Closing plugin on server side
	I0918 21:05:56.262344   61273 main.go:141] libmachine: Successfully made call to close driver server
	I0918 21:05:56.262361   61273 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 21:05:56.944008   61273 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.22895695s)
	I0918 21:05:56.944084   61273 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.205856091s)
	I0918 21:05:56.944121   61273 main.go:141] libmachine: Making call to close driver server
	I0918 21:05:56.944138   61273 main.go:141] libmachine: (no-preload-331658) Calling .Close
	I0918 21:05:56.944087   61273 main.go:141] libmachine: Making call to close driver server
	I0918 21:05:56.944186   61273 main.go:141] libmachine: (no-preload-331658) Calling .Close
	I0918 21:05:56.944489   61273 main.go:141] libmachine: (no-preload-331658) DBG | Closing plugin on server side
	I0918 21:05:56.944539   61273 main.go:141] libmachine: Successfully made call to close driver server
	I0918 21:05:56.944553   61273 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 21:05:56.944561   61273 main.go:141] libmachine: Making call to close driver server
	I0918 21:05:56.944572   61273 main.go:141] libmachine: (no-preload-331658) Calling .Close
	I0918 21:05:56.944559   61273 main.go:141] libmachine: (no-preload-331658) DBG | Closing plugin on server side
	I0918 21:05:56.944570   61273 main.go:141] libmachine: Successfully made call to close driver server
	I0918 21:05:56.944654   61273 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 21:05:56.944669   61273 main.go:141] libmachine: Making call to close driver server
	I0918 21:05:56.944678   61273 main.go:141] libmachine: (no-preload-331658) Calling .Close
	I0918 21:05:56.944794   61273 main.go:141] libmachine: Successfully made call to close driver server
	I0918 21:05:56.944808   61273 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 21:05:56.944823   61273 main.go:141] libmachine: (no-preload-331658) DBG | Closing plugin on server side
	I0918 21:05:56.944965   61273 main.go:141] libmachine: (no-preload-331658) DBG | Closing plugin on server side
	I0918 21:05:56.944988   61273 main.go:141] libmachine: Successfully made call to close driver server
	I0918 21:05:56.944998   61273 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 21:05:56.945010   61273 addons.go:475] Verifying addon metrics-server=true in "no-preload-331658"
	I0918 21:05:56.946962   61273 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0918 21:05:53.135068   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:55.633160   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:55.393859   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:57.888366   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:54.703227   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:55.204057   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:55.704178   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:56.203443   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:56.703517   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:57.203499   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:57.703598   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:58.203660   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:58.703897   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:59.203256   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:56.948595   61273 addons.go:510] duration metric: took 1.627989207s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0918 21:05:57.549092   61273 node_ready.go:53] node "no-preload-331658" has status "Ready":"False"
	I0918 21:06:00.050199   61273 node_ready.go:53] node "no-preload-331658" has status "Ready":"False"
	I0918 21:05:58.134289   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:00.632302   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:00.386644   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:02.387972   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:59.704149   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:00.203356   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:00.703750   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:01.203765   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:01.704295   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:02.203759   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:02.703342   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:03.204083   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:03.703777   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:04.203340   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:02.549111   61273 node_ready.go:49] node "no-preload-331658" has status "Ready":"True"
	I0918 21:06:02.549153   61273 node_ready.go:38] duration metric: took 7.003597589s for node "no-preload-331658" to be "Ready" ...
	I0918 21:06:02.549162   61273 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0918 21:06:02.554487   61273 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-dgnw2" in "kube-system" namespace to be "Ready" ...
	I0918 21:06:02.560130   61273 pod_ready.go:93] pod "coredns-7c65d6cfc9-dgnw2" in "kube-system" namespace has status "Ready":"True"
	I0918 21:06:02.560160   61273 pod_ready.go:82] duration metric: took 5.643145ms for pod "coredns-7c65d6cfc9-dgnw2" in "kube-system" namespace to be "Ready" ...
	I0918 21:06:02.560173   61273 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-331658" in "kube-system" namespace to be "Ready" ...
	I0918 21:06:02.567971   61273 pod_ready.go:93] pod "etcd-no-preload-331658" in "kube-system" namespace has status "Ready":"True"
	I0918 21:06:02.567992   61273 pod_ready.go:82] duration metric: took 7.811385ms for pod "etcd-no-preload-331658" in "kube-system" namespace to be "Ready" ...
	I0918 21:06:02.568001   61273 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-331658" in "kube-system" namespace to be "Ready" ...
	I0918 21:06:02.572606   61273 pod_ready.go:93] pod "kube-apiserver-no-preload-331658" in "kube-system" namespace has status "Ready":"True"
	I0918 21:06:02.572633   61273 pod_ready.go:82] duration metric: took 4.625414ms for pod "kube-apiserver-no-preload-331658" in "kube-system" namespace to be "Ready" ...
	I0918 21:06:02.572644   61273 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-331658" in "kube-system" namespace to be "Ready" ...
	I0918 21:06:02.577222   61273 pod_ready.go:93] pod "kube-controller-manager-no-preload-331658" in "kube-system" namespace has status "Ready":"True"
	I0918 21:06:02.577243   61273 pod_ready.go:82] duration metric: took 4.591499ms for pod "kube-controller-manager-no-preload-331658" in "kube-system" namespace to be "Ready" ...
	I0918 21:06:02.577252   61273 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-hx25w" in "kube-system" namespace to be "Ready" ...
	I0918 21:06:02.949682   61273 pod_ready.go:93] pod "kube-proxy-hx25w" in "kube-system" namespace has status "Ready":"True"
	I0918 21:06:02.949707   61273 pod_ready.go:82] duration metric: took 372.449094ms for pod "kube-proxy-hx25w" in "kube-system" namespace to be "Ready" ...
	I0918 21:06:02.949716   61273 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-331658" in "kube-system" namespace to be "Ready" ...
	I0918 21:06:03.350071   61273 pod_ready.go:93] pod "kube-scheduler-no-preload-331658" in "kube-system" namespace has status "Ready":"True"
	I0918 21:06:03.350104   61273 pod_ready.go:82] duration metric: took 400.380059ms for pod "kube-scheduler-no-preload-331658" in "kube-system" namespace to be "Ready" ...
	I0918 21:06:03.350118   61273 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace to be "Ready" ...
	I0918 21:06:05.357041   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:02.634105   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:05.132860   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:04.887184   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:06.887596   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:04.703762   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:05.203639   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:05.703335   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:06.204156   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:06.703735   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:07.203278   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:07.704072   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:08.203299   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:08.703528   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:09.203725   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:07.857844   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:10.356822   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:07.633985   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:10.133861   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:08.887695   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:11.387735   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:09.703712   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:10.203930   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:10.704216   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:11.203706   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:11.703494   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:12.204098   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:12.703927   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:13.203379   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:13.703248   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:14.204000   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:12.356878   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:14.360285   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:12.631731   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:15.132229   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:17.132802   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:13.887296   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:16.386306   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:14.704159   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:15.203401   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:15.703942   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:16.204043   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:16.703691   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:17.203508   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:17.703445   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:18.203689   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:18.704249   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:19.203290   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:06:19.203401   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:06:19.239033   62061 cri.go:89] found id: ""
	I0918 21:06:19.239065   62061 logs.go:276] 0 containers: []
	W0918 21:06:19.239073   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:06:19.239079   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:06:19.239141   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:06:19.274781   62061 cri.go:89] found id: ""
	I0918 21:06:19.274809   62061 logs.go:276] 0 containers: []
	W0918 21:06:19.274819   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:06:19.274833   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:06:19.274895   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:06:19.307894   62061 cri.go:89] found id: ""
	I0918 21:06:19.307928   62061 logs.go:276] 0 containers: []
	W0918 21:06:19.307940   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:06:19.307948   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:06:19.308002   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:06:19.340572   62061 cri.go:89] found id: ""
	I0918 21:06:19.340602   62061 logs.go:276] 0 containers: []
	W0918 21:06:19.340610   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:06:19.340615   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:06:19.340672   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:06:19.375448   62061 cri.go:89] found id: ""
	I0918 21:06:19.375475   62061 logs.go:276] 0 containers: []
	W0918 21:06:19.375483   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:06:19.375489   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:06:19.375536   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:06:19.413102   62061 cri.go:89] found id: ""
	I0918 21:06:19.413133   62061 logs.go:276] 0 containers: []
	W0918 21:06:19.413158   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:06:19.413166   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:06:19.413238   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:06:19.447497   62061 cri.go:89] found id: ""
	I0918 21:06:19.447526   62061 logs.go:276] 0 containers: []
	W0918 21:06:19.447536   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:06:19.447544   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:06:19.447605   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:06:19.480848   62061 cri.go:89] found id: ""
	I0918 21:06:19.480880   62061 logs.go:276] 0 containers: []
	W0918 21:06:19.480892   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:06:19.480903   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:06:19.480916   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:06:16.856845   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:19.358010   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:19.632608   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:22.132792   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:18.387488   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:20.887832   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:19.533573   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:06:19.533625   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:06:19.546578   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:06:19.546604   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:06:19.667439   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:06:19.667459   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:06:19.667472   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:06:19.739525   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:06:19.739563   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:06:22.280913   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:22.296045   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:06:22.296132   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:06:22.332361   62061 cri.go:89] found id: ""
	I0918 21:06:22.332401   62061 logs.go:276] 0 containers: []
	W0918 21:06:22.332413   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:06:22.332421   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:06:22.332485   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:06:22.392138   62061 cri.go:89] found id: ""
	I0918 21:06:22.392170   62061 logs.go:276] 0 containers: []
	W0918 21:06:22.392178   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:06:22.392184   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:06:22.392237   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:06:22.431265   62061 cri.go:89] found id: ""
	I0918 21:06:22.431296   62061 logs.go:276] 0 containers: []
	W0918 21:06:22.431306   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:06:22.431313   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:06:22.431376   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:06:22.463685   62061 cri.go:89] found id: ""
	I0918 21:06:22.463718   62061 logs.go:276] 0 containers: []
	W0918 21:06:22.463730   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:06:22.463738   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:06:22.463808   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:06:22.498031   62061 cri.go:89] found id: ""
	I0918 21:06:22.498058   62061 logs.go:276] 0 containers: []
	W0918 21:06:22.498069   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:06:22.498076   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:06:22.498140   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:06:22.537701   62061 cri.go:89] found id: ""
	I0918 21:06:22.537732   62061 logs.go:276] 0 containers: []
	W0918 21:06:22.537740   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:06:22.537746   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:06:22.537803   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:06:22.574085   62061 cri.go:89] found id: ""
	I0918 21:06:22.574118   62061 logs.go:276] 0 containers: []
	W0918 21:06:22.574130   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:06:22.574138   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:06:22.574202   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:06:22.606313   62061 cri.go:89] found id: ""
	I0918 21:06:22.606341   62061 logs.go:276] 0 containers: []
	W0918 21:06:22.606349   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:06:22.606359   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:06:22.606372   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:06:22.678484   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:06:22.678511   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:06:22.678524   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:06:22.756730   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:06:22.756767   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:06:22.793537   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:06:22.793573   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:06:22.845987   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:06:22.846021   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:06:21.857010   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:23.857823   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:26.358268   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:24.133063   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:26.632474   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:23.387764   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:25.886548   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:27.887108   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:25.360980   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:25.374930   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:06:25.375010   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:06:25.413100   62061 cri.go:89] found id: ""
	I0918 21:06:25.413135   62061 logs.go:276] 0 containers: []
	W0918 21:06:25.413147   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:06:25.413155   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:06:25.413221   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:06:25.450999   62061 cri.go:89] found id: ""
	I0918 21:06:25.451024   62061 logs.go:276] 0 containers: []
	W0918 21:06:25.451032   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:06:25.451038   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:06:25.451087   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:06:25.496114   62061 cri.go:89] found id: ""
	I0918 21:06:25.496147   62061 logs.go:276] 0 containers: []
	W0918 21:06:25.496155   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:06:25.496161   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:06:25.496218   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:06:25.529186   62061 cri.go:89] found id: ""
	I0918 21:06:25.529211   62061 logs.go:276] 0 containers: []
	W0918 21:06:25.529218   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:06:25.529223   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:06:25.529294   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:06:25.569710   62061 cri.go:89] found id: ""
	I0918 21:06:25.569735   62061 logs.go:276] 0 containers: []
	W0918 21:06:25.569743   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:06:25.569749   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:06:25.569796   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:06:25.606915   62061 cri.go:89] found id: ""
	I0918 21:06:25.606951   62061 logs.go:276] 0 containers: []
	W0918 21:06:25.606964   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:06:25.606972   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:06:25.607034   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:06:25.642705   62061 cri.go:89] found id: ""
	I0918 21:06:25.642736   62061 logs.go:276] 0 containers: []
	W0918 21:06:25.642744   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:06:25.642750   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:06:25.642801   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:06:25.676418   62061 cri.go:89] found id: ""
	I0918 21:06:25.676448   62061 logs.go:276] 0 containers: []
	W0918 21:06:25.676457   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:06:25.676466   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:06:25.676477   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:06:25.689913   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:06:25.689945   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:06:25.771579   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:06:25.771607   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:06:25.771622   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:06:25.850904   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:06:25.850945   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:06:25.890820   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:06:25.890849   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:06:28.438958   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:28.451961   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:06:28.452043   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:06:28.485000   62061 cri.go:89] found id: ""
	I0918 21:06:28.485059   62061 logs.go:276] 0 containers: []
	W0918 21:06:28.485070   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:06:28.485077   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:06:28.485138   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:06:28.517852   62061 cri.go:89] found id: ""
	I0918 21:06:28.517885   62061 logs.go:276] 0 containers: []
	W0918 21:06:28.517895   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:06:28.517903   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:06:28.517957   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:06:28.549914   62061 cri.go:89] found id: ""
	I0918 21:06:28.549940   62061 logs.go:276] 0 containers: []
	W0918 21:06:28.549949   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:06:28.549955   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:06:28.550000   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:06:28.584133   62061 cri.go:89] found id: ""
	I0918 21:06:28.584156   62061 logs.go:276] 0 containers: []
	W0918 21:06:28.584163   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:06:28.584169   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:06:28.584213   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:06:28.620031   62061 cri.go:89] found id: ""
	I0918 21:06:28.620061   62061 logs.go:276] 0 containers: []
	W0918 21:06:28.620071   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:06:28.620078   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:06:28.620143   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:06:28.653393   62061 cri.go:89] found id: ""
	I0918 21:06:28.653424   62061 logs.go:276] 0 containers: []
	W0918 21:06:28.653435   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:06:28.653443   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:06:28.653505   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:06:28.687696   62061 cri.go:89] found id: ""
	I0918 21:06:28.687729   62061 logs.go:276] 0 containers: []
	W0918 21:06:28.687741   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:06:28.687752   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:06:28.687809   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:06:28.724824   62061 cri.go:89] found id: ""
	I0918 21:06:28.724854   62061 logs.go:276] 0 containers: []
	W0918 21:06:28.724864   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:06:28.724875   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:06:28.724890   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:06:28.785489   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:06:28.785529   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:06:28.804170   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:06:28.804211   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:06:28.895508   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:06:28.895538   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:06:28.895554   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:06:28.974643   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:06:28.974685   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:06:28.858259   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:31.356644   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:28.633851   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:31.133612   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:30.392038   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:32.886708   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:31.517305   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:31.530374   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:06:31.530435   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:06:31.568335   62061 cri.go:89] found id: ""
	I0918 21:06:31.568374   62061 logs.go:276] 0 containers: []
	W0918 21:06:31.568385   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:06:31.568393   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:06:31.568462   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:06:31.603597   62061 cri.go:89] found id: ""
	I0918 21:06:31.603624   62061 logs.go:276] 0 containers: []
	W0918 21:06:31.603633   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:06:31.603639   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:06:31.603694   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:06:31.639355   62061 cri.go:89] found id: ""
	I0918 21:06:31.639380   62061 logs.go:276] 0 containers: []
	W0918 21:06:31.639388   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:06:31.639393   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:06:31.639445   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:06:31.674197   62061 cri.go:89] found id: ""
	I0918 21:06:31.674223   62061 logs.go:276] 0 containers: []
	W0918 21:06:31.674231   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:06:31.674237   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:06:31.674286   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:06:31.710563   62061 cri.go:89] found id: ""
	I0918 21:06:31.710592   62061 logs.go:276] 0 containers: []
	W0918 21:06:31.710606   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:06:31.710612   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:06:31.710663   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:06:31.746923   62061 cri.go:89] found id: ""
	I0918 21:06:31.746958   62061 logs.go:276] 0 containers: []
	W0918 21:06:31.746969   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:06:31.746976   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:06:31.747039   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:06:31.781913   62061 cri.go:89] found id: ""
	I0918 21:06:31.781946   62061 logs.go:276] 0 containers: []
	W0918 21:06:31.781961   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:06:31.781969   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:06:31.782023   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:06:31.817293   62061 cri.go:89] found id: ""
	I0918 21:06:31.817318   62061 logs.go:276] 0 containers: []
	W0918 21:06:31.817327   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:06:31.817338   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:06:31.817375   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:06:31.869479   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:06:31.869518   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:06:31.884187   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:06:31.884219   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:06:31.967159   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:06:31.967189   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:06:31.967202   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:06:32.050404   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:06:32.050458   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:06:33.357380   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:35.856960   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:33.633434   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:36.133740   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:34.888738   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:37.386351   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:34.593135   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:34.613917   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:06:34.614001   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:06:34.649649   62061 cri.go:89] found id: ""
	I0918 21:06:34.649676   62061 logs.go:276] 0 containers: []
	W0918 21:06:34.649684   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:06:34.649690   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:06:34.649738   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:06:34.684898   62061 cri.go:89] found id: ""
	I0918 21:06:34.684932   62061 logs.go:276] 0 containers: []
	W0918 21:06:34.684940   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:06:34.684948   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:06:34.685018   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:06:34.717519   62061 cri.go:89] found id: ""
	I0918 21:06:34.717549   62061 logs.go:276] 0 containers: []
	W0918 21:06:34.717559   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:06:34.717573   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:06:34.717626   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:06:34.751242   62061 cri.go:89] found id: ""
	I0918 21:06:34.751275   62061 logs.go:276] 0 containers: []
	W0918 21:06:34.751289   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:06:34.751298   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:06:34.751368   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:06:34.786700   62061 cri.go:89] found id: ""
	I0918 21:06:34.786729   62061 logs.go:276] 0 containers: []
	W0918 21:06:34.786737   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:06:34.786742   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:06:34.786792   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:06:34.822400   62061 cri.go:89] found id: ""
	I0918 21:06:34.822446   62061 logs.go:276] 0 containers: []
	W0918 21:06:34.822459   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:06:34.822468   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:06:34.822537   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:06:34.860509   62061 cri.go:89] found id: ""
	I0918 21:06:34.860540   62061 logs.go:276] 0 containers: []
	W0918 21:06:34.860549   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:06:34.860554   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:06:34.860626   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:06:34.900682   62061 cri.go:89] found id: ""
	I0918 21:06:34.900712   62061 logs.go:276] 0 containers: []
	W0918 21:06:34.900719   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:06:34.900727   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:06:34.900739   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:06:34.975975   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:06:34.976009   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:06:34.976043   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:06:35.058972   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:06:35.059012   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:06:35.101690   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:06:35.101716   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:06:35.154486   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:06:35.154525   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:06:37.669814   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:37.683235   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:06:37.683294   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:06:37.720666   62061 cri.go:89] found id: ""
	I0918 21:06:37.720697   62061 logs.go:276] 0 containers: []
	W0918 21:06:37.720708   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:06:37.720715   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:06:37.720777   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:06:37.758288   62061 cri.go:89] found id: ""
	I0918 21:06:37.758327   62061 logs.go:276] 0 containers: []
	W0918 21:06:37.758335   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:06:37.758341   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:06:37.758436   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:06:37.793394   62061 cri.go:89] found id: ""
	I0918 21:06:37.793421   62061 logs.go:276] 0 containers: []
	W0918 21:06:37.793430   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:06:37.793437   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:06:37.793501   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:06:37.828258   62061 cri.go:89] found id: ""
	I0918 21:06:37.828291   62061 logs.go:276] 0 containers: []
	W0918 21:06:37.828300   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:06:37.828307   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:06:37.828361   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:06:37.863164   62061 cri.go:89] found id: ""
	I0918 21:06:37.863189   62061 logs.go:276] 0 containers: []
	W0918 21:06:37.863197   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:06:37.863203   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:06:37.863251   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:06:37.899585   62061 cri.go:89] found id: ""
	I0918 21:06:37.899614   62061 logs.go:276] 0 containers: []
	W0918 21:06:37.899622   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:06:37.899628   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:06:37.899675   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:06:37.936246   62061 cri.go:89] found id: ""
	I0918 21:06:37.936282   62061 logs.go:276] 0 containers: []
	W0918 21:06:37.936292   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:06:37.936299   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:06:37.936362   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:06:37.969913   62061 cri.go:89] found id: ""
	I0918 21:06:37.969942   62061 logs.go:276] 0 containers: []
	W0918 21:06:37.969950   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:06:37.969958   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:06:37.969968   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:06:38.023055   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:06:38.023094   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:06:38.036006   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:06:38.036047   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:06:38.116926   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:06:38.116957   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:06:38.116972   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:06:38.192632   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:06:38.192668   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:06:37.860654   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:40.357107   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:38.633432   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:41.131957   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:39.387927   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:41.886904   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:40.729602   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:40.743291   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:06:40.743368   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:06:40.778138   62061 cri.go:89] found id: ""
	I0918 21:06:40.778166   62061 logs.go:276] 0 containers: []
	W0918 21:06:40.778176   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:06:40.778184   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:06:40.778248   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:06:40.813026   62061 cri.go:89] found id: ""
	I0918 21:06:40.813062   62061 logs.go:276] 0 containers: []
	W0918 21:06:40.813073   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:06:40.813081   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:06:40.813146   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:06:40.847609   62061 cri.go:89] found id: ""
	I0918 21:06:40.847641   62061 logs.go:276] 0 containers: []
	W0918 21:06:40.847651   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:06:40.847658   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:06:40.847727   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:06:40.885403   62061 cri.go:89] found id: ""
	I0918 21:06:40.885432   62061 logs.go:276] 0 containers: []
	W0918 21:06:40.885441   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:06:40.885448   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:06:40.885515   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:06:40.922679   62061 cri.go:89] found id: ""
	I0918 21:06:40.922705   62061 logs.go:276] 0 containers: []
	W0918 21:06:40.922714   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:06:40.922719   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:06:40.922776   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:06:40.957187   62061 cri.go:89] found id: ""
	I0918 21:06:40.957215   62061 logs.go:276] 0 containers: []
	W0918 21:06:40.957222   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:06:40.957228   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:06:40.957291   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:06:40.991722   62061 cri.go:89] found id: ""
	I0918 21:06:40.991751   62061 logs.go:276] 0 containers: []
	W0918 21:06:40.991762   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:06:40.991769   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:06:40.991835   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:06:41.027210   62061 cri.go:89] found id: ""
	I0918 21:06:41.027234   62061 logs.go:276] 0 containers: []
	W0918 21:06:41.027242   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:06:41.027250   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:06:41.027275   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:06:41.077183   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:06:41.077221   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:06:41.090707   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:06:41.090734   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:06:41.166206   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:06:41.166228   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:06:41.166241   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:06:41.240308   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:06:41.240346   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:06:43.777517   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:43.790901   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:06:43.790970   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:06:43.827127   62061 cri.go:89] found id: ""
	I0918 21:06:43.827156   62061 logs.go:276] 0 containers: []
	W0918 21:06:43.827164   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:06:43.827170   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:06:43.827218   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:06:43.861218   62061 cri.go:89] found id: ""
	I0918 21:06:43.861244   62061 logs.go:276] 0 containers: []
	W0918 21:06:43.861252   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:06:43.861257   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:06:43.861308   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:06:43.899669   62061 cri.go:89] found id: ""
	I0918 21:06:43.899694   62061 logs.go:276] 0 containers: []
	W0918 21:06:43.899701   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:06:43.899707   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:06:43.899755   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:06:43.934698   62061 cri.go:89] found id: ""
	I0918 21:06:43.934731   62061 logs.go:276] 0 containers: []
	W0918 21:06:43.934741   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:06:43.934748   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:06:43.934819   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:06:43.971715   62061 cri.go:89] found id: ""
	I0918 21:06:43.971742   62061 logs.go:276] 0 containers: []
	W0918 21:06:43.971755   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:06:43.971760   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:06:43.971817   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:06:44.005880   62061 cri.go:89] found id: ""
	I0918 21:06:44.005915   62061 logs.go:276] 0 containers: []
	W0918 21:06:44.005927   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:06:44.005935   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:06:44.006003   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:06:44.038144   62061 cri.go:89] found id: ""
	I0918 21:06:44.038171   62061 logs.go:276] 0 containers: []
	W0918 21:06:44.038180   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:06:44.038186   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:06:44.038239   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:06:44.073920   62061 cri.go:89] found id: ""
	I0918 21:06:44.073953   62061 logs.go:276] 0 containers: []
	W0918 21:06:44.073966   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:06:44.073978   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:06:44.073992   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:06:44.142881   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:06:44.142910   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:06:44.142926   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:06:44.222302   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:06:44.222341   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:06:44.265914   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:06:44.265952   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:06:44.316229   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:06:44.316266   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:06:42.856192   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:44.857673   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:43.132992   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:45.134509   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:43.888282   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:45.889414   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:46.830793   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:46.843829   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:06:46.843886   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:06:46.878566   62061 cri.go:89] found id: ""
	I0918 21:06:46.878607   62061 logs.go:276] 0 containers: []
	W0918 21:06:46.878621   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:06:46.878630   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:06:46.878710   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:06:46.915576   62061 cri.go:89] found id: ""
	I0918 21:06:46.915603   62061 logs.go:276] 0 containers: []
	W0918 21:06:46.915611   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:06:46.915616   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:06:46.915663   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:06:46.952982   62061 cri.go:89] found id: ""
	I0918 21:06:46.953014   62061 logs.go:276] 0 containers: []
	W0918 21:06:46.953032   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:06:46.953039   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:06:46.953104   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:06:46.987718   62061 cri.go:89] found id: ""
	I0918 21:06:46.987749   62061 logs.go:276] 0 containers: []
	W0918 21:06:46.987757   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:06:46.987763   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:06:46.987815   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:06:47.022695   62061 cri.go:89] found id: ""
	I0918 21:06:47.022726   62061 logs.go:276] 0 containers: []
	W0918 21:06:47.022735   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:06:47.022741   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:06:47.022799   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:06:47.058058   62061 cri.go:89] found id: ""
	I0918 21:06:47.058086   62061 logs.go:276] 0 containers: []
	W0918 21:06:47.058094   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:06:47.058101   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:06:47.058159   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:06:47.093314   62061 cri.go:89] found id: ""
	I0918 21:06:47.093352   62061 logs.go:276] 0 containers: []
	W0918 21:06:47.093361   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:06:47.093367   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:06:47.093424   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:06:47.126997   62061 cri.go:89] found id: ""
	I0918 21:06:47.127023   62061 logs.go:276] 0 containers: []
	W0918 21:06:47.127033   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:06:47.127041   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:06:47.127053   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:06:47.203239   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:06:47.203282   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:06:47.240982   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:06:47.241013   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:06:47.293553   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:06:47.293591   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:06:47.307462   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:06:47.307493   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:06:47.379500   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:06:47.357136   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:49.359981   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:47.633023   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:49.633350   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:52.134627   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:48.387568   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:50.886679   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:52.887065   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:49.880441   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:49.895816   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:06:49.895903   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:06:49.931916   62061 cri.go:89] found id: ""
	I0918 21:06:49.931941   62061 logs.go:276] 0 containers: []
	W0918 21:06:49.931950   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:06:49.931955   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:06:49.932007   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:06:49.967053   62061 cri.go:89] found id: ""
	I0918 21:06:49.967083   62061 logs.go:276] 0 containers: []
	W0918 21:06:49.967093   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:06:49.967101   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:06:49.967194   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:06:50.002943   62061 cri.go:89] found id: ""
	I0918 21:06:50.003004   62061 logs.go:276] 0 containers: []
	W0918 21:06:50.003015   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:06:50.003023   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:06:50.003085   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:06:50.039445   62061 cri.go:89] found id: ""
	I0918 21:06:50.039478   62061 logs.go:276] 0 containers: []
	W0918 21:06:50.039489   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:06:50.039497   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:06:50.039561   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:06:50.073529   62061 cri.go:89] found id: ""
	I0918 21:06:50.073562   62061 logs.go:276] 0 containers: []
	W0918 21:06:50.073572   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:06:50.073578   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:06:50.073645   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:06:50.109363   62061 cri.go:89] found id: ""
	I0918 21:06:50.109394   62061 logs.go:276] 0 containers: []
	W0918 21:06:50.109406   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:06:50.109413   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:06:50.109485   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:06:50.144387   62061 cri.go:89] found id: ""
	I0918 21:06:50.144418   62061 logs.go:276] 0 containers: []
	W0918 21:06:50.144429   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:06:50.144450   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:06:50.144524   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:06:50.178079   62061 cri.go:89] found id: ""
	I0918 21:06:50.178110   62061 logs.go:276] 0 containers: []
	W0918 21:06:50.178119   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:06:50.178128   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:06:50.178140   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:06:50.228857   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:06:50.228893   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:06:50.242077   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:06:50.242108   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:06:50.312868   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:06:50.312903   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:06:50.312920   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:06:50.392880   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:06:50.392924   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:06:52.931623   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:52.945298   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:06:52.945394   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:06:52.980129   62061 cri.go:89] found id: ""
	I0918 21:06:52.980162   62061 logs.go:276] 0 containers: []
	W0918 21:06:52.980171   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:06:52.980176   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:06:52.980224   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:06:53.017899   62061 cri.go:89] found id: ""
	I0918 21:06:53.017932   62061 logs.go:276] 0 containers: []
	W0918 21:06:53.017941   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:06:53.017947   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:06:53.018015   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:06:53.056594   62061 cri.go:89] found id: ""
	I0918 21:06:53.056619   62061 logs.go:276] 0 containers: []
	W0918 21:06:53.056627   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:06:53.056635   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:06:53.056684   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:06:53.089876   62061 cri.go:89] found id: ""
	I0918 21:06:53.089907   62061 logs.go:276] 0 containers: []
	W0918 21:06:53.089915   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:06:53.089920   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:06:53.089967   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:06:53.122845   62061 cri.go:89] found id: ""
	I0918 21:06:53.122881   62061 logs.go:276] 0 containers: []
	W0918 21:06:53.122890   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:06:53.122904   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:06:53.122956   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:06:53.162103   62061 cri.go:89] found id: ""
	I0918 21:06:53.162137   62061 logs.go:276] 0 containers: []
	W0918 21:06:53.162148   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:06:53.162156   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:06:53.162218   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:06:53.195034   62061 cri.go:89] found id: ""
	I0918 21:06:53.195067   62061 logs.go:276] 0 containers: []
	W0918 21:06:53.195078   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:06:53.195085   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:06:53.195144   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:06:53.229473   62061 cri.go:89] found id: ""
	I0918 21:06:53.229518   62061 logs.go:276] 0 containers: []
	W0918 21:06:53.229542   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:06:53.229556   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:06:53.229577   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:06:53.283929   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:06:53.283973   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:06:53.296932   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:06:53.296965   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:06:53.379339   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:06:53.379369   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:06:53.379410   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:06:53.469608   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:06:53.469652   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:06:51.855788   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:53.857284   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:55.860982   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:54.633423   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:56.633695   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:54.888052   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:57.387393   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:56.009227   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:56.023451   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:06:56.023510   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:06:56.056839   62061 cri.go:89] found id: ""
	I0918 21:06:56.056866   62061 logs.go:276] 0 containers: []
	W0918 21:06:56.056875   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:06:56.056881   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:06:56.056931   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:06:56.091893   62061 cri.go:89] found id: ""
	I0918 21:06:56.091919   62061 logs.go:276] 0 containers: []
	W0918 21:06:56.091928   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:06:56.091933   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:06:56.091980   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:06:56.127432   62061 cri.go:89] found id: ""
	I0918 21:06:56.127456   62061 logs.go:276] 0 containers: []
	W0918 21:06:56.127464   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:06:56.127470   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:06:56.127518   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:06:56.162085   62061 cri.go:89] found id: ""
	I0918 21:06:56.162109   62061 logs.go:276] 0 containers: []
	W0918 21:06:56.162118   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:06:56.162123   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:06:56.162176   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:06:56.195711   62061 cri.go:89] found id: ""
	I0918 21:06:56.195743   62061 logs.go:276] 0 containers: []
	W0918 21:06:56.195753   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:06:56.195759   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:06:56.195809   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:06:56.234481   62061 cri.go:89] found id: ""
	I0918 21:06:56.234507   62061 logs.go:276] 0 containers: []
	W0918 21:06:56.234515   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:06:56.234522   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:06:56.234567   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:06:56.268566   62061 cri.go:89] found id: ""
	I0918 21:06:56.268596   62061 logs.go:276] 0 containers: []
	W0918 21:06:56.268608   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:06:56.268617   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:06:56.268681   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:06:56.309785   62061 cri.go:89] found id: ""
	I0918 21:06:56.309813   62061 logs.go:276] 0 containers: []
	W0918 21:06:56.309824   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:06:56.309835   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:06:56.309874   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:06:56.364834   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:06:56.364880   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:06:56.378424   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:06:56.378451   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:06:56.450482   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:06:56.450511   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:06:56.450522   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:06:56.536261   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:06:56.536305   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:06:59.074494   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:59.087553   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:06:59.087622   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:06:59.128323   62061 cri.go:89] found id: ""
	I0918 21:06:59.128357   62061 logs.go:276] 0 containers: []
	W0918 21:06:59.128368   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:06:59.128375   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:06:59.128436   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:06:59.161135   62061 cri.go:89] found id: ""
	I0918 21:06:59.161161   62061 logs.go:276] 0 containers: []
	W0918 21:06:59.161170   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:06:59.161178   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:06:59.161240   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:06:59.201558   62061 cri.go:89] found id: ""
	I0918 21:06:59.201595   62061 logs.go:276] 0 containers: []
	W0918 21:06:59.201607   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:06:59.201614   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:06:59.201678   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:06:59.235330   62061 cri.go:89] found id: ""
	I0918 21:06:59.235356   62061 logs.go:276] 0 containers: []
	W0918 21:06:59.235378   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:06:59.235385   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:06:59.235450   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:06:59.270957   62061 cri.go:89] found id: ""
	I0918 21:06:59.270994   62061 logs.go:276] 0 containers: []
	W0918 21:06:59.271007   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:06:59.271016   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:06:59.271088   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:06:59.305068   62061 cri.go:89] found id: ""
	I0918 21:06:59.305103   62061 logs.go:276] 0 containers: []
	W0918 21:06:59.305115   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:06:59.305123   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:06:59.305177   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:06:59.338763   62061 cri.go:89] found id: ""
	I0918 21:06:59.338796   62061 logs.go:276] 0 containers: []
	W0918 21:06:59.338809   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:06:59.338818   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:06:59.338887   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:06:59.376557   62061 cri.go:89] found id: ""
	I0918 21:06:59.376585   62061 logs.go:276] 0 containers: []
	W0918 21:06:59.376593   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:06:59.376602   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:06:59.376615   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:06:59.455228   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:06:59.455264   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:06:58.356648   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:00.357253   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:59.133274   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:01.632548   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:59.388183   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:01.886834   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:59.494461   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:06:59.494488   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:06:59.543143   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:06:59.543177   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:06:59.556031   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:06:59.556062   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:06:59.623888   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:02.124743   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:07:02.139392   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:07:02.139451   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:07:02.172176   62061 cri.go:89] found id: ""
	I0918 21:07:02.172201   62061 logs.go:276] 0 containers: []
	W0918 21:07:02.172209   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:07:02.172215   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:07:02.172263   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:07:02.206480   62061 cri.go:89] found id: ""
	I0918 21:07:02.206507   62061 logs.go:276] 0 containers: []
	W0918 21:07:02.206518   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:07:02.206525   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:07:02.206586   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:07:02.240256   62061 cri.go:89] found id: ""
	I0918 21:07:02.240281   62061 logs.go:276] 0 containers: []
	W0918 21:07:02.240289   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:07:02.240295   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:07:02.240353   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:07:02.274001   62061 cri.go:89] found id: ""
	I0918 21:07:02.274034   62061 logs.go:276] 0 containers: []
	W0918 21:07:02.274046   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:07:02.274056   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:07:02.274115   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:07:02.307491   62061 cri.go:89] found id: ""
	I0918 21:07:02.307520   62061 logs.go:276] 0 containers: []
	W0918 21:07:02.307528   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:07:02.307534   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:07:02.307597   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:07:02.344688   62061 cri.go:89] found id: ""
	I0918 21:07:02.344720   62061 logs.go:276] 0 containers: []
	W0918 21:07:02.344731   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:07:02.344739   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:07:02.344805   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:07:02.384046   62061 cri.go:89] found id: ""
	I0918 21:07:02.384077   62061 logs.go:276] 0 containers: []
	W0918 21:07:02.384088   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:07:02.384095   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:07:02.384154   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:07:02.422530   62061 cri.go:89] found id: ""
	I0918 21:07:02.422563   62061 logs.go:276] 0 containers: []
	W0918 21:07:02.422575   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:07:02.422586   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:07:02.422604   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:07:02.461236   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:07:02.461266   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:07:02.512280   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:07:02.512320   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:07:02.525404   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:07:02.525435   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:07:02.599746   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:02.599773   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:07:02.599785   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:07:02.856077   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:04.858098   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:04.133240   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:06.135937   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:03.887306   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:05.888675   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:05.183459   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:07:05.197615   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:07:05.197681   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:07:05.234313   62061 cri.go:89] found id: ""
	I0918 21:07:05.234354   62061 logs.go:276] 0 containers: []
	W0918 21:07:05.234365   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:07:05.234371   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:07:05.234429   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:07:05.271436   62061 cri.go:89] found id: ""
	I0918 21:07:05.271466   62061 logs.go:276] 0 containers: []
	W0918 21:07:05.271477   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:07:05.271484   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:07:05.271554   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:07:05.307007   62061 cri.go:89] found id: ""
	I0918 21:07:05.307038   62061 logs.go:276] 0 containers: []
	W0918 21:07:05.307050   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:07:05.307056   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:07:05.307120   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:07:05.341764   62061 cri.go:89] found id: ""
	I0918 21:07:05.341810   62061 logs.go:276] 0 containers: []
	W0918 21:07:05.341831   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:07:05.341840   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:07:05.341908   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:07:05.381611   62061 cri.go:89] found id: ""
	I0918 21:07:05.381646   62061 logs.go:276] 0 containers: []
	W0918 21:07:05.381658   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:07:05.381666   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:07:05.381747   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:07:05.421468   62061 cri.go:89] found id: ""
	I0918 21:07:05.421499   62061 logs.go:276] 0 containers: []
	W0918 21:07:05.421520   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:07:05.421528   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:07:05.421590   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:07:05.457320   62061 cri.go:89] found id: ""
	I0918 21:07:05.457348   62061 logs.go:276] 0 containers: []
	W0918 21:07:05.457359   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:07:05.457367   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:07:05.457425   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:07:05.493879   62061 cri.go:89] found id: ""
	I0918 21:07:05.493915   62061 logs.go:276] 0 containers: []
	W0918 21:07:05.493928   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:07:05.493943   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:07:05.493955   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:07:05.543825   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:07:05.543867   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:07:05.558254   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:07:05.558285   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:07:05.627065   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:05.627091   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:07:05.627103   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:07:05.703838   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:07:05.703876   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:07:08.244087   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:07:08.257807   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:07:08.257897   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:07:08.292034   62061 cri.go:89] found id: ""
	I0918 21:07:08.292064   62061 logs.go:276] 0 containers: []
	W0918 21:07:08.292076   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:07:08.292084   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:07:08.292154   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:07:08.325858   62061 cri.go:89] found id: ""
	I0918 21:07:08.325897   62061 logs.go:276] 0 containers: []
	W0918 21:07:08.325910   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:07:08.325918   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:07:08.325987   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:07:08.369427   62061 cri.go:89] found id: ""
	I0918 21:07:08.369457   62061 logs.go:276] 0 containers: []
	W0918 21:07:08.369468   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:07:08.369475   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:07:08.369536   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:07:08.409402   62061 cri.go:89] found id: ""
	I0918 21:07:08.409434   62061 logs.go:276] 0 containers: []
	W0918 21:07:08.409444   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:07:08.409451   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:07:08.409515   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:07:08.445530   62061 cri.go:89] found id: ""
	I0918 21:07:08.445565   62061 logs.go:276] 0 containers: []
	W0918 21:07:08.445575   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:07:08.445584   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:07:08.445646   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:07:08.481905   62061 cri.go:89] found id: ""
	I0918 21:07:08.481935   62061 logs.go:276] 0 containers: []
	W0918 21:07:08.481945   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:07:08.481952   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:07:08.482020   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:07:08.516710   62061 cri.go:89] found id: ""
	I0918 21:07:08.516738   62061 logs.go:276] 0 containers: []
	W0918 21:07:08.516746   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:07:08.516752   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:07:08.516802   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:07:08.550707   62061 cri.go:89] found id: ""
	I0918 21:07:08.550740   62061 logs.go:276] 0 containers: []
	W0918 21:07:08.550760   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:07:08.550768   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:07:08.550782   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:07:08.624821   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:08.624843   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:07:08.624854   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:07:08.705347   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:07:08.705383   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:07:08.744394   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:07:08.744425   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:07:08.799336   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:07:08.799375   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:07:07.358154   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:09.857118   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:08.633211   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:11.132676   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:08.388884   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:10.887356   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:11.312920   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:07:11.327524   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:07:11.327598   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:07:11.366177   62061 cri.go:89] found id: ""
	I0918 21:07:11.366209   62061 logs.go:276] 0 containers: []
	W0918 21:07:11.366221   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:07:11.366227   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:07:11.366274   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:07:11.405296   62061 cri.go:89] found id: ""
	I0918 21:07:11.405326   62061 logs.go:276] 0 containers: []
	W0918 21:07:11.405344   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:07:11.405351   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:07:11.405413   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:07:11.441786   62061 cri.go:89] found id: ""
	I0918 21:07:11.441818   62061 logs.go:276] 0 containers: []
	W0918 21:07:11.441829   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:07:11.441837   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:07:11.441891   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:07:11.476144   62061 cri.go:89] found id: ""
	I0918 21:07:11.476167   62061 logs.go:276] 0 containers: []
	W0918 21:07:11.476175   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:07:11.476181   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:07:11.476224   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:07:11.512115   62061 cri.go:89] found id: ""
	I0918 21:07:11.512143   62061 logs.go:276] 0 containers: []
	W0918 21:07:11.512154   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:07:11.512160   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:07:11.512221   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:07:11.545466   62061 cri.go:89] found id: ""
	I0918 21:07:11.545494   62061 logs.go:276] 0 containers: []
	W0918 21:07:11.545504   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:07:11.545512   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:07:11.545575   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:07:11.579940   62061 cri.go:89] found id: ""
	I0918 21:07:11.579965   62061 logs.go:276] 0 containers: []
	W0918 21:07:11.579975   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:07:11.579994   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:07:11.580080   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:07:11.613454   62061 cri.go:89] found id: ""
	I0918 21:07:11.613480   62061 logs.go:276] 0 containers: []
	W0918 21:07:11.613491   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:07:11.613512   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:07:11.613535   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:07:11.669079   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:07:11.669140   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:07:11.682614   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:07:11.682639   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:07:11.756876   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:11.756901   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:07:11.756914   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:07:11.835046   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:07:11.835086   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:07:14.377541   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:07:14.392083   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:07:14.392161   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:07:14.431752   62061 cri.go:89] found id: ""
	I0918 21:07:14.431786   62061 logs.go:276] 0 containers: []
	W0918 21:07:14.431795   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:07:14.431800   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:07:14.431856   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:07:14.468530   62061 cri.go:89] found id: ""
	I0918 21:07:14.468562   62061 logs.go:276] 0 containers: []
	W0918 21:07:14.468573   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:07:14.468580   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:07:14.468643   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:07:11.857763   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:14.357253   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:13.132895   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:15.133426   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:13.386537   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:15.387844   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:17.888743   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:14.506510   62061 cri.go:89] found id: ""
	I0918 21:07:14.506540   62061 logs.go:276] 0 containers: []
	W0918 21:07:14.506550   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:07:14.506557   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:07:14.506625   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:07:14.538986   62061 cri.go:89] found id: ""
	I0918 21:07:14.539021   62061 logs.go:276] 0 containers: []
	W0918 21:07:14.539032   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:07:14.539039   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:07:14.539103   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:07:14.572390   62061 cri.go:89] found id: ""
	I0918 21:07:14.572421   62061 logs.go:276] 0 containers: []
	W0918 21:07:14.572432   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:07:14.572440   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:07:14.572499   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:07:14.607875   62061 cri.go:89] found id: ""
	I0918 21:07:14.607905   62061 logs.go:276] 0 containers: []
	W0918 21:07:14.607917   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:07:14.607924   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:07:14.607988   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:07:14.642584   62061 cri.go:89] found id: ""
	I0918 21:07:14.642616   62061 logs.go:276] 0 containers: []
	W0918 21:07:14.642625   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:07:14.642630   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:07:14.642685   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:07:14.682117   62061 cri.go:89] found id: ""
	I0918 21:07:14.682144   62061 logs.go:276] 0 containers: []
	W0918 21:07:14.682152   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:07:14.682163   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:07:14.682177   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:07:14.694780   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:07:14.694808   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:07:14.767988   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:14.768036   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:07:14.768055   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:07:14.860620   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:07:14.860655   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:07:14.905071   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:07:14.905102   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:07:17.455582   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:07:17.468713   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:07:17.468774   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:07:17.503682   62061 cri.go:89] found id: ""
	I0918 21:07:17.503709   62061 logs.go:276] 0 containers: []
	W0918 21:07:17.503721   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:07:17.503729   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:07:17.503794   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:07:17.536581   62061 cri.go:89] found id: ""
	I0918 21:07:17.536611   62061 logs.go:276] 0 containers: []
	W0918 21:07:17.536619   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:07:17.536625   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:07:17.536673   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:07:17.570483   62061 cri.go:89] found id: ""
	I0918 21:07:17.570510   62061 logs.go:276] 0 containers: []
	W0918 21:07:17.570518   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:07:17.570523   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:07:17.570591   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:07:17.604102   62061 cri.go:89] found id: ""
	I0918 21:07:17.604140   62061 logs.go:276] 0 containers: []
	W0918 21:07:17.604152   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:07:17.604160   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:07:17.604229   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:07:17.638633   62061 cri.go:89] found id: ""
	I0918 21:07:17.638661   62061 logs.go:276] 0 containers: []
	W0918 21:07:17.638672   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:07:17.638678   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:07:17.638725   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:07:17.673016   62061 cri.go:89] found id: ""
	I0918 21:07:17.673048   62061 logs.go:276] 0 containers: []
	W0918 21:07:17.673057   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:07:17.673063   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:07:17.673116   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:07:17.708576   62061 cri.go:89] found id: ""
	I0918 21:07:17.708601   62061 logs.go:276] 0 containers: []
	W0918 21:07:17.708609   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:07:17.708620   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:07:17.708672   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:07:17.742820   62061 cri.go:89] found id: ""
	I0918 21:07:17.742843   62061 logs.go:276] 0 containers: []
	W0918 21:07:17.742851   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:07:17.742859   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:07:17.742870   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:07:17.757091   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:07:17.757120   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:07:17.824116   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:17.824137   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:07:17.824151   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:07:17.911013   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:07:17.911065   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:07:17.950517   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:07:17.950553   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:07:16.857284   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:19.357336   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:17.635033   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:20.134331   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:20.388498   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:22.887115   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:20.502423   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:07:20.529214   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:07:20.529297   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:07:20.583618   62061 cri.go:89] found id: ""
	I0918 21:07:20.583660   62061 logs.go:276] 0 containers: []
	W0918 21:07:20.583672   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:07:20.583682   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:07:20.583751   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:07:20.633051   62061 cri.go:89] found id: ""
	I0918 21:07:20.633079   62061 logs.go:276] 0 containers: []
	W0918 21:07:20.633091   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:07:20.633099   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:07:20.633158   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:07:20.668513   62061 cri.go:89] found id: ""
	I0918 21:07:20.668543   62061 logs.go:276] 0 containers: []
	W0918 21:07:20.668552   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:07:20.668558   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:07:20.668611   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:07:20.702648   62061 cri.go:89] found id: ""
	I0918 21:07:20.702683   62061 logs.go:276] 0 containers: []
	W0918 21:07:20.702698   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:07:20.702706   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:07:20.702767   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:07:20.740639   62061 cri.go:89] found id: ""
	I0918 21:07:20.740666   62061 logs.go:276] 0 containers: []
	W0918 21:07:20.740674   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:07:20.740680   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:07:20.740731   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:07:20.771819   62061 cri.go:89] found id: ""
	I0918 21:07:20.771847   62061 logs.go:276] 0 containers: []
	W0918 21:07:20.771855   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:07:20.771863   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:07:20.771913   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:07:20.803613   62061 cri.go:89] found id: ""
	I0918 21:07:20.803639   62061 logs.go:276] 0 containers: []
	W0918 21:07:20.803647   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:07:20.803653   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:07:20.803708   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:07:20.835671   62061 cri.go:89] found id: ""
	I0918 21:07:20.835695   62061 logs.go:276] 0 containers: []
	W0918 21:07:20.835702   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:07:20.835710   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:07:20.835720   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:07:20.886159   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:07:20.886191   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:07:20.901412   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:07:20.901443   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:07:20.979009   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:20.979034   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:07:20.979045   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:07:21.059895   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:07:21.059930   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:07:23.598133   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:07:23.611038   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:07:23.611102   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:07:23.648037   62061 cri.go:89] found id: ""
	I0918 21:07:23.648067   62061 logs.go:276] 0 containers: []
	W0918 21:07:23.648075   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:07:23.648081   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:07:23.648135   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:07:23.683956   62061 cri.go:89] found id: ""
	I0918 21:07:23.683985   62061 logs.go:276] 0 containers: []
	W0918 21:07:23.683995   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:07:23.684003   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:07:23.684083   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:07:23.719274   62061 cri.go:89] found id: ""
	I0918 21:07:23.719301   62061 logs.go:276] 0 containers: []
	W0918 21:07:23.719309   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:07:23.719315   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:07:23.719378   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:07:23.757101   62061 cri.go:89] found id: ""
	I0918 21:07:23.757133   62061 logs.go:276] 0 containers: []
	W0918 21:07:23.757145   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:07:23.757152   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:07:23.757202   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:07:23.790115   62061 cri.go:89] found id: ""
	I0918 21:07:23.790149   62061 logs.go:276] 0 containers: []
	W0918 21:07:23.790160   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:07:23.790168   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:07:23.790231   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:07:23.823089   62061 cri.go:89] found id: ""
	I0918 21:07:23.823120   62061 logs.go:276] 0 containers: []
	W0918 21:07:23.823131   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:07:23.823140   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:07:23.823192   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:07:23.858566   62061 cri.go:89] found id: ""
	I0918 21:07:23.858591   62061 logs.go:276] 0 containers: []
	W0918 21:07:23.858601   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:07:23.858613   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:07:23.858674   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:07:23.892650   62061 cri.go:89] found id: ""
	I0918 21:07:23.892679   62061 logs.go:276] 0 containers: []
	W0918 21:07:23.892690   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:07:23.892699   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:07:23.892713   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:07:23.941087   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:07:23.941123   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:07:23.955674   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:07:23.955712   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:07:24.026087   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:24.026115   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:07:24.026132   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:07:24.105237   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:07:24.105272   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:07:21.857391   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:23.857954   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:26.356553   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:22.633058   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:25.133773   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:25.387123   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:27.886688   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:26.646822   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:07:26.659978   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:07:26.660087   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:07:26.695776   62061 cri.go:89] found id: ""
	I0918 21:07:26.695804   62061 logs.go:276] 0 containers: []
	W0918 21:07:26.695817   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:07:26.695824   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:07:26.695875   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:07:26.729717   62061 cri.go:89] found id: ""
	I0918 21:07:26.729747   62061 logs.go:276] 0 containers: []
	W0918 21:07:26.729756   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:07:26.729762   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:07:26.729830   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:07:26.763848   62061 cri.go:89] found id: ""
	I0918 21:07:26.763886   62061 logs.go:276] 0 containers: []
	W0918 21:07:26.763907   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:07:26.763915   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:07:26.763974   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:07:26.798670   62061 cri.go:89] found id: ""
	I0918 21:07:26.798703   62061 logs.go:276] 0 containers: []
	W0918 21:07:26.798713   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:07:26.798720   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:07:26.798789   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:07:26.833778   62061 cri.go:89] found id: ""
	I0918 21:07:26.833804   62061 logs.go:276] 0 containers: []
	W0918 21:07:26.833815   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:07:26.833822   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:07:26.833905   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:07:26.869657   62061 cri.go:89] found id: ""
	I0918 21:07:26.869688   62061 logs.go:276] 0 containers: []
	W0918 21:07:26.869699   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:07:26.869707   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:07:26.869772   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:07:26.908163   62061 cri.go:89] found id: ""
	I0918 21:07:26.908194   62061 logs.go:276] 0 containers: []
	W0918 21:07:26.908205   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:07:26.908212   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:07:26.908269   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:07:26.943416   62061 cri.go:89] found id: ""
	I0918 21:07:26.943442   62061 logs.go:276] 0 containers: []
	W0918 21:07:26.943451   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:07:26.943459   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:07:26.943471   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:07:26.993796   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:07:26.993833   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:07:27.007619   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:07:27.007661   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:07:27.072880   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:27.072904   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:07:27.072919   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:07:27.148984   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:07:27.149031   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:07:28.357006   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:30.857527   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:27.632697   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:30.133718   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:29.887981   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:32.387478   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:29.690106   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:07:29.702853   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:07:29.702932   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:07:29.736427   62061 cri.go:89] found id: ""
	I0918 21:07:29.736461   62061 logs.go:276] 0 containers: []
	W0918 21:07:29.736473   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:07:29.736480   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:07:29.736537   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:07:29.771287   62061 cri.go:89] found id: ""
	I0918 21:07:29.771317   62061 logs.go:276] 0 containers: []
	W0918 21:07:29.771328   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:07:29.771334   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:07:29.771398   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:07:29.804826   62061 cri.go:89] found id: ""
	I0918 21:07:29.804861   62061 logs.go:276] 0 containers: []
	W0918 21:07:29.804875   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:07:29.804882   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:07:29.804934   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:07:29.838570   62061 cri.go:89] found id: ""
	I0918 21:07:29.838598   62061 logs.go:276] 0 containers: []
	W0918 21:07:29.838608   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:07:29.838614   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:07:29.838659   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:07:29.871591   62061 cri.go:89] found id: ""
	I0918 21:07:29.871621   62061 logs.go:276] 0 containers: []
	W0918 21:07:29.871631   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:07:29.871638   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:07:29.871699   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:07:29.905789   62061 cri.go:89] found id: ""
	I0918 21:07:29.905824   62061 logs.go:276] 0 containers: []
	W0918 21:07:29.905835   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:07:29.905846   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:07:29.905910   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:07:29.945914   62061 cri.go:89] found id: ""
	I0918 21:07:29.945941   62061 logs.go:276] 0 containers: []
	W0918 21:07:29.945950   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:07:29.945955   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:07:29.946004   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:07:29.979365   62061 cri.go:89] found id: ""
	I0918 21:07:29.979396   62061 logs.go:276] 0 containers: []
	W0918 21:07:29.979405   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:07:29.979413   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:07:29.979425   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:07:30.026925   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:07:30.026956   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:07:30.040589   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:07:30.040623   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:07:30.112900   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:30.112928   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:07:30.112948   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:07:30.195208   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:07:30.195256   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:07:32.735787   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:07:32.748969   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:07:32.749049   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:07:32.782148   62061 cri.go:89] found id: ""
	I0918 21:07:32.782179   62061 logs.go:276] 0 containers: []
	W0918 21:07:32.782189   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:07:32.782196   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:07:32.782262   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:07:32.816119   62061 cri.go:89] found id: ""
	I0918 21:07:32.816144   62061 logs.go:276] 0 containers: []
	W0918 21:07:32.816152   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:07:32.816158   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:07:32.816203   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:07:32.849817   62061 cri.go:89] found id: ""
	I0918 21:07:32.849844   62061 logs.go:276] 0 containers: []
	W0918 21:07:32.849853   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:07:32.849859   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:07:32.849911   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:07:32.884464   62061 cri.go:89] found id: ""
	I0918 21:07:32.884494   62061 logs.go:276] 0 containers: []
	W0918 21:07:32.884506   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:07:32.884513   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:07:32.884576   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:07:32.917942   62061 cri.go:89] found id: ""
	I0918 21:07:32.917973   62061 logs.go:276] 0 containers: []
	W0918 21:07:32.917983   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:07:32.917990   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:07:32.918051   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:07:32.950748   62061 cri.go:89] found id: ""
	I0918 21:07:32.950780   62061 logs.go:276] 0 containers: []
	W0918 21:07:32.950791   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:07:32.950800   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:07:32.950864   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:07:32.985059   62061 cri.go:89] found id: ""
	I0918 21:07:32.985092   62061 logs.go:276] 0 containers: []
	W0918 21:07:32.985100   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:07:32.985106   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:07:32.985167   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:07:33.021496   62061 cri.go:89] found id: ""
	I0918 21:07:33.021526   62061 logs.go:276] 0 containers: []
	W0918 21:07:33.021536   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:07:33.021546   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:07:33.021560   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:07:33.071744   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:07:33.071793   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:07:33.086533   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:07:33.086565   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:07:33.155274   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:33.155307   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:07:33.155326   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:07:33.238301   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:07:33.238342   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:07:33.356874   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:35.357445   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:32.631814   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:34.631954   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:36.633057   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:34.387725   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:36.887031   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:35.777905   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:07:35.792442   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:07:35.792510   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:07:35.827310   62061 cri.go:89] found id: ""
	I0918 21:07:35.827339   62061 logs.go:276] 0 containers: []
	W0918 21:07:35.827350   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:07:35.827357   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:07:35.827422   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:07:35.861873   62061 cri.go:89] found id: ""
	I0918 21:07:35.861897   62061 logs.go:276] 0 containers: []
	W0918 21:07:35.861905   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:07:35.861916   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:07:35.861966   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:07:35.895295   62061 cri.go:89] found id: ""
	I0918 21:07:35.895324   62061 logs.go:276] 0 containers: []
	W0918 21:07:35.895345   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:07:35.895353   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:07:35.895410   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:07:35.927690   62061 cri.go:89] found id: ""
	I0918 21:07:35.927717   62061 logs.go:276] 0 containers: []
	W0918 21:07:35.927728   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:07:35.927736   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:07:35.927788   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:07:35.963954   62061 cri.go:89] found id: ""
	I0918 21:07:35.963979   62061 logs.go:276] 0 containers: []
	W0918 21:07:35.963986   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:07:35.963992   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:07:35.964059   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:07:36.002287   62061 cri.go:89] found id: ""
	I0918 21:07:36.002313   62061 logs.go:276] 0 containers: []
	W0918 21:07:36.002322   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:07:36.002328   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:07:36.002380   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:07:36.036748   62061 cri.go:89] found id: ""
	I0918 21:07:36.036778   62061 logs.go:276] 0 containers: []
	W0918 21:07:36.036790   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:07:36.036797   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:07:36.036861   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:07:36.072616   62061 cri.go:89] found id: ""
	I0918 21:07:36.072647   62061 logs.go:276] 0 containers: []
	W0918 21:07:36.072658   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:07:36.072668   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:07:36.072683   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:07:36.121723   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:07:36.121762   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:07:36.136528   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:07:36.136560   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:07:36.202878   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:36.202911   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:07:36.202926   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:07:36.287882   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:07:36.287924   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:07:38.825888   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:07:38.838437   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:07:38.838510   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:07:38.872312   62061 cri.go:89] found id: ""
	I0918 21:07:38.872348   62061 logs.go:276] 0 containers: []
	W0918 21:07:38.872358   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:07:38.872366   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:07:38.872417   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:07:38.905927   62061 cri.go:89] found id: ""
	I0918 21:07:38.905962   62061 logs.go:276] 0 containers: []
	W0918 21:07:38.905975   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:07:38.905982   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:07:38.906042   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:07:38.938245   62061 cri.go:89] found id: ""
	I0918 21:07:38.938274   62061 logs.go:276] 0 containers: []
	W0918 21:07:38.938286   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:07:38.938293   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:07:38.938358   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:07:38.971497   62061 cri.go:89] found id: ""
	I0918 21:07:38.971536   62061 logs.go:276] 0 containers: []
	W0918 21:07:38.971548   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:07:38.971555   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:07:38.971625   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:07:39.004755   62061 cri.go:89] found id: ""
	I0918 21:07:39.004784   62061 logs.go:276] 0 containers: []
	W0918 21:07:39.004793   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:07:39.004799   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:07:39.004854   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:07:39.036237   62061 cri.go:89] found id: ""
	I0918 21:07:39.036265   62061 logs.go:276] 0 containers: []
	W0918 21:07:39.036273   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:07:39.036279   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:07:39.036328   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:07:39.071504   62061 cri.go:89] found id: ""
	I0918 21:07:39.071537   62061 logs.go:276] 0 containers: []
	W0918 21:07:39.071549   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:07:39.071557   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:07:39.071623   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:07:39.107035   62061 cri.go:89] found id: ""
	I0918 21:07:39.107063   62061 logs.go:276] 0 containers: []
	W0918 21:07:39.107071   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:07:39.107080   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:07:39.107090   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:07:39.158078   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:07:39.158113   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:07:39.172846   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:07:39.172875   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:07:39.240577   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:39.240602   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:07:39.240618   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:07:39.319762   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:07:39.319797   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:07:37.857371   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:40.356710   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:39.133586   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:41.632538   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:38.887485   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:41.386252   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:41.856586   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:07:41.870308   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:07:41.870388   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:07:41.905657   62061 cri.go:89] found id: ""
	I0918 21:07:41.905688   62061 logs.go:276] 0 containers: []
	W0918 21:07:41.905699   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:07:41.905706   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:07:41.905766   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:07:41.944507   62061 cri.go:89] found id: ""
	I0918 21:07:41.944544   62061 logs.go:276] 0 containers: []
	W0918 21:07:41.944555   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:07:41.944566   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:07:41.944634   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:07:41.979217   62061 cri.go:89] found id: ""
	I0918 21:07:41.979252   62061 logs.go:276] 0 containers: []
	W0918 21:07:41.979271   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:07:41.979279   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:07:41.979346   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:07:42.013613   62061 cri.go:89] found id: ""
	I0918 21:07:42.013641   62061 logs.go:276] 0 containers: []
	W0918 21:07:42.013652   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:07:42.013659   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:07:42.013725   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:07:42.049225   62061 cri.go:89] found id: ""
	I0918 21:07:42.049259   62061 logs.go:276] 0 containers: []
	W0918 21:07:42.049271   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:07:42.049279   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:07:42.049364   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:07:42.085737   62061 cri.go:89] found id: ""
	I0918 21:07:42.085763   62061 logs.go:276] 0 containers: []
	W0918 21:07:42.085775   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:07:42.085782   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:07:42.085843   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:07:42.121326   62061 cri.go:89] found id: ""
	I0918 21:07:42.121356   62061 logs.go:276] 0 containers: []
	W0918 21:07:42.121365   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:07:42.121371   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:07:42.121428   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:07:42.157070   62061 cri.go:89] found id: ""
	I0918 21:07:42.157097   62061 logs.go:276] 0 containers: []
	W0918 21:07:42.157107   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:07:42.157118   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:07:42.157133   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:07:42.232110   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:07:42.232162   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:07:42.270478   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:07:42.270517   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:07:42.324545   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:07:42.324586   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:07:42.339928   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:07:42.339962   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:07:42.415124   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:42.356847   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:44.856845   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:43.633029   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:46.134786   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:43.387596   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:45.887071   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:44.915316   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:07:44.928261   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:07:44.928331   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:07:44.959641   62061 cri.go:89] found id: ""
	I0918 21:07:44.959673   62061 logs.go:276] 0 containers: []
	W0918 21:07:44.959682   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:07:44.959689   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:07:44.959791   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:07:44.991744   62061 cri.go:89] found id: ""
	I0918 21:07:44.991778   62061 logs.go:276] 0 containers: []
	W0918 21:07:44.991787   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:07:44.991803   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:07:44.991877   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:07:45.024228   62061 cri.go:89] found id: ""
	I0918 21:07:45.024261   62061 logs.go:276] 0 containers: []
	W0918 21:07:45.024272   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:07:45.024280   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:07:45.024355   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:07:45.061905   62061 cri.go:89] found id: ""
	I0918 21:07:45.061931   62061 logs.go:276] 0 containers: []
	W0918 21:07:45.061940   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:07:45.061946   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:07:45.061994   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:07:45.099224   62061 cri.go:89] found id: ""
	I0918 21:07:45.099251   62061 logs.go:276] 0 containers: []
	W0918 21:07:45.099259   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:07:45.099266   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:07:45.099327   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:07:45.137235   62061 cri.go:89] found id: ""
	I0918 21:07:45.137262   62061 logs.go:276] 0 containers: []
	W0918 21:07:45.137270   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:07:45.137278   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:07:45.137333   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:07:45.174343   62061 cri.go:89] found id: ""
	I0918 21:07:45.174370   62061 logs.go:276] 0 containers: []
	W0918 21:07:45.174379   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:07:45.174385   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:07:45.174432   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:07:45.209278   62061 cri.go:89] found id: ""
	I0918 21:07:45.209306   62061 logs.go:276] 0 containers: []
	W0918 21:07:45.209316   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:07:45.209326   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:07:45.209341   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:07:45.222376   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:07:45.222409   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:07:45.294562   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:45.294595   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:07:45.294607   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:07:45.382582   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:07:45.382631   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:07:45.424743   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:07:45.424780   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:07:47.978171   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:07:47.990485   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:07:47.990554   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:07:48.024465   62061 cri.go:89] found id: ""
	I0918 21:07:48.024494   62061 logs.go:276] 0 containers: []
	W0918 21:07:48.024505   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:07:48.024512   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:07:48.024575   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:07:48.058838   62061 cri.go:89] found id: ""
	I0918 21:07:48.058871   62061 logs.go:276] 0 containers: []
	W0918 21:07:48.058899   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:07:48.058905   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:07:48.058959   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:07:48.094307   62061 cri.go:89] found id: ""
	I0918 21:07:48.094335   62061 logs.go:276] 0 containers: []
	W0918 21:07:48.094343   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:07:48.094349   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:07:48.094398   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:07:48.127497   62061 cri.go:89] found id: ""
	I0918 21:07:48.127529   62061 logs.go:276] 0 containers: []
	W0918 21:07:48.127538   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:07:48.127544   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:07:48.127590   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:07:48.160440   62061 cri.go:89] found id: ""
	I0918 21:07:48.160465   62061 logs.go:276] 0 containers: []
	W0918 21:07:48.160473   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:07:48.160478   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:07:48.160527   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:07:48.195581   62061 cri.go:89] found id: ""
	I0918 21:07:48.195615   62061 logs.go:276] 0 containers: []
	W0918 21:07:48.195624   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:07:48.195631   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:07:48.195675   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:07:48.226478   62061 cri.go:89] found id: ""
	I0918 21:07:48.226506   62061 logs.go:276] 0 containers: []
	W0918 21:07:48.226514   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:07:48.226519   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:07:48.226566   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:07:48.259775   62061 cri.go:89] found id: ""
	I0918 21:07:48.259804   62061 logs.go:276] 0 containers: []
	W0918 21:07:48.259815   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:07:48.259825   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:07:48.259839   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:07:48.274364   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:07:48.274391   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:07:48.347131   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:48.347151   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:07:48.347163   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:07:48.430569   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:07:48.430607   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:07:48.472972   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:07:48.473007   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:07:47.356907   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:49.857984   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:48.633550   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:51.133639   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:48.388136   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:50.888317   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:51.024429   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:07:51.037249   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:07:51.037328   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:07:51.070137   62061 cri.go:89] found id: ""
	I0918 21:07:51.070173   62061 logs.go:276] 0 containers: []
	W0918 21:07:51.070195   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:07:51.070203   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:07:51.070276   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:07:51.105014   62061 cri.go:89] found id: ""
	I0918 21:07:51.105039   62061 logs.go:276] 0 containers: []
	W0918 21:07:51.105048   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:07:51.105054   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:07:51.105101   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:07:51.143287   62061 cri.go:89] found id: ""
	I0918 21:07:51.143310   62061 logs.go:276] 0 containers: []
	W0918 21:07:51.143318   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:07:51.143325   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:07:51.143372   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:07:51.176811   62061 cri.go:89] found id: ""
	I0918 21:07:51.176838   62061 logs.go:276] 0 containers: []
	W0918 21:07:51.176846   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:07:51.176852   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:07:51.176898   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:07:51.210817   62061 cri.go:89] found id: ""
	I0918 21:07:51.210842   62061 logs.go:276] 0 containers: []
	W0918 21:07:51.210850   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:07:51.210856   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:07:51.210916   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:07:51.243968   62061 cri.go:89] found id: ""
	I0918 21:07:51.244002   62061 logs.go:276] 0 containers: []
	W0918 21:07:51.244035   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:07:51.244043   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:07:51.244104   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:07:51.279078   62061 cri.go:89] found id: ""
	I0918 21:07:51.279108   62061 logs.go:276] 0 containers: []
	W0918 21:07:51.279117   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:07:51.279122   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:07:51.279173   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:07:51.315033   62061 cri.go:89] found id: ""
	I0918 21:07:51.315082   62061 logs.go:276] 0 containers: []
	W0918 21:07:51.315091   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:07:51.315100   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:07:51.315111   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:07:51.391927   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:07:51.391976   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:07:51.435515   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:07:51.435542   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:07:51.488404   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:07:51.488442   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:07:51.502019   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:07:51.502065   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:07:51.567893   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:54.068142   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:07:54.082544   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:07:54.082619   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:07:54.118600   62061 cri.go:89] found id: ""
	I0918 21:07:54.118629   62061 logs.go:276] 0 containers: []
	W0918 21:07:54.118638   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:07:54.118644   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:07:54.118695   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:07:54.154086   62061 cri.go:89] found id: ""
	I0918 21:07:54.154115   62061 logs.go:276] 0 containers: []
	W0918 21:07:54.154127   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:07:54.154135   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:07:54.154187   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:07:54.188213   62061 cri.go:89] found id: ""
	I0918 21:07:54.188246   62061 logs.go:276] 0 containers: []
	W0918 21:07:54.188257   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:07:54.188266   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:07:54.188329   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:07:54.221682   62061 cri.go:89] found id: ""
	I0918 21:07:54.221712   62061 logs.go:276] 0 containers: []
	W0918 21:07:54.221721   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:07:54.221728   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:07:54.221791   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:07:54.254746   62061 cri.go:89] found id: ""
	I0918 21:07:54.254770   62061 logs.go:276] 0 containers: []
	W0918 21:07:54.254778   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:07:54.254783   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:07:54.254844   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:07:54.288249   62061 cri.go:89] found id: ""
	I0918 21:07:54.288273   62061 logs.go:276] 0 containers: []
	W0918 21:07:54.288281   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:07:54.288288   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:07:54.288349   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:07:54.323390   62061 cri.go:89] found id: ""
	I0918 21:07:54.323422   62061 logs.go:276] 0 containers: []
	W0918 21:07:54.323433   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:07:54.323440   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:07:54.323507   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:07:54.357680   62061 cri.go:89] found id: ""
	I0918 21:07:54.357709   62061 logs.go:276] 0 containers: []
	W0918 21:07:54.357719   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:07:54.357734   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:07:54.357748   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:07:54.396644   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:07:54.396683   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:07:54.469136   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:54.469175   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:07:54.469191   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:07:52.357187   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:54.857437   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:53.633161   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:56.132554   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:53.386646   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:55.387377   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:57.387524   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:54.547848   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:07:54.547884   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:07:54.585758   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:07:54.585788   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:07:57.139198   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:07:57.152342   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:07:57.152429   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:07:57.186747   62061 cri.go:89] found id: ""
	I0918 21:07:57.186778   62061 logs.go:276] 0 containers: []
	W0918 21:07:57.186789   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:07:57.186796   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:07:57.186855   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:07:57.219211   62061 cri.go:89] found id: ""
	I0918 21:07:57.219239   62061 logs.go:276] 0 containers: []
	W0918 21:07:57.219248   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:07:57.219256   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:07:57.219315   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:07:57.251361   62061 cri.go:89] found id: ""
	I0918 21:07:57.251388   62061 logs.go:276] 0 containers: []
	W0918 21:07:57.251396   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:07:57.251401   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:07:57.251450   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:07:57.285467   62061 cri.go:89] found id: ""
	I0918 21:07:57.285501   62061 logs.go:276] 0 containers: []
	W0918 21:07:57.285512   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:07:57.285519   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:07:57.285580   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:07:57.317973   62061 cri.go:89] found id: ""
	I0918 21:07:57.317999   62061 logs.go:276] 0 containers: []
	W0918 21:07:57.318006   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:07:57.318012   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:07:57.318071   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:07:57.352172   62061 cri.go:89] found id: ""
	I0918 21:07:57.352202   62061 logs.go:276] 0 containers: []
	W0918 21:07:57.352215   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:07:57.352223   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:07:57.352277   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:07:57.388117   62061 cri.go:89] found id: ""
	I0918 21:07:57.388137   62061 logs.go:276] 0 containers: []
	W0918 21:07:57.388144   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:07:57.388150   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:07:57.388205   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:07:57.424814   62061 cri.go:89] found id: ""
	I0918 21:07:57.424846   62061 logs.go:276] 0 containers: []
	W0918 21:07:57.424857   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:07:57.424868   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:07:57.424882   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:07:57.437317   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:07:57.437351   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:07:57.503393   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:57.503417   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:07:57.503429   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:07:57.584167   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:07:57.584203   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:07:57.620761   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:07:57.620792   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:07:57.357989   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:59.856413   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:58.133077   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:00.633233   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:59.886455   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:01.887882   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:00.174933   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:00.188098   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:00.188177   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:00.221852   62061 cri.go:89] found id: ""
	I0918 21:08:00.221877   62061 logs.go:276] 0 containers: []
	W0918 21:08:00.221890   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:00.221895   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:00.221947   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:00.255947   62061 cri.go:89] found id: ""
	I0918 21:08:00.255974   62061 logs.go:276] 0 containers: []
	W0918 21:08:00.255982   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:00.255987   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:00.256056   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:00.290919   62061 cri.go:89] found id: ""
	I0918 21:08:00.290953   62061 logs.go:276] 0 containers: []
	W0918 21:08:00.290961   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:00.290966   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:00.291017   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:00.328175   62061 cri.go:89] found id: ""
	I0918 21:08:00.328200   62061 logs.go:276] 0 containers: []
	W0918 21:08:00.328208   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:00.328214   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:00.328261   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:00.364863   62061 cri.go:89] found id: ""
	I0918 21:08:00.364892   62061 logs.go:276] 0 containers: []
	W0918 21:08:00.364903   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:00.364912   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:00.364967   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:00.400368   62061 cri.go:89] found id: ""
	I0918 21:08:00.400397   62061 logs.go:276] 0 containers: []
	W0918 21:08:00.400408   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:00.400415   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:00.400480   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:00.435341   62061 cri.go:89] found id: ""
	I0918 21:08:00.435375   62061 logs.go:276] 0 containers: []
	W0918 21:08:00.435386   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:00.435394   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:00.435459   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:00.469981   62061 cri.go:89] found id: ""
	I0918 21:08:00.470010   62061 logs.go:276] 0 containers: []
	W0918 21:08:00.470019   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:00.470028   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:00.470041   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:00.538006   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:00.538037   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:00.538053   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:00.618497   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:00.618543   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:00.656884   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:00.656912   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:00.708836   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:00.708870   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:03.222489   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:03.236904   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:03.236972   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:03.270559   62061 cri.go:89] found id: ""
	I0918 21:08:03.270588   62061 logs.go:276] 0 containers: []
	W0918 21:08:03.270596   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:03.270602   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:03.270649   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:03.305908   62061 cri.go:89] found id: ""
	I0918 21:08:03.305933   62061 logs.go:276] 0 containers: []
	W0918 21:08:03.305940   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:03.305946   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:03.306004   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:03.339442   62061 cri.go:89] found id: ""
	I0918 21:08:03.339468   62061 logs.go:276] 0 containers: []
	W0918 21:08:03.339476   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:03.339482   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:03.339550   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:03.377460   62061 cri.go:89] found id: ""
	I0918 21:08:03.377486   62061 logs.go:276] 0 containers: []
	W0918 21:08:03.377495   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:03.377501   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:03.377552   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:03.414815   62061 cri.go:89] found id: ""
	I0918 21:08:03.414850   62061 logs.go:276] 0 containers: []
	W0918 21:08:03.414861   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:03.414869   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:03.414930   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:03.448654   62061 cri.go:89] found id: ""
	I0918 21:08:03.448680   62061 logs.go:276] 0 containers: []
	W0918 21:08:03.448690   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:03.448698   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:03.448759   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:03.483598   62061 cri.go:89] found id: ""
	I0918 21:08:03.483628   62061 logs.go:276] 0 containers: []
	W0918 21:08:03.483639   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:03.483646   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:03.483717   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:03.518557   62061 cri.go:89] found id: ""
	I0918 21:08:03.518585   62061 logs.go:276] 0 containers: []
	W0918 21:08:03.518601   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:03.518612   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:03.518627   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:03.555922   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:03.555958   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:03.608173   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:03.608208   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:03.621251   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:03.621278   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:03.688773   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:03.688796   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:03.688812   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:01.857289   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:03.857768   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:06.356504   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:03.132376   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:05.134169   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:04.386905   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:06.891459   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:06.272727   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:06.286033   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:06.286115   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:06.319058   62061 cri.go:89] found id: ""
	I0918 21:08:06.319084   62061 logs.go:276] 0 containers: []
	W0918 21:08:06.319092   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:06.319099   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:06.319167   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:06.352572   62061 cri.go:89] found id: ""
	I0918 21:08:06.352606   62061 logs.go:276] 0 containers: []
	W0918 21:08:06.352627   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:06.352638   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:06.352709   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:06.389897   62061 cri.go:89] found id: ""
	I0918 21:08:06.389922   62061 logs.go:276] 0 containers: []
	W0918 21:08:06.389929   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:06.389935   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:06.389993   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:06.427265   62061 cri.go:89] found id: ""
	I0918 21:08:06.427294   62061 logs.go:276] 0 containers: []
	W0918 21:08:06.427306   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:06.427314   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:06.427375   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:06.462706   62061 cri.go:89] found id: ""
	I0918 21:08:06.462738   62061 logs.go:276] 0 containers: []
	W0918 21:08:06.462746   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:06.462753   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:06.462816   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:06.499672   62061 cri.go:89] found id: ""
	I0918 21:08:06.499707   62061 logs.go:276] 0 containers: []
	W0918 21:08:06.499719   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:06.499727   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:06.499781   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:06.540386   62061 cri.go:89] found id: ""
	I0918 21:08:06.540415   62061 logs.go:276] 0 containers: []
	W0918 21:08:06.540426   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:06.540433   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:06.540492   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:06.582919   62061 cri.go:89] found id: ""
	I0918 21:08:06.582946   62061 logs.go:276] 0 containers: []
	W0918 21:08:06.582957   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:06.582966   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:06.582982   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:06.637315   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:06.637355   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:06.650626   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:06.650662   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:06.725605   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:06.725641   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:06.725656   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:06.805431   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:06.805471   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:09.343498   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:09.357396   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:09.357472   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:09.391561   62061 cri.go:89] found id: ""
	I0918 21:08:09.391590   62061 logs.go:276] 0 containers: []
	W0918 21:08:09.391600   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:09.391605   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:09.391655   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:09.429154   62061 cri.go:89] found id: ""
	I0918 21:08:09.429181   62061 logs.go:276] 0 containers: []
	W0918 21:08:09.429190   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:09.429196   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:09.429259   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:09.464807   62061 cri.go:89] found id: ""
	I0918 21:08:09.464848   62061 logs.go:276] 0 containers: []
	W0918 21:08:09.464859   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:09.464866   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:09.464927   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:08.856578   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:10.856650   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:07.633438   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:10.132651   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:12.132903   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:09.387482   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:11.886885   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:09.499512   62061 cri.go:89] found id: ""
	I0918 21:08:09.499540   62061 logs.go:276] 0 containers: []
	W0918 21:08:09.499549   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:09.499556   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:09.499619   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:09.534569   62061 cri.go:89] found id: ""
	I0918 21:08:09.534593   62061 logs.go:276] 0 containers: []
	W0918 21:08:09.534601   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:09.534607   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:09.534660   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:09.570387   62061 cri.go:89] found id: ""
	I0918 21:08:09.570414   62061 logs.go:276] 0 containers: []
	W0918 21:08:09.570422   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:09.570428   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:09.570489   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:09.603832   62061 cri.go:89] found id: ""
	I0918 21:08:09.603863   62061 logs.go:276] 0 containers: []
	W0918 21:08:09.603871   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:09.603877   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:09.603923   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:09.638887   62061 cri.go:89] found id: ""
	I0918 21:08:09.638918   62061 logs.go:276] 0 containers: []
	W0918 21:08:09.638930   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:09.638940   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:09.638953   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:09.691197   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:09.691237   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:09.705470   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:09.705500   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:09.776826   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:09.776858   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:09.776871   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:09.851287   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:09.851321   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:12.390895   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:12.406734   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:12.406809   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:12.459302   62061 cri.go:89] found id: ""
	I0918 21:08:12.459331   62061 logs.go:276] 0 containers: []
	W0918 21:08:12.459349   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:12.459360   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:12.459420   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:12.517527   62061 cri.go:89] found id: ""
	I0918 21:08:12.517557   62061 logs.go:276] 0 containers: []
	W0918 21:08:12.517567   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:12.517573   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:12.517628   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:12.549661   62061 cri.go:89] found id: ""
	I0918 21:08:12.549689   62061 logs.go:276] 0 containers: []
	W0918 21:08:12.549696   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:12.549702   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:12.549747   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:12.582970   62061 cri.go:89] found id: ""
	I0918 21:08:12.582996   62061 logs.go:276] 0 containers: []
	W0918 21:08:12.583004   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:12.583009   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:12.583054   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:12.617059   62061 cri.go:89] found id: ""
	I0918 21:08:12.617089   62061 logs.go:276] 0 containers: []
	W0918 21:08:12.617098   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:12.617103   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:12.617153   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:12.651115   62061 cri.go:89] found id: ""
	I0918 21:08:12.651143   62061 logs.go:276] 0 containers: []
	W0918 21:08:12.651156   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:12.651164   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:12.651217   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:12.684727   62061 cri.go:89] found id: ""
	I0918 21:08:12.684758   62061 logs.go:276] 0 containers: []
	W0918 21:08:12.684765   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:12.684771   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:12.684829   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:12.718181   62061 cri.go:89] found id: ""
	I0918 21:08:12.718215   62061 logs.go:276] 0 containers: []
	W0918 21:08:12.718226   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:12.718237   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:12.718252   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:12.760350   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:12.760379   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:12.810028   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:12.810067   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:12.823785   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:12.823815   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:12.898511   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:12.898532   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:12.898545   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:12.856697   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:15.356381   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:14.632694   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:17.131888   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:13.887157   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:15.887190   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:17.890618   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:15.476840   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:15.489386   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:15.489470   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:15.524549   62061 cri.go:89] found id: ""
	I0918 21:08:15.524578   62061 logs.go:276] 0 containers: []
	W0918 21:08:15.524585   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:15.524591   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:15.524642   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:15.557834   62061 cri.go:89] found id: ""
	I0918 21:08:15.557867   62061 logs.go:276] 0 containers: []
	W0918 21:08:15.557876   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:15.557883   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:15.557933   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:15.598774   62061 cri.go:89] found id: ""
	I0918 21:08:15.598805   62061 logs.go:276] 0 containers: []
	W0918 21:08:15.598818   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:15.598828   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:15.598882   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:15.633123   62061 cri.go:89] found id: ""
	I0918 21:08:15.633147   62061 logs.go:276] 0 containers: []
	W0918 21:08:15.633155   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:15.633161   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:15.633208   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:15.667281   62061 cri.go:89] found id: ""
	I0918 21:08:15.667307   62061 logs.go:276] 0 containers: []
	W0918 21:08:15.667317   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:15.667323   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:15.667381   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:15.705945   62061 cri.go:89] found id: ""
	I0918 21:08:15.705977   62061 logs.go:276] 0 containers: []
	W0918 21:08:15.705990   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:15.706015   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:15.706093   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:15.739795   62061 cri.go:89] found id: ""
	I0918 21:08:15.739826   62061 logs.go:276] 0 containers: []
	W0918 21:08:15.739836   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:15.739843   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:15.739904   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:15.778520   62061 cri.go:89] found id: ""
	I0918 21:08:15.778556   62061 logs.go:276] 0 containers: []
	W0918 21:08:15.778567   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:15.778579   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:15.778592   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:15.829357   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:15.829394   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:15.842852   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:15.842882   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:15.922438   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:15.922471   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:15.922483   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:16.001687   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:16.001726   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:18.542067   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:18.554783   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:18.554870   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:18.589555   62061 cri.go:89] found id: ""
	I0918 21:08:18.589581   62061 logs.go:276] 0 containers: []
	W0918 21:08:18.589592   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:18.589604   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:18.589667   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:18.623035   62061 cri.go:89] found id: ""
	I0918 21:08:18.623059   62061 logs.go:276] 0 containers: []
	W0918 21:08:18.623067   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:18.623073   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:18.623127   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:18.655875   62061 cri.go:89] found id: ""
	I0918 21:08:18.655901   62061 logs.go:276] 0 containers: []
	W0918 21:08:18.655909   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:18.655915   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:18.655973   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:18.688964   62061 cri.go:89] found id: ""
	I0918 21:08:18.688997   62061 logs.go:276] 0 containers: []
	W0918 21:08:18.689008   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:18.689016   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:18.689080   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:18.723164   62061 cri.go:89] found id: ""
	I0918 21:08:18.723186   62061 logs.go:276] 0 containers: []
	W0918 21:08:18.723196   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:18.723201   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:18.723246   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:18.755022   62061 cri.go:89] found id: ""
	I0918 21:08:18.755048   62061 logs.go:276] 0 containers: []
	W0918 21:08:18.755057   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:18.755063   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:18.755113   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:18.791614   62061 cri.go:89] found id: ""
	I0918 21:08:18.791645   62061 logs.go:276] 0 containers: []
	W0918 21:08:18.791655   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:18.791663   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:18.791731   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:18.823553   62061 cri.go:89] found id: ""
	I0918 21:08:18.823583   62061 logs.go:276] 0 containers: []
	W0918 21:08:18.823590   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:18.823597   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:18.823609   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:18.864528   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:18.864567   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:18.916590   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:18.916628   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:18.930077   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:18.930107   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:18.999958   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:18.999982   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:18.999997   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:17.358190   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:19.856605   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:19.132382   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:21.634433   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:20.387223   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:22.387374   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:21.573718   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:21.588164   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:21.588252   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:21.630044   62061 cri.go:89] found id: ""
	I0918 21:08:21.630075   62061 logs.go:276] 0 containers: []
	W0918 21:08:21.630086   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:21.630094   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:21.630154   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:21.666992   62061 cri.go:89] found id: ""
	I0918 21:08:21.667021   62061 logs.go:276] 0 containers: []
	W0918 21:08:21.667029   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:21.667035   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:21.667083   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:21.702379   62061 cri.go:89] found id: ""
	I0918 21:08:21.702403   62061 logs.go:276] 0 containers: []
	W0918 21:08:21.702411   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:21.702416   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:21.702463   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:21.739877   62061 cri.go:89] found id: ""
	I0918 21:08:21.739908   62061 logs.go:276] 0 containers: []
	W0918 21:08:21.739918   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:21.739923   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:21.739974   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:21.777536   62061 cri.go:89] found id: ""
	I0918 21:08:21.777573   62061 logs.go:276] 0 containers: []
	W0918 21:08:21.777584   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:21.777592   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:21.777652   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:21.812284   62061 cri.go:89] found id: ""
	I0918 21:08:21.812316   62061 logs.go:276] 0 containers: []
	W0918 21:08:21.812325   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:21.812332   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:21.812401   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:21.848143   62061 cri.go:89] found id: ""
	I0918 21:08:21.848176   62061 logs.go:276] 0 containers: []
	W0918 21:08:21.848185   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:21.848191   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:21.848250   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:21.887151   62061 cri.go:89] found id: ""
	I0918 21:08:21.887177   62061 logs.go:276] 0 containers: []
	W0918 21:08:21.887188   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:21.887199   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:21.887213   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:21.939969   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:21.940008   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:21.954128   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:21.954164   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:22.022827   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:22.022853   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:22.022865   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:22.103131   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:22.103172   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:22.356641   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:24.358204   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:24.133101   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:26.633701   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:24.888715   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:27.386901   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:24.642045   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:24.655273   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:24.655343   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:24.687816   62061 cri.go:89] found id: ""
	I0918 21:08:24.687847   62061 logs.go:276] 0 containers: []
	W0918 21:08:24.687858   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:24.687865   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:24.687927   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:24.721276   62061 cri.go:89] found id: ""
	I0918 21:08:24.721303   62061 logs.go:276] 0 containers: []
	W0918 21:08:24.721311   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:24.721316   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:24.721366   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:24.753874   62061 cri.go:89] found id: ""
	I0918 21:08:24.753904   62061 logs.go:276] 0 containers: []
	W0918 21:08:24.753911   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:24.753917   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:24.753967   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:24.789107   62061 cri.go:89] found id: ""
	I0918 21:08:24.789148   62061 logs.go:276] 0 containers: []
	W0918 21:08:24.789163   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:24.789170   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:24.789219   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:24.826283   62061 cri.go:89] found id: ""
	I0918 21:08:24.826316   62061 logs.go:276] 0 containers: []
	W0918 21:08:24.826329   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:24.826337   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:24.826401   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:24.859878   62061 cri.go:89] found id: ""
	I0918 21:08:24.859907   62061 logs.go:276] 0 containers: []
	W0918 21:08:24.859917   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:24.859924   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:24.859982   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:24.895717   62061 cri.go:89] found id: ""
	I0918 21:08:24.895747   62061 logs.go:276] 0 containers: []
	W0918 21:08:24.895758   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:24.895766   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:24.895830   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:24.930207   62061 cri.go:89] found id: ""
	I0918 21:08:24.930239   62061 logs.go:276] 0 containers: []
	W0918 21:08:24.930250   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:24.930262   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:24.930279   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:24.967939   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:24.967981   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:25.022526   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:25.022569   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:25.036175   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:25.036214   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:25.104251   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:25.104277   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:25.104294   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:27.686943   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:27.700071   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:27.700147   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:27.735245   62061 cri.go:89] found id: ""
	I0918 21:08:27.735277   62061 logs.go:276] 0 containers: []
	W0918 21:08:27.735286   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:27.735291   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:27.735349   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:27.768945   62061 cri.go:89] found id: ""
	I0918 21:08:27.768974   62061 logs.go:276] 0 containers: []
	W0918 21:08:27.768985   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:27.768993   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:27.769055   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:27.805425   62061 cri.go:89] found id: ""
	I0918 21:08:27.805457   62061 logs.go:276] 0 containers: []
	W0918 21:08:27.805468   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:27.805474   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:27.805523   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:27.838049   62061 cri.go:89] found id: ""
	I0918 21:08:27.838081   62061 logs.go:276] 0 containers: []
	W0918 21:08:27.838091   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:27.838098   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:27.838163   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:27.873961   62061 cri.go:89] found id: ""
	I0918 21:08:27.873986   62061 logs.go:276] 0 containers: []
	W0918 21:08:27.873994   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:27.874001   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:27.874064   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:27.908822   62061 cri.go:89] found id: ""
	I0918 21:08:27.908846   62061 logs.go:276] 0 containers: []
	W0918 21:08:27.908854   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:27.908860   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:27.908915   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:27.943758   62061 cri.go:89] found id: ""
	I0918 21:08:27.943794   62061 logs.go:276] 0 containers: []
	W0918 21:08:27.943802   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:27.943808   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:27.943875   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:27.978136   62061 cri.go:89] found id: ""
	I0918 21:08:27.978168   62061 logs.go:276] 0 containers: []
	W0918 21:08:27.978179   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:27.978189   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:27.978202   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:27.992744   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:27.992773   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:28.070339   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:28.070362   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:28.070374   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:28.152405   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:28.152452   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:28.191190   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:28.191220   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:26.857256   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:29.356662   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:29.132577   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:31.133108   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:29.387068   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:31.886962   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:30.742507   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:30.756451   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:30.756553   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:30.790746   62061 cri.go:89] found id: ""
	I0918 21:08:30.790771   62061 logs.go:276] 0 containers: []
	W0918 21:08:30.790781   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:30.790787   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:30.790851   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:30.823633   62061 cri.go:89] found id: ""
	I0918 21:08:30.823670   62061 logs.go:276] 0 containers: []
	W0918 21:08:30.823682   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:30.823689   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:30.823754   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:30.861912   62061 cri.go:89] found id: ""
	I0918 21:08:30.861936   62061 logs.go:276] 0 containers: []
	W0918 21:08:30.861943   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:30.861949   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:30.862000   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:30.899452   62061 cri.go:89] found id: ""
	I0918 21:08:30.899481   62061 logs.go:276] 0 containers: []
	W0918 21:08:30.899489   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:30.899495   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:30.899562   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:30.935871   62061 cri.go:89] found id: ""
	I0918 21:08:30.935898   62061 logs.go:276] 0 containers: []
	W0918 21:08:30.935906   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:30.935912   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:30.935969   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:30.974608   62061 cri.go:89] found id: ""
	I0918 21:08:30.974643   62061 logs.go:276] 0 containers: []
	W0918 21:08:30.974655   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:30.974663   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:30.974722   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:31.008249   62061 cri.go:89] found id: ""
	I0918 21:08:31.008279   62061 logs.go:276] 0 containers: []
	W0918 21:08:31.008290   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:31.008297   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:31.008366   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:31.047042   62061 cri.go:89] found id: ""
	I0918 21:08:31.047075   62061 logs.go:276] 0 containers: []
	W0918 21:08:31.047083   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:31.047093   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:31.047107   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:31.098961   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:31.099001   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:31.113116   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:31.113147   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:31.179609   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:31.179650   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:31.179664   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:31.258299   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:31.258335   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:33.798360   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:33.812105   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:33.812172   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:33.848120   62061 cri.go:89] found id: ""
	I0918 21:08:33.848149   62061 logs.go:276] 0 containers: []
	W0918 21:08:33.848160   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:33.848169   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:33.848231   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:33.884974   62061 cri.go:89] found id: ""
	I0918 21:08:33.885007   62061 logs.go:276] 0 containers: []
	W0918 21:08:33.885019   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:33.885028   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:33.885104   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:33.919613   62061 cri.go:89] found id: ""
	I0918 21:08:33.919658   62061 logs.go:276] 0 containers: []
	W0918 21:08:33.919667   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:33.919673   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:33.919721   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:33.953074   62061 cri.go:89] found id: ""
	I0918 21:08:33.953112   62061 logs.go:276] 0 containers: []
	W0918 21:08:33.953120   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:33.953125   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:33.953185   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:33.985493   62061 cri.go:89] found id: ""
	I0918 21:08:33.985521   62061 logs.go:276] 0 containers: []
	W0918 21:08:33.985532   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:33.985539   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:33.985630   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:34.020938   62061 cri.go:89] found id: ""
	I0918 21:08:34.020962   62061 logs.go:276] 0 containers: []
	W0918 21:08:34.020972   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:34.020981   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:34.021047   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:34.053947   62061 cri.go:89] found id: ""
	I0918 21:08:34.053977   62061 logs.go:276] 0 containers: []
	W0918 21:08:34.053988   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:34.053996   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:34.054060   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:34.090072   62061 cri.go:89] found id: ""
	I0918 21:08:34.090110   62061 logs.go:276] 0 containers: []
	W0918 21:08:34.090123   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:34.090133   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:34.090145   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:34.142069   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:34.142107   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:34.156709   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:34.156740   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:34.232644   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:34.232672   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:34.232685   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:34.311833   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:34.311878   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:31.859360   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:34.357056   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:33.133212   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:35.632885   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:33.888487   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:36.386571   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:36.850811   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:36.864479   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:36.864558   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:36.902595   62061 cri.go:89] found id: ""
	I0918 21:08:36.902628   62061 logs.go:276] 0 containers: []
	W0918 21:08:36.902640   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:36.902649   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:36.902706   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:36.940336   62061 cri.go:89] found id: ""
	I0918 21:08:36.940385   62061 logs.go:276] 0 containers: []
	W0918 21:08:36.940394   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:36.940400   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:36.940465   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:36.973909   62061 cri.go:89] found id: ""
	I0918 21:08:36.973942   62061 logs.go:276] 0 containers: []
	W0918 21:08:36.973952   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:36.973958   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:36.974013   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:37.008766   62061 cri.go:89] found id: ""
	I0918 21:08:37.008791   62061 logs.go:276] 0 containers: []
	W0918 21:08:37.008799   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:37.008805   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:37.008852   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:37.041633   62061 cri.go:89] found id: ""
	I0918 21:08:37.041669   62061 logs.go:276] 0 containers: []
	W0918 21:08:37.041681   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:37.041688   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:37.041750   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:37.075154   62061 cri.go:89] found id: ""
	I0918 21:08:37.075188   62061 logs.go:276] 0 containers: []
	W0918 21:08:37.075197   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:37.075204   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:37.075268   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:37.111083   62061 cri.go:89] found id: ""
	I0918 21:08:37.111119   62061 logs.go:276] 0 containers: []
	W0918 21:08:37.111130   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:37.111138   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:37.111192   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:37.147894   62061 cri.go:89] found id: ""
	I0918 21:08:37.147925   62061 logs.go:276] 0 containers: []
	W0918 21:08:37.147936   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:37.147948   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:37.147962   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:37.200102   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:37.200141   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:37.213511   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:37.213537   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:37.286978   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:37.287012   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:37.287027   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:37.368153   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:37.368191   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:36.857508   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:39.357177   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:41.357329   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:38.134332   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:40.633274   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:38.387121   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:40.387310   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:42.887614   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:39.907600   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:39.922325   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:39.922395   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:39.958150   62061 cri.go:89] found id: ""
	I0918 21:08:39.958175   62061 logs.go:276] 0 containers: []
	W0918 21:08:39.958183   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:39.958189   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:39.958254   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:39.993916   62061 cri.go:89] found id: ""
	I0918 21:08:39.993945   62061 logs.go:276] 0 containers: []
	W0918 21:08:39.993956   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:39.993963   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:39.994026   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:40.031078   62061 cri.go:89] found id: ""
	I0918 21:08:40.031114   62061 logs.go:276] 0 containers: []
	W0918 21:08:40.031126   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:40.031133   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:40.031194   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:40.065028   62061 cri.go:89] found id: ""
	I0918 21:08:40.065054   62061 logs.go:276] 0 containers: []
	W0918 21:08:40.065065   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:40.065072   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:40.065129   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:40.099437   62061 cri.go:89] found id: ""
	I0918 21:08:40.099466   62061 logs.go:276] 0 containers: []
	W0918 21:08:40.099474   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:40.099480   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:40.099544   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:40.139841   62061 cri.go:89] found id: ""
	I0918 21:08:40.139866   62061 logs.go:276] 0 containers: []
	W0918 21:08:40.139874   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:40.139880   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:40.139936   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:40.175287   62061 cri.go:89] found id: ""
	I0918 21:08:40.175316   62061 logs.go:276] 0 containers: []
	W0918 21:08:40.175327   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:40.175334   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:40.175397   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:40.208646   62061 cri.go:89] found id: ""
	I0918 21:08:40.208677   62061 logs.go:276] 0 containers: []
	W0918 21:08:40.208690   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:40.208701   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:40.208712   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:40.262944   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:40.262982   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:40.276192   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:40.276222   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:40.346393   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:40.346414   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:40.346426   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:40.433797   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:40.433848   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:42.976574   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:42.990198   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:42.990263   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:43.031744   62061 cri.go:89] found id: ""
	I0918 21:08:43.031774   62061 logs.go:276] 0 containers: []
	W0918 21:08:43.031784   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:43.031791   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:43.031854   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:43.065198   62061 cri.go:89] found id: ""
	I0918 21:08:43.065240   62061 logs.go:276] 0 containers: []
	W0918 21:08:43.065248   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:43.065261   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:43.065319   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:43.099292   62061 cri.go:89] found id: ""
	I0918 21:08:43.099320   62061 logs.go:276] 0 containers: []
	W0918 21:08:43.099328   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:43.099333   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:43.099388   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:43.135085   62061 cri.go:89] found id: ""
	I0918 21:08:43.135110   62061 logs.go:276] 0 containers: []
	W0918 21:08:43.135119   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:43.135131   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:43.135190   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:43.173255   62061 cri.go:89] found id: ""
	I0918 21:08:43.173312   62061 logs.go:276] 0 containers: []
	W0918 21:08:43.173326   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:43.173335   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:43.173433   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:43.209249   62061 cri.go:89] found id: ""
	I0918 21:08:43.209282   62061 logs.go:276] 0 containers: []
	W0918 21:08:43.209294   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:43.209303   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:43.209377   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:43.242333   62061 cri.go:89] found id: ""
	I0918 21:08:43.242366   62061 logs.go:276] 0 containers: []
	W0918 21:08:43.242376   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:43.242383   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:43.242449   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:43.278099   62061 cri.go:89] found id: ""
	I0918 21:08:43.278128   62061 logs.go:276] 0 containers: []
	W0918 21:08:43.278136   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:43.278146   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:43.278164   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:43.329621   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:43.329661   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:43.343357   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:43.343402   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:43.415392   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:43.415419   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:43.415435   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:43.501634   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:43.501670   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:43.357675   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:45.857212   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:43.133389   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:45.134057   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:44.887763   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:47.387221   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:46.043925   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:46.057457   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:46.057540   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:46.091987   62061 cri.go:89] found id: ""
	I0918 21:08:46.092036   62061 logs.go:276] 0 containers: []
	W0918 21:08:46.092047   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:46.092055   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:46.092118   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:46.131171   62061 cri.go:89] found id: ""
	I0918 21:08:46.131195   62061 logs.go:276] 0 containers: []
	W0918 21:08:46.131203   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:46.131209   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:46.131266   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:46.167324   62061 cri.go:89] found id: ""
	I0918 21:08:46.167360   62061 logs.go:276] 0 containers: []
	W0918 21:08:46.167369   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:46.167375   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:46.167433   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:46.200940   62061 cri.go:89] found id: ""
	I0918 21:08:46.200969   62061 logs.go:276] 0 containers: []
	W0918 21:08:46.200978   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:46.200983   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:46.201042   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:46.233736   62061 cri.go:89] found id: ""
	I0918 21:08:46.233765   62061 logs.go:276] 0 containers: []
	W0918 21:08:46.233773   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:46.233779   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:46.233841   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:46.267223   62061 cri.go:89] found id: ""
	I0918 21:08:46.267250   62061 logs.go:276] 0 containers: []
	W0918 21:08:46.267261   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:46.267268   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:46.267331   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:46.302218   62061 cri.go:89] found id: ""
	I0918 21:08:46.302245   62061 logs.go:276] 0 containers: []
	W0918 21:08:46.302255   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:46.302262   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:46.302326   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:46.334979   62061 cri.go:89] found id: ""
	I0918 21:08:46.335005   62061 logs.go:276] 0 containers: []
	W0918 21:08:46.335015   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:46.335024   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:46.335038   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:46.422905   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:46.422929   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:46.422948   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:46.504831   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:46.504884   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:46.543954   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:46.543981   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:46.595817   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:46.595854   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:49.109984   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:49.124966   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:49.125052   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:49.163272   62061 cri.go:89] found id: ""
	I0918 21:08:49.163302   62061 logs.go:276] 0 containers: []
	W0918 21:08:49.163334   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:49.163342   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:49.163411   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:49.199973   62061 cri.go:89] found id: ""
	I0918 21:08:49.200003   62061 logs.go:276] 0 containers: []
	W0918 21:08:49.200037   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:49.200046   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:49.200111   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:49.235980   62061 cri.go:89] found id: ""
	I0918 21:08:49.236036   62061 logs.go:276] 0 containers: []
	W0918 21:08:49.236050   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:49.236061   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:49.236123   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:49.271162   62061 cri.go:89] found id: ""
	I0918 21:08:49.271386   62061 logs.go:276] 0 containers: []
	W0918 21:08:49.271404   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:49.271413   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:49.271483   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:49.305799   62061 cri.go:89] found id: ""
	I0918 21:08:49.305831   62061 logs.go:276] 0 containers: []
	W0918 21:08:49.305842   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:49.305848   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:49.305899   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:49.342170   62061 cri.go:89] found id: ""
	I0918 21:08:49.342195   62061 logs.go:276] 0 containers: []
	W0918 21:08:49.342202   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:49.342208   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:49.342265   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:49.374071   62061 cri.go:89] found id: ""
	I0918 21:08:49.374098   62061 logs.go:276] 0 containers: []
	W0918 21:08:49.374108   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:49.374116   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:49.374186   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:49.407614   62061 cri.go:89] found id: ""
	I0918 21:08:49.407645   62061 logs.go:276] 0 containers: []
	W0918 21:08:49.407657   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:49.407669   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:49.407685   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:49.458433   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:49.458473   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:49.471490   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:49.471519   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 21:08:47.857798   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:50.355748   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:47.634158   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:49.627085   61659 pod_ready.go:82] duration metric: took 4m0.000936582s for pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace to be "Ready" ...
	E0918 21:08:49.627133   61659 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace to be "Ready" (will not retry!)
	I0918 21:08:49.627156   61659 pod_ready.go:39] duration metric: took 4m7.542795536s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0918 21:08:49.627192   61659 kubeadm.go:597] duration metric: took 4m15.452827752s to restartPrimaryControlPlane
	W0918 21:08:49.627251   61659 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0918 21:08:49.627290   61659 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0918 21:08:49.387560   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:51.887591   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	W0918 21:08:49.533874   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:49.533901   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:49.533916   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:49.611711   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:49.611744   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:52.157839   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:52.170690   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:52.170770   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:52.208737   62061 cri.go:89] found id: ""
	I0918 21:08:52.208767   62061 logs.go:276] 0 containers: []
	W0918 21:08:52.208779   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:52.208787   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:52.208846   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:52.242624   62061 cri.go:89] found id: ""
	I0918 21:08:52.242658   62061 logs.go:276] 0 containers: []
	W0918 21:08:52.242669   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:52.242677   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:52.242742   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:52.280686   62061 cri.go:89] found id: ""
	I0918 21:08:52.280717   62061 logs.go:276] 0 containers: []
	W0918 21:08:52.280728   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:52.280736   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:52.280798   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:52.313748   62061 cri.go:89] found id: ""
	I0918 21:08:52.313777   62061 logs.go:276] 0 containers: []
	W0918 21:08:52.313785   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:52.313791   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:52.313840   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:52.353072   62061 cri.go:89] found id: ""
	I0918 21:08:52.353102   62061 logs.go:276] 0 containers: []
	W0918 21:08:52.353124   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:52.353132   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:52.353195   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:52.390358   62061 cri.go:89] found id: ""
	I0918 21:08:52.390384   62061 logs.go:276] 0 containers: []
	W0918 21:08:52.390392   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:52.390398   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:52.390448   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:52.430039   62061 cri.go:89] found id: ""
	I0918 21:08:52.430068   62061 logs.go:276] 0 containers: []
	W0918 21:08:52.430081   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:52.430088   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:52.430146   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:52.466096   62061 cri.go:89] found id: ""
	I0918 21:08:52.466125   62061 logs.go:276] 0 containers: []
	W0918 21:08:52.466137   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:52.466149   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:52.466162   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:52.518643   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:52.518678   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:52.531768   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:52.531797   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:52.605130   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:52.605163   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:52.605181   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:52.686510   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:52.686553   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:52.356535   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:54.356671   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:54.387306   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:56.887745   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:55.225867   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:55.240537   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:55.240618   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:55.276461   62061 cri.go:89] found id: ""
	I0918 21:08:55.276490   62061 logs.go:276] 0 containers: []
	W0918 21:08:55.276498   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:55.276504   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:55.276564   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:55.313449   62061 cri.go:89] found id: ""
	I0918 21:08:55.313482   62061 logs.go:276] 0 containers: []
	W0918 21:08:55.313493   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:55.313499   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:55.313551   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:55.352436   62061 cri.go:89] found id: ""
	I0918 21:08:55.352475   62061 logs.go:276] 0 containers: []
	W0918 21:08:55.352485   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:55.352492   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:55.352560   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:55.390433   62061 cri.go:89] found id: ""
	I0918 21:08:55.390458   62061 logs.go:276] 0 containers: []
	W0918 21:08:55.390466   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:55.390472   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:55.390529   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:55.428426   62061 cri.go:89] found id: ""
	I0918 21:08:55.428455   62061 logs.go:276] 0 containers: []
	W0918 21:08:55.428465   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:55.428473   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:55.428540   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:55.465587   62061 cri.go:89] found id: ""
	I0918 21:08:55.465622   62061 logs.go:276] 0 containers: []
	W0918 21:08:55.465633   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:55.465641   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:55.465710   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:55.502137   62061 cri.go:89] found id: ""
	I0918 21:08:55.502185   62061 logs.go:276] 0 containers: []
	W0918 21:08:55.502196   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:55.502203   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:55.502266   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:55.535992   62061 cri.go:89] found id: ""
	I0918 21:08:55.536037   62061 logs.go:276] 0 containers: []
	W0918 21:08:55.536050   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:55.536060   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:55.536078   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:55.549267   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:55.549296   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:55.616522   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:55.616556   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:55.616572   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:55.698822   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:55.698874   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:55.740234   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:55.740264   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:58.294876   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:58.307543   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:58.307630   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:58.340435   62061 cri.go:89] found id: ""
	I0918 21:08:58.340467   62061 logs.go:276] 0 containers: []
	W0918 21:08:58.340479   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:58.340486   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:58.340545   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:58.374759   62061 cri.go:89] found id: ""
	I0918 21:08:58.374792   62061 logs.go:276] 0 containers: []
	W0918 21:08:58.374804   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:58.374810   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:58.374863   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:58.411756   62061 cri.go:89] found id: ""
	I0918 21:08:58.411787   62061 logs.go:276] 0 containers: []
	W0918 21:08:58.411797   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:58.411804   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:58.411867   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:58.449787   62061 cri.go:89] found id: ""
	I0918 21:08:58.449820   62061 logs.go:276] 0 containers: []
	W0918 21:08:58.449832   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:58.449839   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:58.449903   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:58.485212   62061 cri.go:89] found id: ""
	I0918 21:08:58.485243   62061 logs.go:276] 0 containers: []
	W0918 21:08:58.485254   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:58.485261   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:58.485331   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:58.528667   62061 cri.go:89] found id: ""
	I0918 21:08:58.528696   62061 logs.go:276] 0 containers: []
	W0918 21:08:58.528706   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:58.528714   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:58.528775   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:58.568201   62061 cri.go:89] found id: ""
	I0918 21:08:58.568231   62061 logs.go:276] 0 containers: []
	W0918 21:08:58.568241   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:58.568247   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:58.568305   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:58.623952   62061 cri.go:89] found id: ""
	I0918 21:08:58.623982   62061 logs.go:276] 0 containers: []
	W0918 21:08:58.623991   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:58.624000   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:58.624011   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:58.665418   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:58.665457   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:58.713464   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:58.713504   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:58.727511   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:58.727552   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:58.799004   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:58.799035   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:58.799050   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:56.856428   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:58.856632   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:00.857301   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:59.386076   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:01.387016   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:01.389457   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:09:01.404658   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:09:01.404734   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:09:01.439139   62061 cri.go:89] found id: ""
	I0918 21:09:01.439170   62061 logs.go:276] 0 containers: []
	W0918 21:09:01.439180   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:09:01.439187   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:09:01.439251   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:09:01.473866   62061 cri.go:89] found id: ""
	I0918 21:09:01.473896   62061 logs.go:276] 0 containers: []
	W0918 21:09:01.473907   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:09:01.473915   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:09:01.473978   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:09:01.506734   62061 cri.go:89] found id: ""
	I0918 21:09:01.506767   62061 logs.go:276] 0 containers: []
	W0918 21:09:01.506777   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:09:01.506783   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:09:01.506836   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:09:01.540123   62061 cri.go:89] found id: ""
	I0918 21:09:01.540152   62061 logs.go:276] 0 containers: []
	W0918 21:09:01.540162   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:09:01.540169   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:09:01.540236   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:09:01.575002   62061 cri.go:89] found id: ""
	I0918 21:09:01.575037   62061 logs.go:276] 0 containers: []
	W0918 21:09:01.575048   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:09:01.575071   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:09:01.575159   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:09:01.612285   62061 cri.go:89] found id: ""
	I0918 21:09:01.612316   62061 logs.go:276] 0 containers: []
	W0918 21:09:01.612327   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:09:01.612335   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:09:01.612399   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:09:01.648276   62061 cri.go:89] found id: ""
	I0918 21:09:01.648304   62061 logs.go:276] 0 containers: []
	W0918 21:09:01.648318   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:09:01.648325   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:09:01.648397   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:09:01.686192   62061 cri.go:89] found id: ""
	I0918 21:09:01.686220   62061 logs.go:276] 0 containers: []
	W0918 21:09:01.686228   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:09:01.686236   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:09:01.686247   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:09:01.738366   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:09:01.738408   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:09:01.752650   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:09:01.752683   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:09:01.825010   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:09:01.825114   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:09:01.825156   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:09:01.907401   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:09:01.907448   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:09:04.448316   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:09:04.461242   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:09:04.461304   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:09:03.357089   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:05.856126   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:03.387563   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:05.389665   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:07.886523   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:04.495951   62061 cri.go:89] found id: ""
	I0918 21:09:04.495984   62061 logs.go:276] 0 containers: []
	W0918 21:09:04.495997   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:09:04.496006   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:09:04.496090   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:09:04.536874   62061 cri.go:89] found id: ""
	I0918 21:09:04.536906   62061 logs.go:276] 0 containers: []
	W0918 21:09:04.536917   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:09:04.536935   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:09:04.537010   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:09:04.572601   62061 cri.go:89] found id: ""
	I0918 21:09:04.572634   62061 logs.go:276] 0 containers: []
	W0918 21:09:04.572646   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:09:04.572653   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:09:04.572716   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:09:04.609783   62061 cri.go:89] found id: ""
	I0918 21:09:04.609817   62061 logs.go:276] 0 containers: []
	W0918 21:09:04.609826   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:09:04.609832   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:09:04.609891   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:09:04.645124   62061 cri.go:89] found id: ""
	I0918 21:09:04.645156   62061 logs.go:276] 0 containers: []
	W0918 21:09:04.645167   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:09:04.645175   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:09:04.645241   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:09:04.680927   62061 cri.go:89] found id: ""
	I0918 21:09:04.680959   62061 logs.go:276] 0 containers: []
	W0918 21:09:04.680971   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:09:04.680978   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:09:04.681038   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:09:04.718920   62061 cri.go:89] found id: ""
	I0918 21:09:04.718954   62061 logs.go:276] 0 containers: []
	W0918 21:09:04.718972   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:09:04.718979   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:09:04.719039   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:09:04.751450   62061 cri.go:89] found id: ""
	I0918 21:09:04.751488   62061 logs.go:276] 0 containers: []
	W0918 21:09:04.751500   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:09:04.751511   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:09:04.751529   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:09:04.788969   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:09:04.789001   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:09:04.837638   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:09:04.837673   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:09:04.853673   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:09:04.853706   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:09:04.923670   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:09:04.923703   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:09:04.923717   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:09:07.507979   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:09:07.521581   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:09:07.521656   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:09:07.556280   62061 cri.go:89] found id: ""
	I0918 21:09:07.556310   62061 logs.go:276] 0 containers: []
	W0918 21:09:07.556321   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:09:07.556329   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:09:07.556397   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:09:07.590775   62061 cri.go:89] found id: ""
	I0918 21:09:07.590802   62061 logs.go:276] 0 containers: []
	W0918 21:09:07.590810   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:09:07.590815   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:09:07.590862   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:09:07.625971   62061 cri.go:89] found id: ""
	I0918 21:09:07.626000   62061 logs.go:276] 0 containers: []
	W0918 21:09:07.626010   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:09:07.626018   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:09:07.626080   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:09:07.660083   62061 cri.go:89] found id: ""
	I0918 21:09:07.660116   62061 logs.go:276] 0 containers: []
	W0918 21:09:07.660128   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:09:07.660136   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:09:07.660201   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:09:07.694165   62061 cri.go:89] found id: ""
	I0918 21:09:07.694195   62061 logs.go:276] 0 containers: []
	W0918 21:09:07.694204   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:09:07.694211   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:09:07.694269   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:09:07.728298   62061 cri.go:89] found id: ""
	I0918 21:09:07.728328   62061 logs.go:276] 0 containers: []
	W0918 21:09:07.728338   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:09:07.728349   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:09:07.728409   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:09:07.762507   62061 cri.go:89] found id: ""
	I0918 21:09:07.762546   62061 logs.go:276] 0 containers: []
	W0918 21:09:07.762555   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:09:07.762565   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:09:07.762745   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:09:07.798006   62061 cri.go:89] found id: ""
	I0918 21:09:07.798038   62061 logs.go:276] 0 containers: []
	W0918 21:09:07.798049   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:09:07.798059   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:09:07.798074   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:09:07.849222   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:09:07.849267   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:09:07.865023   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:09:07.865055   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:09:07.938810   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:09:07.938830   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:09:07.938842   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:09:08.018885   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:09:08.018924   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:09:07.856987   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:10.356244   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:09.886563   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:12.386922   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:10.562565   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:09:10.575854   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:09:10.575941   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:09:10.612853   62061 cri.go:89] found id: ""
	I0918 21:09:10.612884   62061 logs.go:276] 0 containers: []
	W0918 21:09:10.612896   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:09:10.612906   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:09:10.612966   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:09:10.648672   62061 cri.go:89] found id: ""
	I0918 21:09:10.648703   62061 logs.go:276] 0 containers: []
	W0918 21:09:10.648713   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:09:10.648720   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:09:10.648780   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:09:10.682337   62061 cri.go:89] found id: ""
	I0918 21:09:10.682370   62061 logs.go:276] 0 containers: []
	W0918 21:09:10.682388   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:09:10.682394   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:09:10.682445   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:09:10.718221   62061 cri.go:89] found id: ""
	I0918 21:09:10.718257   62061 logs.go:276] 0 containers: []
	W0918 21:09:10.718269   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:09:10.718277   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:09:10.718345   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:09:10.751570   62061 cri.go:89] found id: ""
	I0918 21:09:10.751600   62061 logs.go:276] 0 containers: []
	W0918 21:09:10.751609   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:09:10.751615   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:09:10.751706   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:09:10.792125   62061 cri.go:89] found id: ""
	I0918 21:09:10.792158   62061 logs.go:276] 0 containers: []
	W0918 21:09:10.792170   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:09:10.792178   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:09:10.792245   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:09:10.830699   62061 cri.go:89] found id: ""
	I0918 21:09:10.830733   62061 logs.go:276] 0 containers: []
	W0918 21:09:10.830742   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:09:10.830748   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:09:10.830820   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:09:10.869625   62061 cri.go:89] found id: ""
	I0918 21:09:10.869655   62061 logs.go:276] 0 containers: []
	W0918 21:09:10.869663   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:09:10.869672   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:09:10.869684   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:09:10.921340   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:09:10.921378   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:09:10.937032   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:09:10.937071   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:09:11.006248   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:09:11.006276   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:09:11.006291   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:09:11.086458   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:09:11.086496   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
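The cycle above (repeated again below at 21:09:13 and 21:09:16) is minikube's log-gathering fallback when no control-plane containers can be found. The same probe can be run by hand on the guest, using the commands recorded in the log; a minimal sketch:

    sudo crictl ps -a --quiet --name=kube-apiserver   # empty output means no apiserver container exists
    sudo crictl ps -a --quiet --name=etcd              # likewise for etcd, coredns, kube-scheduler, ...
    sudo journalctl -u kubelet -n 400                  # kubelet logs, gathered when the components are missing
    sudo journalctl -u crio -n 400                     # CRI-O logs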
	I0918 21:09:13.628824   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:09:13.642499   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:09:13.642578   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:09:13.677337   62061 cri.go:89] found id: ""
	I0918 21:09:13.677368   62061 logs.go:276] 0 containers: []
	W0918 21:09:13.677378   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:09:13.677385   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:09:13.677449   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:09:13.717317   62061 cri.go:89] found id: ""
	I0918 21:09:13.717341   62061 logs.go:276] 0 containers: []
	W0918 21:09:13.717353   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:09:13.717358   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:09:13.717419   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:09:13.752151   62061 cri.go:89] found id: ""
	I0918 21:09:13.752181   62061 logs.go:276] 0 containers: []
	W0918 21:09:13.752189   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:09:13.752195   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:09:13.752253   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:09:13.791946   62061 cri.go:89] found id: ""
	I0918 21:09:13.791975   62061 logs.go:276] 0 containers: []
	W0918 21:09:13.791983   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:09:13.791989   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:09:13.792069   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:09:13.825173   62061 cri.go:89] found id: ""
	I0918 21:09:13.825198   62061 logs.go:276] 0 containers: []
	W0918 21:09:13.825209   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:09:13.825216   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:09:13.825276   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:09:13.859801   62061 cri.go:89] found id: ""
	I0918 21:09:13.859834   62061 logs.go:276] 0 containers: []
	W0918 21:09:13.859846   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:09:13.859853   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:09:13.859907   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:09:13.895413   62061 cri.go:89] found id: ""
	I0918 21:09:13.895445   62061 logs.go:276] 0 containers: []
	W0918 21:09:13.895456   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:09:13.895463   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:09:13.895515   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:09:13.929048   62061 cri.go:89] found id: ""
	I0918 21:09:13.929075   62061 logs.go:276] 0 containers: []
	W0918 21:09:13.929083   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:09:13.929092   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:09:13.929104   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:09:13.981579   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:09:13.981613   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:09:13.995642   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:09:13.995679   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:09:14.061762   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:09:14.061782   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:09:14.061793   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:09:14.139623   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:09:14.139659   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:09:16.001617   61659 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.374302262s)
	I0918 21:09:16.001692   61659 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0918 21:09:16.019307   61659 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0918 21:09:16.029547   61659 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0918 21:09:16.039132   61659 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0918 21:09:16.039154   61659 kubeadm.go:157] found existing configuration files:
	
	I0918 21:09:16.039196   61659 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0918 21:09:16.048506   61659 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0918 21:09:16.048567   61659 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0918 21:09:16.058120   61659 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0918 21:09:16.067686   61659 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0918 21:09:16.067746   61659 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0918 21:09:16.077707   61659 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0918 21:09:16.087089   61659 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0918 21:09:16.087149   61659 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0918 21:09:16.097040   61659 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0918 21:09:16.106448   61659 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0918 21:09:16.106514   61659 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
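The block above is minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint (here https://control-plane.minikube.internal:8444) and removed when the check fails. The per-file checks logged above amount to this sketch:

    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q "https://control-plane.minikube.internal:8444" /etc/kubernetes/$f.conf \
        || sudo rm -f /etc/kubernetes/$f.conf   # drop any config that does not point at the expected endpoint
    done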
	I0918 21:09:16.116060   61659 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0918 21:09:16.159721   61659 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0918 21:09:16.159797   61659 kubeadm.go:310] [preflight] Running pre-flight checks
	I0918 21:09:16.266821   61659 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0918 21:09:16.266968   61659 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0918 21:09:16.267122   61659 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0918 21:09:16.275249   61659 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0918 21:09:12.855996   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:14.857296   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:16.277228   61659 out.go:235]   - Generating certificates and keys ...
	I0918 21:09:16.277333   61659 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0918 21:09:16.277419   61659 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0918 21:09:16.277534   61659 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0918 21:09:16.277617   61659 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0918 21:09:16.277709   61659 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0918 21:09:16.277790   61659 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0918 21:09:16.277904   61659 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0918 21:09:16.278013   61659 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0918 21:09:16.278131   61659 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0918 21:09:16.278265   61659 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0918 21:09:16.278331   61659 kubeadm.go:310] [certs] Using the existing "sa" key
	I0918 21:09:16.278401   61659 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0918 21:09:16.516263   61659 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0918 21:09:16.708220   61659 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0918 21:09:17.009820   61659 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0918 21:09:17.108871   61659 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0918 21:09:17.211014   61659 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0918 21:09:17.211658   61659 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0918 21:09:17.216626   61659 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0918 21:09:14.887068   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:16.888350   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:16.686071   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:09:16.699769   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:09:16.699844   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:09:16.735242   62061 cri.go:89] found id: ""
	I0918 21:09:16.735277   62061 logs.go:276] 0 containers: []
	W0918 21:09:16.735288   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:09:16.735298   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:09:16.735371   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:09:16.770027   62061 cri.go:89] found id: ""
	I0918 21:09:16.770052   62061 logs.go:276] 0 containers: []
	W0918 21:09:16.770060   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:09:16.770066   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:09:16.770114   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:09:16.806525   62061 cri.go:89] found id: ""
	I0918 21:09:16.806555   62061 logs.go:276] 0 containers: []
	W0918 21:09:16.806563   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:09:16.806569   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:09:16.806636   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:09:16.851146   62061 cri.go:89] found id: ""
	I0918 21:09:16.851183   62061 logs.go:276] 0 containers: []
	W0918 21:09:16.851194   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:09:16.851204   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:09:16.851271   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:09:16.890718   62061 cri.go:89] found id: ""
	I0918 21:09:16.890748   62061 logs.go:276] 0 containers: []
	W0918 21:09:16.890760   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:09:16.890767   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:09:16.890824   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:09:16.928971   62061 cri.go:89] found id: ""
	I0918 21:09:16.929002   62061 logs.go:276] 0 containers: []
	W0918 21:09:16.929012   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:09:16.929020   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:09:16.929079   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:09:16.970965   62061 cri.go:89] found id: ""
	I0918 21:09:16.970999   62061 logs.go:276] 0 containers: []
	W0918 21:09:16.971011   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:09:16.971019   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:09:16.971089   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:09:17.006427   62061 cri.go:89] found id: ""
	I0918 21:09:17.006453   62061 logs.go:276] 0 containers: []
	W0918 21:09:17.006461   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:09:17.006468   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:09:17.006480   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:09:17.058690   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:09:17.058733   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:09:17.072593   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:09:17.072623   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:09:17.143046   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:09:17.143071   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:09:17.143082   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:09:17.236943   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:09:17.236989   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:09:17.357978   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:19.858268   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:17.218406   61659 out.go:235]   - Booting up control plane ...
	I0918 21:09:17.218544   61659 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0918 21:09:17.218662   61659 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0918 21:09:17.218765   61659 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0918 21:09:17.238076   61659 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0918 21:09:17.248123   61659 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0918 21:09:17.248226   61659 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0918 21:09:17.379685   61659 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0918 21:09:17.379840   61659 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0918 21:09:18.380791   61659 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001279947s
	I0918 21:09:18.380906   61659 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0918 21:09:18.380783   61740 pod_ready.go:82] duration metric: took 4m0.000205104s for pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace to be "Ready" ...
	E0918 21:09:18.380812   61740 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace to be "Ready" (will not retry!)
	I0918 21:09:18.380832   61740 pod_ready.go:39] duration metric: took 4m15.618837854s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0918 21:09:18.380875   61740 kubeadm.go:597] duration metric: took 4m23.646410044s to restartPrimaryControlPlane
	W0918 21:09:18.380936   61740 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0918 21:09:18.380966   61740 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
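At this point the 61740 run has waited the full 4m0s for metrics-server-6867b74b74-z8rm7 to report Ready and falls back to resetting the cluster. One way to inspect such a pod by hand against the affected profile's kubeconfig (a sketch, not part of the test flow):

    kubectl -n kube-system get pod metrics-server-6867b74b74-z8rm7 -o wide
    kubectl -n kube-system describe pod metrics-server-6867b74b74-z8rm7   # check Conditions and Events for why it never becomes Ready
    kubectl -n kube-system logs metrics-server-6867b74b74-z8rm7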
	I0918 21:09:23.386705   61659 kubeadm.go:310] [api-check] The API server is healthy after 5.005706581s
	I0918 21:09:23.402316   61659 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0918 21:09:23.422786   61659 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0918 21:09:23.462099   61659 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0918 21:09:23.462373   61659 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-828868 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0918 21:09:23.484276   61659 kubeadm.go:310] [bootstrap-token] Using token: 2vcil8.e13zhc1806da8knq
	I0918 21:09:19.782266   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:09:19.799433   62061 kubeadm.go:597] duration metric: took 4m2.126311085s to restartPrimaryControlPlane
	W0918 21:09:19.799513   62061 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0918 21:09:19.799543   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0918 21:09:20.910192   62061 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.110625463s)
	I0918 21:09:20.910273   62061 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0918 21:09:20.925992   62061 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0918 21:09:20.936876   62061 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0918 21:09:20.947170   62061 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0918 21:09:20.947199   62061 kubeadm.go:157] found existing configuration files:
	
	I0918 21:09:20.947255   62061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0918 21:09:20.958140   62061 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0918 21:09:20.958240   62061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0918 21:09:20.968351   62061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0918 21:09:20.978669   62061 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0918 21:09:20.978735   62061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0918 21:09:20.989765   62061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0918 21:09:20.999842   62061 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0918 21:09:20.999903   62061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0918 21:09:21.009945   62061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0918 21:09:21.020229   62061 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0918 21:09:21.020289   62061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0918 21:09:21.030583   62061 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0918 21:09:21.271399   62061 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0918 21:09:23.485978   61659 out.go:235]   - Configuring RBAC rules ...
	I0918 21:09:23.486112   61659 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0918 21:09:23.499163   61659 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0918 21:09:23.510754   61659 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0918 21:09:23.514794   61659 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0918 21:09:23.519247   61659 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0918 21:09:23.530424   61659 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0918 21:09:23.799778   61659 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0918 21:09:24.223469   61659 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0918 21:09:24.794852   61659 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0918 21:09:24.794886   61659 kubeadm.go:310] 
	I0918 21:09:24.794951   61659 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0918 21:09:24.794963   61659 kubeadm.go:310] 
	I0918 21:09:24.795058   61659 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0918 21:09:24.795073   61659 kubeadm.go:310] 
	I0918 21:09:24.795105   61659 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0918 21:09:24.795192   61659 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0918 21:09:24.795255   61659 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0918 21:09:24.795285   61659 kubeadm.go:310] 
	I0918 21:09:24.795366   61659 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0918 21:09:24.795376   61659 kubeadm.go:310] 
	I0918 21:09:24.795416   61659 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0918 21:09:24.795425   61659 kubeadm.go:310] 
	I0918 21:09:24.795497   61659 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0918 21:09:24.795580   61659 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0918 21:09:24.795678   61659 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0918 21:09:24.795692   61659 kubeadm.go:310] 
	I0918 21:09:24.795779   61659 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0918 21:09:24.795891   61659 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0918 21:09:24.795901   61659 kubeadm.go:310] 
	I0918 21:09:24.796174   61659 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token 2vcil8.e13zhc1806da8knq \
	I0918 21:09:24.796299   61659 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:8ad46bf288e8cc2226f5a929a768e6c95cddecc97ffc4016be27ef0987b1e36f \
	I0918 21:09:24.796350   61659 kubeadm.go:310] 	--control-plane 
	I0918 21:09:24.796367   61659 kubeadm.go:310] 
	I0918 21:09:24.796479   61659 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0918 21:09:24.796487   61659 kubeadm.go:310] 
	I0918 21:09:24.796594   61659 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token 2vcil8.e13zhc1806da8knq \
	I0918 21:09:24.796738   61659 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:8ad46bf288e8cc2226f5a929a768e6c95cddecc97ffc4016be27ef0987b1e36f 
	I0918 21:09:24.797359   61659 kubeadm.go:310] W0918 21:09:16.134048    2561 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0918 21:09:24.797679   61659 kubeadm.go:310] W0918 21:09:16.134873    2561 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0918 21:09:24.797832   61659 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0918 21:09:24.797858   61659 cni.go:84] Creating CNI manager for ""
	I0918 21:09:24.797872   61659 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0918 21:09:24.799953   61659 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0918 21:09:22.357582   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:24.857037   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:24.801259   61659 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0918 21:09:24.812277   61659 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
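The 496-byte conflist written here is not reproduced in the log. As an illustration only (not the exact file minikube generated; the subnet and plugin options below are assumptions), a bridge CNI config of this kind looks roughly like:

    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        { "type": "bridge", "bridge": "bridge", "isGateway": true, "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF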
	I0918 21:09:24.834749   61659 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0918 21:09:24.834855   61659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 21:09:24.834871   61659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-828868 minikube.k8s.io/updated_at=2024_09_18T21_09_24_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=85073601a832bd4bbda5d11fa91feafff6ec6b91 minikube.k8s.io/name=default-k8s-diff-port-828868 minikube.k8s.io/primary=true
	I0918 21:09:25.022861   61659 ops.go:34] apiserver oom_adj: -16
	I0918 21:09:25.022930   61659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 21:09:25.523400   61659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 21:09:26.023075   61659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 21:09:26.523330   61659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 21:09:27.023179   61659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 21:09:27.523363   61659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 21:09:28.023150   61659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 21:09:28.523941   61659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 21:09:29.023542   61659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 21:09:29.143581   61659 kubeadm.go:1113] duration metric: took 4.308796493s to wait for elevateKubeSystemPrivileges
	I0918 21:09:29.143614   61659 kubeadm.go:394] duration metric: took 4m55.024616229s to StartCluster
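The 4.3s spent on elevateKubeSystemPrivileges above is the poll loop from 21:09:25 to 21:09:29: kubectl get sa default is retried roughly every 500 ms until the default service account exists, the target of the minikube-rbac cluster-admin binding issued at 21:09:24. The wait amounts to:

    until sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5   # matches the ~500 ms retry interval visible in the log above
    done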
	I0918 21:09:29.143632   61659 settings.go:142] acquiring lock: {Name:mk6ae95a69dcbe00c28846409e9f46945a88de2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 21:09:29.143727   61659 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19667-7671/kubeconfig
	I0918 21:09:29.145397   61659 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/kubeconfig: {Name:mk3a3eecadb0164dfdd5d3c4392b5b473a2a9bb0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 21:09:29.145680   61659 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.109 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0918 21:09:29.145767   61659 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0918 21:09:29.145851   61659 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-828868"
	I0918 21:09:29.145869   61659 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-828868"
	I0918 21:09:29.145877   61659 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-828868"
	W0918 21:09:29.145885   61659 addons.go:243] addon storage-provisioner should already be in state true
	I0918 21:09:29.145896   61659 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-828868"
	I0918 21:09:29.145898   61659 config.go:182] Loaded profile config "default-k8s-diff-port-828868": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0918 21:09:29.145900   61659 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-828868"
	I0918 21:09:29.145920   61659 host.go:66] Checking if "default-k8s-diff-port-828868" exists ...
	I0918 21:09:29.145932   61659 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-828868"
	W0918 21:09:29.145946   61659 addons.go:243] addon metrics-server should already be in state true
	I0918 21:09:29.145980   61659 host.go:66] Checking if "default-k8s-diff-port-828868" exists ...
	I0918 21:09:29.146234   61659 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:09:29.146238   61659 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:09:29.146282   61659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:09:29.146297   61659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:09:29.146372   61659 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:09:29.146389   61659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:09:29.147645   61659 out.go:177] * Verifying Kubernetes components...
	I0918 21:09:29.149574   61659 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 21:09:29.164779   61659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43359
	I0918 21:09:29.165002   61659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37857
	I0918 21:09:29.165390   61659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44573
	I0918 21:09:29.165682   61659 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:09:29.165749   61659 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:09:29.166233   61659 main.go:141] libmachine: Using API Version  1
	I0918 21:09:29.166254   61659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:09:29.166270   61659 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:09:29.166388   61659 main.go:141] libmachine: Using API Version  1
	I0918 21:09:29.166414   61659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:09:29.166544   61659 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:09:29.166711   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetState
	I0918 21:09:29.166730   61659 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:09:29.166894   61659 main.go:141] libmachine: Using API Version  1
	I0918 21:09:29.166918   61659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:09:29.167381   61659 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:09:29.167425   61659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:09:29.168144   61659 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:09:29.168578   61659 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:09:29.168614   61659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:09:29.171072   61659 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-828868"
	W0918 21:09:29.171101   61659 addons.go:243] addon default-storageclass should already be in state true
	I0918 21:09:29.171133   61659 host.go:66] Checking if "default-k8s-diff-port-828868" exists ...
	I0918 21:09:29.171534   61659 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:09:29.171597   61659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:09:29.186305   61659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42211
	I0918 21:09:29.186318   61659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45153
	I0918 21:09:29.186838   61659 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:09:29.186847   61659 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:09:29.187353   61659 main.go:141] libmachine: Using API Version  1
	I0918 21:09:29.187367   61659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:09:29.187373   61659 main.go:141] libmachine: Using API Version  1
	I0918 21:09:29.187403   61659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:09:29.187840   61659 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:09:29.187855   61659 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:09:29.188085   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetState
	I0918 21:09:29.188106   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetState
	I0918 21:09:29.193453   61659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33471
	I0918 21:09:29.193905   61659 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:09:29.194477   61659 main.go:141] libmachine: Using API Version  1
	I0918 21:09:29.194513   61659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:09:29.194981   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .DriverName
	I0918 21:09:29.195155   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .DriverName
	I0918 21:09:29.195254   61659 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:09:29.195807   61659 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:09:29.195839   61659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:09:29.197102   61659 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0918 21:09:29.197111   61659 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 21:09:29.198425   61659 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0918 21:09:29.198458   61659 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0918 21:09:29.198486   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHHostname
	I0918 21:09:29.198589   61659 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0918 21:09:29.198605   61659 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0918 21:09:29.198622   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHHostname
	I0918 21:09:29.202110   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:09:29.202236   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:09:29.202634   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:39:06", ip: ""} in network mk-default-k8s-diff-port-828868: {Iface:virbr4 ExpiryTime:2024-09-18 22:04:19 +0000 UTC Type:0 Mac:52:54:00:c0:39:06 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:default-k8s-diff-port-828868 Clientid:01:52:54:00:c0:39:06}
	I0918 21:09:29.202656   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:39:06", ip: ""} in network mk-default-k8s-diff-port-828868: {Iface:virbr4 ExpiryTime:2024-09-18 22:04:19 +0000 UTC Type:0 Mac:52:54:00:c0:39:06 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:default-k8s-diff-port-828868 Clientid:01:52:54:00:c0:39:06}
	I0918 21:09:29.202661   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined IP address 192.168.50.109 and MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:09:29.202677   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined IP address 192.168.50.109 and MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:09:29.202895   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHPort
	I0918 21:09:29.202942   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHPort
	I0918 21:09:29.203084   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHKeyPath
	I0918 21:09:29.203129   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHKeyPath
	I0918 21:09:29.203268   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHUsername
	I0918 21:09:29.203275   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHUsername
	I0918 21:09:29.203393   61659 sshutil.go:53] new ssh client: &{IP:192.168.50.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/default-k8s-diff-port-828868/id_rsa Username:docker}
	I0918 21:09:29.203407   61659 sshutil.go:53] new ssh client: &{IP:192.168.50.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/default-k8s-diff-port-828868/id_rsa Username:docker}
	I0918 21:09:29.215178   61659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41565
	I0918 21:09:29.215727   61659 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:09:29.216301   61659 main.go:141] libmachine: Using API Version  1
	I0918 21:09:29.216325   61659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:09:29.216669   61659 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:09:29.216873   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetState
	I0918 21:09:29.218689   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .DriverName
	I0918 21:09:29.218980   61659 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0918 21:09:29.218994   61659 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0918 21:09:29.219009   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHHostname
	I0918 21:09:29.222542   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:09:29.222963   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:39:06", ip: ""} in network mk-default-k8s-diff-port-828868: {Iface:virbr4 ExpiryTime:2024-09-18 22:04:19 +0000 UTC Type:0 Mac:52:54:00:c0:39:06 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:default-k8s-diff-port-828868 Clientid:01:52:54:00:c0:39:06}
	I0918 21:09:29.222985   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined IP address 192.168.50.109 and MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:09:29.223398   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHPort
	I0918 21:09:29.223632   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHKeyPath
	I0918 21:09:29.223820   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHUsername
	I0918 21:09:29.224004   61659 sshutil.go:53] new ssh client: &{IP:192.168.50.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/default-k8s-diff-port-828868/id_rsa Username:docker}
	I0918 21:09:29.360595   61659 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0918 21:09:29.381254   61659 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-828868" to be "Ready" ...
	I0918 21:09:29.390526   61659 node_ready.go:49] node "default-k8s-diff-port-828868" has status "Ready":"True"
	I0918 21:09:29.390554   61659 node_ready.go:38] duration metric: took 9.264338ms for node "default-k8s-diff-port-828868" to be "Ready" ...
	I0918 21:09:29.390565   61659 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0918 21:09:29.395433   61659 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-828868" in "kube-system" namespace to be "Ready" ...
	I0918 21:09:29.468492   61659 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0918 21:09:29.526515   61659 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0918 21:09:29.527137   61659 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0918 21:09:29.527162   61659 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0918 21:09:29.570619   61659 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0918 21:09:29.570651   61659 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0918 21:09:29.631944   61659 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0918 21:09:29.631975   61659 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0918 21:09:29.653905   61659 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
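After the four metrics-server manifests are applied, addon health can be verified with standard kubectl calls; a minimal sketch (v1beta1.metrics.k8s.io is the APIService conventionally registered by metrics-server):

    kubectl -n kube-system rollout status deploy/metrics-server --timeout=2m
    kubectl get apiservice v1beta1.metrics.k8s.io        # should report Available=True once the pod is Ready
    kubectl top nodes                                    # only returns data once the metrics API is being served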
	I0918 21:09:30.402107   61659 main.go:141] libmachine: Making call to close driver server
	I0918 21:09:30.402145   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .Close
	I0918 21:09:30.402142   61659 main.go:141] libmachine: Making call to close driver server
	I0918 21:09:30.402167   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .Close
	I0918 21:09:30.402466   61659 main.go:141] libmachine: Successfully made call to close driver server
	I0918 21:09:30.402480   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | Closing plugin on server side
	I0918 21:09:30.402493   61659 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 21:09:30.402503   61659 main.go:141] libmachine: Making call to close driver server
	I0918 21:09:30.402512   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .Close
	I0918 21:09:30.402537   61659 main.go:141] libmachine: Successfully made call to close driver server
	I0918 21:09:30.402546   61659 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 21:09:30.402555   61659 main.go:141] libmachine: Making call to close driver server
	I0918 21:09:30.402571   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .Close
	I0918 21:09:30.402733   61659 main.go:141] libmachine: Successfully made call to close driver server
	I0918 21:09:30.402773   61659 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 21:09:30.402921   61659 main.go:141] libmachine: Successfully made call to close driver server
	I0918 21:09:30.402941   61659 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 21:09:30.435323   61659 main.go:141] libmachine: Making call to close driver server
	I0918 21:09:30.435366   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .Close
	I0918 21:09:30.435659   61659 main.go:141] libmachine: Successfully made call to close driver server
	I0918 21:09:30.435683   61659 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 21:09:30.975630   61659 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.321677798s)
	I0918 21:09:30.975716   61659 main.go:141] libmachine: Making call to close driver server
	I0918 21:09:30.975733   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .Close
	I0918 21:09:30.976074   61659 main.go:141] libmachine: Successfully made call to close driver server
	I0918 21:09:30.976094   61659 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 21:09:30.976105   61659 main.go:141] libmachine: Making call to close driver server
	I0918 21:09:30.976113   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .Close
	I0918 21:09:30.976369   61659 main.go:141] libmachine: Successfully made call to close driver server
	I0918 21:09:30.976395   61659 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 21:09:30.976406   61659 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-828868"
	I0918 21:09:30.978345   61659 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0918 21:09:26.857486   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:29.356533   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:31.358269   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:30.979731   61659 addons.go:510] duration metric: took 1.833970994s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0918 21:09:31.403620   61659 pod_ready.go:103] pod "etcd-default-k8s-diff-port-828868" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:33.857960   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:36.357454   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:33.902436   61659 pod_ready.go:103] pod "etcd-default-k8s-diff-port-828868" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:36.401889   61659 pod_ready.go:103] pod "etcd-default-k8s-diff-port-828868" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:36.902002   61659 pod_ready.go:93] pod "etcd-default-k8s-diff-port-828868" in "kube-system" namespace has status "Ready":"True"
	I0918 21:09:36.902026   61659 pod_ready.go:82] duration metric: took 7.506563242s for pod "etcd-default-k8s-diff-port-828868" in "kube-system" namespace to be "Ready" ...
	I0918 21:09:36.902035   61659 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-828868" in "kube-system" namespace to be "Ready" ...
	I0918 21:09:36.907689   61659 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-828868" in "kube-system" namespace has status "Ready":"True"
	I0918 21:09:36.907713   61659 pod_ready.go:82] duration metric: took 5.672631ms for pod "kube-apiserver-default-k8s-diff-port-828868" in "kube-system" namespace to be "Ready" ...
	I0918 21:09:36.907722   61659 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-828868" in "kube-system" namespace to be "Ready" ...
	I0918 21:09:38.914521   61659 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-828868" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:39.414168   61659 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-828868" in "kube-system" namespace has status "Ready":"True"
	I0918 21:09:39.414196   61659 pod_ready.go:82] duration metric: took 2.506467297s for pod "kube-controller-manager-default-k8s-diff-port-828868" in "kube-system" namespace to be "Ready" ...
	I0918 21:09:39.414207   61659 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-hf5mm" in "kube-system" namespace to be "Ready" ...
	I0918 21:09:39.419030   61659 pod_ready.go:93] pod "kube-proxy-hf5mm" in "kube-system" namespace has status "Ready":"True"
	I0918 21:09:39.419053   61659 pod_ready.go:82] duration metric: took 4.838856ms for pod "kube-proxy-hf5mm" in "kube-system" namespace to be "Ready" ...
	I0918 21:09:39.419061   61659 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-828868" in "kube-system" namespace to be "Ready" ...
	I0918 21:09:39.423321   61659 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-828868" in "kube-system" namespace has status "Ready":"True"
	I0918 21:09:39.423341   61659 pod_ready.go:82] duration metric: took 4.274601ms for pod "kube-scheduler-default-k8s-diff-port-828868" in "kube-system" namespace to be "Ready" ...
	I0918 21:09:39.423348   61659 pod_ready.go:39] duration metric: took 10.03277208s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0918 21:09:39.423360   61659 api_server.go:52] waiting for apiserver process to appear ...
	I0918 21:09:39.423407   61659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:09:39.438272   61659 api_server.go:72] duration metric: took 10.292559807s to wait for apiserver process to appear ...
	I0918 21:09:39.438297   61659 api_server.go:88] waiting for apiserver healthz status ...
	I0918 21:09:39.438315   61659 api_server.go:253] Checking apiserver healthz at https://192.168.50.109:8444/healthz ...
	I0918 21:09:39.443342   61659 api_server.go:279] https://192.168.50.109:8444/healthz returned 200:
	ok
	I0918 21:09:39.444238   61659 api_server.go:141] control plane version: v1.31.1
	I0918 21:09:39.444262   61659 api_server.go:131] duration metric: took 5.958748ms to wait for apiserver health ...
	I0918 21:09:39.444270   61659 system_pods.go:43] waiting for kube-system pods to appear ...
	I0918 21:09:39.449914   61659 system_pods.go:59] 9 kube-system pods found
	I0918 21:09:39.449938   61659 system_pods.go:61] "coredns-7c65d6cfc9-8gz5v" [bfcbe99d-9a3d-4da8-a976-58cb9d9bf7ac] Running
	I0918 21:09:39.449942   61659 system_pods.go:61] "coredns-7c65d6cfc9-shx5p" [2d6d25ab-9a90-490a-911b-bf396605fa88] Running
	I0918 21:09:39.449947   61659 system_pods.go:61] "etcd-default-k8s-diff-port-828868" [d9772e35-df33-4b90-af88-7479a81776a2] Running
	I0918 21:09:39.449950   61659 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-828868" [9ca05dc8-1e15-4f6e-9855-1e1b6376fbfc] Running
	I0918 21:09:39.449954   61659 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-828868" [b4196fd3-4882-4e31-b418-2f5f24d2610d] Running
	I0918 21:09:39.449957   61659 system_pods.go:61] "kube-proxy-hf5mm" [3fb0a166-a925-4486-9695-6db05ae704b8] Running
	I0918 21:09:39.449962   61659 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-828868" [161b8693-3400-43a4-b361-cce5749dd2eb] Running
	I0918 21:09:39.449969   61659 system_pods.go:61] "metrics-server-6867b74b74-hdt52" [bf3aa7d0-a121-4a2c-92dd-2c79c6cbaa4a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0918 21:09:39.449976   61659 system_pods.go:61] "storage-provisioner" [8b4e1077-e23f-4262-b83e-989506798531] Running
	I0918 21:09:39.449983   61659 system_pods.go:74] duration metric: took 5.708013ms to wait for pod list to return data ...
	I0918 21:09:39.449992   61659 default_sa.go:34] waiting for default service account to be created ...
	I0918 21:09:39.453256   61659 default_sa.go:45] found service account: "default"
	I0918 21:09:39.453278   61659 default_sa.go:55] duration metric: took 3.281012ms for default service account to be created ...
	I0918 21:09:39.453287   61659 system_pods.go:116] waiting for k8s-apps to be running ...
	I0918 21:09:39.502200   61659 system_pods.go:86] 9 kube-system pods found
	I0918 21:09:39.502231   61659 system_pods.go:89] "coredns-7c65d6cfc9-8gz5v" [bfcbe99d-9a3d-4da8-a976-58cb9d9bf7ac] Running
	I0918 21:09:39.502237   61659 system_pods.go:89] "coredns-7c65d6cfc9-shx5p" [2d6d25ab-9a90-490a-911b-bf396605fa88] Running
	I0918 21:09:39.502241   61659 system_pods.go:89] "etcd-default-k8s-diff-port-828868" [d9772e35-df33-4b90-af88-7479a81776a2] Running
	I0918 21:09:39.502246   61659 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-828868" [9ca05dc8-1e15-4f6e-9855-1e1b6376fbfc] Running
	I0918 21:09:39.502250   61659 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-828868" [b4196fd3-4882-4e31-b418-2f5f24d2610d] Running
	I0918 21:09:39.502253   61659 system_pods.go:89] "kube-proxy-hf5mm" [3fb0a166-a925-4486-9695-6db05ae704b8] Running
	I0918 21:09:39.502256   61659 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-828868" [161b8693-3400-43a4-b361-cce5749dd2eb] Running
	I0918 21:09:39.502262   61659 system_pods.go:89] "metrics-server-6867b74b74-hdt52" [bf3aa7d0-a121-4a2c-92dd-2c79c6cbaa4a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0918 21:09:39.502266   61659 system_pods.go:89] "storage-provisioner" [8b4e1077-e23f-4262-b83e-989506798531] Running
	I0918 21:09:39.502276   61659 system_pods.go:126] duration metric: took 48.981872ms to wait for k8s-apps to be running ...
	I0918 21:09:39.502291   61659 system_svc.go:44] waiting for kubelet service to be running ....
	I0918 21:09:39.502367   61659 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0918 21:09:39.517514   61659 system_svc.go:56] duration metric: took 15.213443ms WaitForService to wait for kubelet
	I0918 21:09:39.517549   61659 kubeadm.go:582] duration metric: took 10.37183977s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0918 21:09:39.517573   61659 node_conditions.go:102] verifying NodePressure condition ...
	I0918 21:09:39.700593   61659 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0918 21:09:39.700616   61659 node_conditions.go:123] node cpu capacity is 2
	I0918 21:09:39.700626   61659 node_conditions.go:105] duration metric: took 183.048537ms to run NodePressure ...
	I0918 21:09:39.700637   61659 start.go:241] waiting for startup goroutines ...
	I0918 21:09:39.700643   61659 start.go:246] waiting for cluster config update ...
	I0918 21:09:39.700653   61659 start.go:255] writing updated cluster config ...
	I0918 21:09:39.700899   61659 ssh_runner.go:195] Run: rm -f paused
	I0918 21:09:39.750890   61659 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0918 21:09:39.753015   61659 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-828868" cluster and "default" namespace by default
	I0918 21:09:38.857481   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:41.356307   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:44.581125   61740 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.200138695s)
	I0918 21:09:44.581198   61740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0918 21:09:44.597051   61740 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0918 21:09:44.607195   61740 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0918 21:09:44.617135   61740 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0918 21:09:44.617160   61740 kubeadm.go:157] found existing configuration files:
	
	I0918 21:09:44.617203   61740 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0918 21:09:44.626216   61740 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0918 21:09:44.626278   61740 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0918 21:09:44.635161   61740 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0918 21:09:44.643767   61740 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0918 21:09:44.643828   61740 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0918 21:09:44.652663   61740 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0918 21:09:44.662045   61740 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0918 21:09:44.662107   61740 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0918 21:09:44.671165   61740 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0918 21:09:44.680397   61740 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0918 21:09:44.680469   61740 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0918 21:09:44.689168   61740 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0918 21:09:44.733425   61740 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0918 21:09:44.733528   61740 kubeadm.go:310] [preflight] Running pre-flight checks
	I0918 21:09:44.846369   61740 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0918 21:09:44.846477   61740 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0918 21:09:44.846612   61740 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0918 21:09:44.855581   61740 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0918 21:09:44.857599   61740 out.go:235]   - Generating certificates and keys ...
	I0918 21:09:44.857709   61740 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0918 21:09:44.857777   61740 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0918 21:09:44.857851   61740 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0918 21:09:44.857942   61740 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0918 21:09:44.858061   61740 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0918 21:09:44.858137   61740 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0918 21:09:44.858243   61740 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0918 21:09:44.858339   61740 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0918 21:09:44.858409   61740 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0918 21:09:44.858509   61740 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0918 21:09:44.858547   61740 kubeadm.go:310] [certs] Using the existing "sa" key
	I0918 21:09:44.858615   61740 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0918 21:09:45.048967   61740 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0918 21:09:45.229640   61740 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0918 21:09:45.397078   61740 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0918 21:09:45.722116   61740 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0918 21:09:45.850285   61740 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0918 21:09:45.850902   61740 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0918 21:09:45.853909   61740 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0918 21:09:43.357136   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:45.858056   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:45.855803   61740 out.go:235]   - Booting up control plane ...
	I0918 21:09:45.855931   61740 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0918 21:09:45.857227   61740 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0918 21:09:45.858855   61740 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0918 21:09:45.877299   61740 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0918 21:09:45.883953   61740 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0918 21:09:45.884043   61740 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0918 21:09:46.015368   61740 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0918 21:09:46.015509   61740 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0918 21:09:47.016371   61740 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001062473s
	I0918 21:09:47.016465   61740 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0918 21:09:48.357057   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:50.856124   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:51.518808   61740 kubeadm.go:310] [api-check] The API server is healthy after 4.502250914s
	I0918 21:09:51.532148   61740 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0918 21:09:51.549560   61740 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0918 21:09:51.579801   61740 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0918 21:09:51.580053   61740 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-255556 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0918 21:09:51.598605   61740 kubeadm.go:310] [bootstrap-token] Using token: iilbxo.n0c6mbjmeqehlfso
	I0918 21:09:51.600035   61740 out.go:235]   - Configuring RBAC rules ...
	I0918 21:09:51.600200   61740 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0918 21:09:51.614672   61740 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0918 21:09:51.626186   61740 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0918 21:09:51.629722   61740 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0918 21:09:51.634757   61740 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0918 21:09:51.642778   61740 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0918 21:09:51.931051   61740 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0918 21:09:52.359085   61740 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0918 21:09:52.930191   61740 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0918 21:09:52.931033   61740 kubeadm.go:310] 
	I0918 21:09:52.931100   61740 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0918 21:09:52.931108   61740 kubeadm.go:310] 
	I0918 21:09:52.931178   61740 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0918 21:09:52.931186   61740 kubeadm.go:310] 
	I0918 21:09:52.931208   61740 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0918 21:09:52.931313   61740 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0918 21:09:52.931400   61740 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0918 21:09:52.931435   61740 kubeadm.go:310] 
	I0918 21:09:52.931524   61740 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0918 21:09:52.931537   61740 kubeadm.go:310] 
	I0918 21:09:52.931601   61740 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0918 21:09:52.931627   61740 kubeadm.go:310] 
	I0918 21:09:52.931721   61740 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0918 21:09:52.931825   61740 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0918 21:09:52.931896   61740 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0918 21:09:52.931903   61740 kubeadm.go:310] 
	I0918 21:09:52.931974   61740 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0918 21:09:52.932073   61740 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0918 21:09:52.932081   61740 kubeadm.go:310] 
	I0918 21:09:52.932154   61740 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token iilbxo.n0c6mbjmeqehlfso \
	I0918 21:09:52.932243   61740 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:8ad46bf288e8cc2226f5a929a768e6c95cddecc97ffc4016be27ef0987b1e36f \
	I0918 21:09:52.932289   61740 kubeadm.go:310] 	--control-plane 
	I0918 21:09:52.932296   61740 kubeadm.go:310] 
	I0918 21:09:52.932365   61740 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0918 21:09:52.932372   61740 kubeadm.go:310] 
	I0918 21:09:52.932438   61740 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token iilbxo.n0c6mbjmeqehlfso \
	I0918 21:09:52.932568   61740 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:8ad46bf288e8cc2226f5a929a768e6c95cddecc97ffc4016be27ef0987b1e36f 
	I0918 21:09:52.934280   61740 kubeadm.go:310] W0918 21:09:44.705437    2512 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0918 21:09:52.934656   61740 kubeadm.go:310] W0918 21:09:44.706219    2512 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0918 21:09:52.934841   61740 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0918 21:09:52.934861   61740 cni.go:84] Creating CNI manager for ""
	I0918 21:09:52.934871   61740 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0918 21:09:52.937656   61740 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0918 21:09:52.939150   61740 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0918 21:09:52.950774   61740 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0918 21:09:52.973081   61740 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0918 21:09:52.973161   61740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 21:09:52.973210   61740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-255556 minikube.k8s.io/updated_at=2024_09_18T21_09_52_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=85073601a832bd4bbda5d11fa91feafff6ec6b91 minikube.k8s.io/name=embed-certs-255556 minikube.k8s.io/primary=true
	I0918 21:09:53.012402   61740 ops.go:34] apiserver oom_adj: -16
	I0918 21:09:53.180983   61740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 21:09:52.857161   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:55.357515   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:53.681852   61740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 21:09:54.181892   61740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 21:09:54.681768   61740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 21:09:55.181353   61740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 21:09:55.681336   61740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 21:09:56.181389   61740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 21:09:56.681574   61740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 21:09:57.181050   61740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 21:09:57.258766   61740 kubeadm.go:1113] duration metric: took 4.285672952s to wait for elevateKubeSystemPrivileges
	I0918 21:09:57.258809   61740 kubeadm.go:394] duration metric: took 5m2.572577294s to StartCluster
	I0918 21:09:57.258831   61740 settings.go:142] acquiring lock: {Name:mk6ae95a69dcbe00c28846409e9f46945a88de2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 21:09:57.258925   61740 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19667-7671/kubeconfig
	I0918 21:09:57.260757   61740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/kubeconfig: {Name:mk3a3eecadb0164dfdd5d3c4392b5b473a2a9bb0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 21:09:57.261072   61740 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.21 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0918 21:09:57.261168   61740 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0918 21:09:57.261275   61740 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-255556"
	I0918 21:09:57.261302   61740 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-255556"
	W0918 21:09:57.261314   61740 addons.go:243] addon storage-provisioner should already be in state true
	I0918 21:09:57.261344   61740 host.go:66] Checking if "embed-certs-255556" exists ...
	I0918 21:09:57.261337   61740 addons.go:69] Setting default-storageclass=true in profile "embed-certs-255556"
	I0918 21:09:57.261366   61740 config.go:182] Loaded profile config "embed-certs-255556": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0918 21:09:57.261363   61740 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-255556"
	I0918 21:09:57.261354   61740 addons.go:69] Setting metrics-server=true in profile "embed-certs-255556"
	I0918 21:09:57.261413   61740 addons.go:234] Setting addon metrics-server=true in "embed-certs-255556"
	W0918 21:09:57.261423   61740 addons.go:243] addon metrics-server should already be in state true
	I0918 21:09:57.261450   61740 host.go:66] Checking if "embed-certs-255556" exists ...
	I0918 21:09:57.261751   61740 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:09:57.261773   61740 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:09:57.261797   61740 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:09:57.261805   61740 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:09:57.261827   61740 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:09:57.261913   61740 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:09:57.263016   61740 out.go:177] * Verifying Kubernetes components...
	I0918 21:09:57.264732   61740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 21:09:57.279143   61740 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41499
	I0918 21:09:57.279741   61740 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38525
	I0918 21:09:57.279948   61740 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:09:57.280150   61740 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:09:57.280518   61740 main.go:141] libmachine: Using API Version  1
	I0918 21:09:57.280536   61740 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:09:57.280662   61740 main.go:141] libmachine: Using API Version  1
	I0918 21:09:57.280699   61740 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:09:57.280899   61740 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:09:57.281014   61740 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:09:57.281224   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetState
	I0918 21:09:57.281401   61740 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45081
	I0918 21:09:57.281609   61740 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:09:57.281669   61740 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:09:57.281824   61740 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:09:57.282291   61740 main.go:141] libmachine: Using API Version  1
	I0918 21:09:57.282316   61740 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:09:57.282655   61740 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:09:57.283166   61740 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:09:57.283198   61740 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:09:57.284993   61740 addons.go:234] Setting addon default-storageclass=true in "embed-certs-255556"
	W0918 21:09:57.285013   61740 addons.go:243] addon default-storageclass should already be in state true
	I0918 21:09:57.285042   61740 host.go:66] Checking if "embed-certs-255556" exists ...
	I0918 21:09:57.285400   61740 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:09:57.285441   61740 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:09:57.298996   61740 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39709
	I0918 21:09:57.299572   61740 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:09:57.300427   61740 main.go:141] libmachine: Using API Version  1
	I0918 21:09:57.300453   61740 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:09:57.300865   61740 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:09:57.301062   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetState
	I0918 21:09:57.301827   61740 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40441
	I0918 21:09:57.302410   61740 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:09:57.302948   61740 main.go:141] libmachine: Using API Version  1
	I0918 21:09:57.302968   61740 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:09:57.303284   61740 main.go:141] libmachine: (embed-certs-255556) Calling .DriverName
	I0918 21:09:57.303333   61740 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:09:57.303512   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetState
	I0918 21:09:57.304409   61740 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43125
	I0918 21:09:57.304836   61740 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:09:57.305379   61740 main.go:141] libmachine: Using API Version  1
	I0918 21:09:57.305393   61740 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:09:57.305423   61740 main.go:141] libmachine: (embed-certs-255556) Calling .DriverName
	I0918 21:09:57.305449   61740 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 21:09:57.305705   61740 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:09:57.306221   61740 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:09:57.306270   61740 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:09:57.306972   61740 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0918 21:09:57.307226   61740 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0918 21:09:57.307247   61740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0918 21:09:57.307261   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHHostname
	I0918 21:09:57.308757   61740 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0918 21:09:57.308778   61740 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0918 21:09:57.308798   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHHostname
	I0918 21:09:57.311608   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:09:57.312311   61740 main.go:141] libmachine: (embed-certs-255556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:c2:b7", ip: ""} in network mk-embed-certs-255556: {Iface:virbr1 ExpiryTime:2024-09-18 22:04:39 +0000 UTC Type:0 Mac:52:54:00:e8:c2:b7 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:embed-certs-255556 Clientid:01:52:54:00:e8:c2:b7}
	I0918 21:09:57.312346   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined IP address 192.168.39.21 and MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:09:57.312529   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHPort
	I0918 21:09:57.313308   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:09:57.313344   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHKeyPath
	I0918 21:09:57.313533   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHUsername
	I0918 21:09:57.313707   61740 sshutil.go:53] new ssh client: &{IP:192.168.39.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/embed-certs-255556/id_rsa Username:docker}
	I0918 21:09:57.313964   61740 main.go:141] libmachine: (embed-certs-255556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:c2:b7", ip: ""} in network mk-embed-certs-255556: {Iface:virbr1 ExpiryTime:2024-09-18 22:04:39 +0000 UTC Type:0 Mac:52:54:00:e8:c2:b7 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:embed-certs-255556 Clientid:01:52:54:00:e8:c2:b7}
	I0918 21:09:57.313991   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined IP address 192.168.39.21 and MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:09:57.314181   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHPort
	I0918 21:09:57.314357   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHKeyPath
	I0918 21:09:57.314517   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHUsername
	I0918 21:09:57.314644   61740 sshutil.go:53] new ssh client: &{IP:192.168.39.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/embed-certs-255556/id_rsa Username:docker}
	I0918 21:09:57.325307   61740 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45641
	I0918 21:09:57.325800   61740 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:09:57.326390   61740 main.go:141] libmachine: Using API Version  1
	I0918 21:09:57.326416   61740 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:09:57.326850   61740 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:09:57.327116   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetState
	I0918 21:09:57.328954   61740 main.go:141] libmachine: (embed-certs-255556) Calling .DriverName
	I0918 21:09:57.329179   61740 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0918 21:09:57.329197   61740 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0918 21:09:57.329216   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHHostname
	I0918 21:09:57.332176   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:09:57.332608   61740 main.go:141] libmachine: (embed-certs-255556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:c2:b7", ip: ""} in network mk-embed-certs-255556: {Iface:virbr1 ExpiryTime:2024-09-18 22:04:39 +0000 UTC Type:0 Mac:52:54:00:e8:c2:b7 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:embed-certs-255556 Clientid:01:52:54:00:e8:c2:b7}
	I0918 21:09:57.332633   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined IP address 192.168.39.21 and MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:09:57.332803   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHPort
	I0918 21:09:57.332991   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHKeyPath
	I0918 21:09:57.333132   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHUsername
	I0918 21:09:57.333254   61740 sshutil.go:53] new ssh client: &{IP:192.168.39.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/embed-certs-255556/id_rsa Username:docker}
	I0918 21:09:57.463767   61740 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0918 21:09:57.480852   61740 node_ready.go:35] waiting up to 6m0s for node "embed-certs-255556" to be "Ready" ...
	I0918 21:09:57.492198   61740 node_ready.go:49] node "embed-certs-255556" has status "Ready":"True"
	I0918 21:09:57.492221   61740 node_ready.go:38] duration metric: took 11.335784ms for node "embed-certs-255556" to be "Ready" ...
	I0918 21:09:57.492229   61740 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0918 21:09:57.496607   61740 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-255556" in "kube-system" namespace to be "Ready" ...
	I0918 21:09:57.627581   61740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0918 21:09:57.631704   61740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0918 21:09:57.647778   61740 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0918 21:09:57.647799   61740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0918 21:09:57.686558   61740 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0918 21:09:57.686589   61740 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0918 21:09:57.726206   61740 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0918 21:09:57.726230   61740 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0918 21:09:57.831932   61740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0918 21:09:58.026530   61740 main.go:141] libmachine: Making call to close driver server
	I0918 21:09:58.026554   61740 main.go:141] libmachine: (embed-certs-255556) Calling .Close
	I0918 21:09:58.026862   61740 main.go:141] libmachine: Successfully made call to close driver server
	I0918 21:09:58.026885   61740 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 21:09:58.026895   61740 main.go:141] libmachine: Making call to close driver server
	I0918 21:09:58.026903   61740 main.go:141] libmachine: (embed-certs-255556) Calling .Close
	I0918 21:09:58.027205   61740 main.go:141] libmachine: Successfully made call to close driver server
	I0918 21:09:58.027260   61740 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 21:09:58.027269   61740 main.go:141] libmachine: (embed-certs-255556) DBG | Closing plugin on server side
	I0918 21:09:58.038140   61740 main.go:141] libmachine: Making call to close driver server
	I0918 21:09:58.038172   61740 main.go:141] libmachine: (embed-certs-255556) Calling .Close
	I0918 21:09:58.038506   61740 main.go:141] libmachine: Successfully made call to close driver server
	I0918 21:09:58.038555   61740 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 21:09:58.038512   61740 main.go:141] libmachine: (embed-certs-255556) DBG | Closing plugin on server side
	I0918 21:09:58.551479   61740 main.go:141] libmachine: Making call to close driver server
	I0918 21:09:58.551518   61740 main.go:141] libmachine: (embed-certs-255556) Calling .Close
	I0918 21:09:58.551851   61740 main.go:141] libmachine: Successfully made call to close driver server
	I0918 21:09:58.551870   61740 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 21:09:58.551885   61740 main.go:141] libmachine: Making call to close driver server
	I0918 21:09:58.551893   61740 main.go:141] libmachine: (embed-certs-255556) Calling .Close
	I0918 21:09:58.552242   61740 main.go:141] libmachine: Successfully made call to close driver server
	I0918 21:09:58.552307   61740 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 21:09:58.552326   61740 main.go:141] libmachine: (embed-certs-255556) DBG | Closing plugin on server side
	I0918 21:09:59.078469   61740 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.246485041s)
	I0918 21:09:59.078532   61740 main.go:141] libmachine: Making call to close driver server
	I0918 21:09:59.078550   61740 main.go:141] libmachine: (embed-certs-255556) Calling .Close
	I0918 21:09:59.078883   61740 main.go:141] libmachine: Successfully made call to close driver server
	I0918 21:09:59.078906   61740 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 21:09:59.078917   61740 main.go:141] libmachine: Making call to close driver server
	I0918 21:09:59.078924   61740 main.go:141] libmachine: (embed-certs-255556) Calling .Close
	I0918 21:09:59.079143   61740 main.go:141] libmachine: Successfully made call to close driver server
	I0918 21:09:59.079157   61740 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 21:09:59.079168   61740 addons.go:475] Verifying addon metrics-server=true in "embed-certs-255556"
	I0918 21:09:59.080861   61740 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0918 21:09:57.357619   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:59.357838   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:59.082145   61740 addons.go:510] duration metric: took 1.82098849s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0918 21:09:59.526424   61740 pod_ready.go:93] pod "etcd-embed-certs-255556" in "kube-system" namespace has status "Ready":"True"
	I0918 21:09:59.526445   61740 pod_ready.go:82] duration metric: took 2.02981732s for pod "etcd-embed-certs-255556" in "kube-system" namespace to be "Ready" ...
	I0918 21:09:59.526455   61740 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-255556" in "kube-system" namespace to be "Ready" ...
	I0918 21:10:00.033589   61740 pod_ready.go:93] pod "kube-apiserver-embed-certs-255556" in "kube-system" namespace has status "Ready":"True"
	I0918 21:10:00.033616   61740 pod_ready.go:82] duration metric: took 507.155125ms for pod "kube-apiserver-embed-certs-255556" in "kube-system" namespace to be "Ready" ...
	I0918 21:10:00.033630   61740 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-255556" in "kube-system" namespace to be "Ready" ...
	I0918 21:10:02.039884   61740 pod_ready.go:103] pod "kube-controller-manager-embed-certs-255556" in "kube-system" namespace has status "Ready":"False"
	I0918 21:10:04.040760   61740 pod_ready.go:103] pod "kube-controller-manager-embed-certs-255556" in "kube-system" namespace has status "Ready":"False"
	I0918 21:10:04.541799   61740 pod_ready.go:93] pod "kube-controller-manager-embed-certs-255556" in "kube-system" namespace has status "Ready":"True"
	I0918 21:10:04.541821   61740 pod_ready.go:82] duration metric: took 4.508184279s for pod "kube-controller-manager-embed-certs-255556" in "kube-system" namespace to be "Ready" ...
	I0918 21:10:04.541830   61740 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-255556" in "kube-system" namespace to be "Ready" ...
	I0918 21:10:04.550008   61740 pod_ready.go:93] pod "kube-scheduler-embed-certs-255556" in "kube-system" namespace has status "Ready":"True"
	I0918 21:10:04.550038   61740 pod_ready.go:82] duration metric: took 8.201765ms for pod "kube-scheduler-embed-certs-255556" in "kube-system" namespace to be "Ready" ...
	I0918 21:10:04.550046   61740 pod_ready.go:39] duration metric: took 7.057808243s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0918 21:10:04.550060   61740 api_server.go:52] waiting for apiserver process to appear ...
	I0918 21:10:04.550110   61740 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:10:04.566882   61740 api_server.go:72] duration metric: took 7.305767858s to wait for apiserver process to appear ...
	I0918 21:10:04.566914   61740 api_server.go:88] waiting for apiserver healthz status ...
	I0918 21:10:04.566937   61740 api_server.go:253] Checking apiserver healthz at https://192.168.39.21:8443/healthz ...
	I0918 21:10:04.571495   61740 api_server.go:279] https://192.168.39.21:8443/healthz returned 200:
	ok
	I0918 21:10:04.572590   61740 api_server.go:141] control plane version: v1.31.1
	I0918 21:10:04.572618   61740 api_server.go:131] duration metric: took 5.69747ms to wait for apiserver health ...
	I0918 21:10:04.572625   61740 system_pods.go:43] waiting for kube-system pods to appear ...
	I0918 21:10:04.578979   61740 system_pods.go:59] 9 kube-system pods found
	I0918 21:10:04.579019   61740 system_pods.go:61] "coredns-7c65d6cfc9-ptxbt" [798665e6-6f4a-4ba5-b4f9-3192d3f76f03] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0918 21:10:04.579030   61740 system_pods.go:61] "coredns-7c65d6cfc9-vgmtd" [1224ebf9-1b24-413a-b779-093acfcfb61e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0918 21:10:04.579039   61740 system_pods.go:61] "etcd-embed-certs-255556" [a796cd6d-5b0c-4f73-9dfe-23c552cb2a43] Running
	I0918 21:10:04.579046   61740 system_pods.go:61] "kube-apiserver-embed-certs-255556" [f1926542-940b-4ba9-94cf-23265d42874c] Running
	I0918 21:10:04.579051   61740 system_pods.go:61] "kube-controller-manager-embed-certs-255556" [cd0bc9bb-ca2c-435f-81d5-2d4ea9f32d85] Running
	I0918 21:10:04.579057   61740 system_pods.go:61] "kube-proxy-m7gxh" [47d72a32-7efc-4155-a890-0ddc620af6e0] Running
	I0918 21:10:04.579067   61740 system_pods.go:61] "kube-scheduler-embed-certs-255556" [091c79cd-bc76-42b3-9635-96fdbe3ecfff] Running
	I0918 21:10:04.579076   61740 system_pods.go:61] "metrics-server-6867b74b74-sr6hq" [8867f8fa-687b-4105-8ace-18af50195726] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0918 21:10:04.579085   61740 system_pods.go:61] "storage-provisioner" [dcc9b789-237c-4d92-96c6-2c23d2c401c0] Running
	I0918 21:10:04.579095   61740 system_pods.go:74] duration metric: took 6.462809ms to wait for pod list to return data ...
	I0918 21:10:04.579106   61740 default_sa.go:34] waiting for default service account to be created ...
	I0918 21:10:04.583020   61740 default_sa.go:45] found service account: "default"
	I0918 21:10:04.583059   61740 default_sa.go:55] duration metric: took 3.946388ms for default service account to be created ...
	I0918 21:10:04.583072   61740 system_pods.go:116] waiting for k8s-apps to be running ...
	I0918 21:10:04.589946   61740 system_pods.go:86] 9 kube-system pods found
	I0918 21:10:04.589991   61740 system_pods.go:89] "coredns-7c65d6cfc9-ptxbt" [798665e6-6f4a-4ba5-b4f9-3192d3f76f03] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0918 21:10:04.590004   61740 system_pods.go:89] "coredns-7c65d6cfc9-vgmtd" [1224ebf9-1b24-413a-b779-093acfcfb61e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0918 21:10:04.590012   61740 system_pods.go:89] "etcd-embed-certs-255556" [a796cd6d-5b0c-4f73-9dfe-23c552cb2a43] Running
	I0918 21:10:04.590019   61740 system_pods.go:89] "kube-apiserver-embed-certs-255556" [f1926542-940b-4ba9-94cf-23265d42874c] Running
	I0918 21:10:04.590025   61740 system_pods.go:89] "kube-controller-manager-embed-certs-255556" [cd0bc9bb-ca2c-435f-81d5-2d4ea9f32d85] Running
	I0918 21:10:04.590030   61740 system_pods.go:89] "kube-proxy-m7gxh" [47d72a32-7efc-4155-a890-0ddc620af6e0] Running
	I0918 21:10:04.590035   61740 system_pods.go:89] "kube-scheduler-embed-certs-255556" [091c79cd-bc76-42b3-9635-96fdbe3ecfff] Running
	I0918 21:10:04.590044   61740 system_pods.go:89] "metrics-server-6867b74b74-sr6hq" [8867f8fa-687b-4105-8ace-18af50195726] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0918 21:10:04.590051   61740 system_pods.go:89] "storage-provisioner" [dcc9b789-237c-4d92-96c6-2c23d2c401c0] Running
	I0918 21:10:04.590061   61740 system_pods.go:126] duration metric: took 6.981726ms to wait for k8s-apps to be running ...
	I0918 21:10:04.590070   61740 system_svc.go:44] waiting for kubelet service to be running ....
	I0918 21:10:04.590127   61740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0918 21:10:04.605893   61740 system_svc.go:56] duration metric: took 15.815591ms WaitForService to wait for kubelet
	I0918 21:10:04.605921   61740 kubeadm.go:582] duration metric: took 7.344815015s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0918 21:10:04.605939   61740 node_conditions.go:102] verifying NodePressure condition ...
	I0918 21:10:04.609551   61740 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0918 21:10:04.609577   61740 node_conditions.go:123] node cpu capacity is 2
	I0918 21:10:04.609588   61740 node_conditions.go:105] duration metric: took 3.645116ms to run NodePressure ...
	I0918 21:10:04.609598   61740 start.go:241] waiting for startup goroutines ...
	I0918 21:10:04.609605   61740 start.go:246] waiting for cluster config update ...
	I0918 21:10:04.609614   61740 start.go:255] writing updated cluster config ...
	I0918 21:10:04.609870   61740 ssh_runner.go:195] Run: rm -f paused
	I0918 21:10:04.664479   61740 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0918 21:10:04.666589   61740 out.go:177] * Done! kubectl is now configured to use "embed-certs-255556" cluster and "default" namespace by default
	I0918 21:10:01.858109   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:10:03.356912   61273 pod_ready.go:82] duration metric: took 4m0.006778464s for pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace to be "Ready" ...
	E0918 21:10:03.356944   61273 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0918 21:10:03.356952   61273 pod_ready.go:39] duration metric: took 4m0.807781101s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0918 21:10:03.356967   61273 api_server.go:52] waiting for apiserver process to appear ...
	I0918 21:10:03.356994   61273 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:10:03.357047   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:10:03.410066   61273 cri.go:89] found id: "a70652dce4d80c6fac020d63cff7054a835d183ac4b910157a5bf4eb8cabcaa1"
	I0918 21:10:03.410096   61273 cri.go:89] found id: ""
	I0918 21:10:03.410104   61273 logs.go:276] 1 containers: [a70652dce4d80c6fac020d63cff7054a835d183ac4b910157a5bf4eb8cabcaa1]
	I0918 21:10:03.410168   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:03.414236   61273 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:10:03.414309   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:10:03.449405   61273 cri.go:89] found id: "a913074a00723bc43f97ba625ebbdfded7dca1ce664ace9a13173adb96d2068f"
	I0918 21:10:03.449426   61273 cri.go:89] found id: ""
	I0918 21:10:03.449434   61273 logs.go:276] 1 containers: [a913074a00723bc43f97ba625ebbdfded7dca1ce664ace9a13173adb96d2068f]
	I0918 21:10:03.449492   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:03.453335   61273 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:10:03.453403   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:10:03.487057   61273 cri.go:89] found id: "76b9e08a213468abdd23d7b3998b55d6544e2fbae9205ec6076ef0cd7a80e7be"
	I0918 21:10:03.487081   61273 cri.go:89] found id: ""
	I0918 21:10:03.487089   61273 logs.go:276] 1 containers: [76b9e08a213468abdd23d7b3998b55d6544e2fbae9205ec6076ef0cd7a80e7be]
	I0918 21:10:03.487137   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:03.491027   61273 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:10:03.491101   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:10:03.529636   61273 cri.go:89] found id: "c372970fdf265433e1668377d0214de6608cb39ebfd706cfad9624cba2f9b481"
	I0918 21:10:03.529665   61273 cri.go:89] found id: ""
	I0918 21:10:03.529675   61273 logs.go:276] 1 containers: [c372970fdf265433e1668377d0214de6608cb39ebfd706cfad9624cba2f9b481]
	I0918 21:10:03.529738   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:03.535042   61273 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:10:03.535121   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:10:03.572913   61273 cri.go:89] found id: "0257280a0d21d52e2a996c2f717880bdb9d9f6fe90a4b82cc62222c941ba7d84"
	I0918 21:10:03.572942   61273 cri.go:89] found id: ""
	I0918 21:10:03.572952   61273 logs.go:276] 1 containers: [0257280a0d21d52e2a996c2f717880bdb9d9f6fe90a4b82cc62222c941ba7d84]
	I0918 21:10:03.573012   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:03.576945   61273 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:10:03.577021   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:10:03.612785   61273 cri.go:89] found id: "785dc83056153e5eeb22216ac7520f529b71f1b3ae42e9540a5c8930f1fe62c2"
	I0918 21:10:03.612805   61273 cri.go:89] found id: ""
	I0918 21:10:03.612812   61273 logs.go:276] 1 containers: [785dc83056153e5eeb22216ac7520f529b71f1b3ae42e9540a5c8930f1fe62c2]
	I0918 21:10:03.612868   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:03.616855   61273 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:10:03.616924   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:10:03.650330   61273 cri.go:89] found id: ""
	I0918 21:10:03.650359   61273 logs.go:276] 0 containers: []
	W0918 21:10:03.650370   61273 logs.go:278] No container was found matching "kindnet"
	I0918 21:10:03.650378   61273 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0918 21:10:03.650446   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0918 21:10:03.698078   61273 cri.go:89] found id: "b44d6f4b44928aa498c4375f783c165714b4a5959a1fa709712ad39a412d6d9f"
	I0918 21:10:03.698106   61273 cri.go:89] found id: "38c14df0554157969a97390590d6e9c46765fd6fc3e286f7d2e3094838e0aff5"
	I0918 21:10:03.698113   61273 cri.go:89] found id: ""
	I0918 21:10:03.698122   61273 logs.go:276] 2 containers: [b44d6f4b44928aa498c4375f783c165714b4a5959a1fa709712ad39a412d6d9f 38c14df0554157969a97390590d6e9c46765fd6fc3e286f7d2e3094838e0aff5]
	I0918 21:10:03.698184   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:03.702311   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:03.705974   61273 logs.go:123] Gathering logs for kubelet ...
	I0918 21:10:03.705996   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:10:03.771043   61273 logs.go:123] Gathering logs for kube-scheduler [c372970fdf265433e1668377d0214de6608cb39ebfd706cfad9624cba2f9b481] ...
	I0918 21:10:03.771097   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c372970fdf265433e1668377d0214de6608cb39ebfd706cfad9624cba2f9b481"
	I0918 21:10:03.813148   61273 logs.go:123] Gathering logs for storage-provisioner [b44d6f4b44928aa498c4375f783c165714b4a5959a1fa709712ad39a412d6d9f] ...
	I0918 21:10:03.813175   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b44d6f4b44928aa498c4375f783c165714b4a5959a1fa709712ad39a412d6d9f"
	I0918 21:10:03.864553   61273 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:10:03.864580   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:10:04.345484   61273 logs.go:123] Gathering logs for container status ...
	I0918 21:10:04.345531   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:10:04.390777   61273 logs.go:123] Gathering logs for dmesg ...
	I0918 21:10:04.390818   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:10:04.409877   61273 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:10:04.409918   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 21:10:04.536579   61273 logs.go:123] Gathering logs for kube-apiserver [a70652dce4d80c6fac020d63cff7054a835d183ac4b910157a5bf4eb8cabcaa1] ...
	I0918 21:10:04.536609   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a70652dce4d80c6fac020d63cff7054a835d183ac4b910157a5bf4eb8cabcaa1"
	I0918 21:10:04.595640   61273 logs.go:123] Gathering logs for etcd [a913074a00723bc43f97ba625ebbdfded7dca1ce664ace9a13173adb96d2068f] ...
	I0918 21:10:04.595680   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a913074a00723bc43f97ba625ebbdfded7dca1ce664ace9a13173adb96d2068f"
	I0918 21:10:04.642332   61273 logs.go:123] Gathering logs for coredns [76b9e08a213468abdd23d7b3998b55d6544e2fbae9205ec6076ef0cd7a80e7be] ...
	I0918 21:10:04.642377   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 76b9e08a213468abdd23d7b3998b55d6544e2fbae9205ec6076ef0cd7a80e7be"
	I0918 21:10:04.679525   61273 logs.go:123] Gathering logs for kube-proxy [0257280a0d21d52e2a996c2f717880bdb9d9f6fe90a4b82cc62222c941ba7d84] ...
	I0918 21:10:04.679551   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0257280a0d21d52e2a996c2f717880bdb9d9f6fe90a4b82cc62222c941ba7d84"
	I0918 21:10:04.721130   61273 logs.go:123] Gathering logs for kube-controller-manager [785dc83056153e5eeb22216ac7520f529b71f1b3ae42e9540a5c8930f1fe62c2] ...
	I0918 21:10:04.721164   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 785dc83056153e5eeb22216ac7520f529b71f1b3ae42e9540a5c8930f1fe62c2"
	I0918 21:10:04.789527   61273 logs.go:123] Gathering logs for storage-provisioner [38c14df0554157969a97390590d6e9c46765fd6fc3e286f7d2e3094838e0aff5] ...
	I0918 21:10:04.789558   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38c14df0554157969a97390590d6e9c46765fd6fc3e286f7d2e3094838e0aff5"
	I0918 21:10:07.334989   61273 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:10:07.352382   61273 api_server.go:72] duration metric: took 4m12.031791528s to wait for apiserver process to appear ...
	I0918 21:10:07.352411   61273 api_server.go:88] waiting for apiserver healthz status ...
	I0918 21:10:07.352446   61273 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:10:07.352494   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:10:07.404709   61273 cri.go:89] found id: "a70652dce4d80c6fac020d63cff7054a835d183ac4b910157a5bf4eb8cabcaa1"
	I0918 21:10:07.404739   61273 cri.go:89] found id: ""
	I0918 21:10:07.404748   61273 logs.go:276] 1 containers: [a70652dce4d80c6fac020d63cff7054a835d183ac4b910157a5bf4eb8cabcaa1]
	I0918 21:10:07.404815   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:07.409205   61273 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:10:07.409273   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:10:07.450409   61273 cri.go:89] found id: "a913074a00723bc43f97ba625ebbdfded7dca1ce664ace9a13173adb96d2068f"
	I0918 21:10:07.450429   61273 cri.go:89] found id: ""
	I0918 21:10:07.450438   61273 logs.go:276] 1 containers: [a913074a00723bc43f97ba625ebbdfded7dca1ce664ace9a13173adb96d2068f]
	I0918 21:10:07.450498   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:07.454623   61273 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:10:07.454692   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:10:07.498344   61273 cri.go:89] found id: "76b9e08a213468abdd23d7b3998b55d6544e2fbae9205ec6076ef0cd7a80e7be"
	I0918 21:10:07.498370   61273 cri.go:89] found id: ""
	I0918 21:10:07.498379   61273 logs.go:276] 1 containers: [76b9e08a213468abdd23d7b3998b55d6544e2fbae9205ec6076ef0cd7a80e7be]
	I0918 21:10:07.498443   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:07.503900   61273 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:10:07.503986   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:10:07.543438   61273 cri.go:89] found id: "c372970fdf265433e1668377d0214de6608cb39ebfd706cfad9624cba2f9b481"
	I0918 21:10:07.543469   61273 cri.go:89] found id: ""
	I0918 21:10:07.543478   61273 logs.go:276] 1 containers: [c372970fdf265433e1668377d0214de6608cb39ebfd706cfad9624cba2f9b481]
	I0918 21:10:07.543538   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:07.548439   61273 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:10:07.548518   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:10:07.592109   61273 cri.go:89] found id: "0257280a0d21d52e2a996c2f717880bdb9d9f6fe90a4b82cc62222c941ba7d84"
	I0918 21:10:07.592140   61273 cri.go:89] found id: ""
	I0918 21:10:07.592150   61273 logs.go:276] 1 containers: [0257280a0d21d52e2a996c2f717880bdb9d9f6fe90a4b82cc62222c941ba7d84]
	I0918 21:10:07.592202   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:07.596127   61273 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:10:07.596200   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:10:07.630588   61273 cri.go:89] found id: "785dc83056153e5eeb22216ac7520f529b71f1b3ae42e9540a5c8930f1fe62c2"
	I0918 21:10:07.630623   61273 cri.go:89] found id: ""
	I0918 21:10:07.630633   61273 logs.go:276] 1 containers: [785dc83056153e5eeb22216ac7520f529b71f1b3ae42e9540a5c8930f1fe62c2]
	I0918 21:10:07.630699   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:07.635130   61273 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:10:07.635214   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:10:07.672446   61273 cri.go:89] found id: ""
	I0918 21:10:07.672475   61273 logs.go:276] 0 containers: []
	W0918 21:10:07.672487   61273 logs.go:278] No container was found matching "kindnet"
	I0918 21:10:07.672494   61273 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0918 21:10:07.672554   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0918 21:10:07.710660   61273 cri.go:89] found id: "b44d6f4b44928aa498c4375f783c165714b4a5959a1fa709712ad39a412d6d9f"
	I0918 21:10:07.710693   61273 cri.go:89] found id: "38c14df0554157969a97390590d6e9c46765fd6fc3e286f7d2e3094838e0aff5"
	I0918 21:10:07.710700   61273 cri.go:89] found id: ""
	I0918 21:10:07.710709   61273 logs.go:276] 2 containers: [b44d6f4b44928aa498c4375f783c165714b4a5959a1fa709712ad39a412d6d9f 38c14df0554157969a97390590d6e9c46765fd6fc3e286f7d2e3094838e0aff5]
	I0918 21:10:07.710761   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:07.714772   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:07.718402   61273 logs.go:123] Gathering logs for etcd [a913074a00723bc43f97ba625ebbdfded7dca1ce664ace9a13173adb96d2068f] ...
	I0918 21:10:07.718423   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a913074a00723bc43f97ba625ebbdfded7dca1ce664ace9a13173adb96d2068f"
	I0918 21:10:07.756682   61273 logs.go:123] Gathering logs for coredns [76b9e08a213468abdd23d7b3998b55d6544e2fbae9205ec6076ef0cd7a80e7be] ...
	I0918 21:10:07.756717   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 76b9e08a213468abdd23d7b3998b55d6544e2fbae9205ec6076ef0cd7a80e7be"
	I0918 21:10:07.792784   61273 logs.go:123] Gathering logs for kube-proxy [0257280a0d21d52e2a996c2f717880bdb9d9f6fe90a4b82cc62222c941ba7d84] ...
	I0918 21:10:07.792813   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0257280a0d21d52e2a996c2f717880bdb9d9f6fe90a4b82cc62222c941ba7d84"
	I0918 21:10:07.829746   61273 logs.go:123] Gathering logs for kube-controller-manager [785dc83056153e5eeb22216ac7520f529b71f1b3ae42e9540a5c8930f1fe62c2] ...
	I0918 21:10:07.829779   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 785dc83056153e5eeb22216ac7520f529b71f1b3ae42e9540a5c8930f1fe62c2"
	I0918 21:10:07.882151   61273 logs.go:123] Gathering logs for storage-provisioner [38c14df0554157969a97390590d6e9c46765fd6fc3e286f7d2e3094838e0aff5] ...
	I0918 21:10:07.882190   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38c14df0554157969a97390590d6e9c46765fd6fc3e286f7d2e3094838e0aff5"
	I0918 21:10:07.921948   61273 logs.go:123] Gathering logs for container status ...
	I0918 21:10:07.921973   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:10:07.969080   61273 logs.go:123] Gathering logs for kubelet ...
	I0918 21:10:07.969110   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:10:08.036341   61273 logs.go:123] Gathering logs for dmesg ...
	I0918 21:10:08.036376   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:10:08.050690   61273 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:10:08.050722   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 21:10:08.177111   61273 logs.go:123] Gathering logs for kube-apiserver [a70652dce4d80c6fac020d63cff7054a835d183ac4b910157a5bf4eb8cabcaa1] ...
	I0918 21:10:08.177154   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a70652dce4d80c6fac020d63cff7054a835d183ac4b910157a5bf4eb8cabcaa1"
	I0918 21:10:08.224169   61273 logs.go:123] Gathering logs for kube-scheduler [c372970fdf265433e1668377d0214de6608cb39ebfd706cfad9624cba2f9b481] ...
	I0918 21:10:08.224203   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c372970fdf265433e1668377d0214de6608cb39ebfd706cfad9624cba2f9b481"
	I0918 21:10:08.264412   61273 logs.go:123] Gathering logs for storage-provisioner [b44d6f4b44928aa498c4375f783c165714b4a5959a1fa709712ad39a412d6d9f] ...
	I0918 21:10:08.264437   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b44d6f4b44928aa498c4375f783c165714b4a5959a1fa709712ad39a412d6d9f"
	I0918 21:10:08.309190   61273 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:10:08.309215   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:10:11.209439   61273 api_server.go:253] Checking apiserver healthz at https://192.168.61.31:8443/healthz ...
	I0918 21:10:11.214345   61273 api_server.go:279] https://192.168.61.31:8443/healthz returned 200:
	ok
	I0918 21:10:11.215424   61273 api_server.go:141] control plane version: v1.31.1
	I0918 21:10:11.215446   61273 api_server.go:131] duration metric: took 3.863027585s to wait for apiserver health ...
	I0918 21:10:11.215456   61273 system_pods.go:43] waiting for kube-system pods to appear ...
	I0918 21:10:11.215485   61273 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:10:11.215545   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:10:11.251158   61273 cri.go:89] found id: "a70652dce4d80c6fac020d63cff7054a835d183ac4b910157a5bf4eb8cabcaa1"
	I0918 21:10:11.251182   61273 cri.go:89] found id: ""
	I0918 21:10:11.251190   61273 logs.go:276] 1 containers: [a70652dce4d80c6fac020d63cff7054a835d183ac4b910157a5bf4eb8cabcaa1]
	I0918 21:10:11.251246   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:11.255090   61273 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:10:11.255177   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:10:11.290504   61273 cri.go:89] found id: "a913074a00723bc43f97ba625ebbdfded7dca1ce664ace9a13173adb96d2068f"
	I0918 21:10:11.290526   61273 cri.go:89] found id: ""
	I0918 21:10:11.290534   61273 logs.go:276] 1 containers: [a913074a00723bc43f97ba625ebbdfded7dca1ce664ace9a13173adb96d2068f]
	I0918 21:10:11.290593   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:11.295141   61273 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:10:11.295224   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:10:11.340273   61273 cri.go:89] found id: "76b9e08a213468abdd23d7b3998b55d6544e2fbae9205ec6076ef0cd7a80e7be"
	I0918 21:10:11.340300   61273 cri.go:89] found id: ""
	I0918 21:10:11.340310   61273 logs.go:276] 1 containers: [76b9e08a213468abdd23d7b3998b55d6544e2fbae9205ec6076ef0cd7a80e7be]
	I0918 21:10:11.340362   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:11.344823   61273 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:10:11.344903   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:10:11.384145   61273 cri.go:89] found id: "c372970fdf265433e1668377d0214de6608cb39ebfd706cfad9624cba2f9b481"
	I0918 21:10:11.384172   61273 cri.go:89] found id: ""
	I0918 21:10:11.384187   61273 logs.go:276] 1 containers: [c372970fdf265433e1668377d0214de6608cb39ebfd706cfad9624cba2f9b481]
	I0918 21:10:11.384251   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:11.388594   61273 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:10:11.388673   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:10:11.434881   61273 cri.go:89] found id: "0257280a0d21d52e2a996c2f717880bdb9d9f6fe90a4b82cc62222c941ba7d84"
	I0918 21:10:11.434915   61273 cri.go:89] found id: ""
	I0918 21:10:11.434925   61273 logs.go:276] 1 containers: [0257280a0d21d52e2a996c2f717880bdb9d9f6fe90a4b82cc62222c941ba7d84]
	I0918 21:10:11.434984   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:11.439048   61273 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:10:11.439124   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:10:11.474786   61273 cri.go:89] found id: "785dc83056153e5eeb22216ac7520f529b71f1b3ae42e9540a5c8930f1fe62c2"
	I0918 21:10:11.474812   61273 cri.go:89] found id: ""
	I0918 21:10:11.474820   61273 logs.go:276] 1 containers: [785dc83056153e5eeb22216ac7520f529b71f1b3ae42e9540a5c8930f1fe62c2]
	I0918 21:10:11.474871   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:11.478907   61273 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:10:11.478961   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:10:11.521522   61273 cri.go:89] found id: ""
	I0918 21:10:11.521550   61273 logs.go:276] 0 containers: []
	W0918 21:10:11.521561   61273 logs.go:278] No container was found matching "kindnet"
	I0918 21:10:11.521568   61273 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0918 21:10:11.521642   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0918 21:10:11.560406   61273 cri.go:89] found id: "b44d6f4b44928aa498c4375f783c165714b4a5959a1fa709712ad39a412d6d9f"
	I0918 21:10:11.560428   61273 cri.go:89] found id: "38c14df0554157969a97390590d6e9c46765fd6fc3e286f7d2e3094838e0aff5"
	I0918 21:10:11.560432   61273 cri.go:89] found id: ""
	I0918 21:10:11.560439   61273 logs.go:276] 2 containers: [b44d6f4b44928aa498c4375f783c165714b4a5959a1fa709712ad39a412d6d9f 38c14df0554157969a97390590d6e9c46765fd6fc3e286f7d2e3094838e0aff5]
	I0918 21:10:11.560489   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:11.564559   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:11.568380   61273 logs.go:123] Gathering logs for kube-proxy [0257280a0d21d52e2a996c2f717880bdb9d9f6fe90a4b82cc62222c941ba7d84] ...
	I0918 21:10:11.568405   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0257280a0d21d52e2a996c2f717880bdb9d9f6fe90a4b82cc62222c941ba7d84"
	I0918 21:10:11.614927   61273 logs.go:123] Gathering logs for kube-controller-manager [785dc83056153e5eeb22216ac7520f529b71f1b3ae42e9540a5c8930f1fe62c2] ...
	I0918 21:10:11.614959   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 785dc83056153e5eeb22216ac7520f529b71f1b3ae42e9540a5c8930f1fe62c2"
	I0918 21:10:11.668337   61273 logs.go:123] Gathering logs for storage-provisioner [b44d6f4b44928aa498c4375f783c165714b4a5959a1fa709712ad39a412d6d9f] ...
	I0918 21:10:11.668372   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b44d6f4b44928aa498c4375f783c165714b4a5959a1fa709712ad39a412d6d9f"
	I0918 21:10:11.705574   61273 logs.go:123] Gathering logs for kubelet ...
	I0918 21:10:11.705604   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:10:11.772691   61273 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:10:11.772731   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 21:10:11.885001   61273 logs.go:123] Gathering logs for etcd [a913074a00723bc43f97ba625ebbdfded7dca1ce664ace9a13173adb96d2068f] ...
	I0918 21:10:11.885043   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a913074a00723bc43f97ba625ebbdfded7dca1ce664ace9a13173adb96d2068f"
	I0918 21:10:11.929585   61273 logs.go:123] Gathering logs for coredns [76b9e08a213468abdd23d7b3998b55d6544e2fbae9205ec6076ef0cd7a80e7be] ...
	I0918 21:10:11.929623   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 76b9e08a213468abdd23d7b3998b55d6544e2fbae9205ec6076ef0cd7a80e7be"
	I0918 21:10:11.967540   61273 logs.go:123] Gathering logs for kube-scheduler [c372970fdf265433e1668377d0214de6608cb39ebfd706cfad9624cba2f9b481] ...
	I0918 21:10:11.967566   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c372970fdf265433e1668377d0214de6608cb39ebfd706cfad9624cba2f9b481"
	I0918 21:10:12.007037   61273 logs.go:123] Gathering logs for storage-provisioner [38c14df0554157969a97390590d6e9c46765fd6fc3e286f7d2e3094838e0aff5] ...
	I0918 21:10:12.007076   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38c14df0554157969a97390590d6e9c46765fd6fc3e286f7d2e3094838e0aff5"
	I0918 21:10:12.045764   61273 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:10:12.045805   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:10:12.434993   61273 logs.go:123] Gathering logs for dmesg ...
	I0918 21:10:12.435042   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:10:12.449422   61273 logs.go:123] Gathering logs for kube-apiserver [a70652dce4d80c6fac020d63cff7054a835d183ac4b910157a5bf4eb8cabcaa1] ...
	I0918 21:10:12.449453   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a70652dce4d80c6fac020d63cff7054a835d183ac4b910157a5bf4eb8cabcaa1"
	I0918 21:10:12.500491   61273 logs.go:123] Gathering logs for container status ...
	I0918 21:10:12.500522   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:10:15.053164   61273 system_pods.go:59] 8 kube-system pods found
	I0918 21:10:15.053203   61273 system_pods.go:61] "coredns-7c65d6cfc9-dgnw2" [085d5a98-0a61-4678-830f-384780a0d7ef] Running
	I0918 21:10:15.053211   61273 system_pods.go:61] "etcd-no-preload-331658" [d6a53ff4-fcf0-46c0-9e77-9af6d0363e08] Running
	I0918 21:10:15.053218   61273 system_pods.go:61] "kube-apiserver-no-preload-331658" [1953598f-4c3f-4a98-ab75-b8ca1b8093ad] Running
	I0918 21:10:15.053223   61273 system_pods.go:61] "kube-controller-manager-no-preload-331658" [8fc655be-a192-4250-a31d-2e533a2b5e41] Running
	I0918 21:10:15.053228   61273 system_pods.go:61] "kube-proxy-hx25w" [a26512ff-f695-4452-8974-577479257160] Running
	I0918 21:10:15.053232   61273 system_pods.go:61] "kube-scheduler-no-preload-331658" [64e5347e-e7fd-4a0d-ae41-539c3989c29e] Running
	I0918 21:10:15.053243   61273 system_pods.go:61] "metrics-server-6867b74b74-n27vc" [b1de76ec-8987-49ce-ae66-eedda2705cde] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0918 21:10:15.053254   61273 system_pods.go:61] "storage-provisioner" [e110aeb3-e9bc-4bb9-9b49-5579558bdda2] Running
	I0918 21:10:15.053264   61273 system_pods.go:74] duration metric: took 3.837800115s to wait for pod list to return data ...
	I0918 21:10:15.053273   61273 default_sa.go:34] waiting for default service account to be created ...
	I0918 21:10:15.056865   61273 default_sa.go:45] found service account: "default"
	I0918 21:10:15.056900   61273 default_sa.go:55] duration metric: took 3.619144ms for default service account to be created ...
	I0918 21:10:15.056912   61273 system_pods.go:116] waiting for k8s-apps to be running ...
	I0918 21:10:15.061835   61273 system_pods.go:86] 8 kube-system pods found
	I0918 21:10:15.061864   61273 system_pods.go:89] "coredns-7c65d6cfc9-dgnw2" [085d5a98-0a61-4678-830f-384780a0d7ef] Running
	I0918 21:10:15.061870   61273 system_pods.go:89] "etcd-no-preload-331658" [d6a53ff4-fcf0-46c0-9e77-9af6d0363e08] Running
	I0918 21:10:15.061875   61273 system_pods.go:89] "kube-apiserver-no-preload-331658" [1953598f-4c3f-4a98-ab75-b8ca1b8093ad] Running
	I0918 21:10:15.061880   61273 system_pods.go:89] "kube-controller-manager-no-preload-331658" [8fc655be-a192-4250-a31d-2e533a2b5e41] Running
	I0918 21:10:15.061884   61273 system_pods.go:89] "kube-proxy-hx25w" [a26512ff-f695-4452-8974-577479257160] Running
	I0918 21:10:15.061888   61273 system_pods.go:89] "kube-scheduler-no-preload-331658" [64e5347e-e7fd-4a0d-ae41-539c3989c29e] Running
	I0918 21:10:15.061894   61273 system_pods.go:89] "metrics-server-6867b74b74-n27vc" [b1de76ec-8987-49ce-ae66-eedda2705cde] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0918 21:10:15.061898   61273 system_pods.go:89] "storage-provisioner" [e110aeb3-e9bc-4bb9-9b49-5579558bdda2] Running
	I0918 21:10:15.061906   61273 system_pods.go:126] duration metric: took 4.987508ms to wait for k8s-apps to be running ...
	I0918 21:10:15.061912   61273 system_svc.go:44] waiting for kubelet service to be running ....
	I0918 21:10:15.061966   61273 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0918 21:10:15.079834   61273 system_svc.go:56] duration metric: took 17.908997ms WaitForService to wait for kubelet
	I0918 21:10:15.079875   61273 kubeadm.go:582] duration metric: took 4m19.759287892s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0918 21:10:15.079897   61273 node_conditions.go:102] verifying NodePressure condition ...
	I0918 21:10:15.083307   61273 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0918 21:10:15.083390   61273 node_conditions.go:123] node cpu capacity is 2
	I0918 21:10:15.083407   61273 node_conditions.go:105] duration metric: took 3.503352ms to run NodePressure ...
	I0918 21:10:15.083421   61273 start.go:241] waiting for startup goroutines ...
	I0918 21:10:15.083431   61273 start.go:246] waiting for cluster config update ...
	I0918 21:10:15.083444   61273 start.go:255] writing updated cluster config ...
	I0918 21:10:15.083788   61273 ssh_runner.go:195] Run: rm -f paused
	I0918 21:10:15.139144   61273 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0918 21:10:15.141198   61273 out.go:177] * Done! kubectl is now configured to use "no-preload-331658" cluster and "default" namespace by default
	I0918 21:11:17.441368   62061 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0918 21:11:17.441607   62061 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0918 21:11:17.442921   62061 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0918 21:11:17.443036   62061 kubeadm.go:310] [preflight] Running pre-flight checks
	I0918 21:11:17.443221   62061 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0918 21:11:17.443500   62061 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0918 21:11:17.443818   62061 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0918 21:11:17.444099   62061 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0918 21:11:17.446858   62061 out.go:235]   - Generating certificates and keys ...
	I0918 21:11:17.446965   62061 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0918 21:11:17.447048   62061 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0918 21:11:17.447160   62061 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0918 21:11:17.447248   62061 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0918 21:11:17.447349   62061 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0918 21:11:17.447425   62061 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0918 21:11:17.447507   62061 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0918 21:11:17.447587   62061 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0918 21:11:17.447742   62061 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0918 21:11:17.447847   62061 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0918 21:11:17.447911   62061 kubeadm.go:310] [certs] Using the existing "sa" key
	I0918 21:11:17.447984   62061 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0918 21:11:17.448085   62061 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0918 21:11:17.448163   62061 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0918 21:11:17.448255   62061 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0918 21:11:17.448339   62061 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0918 21:11:17.448486   62061 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0918 21:11:17.448590   62061 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0918 21:11:17.448645   62061 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0918 21:11:17.448733   62061 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0918 21:11:17.450203   62061 out.go:235]   - Booting up control plane ...
	I0918 21:11:17.450299   62061 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0918 21:11:17.450434   62061 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0918 21:11:17.450506   62061 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0918 21:11:17.450578   62061 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0918 21:11:17.450752   62061 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0918 21:11:17.450805   62061 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0918 21:11:17.450863   62061 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0918 21:11:17.451034   62061 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0918 21:11:17.451090   62061 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0918 21:11:17.451259   62061 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0918 21:11:17.451325   62061 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0918 21:11:17.451498   62061 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0918 21:11:17.451562   62061 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0918 21:11:17.451765   62061 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0918 21:11:17.451845   62061 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0918 21:11:17.452027   62061 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0918 21:11:17.452041   62061 kubeadm.go:310] 
	I0918 21:11:17.452083   62061 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0918 21:11:17.452118   62061 kubeadm.go:310] 		timed out waiting for the condition
	I0918 21:11:17.452127   62061 kubeadm.go:310] 
	I0918 21:11:17.452160   62061 kubeadm.go:310] 	This error is likely caused by:
	I0918 21:11:17.452189   62061 kubeadm.go:310] 		- The kubelet is not running
	I0918 21:11:17.452282   62061 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0918 21:11:17.452289   62061 kubeadm.go:310] 
	I0918 21:11:17.452385   62061 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0918 21:11:17.452425   62061 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0918 21:11:17.452473   62061 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0918 21:11:17.452482   62061 kubeadm.go:310] 
	I0918 21:11:17.452578   62061 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0918 21:11:17.452649   62061 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0918 21:11:17.452656   62061 kubeadm.go:310] 
	I0918 21:11:17.452770   62061 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0918 21:11:17.452849   62061 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0918 21:11:17.452920   62061 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0918 21:11:17.452983   62061 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0918 21:11:17.453029   62061 kubeadm.go:310] 
	W0918 21:11:17.453100   62061 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0918 21:11:17.453139   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0918 21:11:17.917211   62061 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0918 21:11:17.933265   62061 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0918 21:11:17.943909   62061 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0918 21:11:17.943937   62061 kubeadm.go:157] found existing configuration files:
	
	I0918 21:11:17.943980   62061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0918 21:11:17.953745   62061 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0918 21:11:17.953817   62061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0918 21:11:17.964411   62061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0918 21:11:17.974601   62061 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0918 21:11:17.974661   62061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0918 21:11:17.984631   62061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0918 21:11:17.994280   62061 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0918 21:11:17.994341   62061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0918 21:11:18.004341   62061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0918 21:11:18.014066   62061 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0918 21:11:18.014127   62061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0918 21:11:18.023592   62061 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0918 21:11:18.098831   62061 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0918 21:11:18.098908   62061 kubeadm.go:310] [preflight] Running pre-flight checks
	I0918 21:11:18.254904   62061 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0918 21:11:18.255012   62061 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0918 21:11:18.255102   62061 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0918 21:11:18.444152   62061 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0918 21:11:18.445967   62061 out.go:235]   - Generating certificates and keys ...
	I0918 21:11:18.446082   62061 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0918 21:11:18.446185   62061 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0918 21:11:18.446316   62061 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0918 21:11:18.446403   62061 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0918 21:11:18.446508   62061 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0918 21:11:18.446600   62061 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0918 21:11:18.446706   62061 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0918 21:11:18.446767   62061 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0918 21:11:18.446852   62061 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0918 21:11:18.446936   62061 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0918 21:11:18.446971   62061 kubeadm.go:310] [certs] Using the existing "sa" key
	I0918 21:11:18.447025   62061 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0918 21:11:18.621414   62061 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0918 21:11:18.973691   62061 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0918 21:11:19.177522   62061 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0918 21:11:19.351312   62061 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0918 21:11:19.375169   62061 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0918 21:11:19.376274   62061 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0918 21:11:19.376351   62061 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0918 21:11:19.505030   62061 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0918 21:11:19.507065   62061 out.go:235]   - Booting up control plane ...
	I0918 21:11:19.507197   62061 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0918 21:11:19.519523   62061 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0918 21:11:19.520747   62061 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0918 21:11:19.522723   62061 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0918 21:11:19.524851   62061 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0918 21:11:59.527084   62061 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0918 21:11:59.527421   62061 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0918 21:11:59.527674   62061 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0918 21:12:04.528288   62061 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0918 21:12:04.528530   62061 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0918 21:12:14.529180   62061 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0918 21:12:14.529367   62061 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0918 21:12:34.530066   62061 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0918 21:12:34.530335   62061 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0918 21:13:14.529030   62061 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0918 21:13:14.529305   62061 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0918 21:13:14.529326   62061 kubeadm.go:310] 
	I0918 21:13:14.529374   62061 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0918 21:13:14.529447   62061 kubeadm.go:310] 		timed out waiting for the condition
	I0918 21:13:14.529466   62061 kubeadm.go:310] 
	I0918 21:13:14.529521   62061 kubeadm.go:310] 	This error is likely caused by:
	I0918 21:13:14.529569   62061 kubeadm.go:310] 		- The kubelet is not running
	I0918 21:13:14.529716   62061 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0918 21:13:14.529726   62061 kubeadm.go:310] 
	I0918 21:13:14.529866   62061 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0918 21:13:14.529917   62061 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0918 21:13:14.529967   62061 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0918 21:13:14.529976   62061 kubeadm.go:310] 
	I0918 21:13:14.530120   62061 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0918 21:13:14.530220   62061 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0918 21:13:14.530230   62061 kubeadm.go:310] 
	I0918 21:13:14.530352   62061 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0918 21:13:14.530480   62061 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0918 21:13:14.530579   62061 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0918 21:13:14.530678   62061 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0918 21:13:14.530688   62061 kubeadm.go:310] 
	I0918 21:13:14.531356   62061 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0918 21:13:14.531463   62061 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0918 21:13:14.531520   62061 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0918 21:13:14.531592   62061 kubeadm.go:394] duration metric: took 7m56.917405378s to StartCluster
	I0918 21:13:14.531633   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:13:14.531689   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:13:14.578851   62061 cri.go:89] found id: ""
	I0918 21:13:14.578883   62061 logs.go:276] 0 containers: []
	W0918 21:13:14.578893   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:13:14.578901   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:13:14.578960   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:13:14.627137   62061 cri.go:89] found id: ""
	I0918 21:13:14.627168   62061 logs.go:276] 0 containers: []
	W0918 21:13:14.627179   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:13:14.627187   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:13:14.627245   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:13:14.670678   62061 cri.go:89] found id: ""
	I0918 21:13:14.670707   62061 logs.go:276] 0 containers: []
	W0918 21:13:14.670717   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:13:14.670724   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:13:14.670788   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:13:14.709610   62061 cri.go:89] found id: ""
	I0918 21:13:14.709641   62061 logs.go:276] 0 containers: []
	W0918 21:13:14.709651   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:13:14.709658   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:13:14.709722   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:13:14.743492   62061 cri.go:89] found id: ""
	I0918 21:13:14.743522   62061 logs.go:276] 0 containers: []
	W0918 21:13:14.743534   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:13:14.743540   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:13:14.743601   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:13:14.777577   62061 cri.go:89] found id: ""
	I0918 21:13:14.777602   62061 logs.go:276] 0 containers: []
	W0918 21:13:14.777610   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:13:14.777616   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:13:14.777679   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:13:14.815884   62061 cri.go:89] found id: ""
	I0918 21:13:14.815913   62061 logs.go:276] 0 containers: []
	W0918 21:13:14.815922   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:13:14.815937   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:13:14.815997   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:13:14.855426   62061 cri.go:89] found id: ""
	I0918 21:13:14.855456   62061 logs.go:276] 0 containers: []
	W0918 21:13:14.855464   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:13:14.855473   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:13:14.855484   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:13:14.868812   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:13:14.868843   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:13:14.955093   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:13:14.955120   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:13:14.955151   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:13:15.064722   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:13:15.064760   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:13:15.105430   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:13:15.105466   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0918 21:13:15.157878   62061 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0918 21:13:15.157956   62061 out.go:270] * 
	W0918 21:13:15.158036   62061 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0918 21:13:15.158052   62061 out.go:270] * 
	W0918 21:13:15.158934   62061 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0918 21:13:15.161604   62061 out.go:201] 
	W0918 21:13:15.162606   62061 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0918 21:13:15.162664   62061 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0918 21:13:15.162685   62061 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0918 21:13:15.163831   62061 out.go:201] 
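The suggestion above points at a kubelet cgroup-driver mismatch. A rough sketch of the recommended retry, assuming the same KVM driver, CRI-O runtime and Kubernetes version as the failed run (the profile name below is a placeholder, not taken from this report):

    # retry the start with the kubelet cgroup driver pinned to systemd
    minikube start -p <profile> \
      --driver=kvm2 --container-runtime=crio \
      --kubernetes-version=v1.20.0 \
      --extra-config=kubelet.cgroup-driver=systemd
    # the preflight warning also notes the kubelet service is not enabled on the node
    minikube ssh -p <profile> 'sudo systemctl enable kubelet.service'

If the kubelet still refuses connections on port 10248 afterwards, 'journalctl -xeu kubelet' on the node is the next place to look, per the related issue linked above.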
	
	
	==> CRI-O <==
	Sep 18 21:19:17 no-preload-331658 crio[706]: time="2024-09-18 21:19:17.212548805Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726694357212529363,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9c6d8e37-2880-42f5-9c92-c79247aa59f6 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 21:19:17 no-preload-331658 crio[706]: time="2024-09-18 21:19:17.213002460Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=79c91e12-3dd4-4bd8-b5ec-a01b9375866b name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 21:19:17 no-preload-331658 crio[706]: time="2024-09-18 21:19:17.213057195Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=79c91e12-3dd4-4bd8-b5ec-a01b9375866b name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 21:19:17 no-preload-331658 crio[706]: time="2024-09-18 21:19:17.213316180Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b44d6f4b44928aa498c4375f783c165714b4a5959a1fa709712ad39a412d6d9f,PodSandboxId:dea5ae06387e7e7335fef18b98fbd9ecf26aa8d0250ef3d89bce58fa3cc9f783,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726693584268643455,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e110aeb3-e9bc-4bb9-9b49-5579558bdda2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b73a0ee39e7552f3324ca36518989909546cebd01fce34a1c7841ec6b3c2f893,PodSandboxId:f90a1da129fd7a1e4b6da8d312648f49d551727ef4e76ef728e6db438ccb383c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1726693564769582699,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fd5604f9-f8ae-4012-884d-ff45e1238741,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76b9e08a213468abdd23d7b3998b55d6544e2fbae9205ec6076ef0cd7a80e7be,PodSandboxId:e600aef7eba4f5133343b5df0f6a05822565aae3915bfdf4ba671ae1aaa1579b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726693561167110046,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-dgnw2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 085d5a98-0a61-4678-830f-384780a0d7ef,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38c14df0554157969a97390590d6e9c46765fd6fc3e286f7d2e3094838e0aff5,PodSandboxId:dea5ae06387e7e7335fef18b98fbd9ecf26aa8d0250ef3d89bce58fa3cc9f783,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726693553582994768,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e
110aeb3-e9bc-4bb9-9b49-5579558bdda2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0257280a0d21d52e2a996c2f717880bdb9d9f6fe90a4b82cc62222c941ba7d84,PodSandboxId:ef545994f99627f1b7c7d30e652e55e8b707a237ba033526c962620e35c17cb2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726693553447029115,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hx25w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a26512ff-f695-4452-8974-5774792571
60,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c372970fdf265433e1668377d0214de6608cb39ebfd706cfad9624cba2f9b481,PodSandboxId:3db793cbb33d1fd20a92be6ea777701bea805eed68a056d674fc2f7a4e1f2e5a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726693548794633396,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-331658,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78603579e4c5ca6d26ff1e0ada5894ef,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a913074a00723bc43f97ba625ebbdfded7dca1ce664ace9a13173adb96d2068f,PodSandboxId:a2d6e267a498db1f1beb5218c1d4798baad0571bf780c72bac56f8b49195a86a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726693548758445691,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-331658,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 576bb5b538781d67529bddb3d38e8264,},Annotations:map[string]string{io.kubernetes.contain
er.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:785dc83056153e5eeb22216ac7520f529b71f1b3ae42e9540a5c8930f1fe62c2,PodSandboxId:8bbf25a4d4a9502d446d45a80fb1275ad0ceb9e7985d4141e026cbb4df60f821,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726693548745783870,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-331658,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d87e37c96485dfa050ac6b78af8e1ed7,},Annotations:map[string]string{io.kube
rnetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a70652dce4d80c6fac020d63cff7054a835d183ac4b910157a5bf4eb8cabcaa1,PodSandboxId:1d0624802e3af5e37d5d5a733589ffa76ab0d6e414e88f5bd522eef002ece657,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726693548705226555,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-331658,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5913cf0e3e072753e49007da9d062a7,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=79c91e12-3dd4-4bd8-b5ec-a01b9375866b name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 21:19:17 no-preload-331658 crio[706]: time="2024-09-18 21:19:17.255541360Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=879c0bb8-4b85-49cc-86dc-08943a1ba617 name=/runtime.v1.RuntimeService/Version
	Sep 18 21:19:17 no-preload-331658 crio[706]: time="2024-09-18 21:19:17.255629057Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=879c0bb8-4b85-49cc-86dc-08943a1ba617 name=/runtime.v1.RuntimeService/Version
	Sep 18 21:19:17 no-preload-331658 crio[706]: time="2024-09-18 21:19:17.257060696Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9e0ec8d8-5bea-48f2-a898-66596b3529fe name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 21:19:17 no-preload-331658 crio[706]: time="2024-09-18 21:19:17.257531512Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726694357257481175,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9e0ec8d8-5bea-48f2-a898-66596b3529fe name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 21:19:17 no-preload-331658 crio[706]: time="2024-09-18 21:19:17.258168927Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2974bc27-f2db-43d1-a556-7eb25b0ce968 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 21:19:17 no-preload-331658 crio[706]: time="2024-09-18 21:19:17.258236173Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2974bc27-f2db-43d1-a556-7eb25b0ce968 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 21:19:17 no-preload-331658 crio[706]: time="2024-09-18 21:19:17.258429148Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b44d6f4b44928aa498c4375f783c165714b4a5959a1fa709712ad39a412d6d9f,PodSandboxId:dea5ae06387e7e7335fef18b98fbd9ecf26aa8d0250ef3d89bce58fa3cc9f783,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726693584268643455,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e110aeb3-e9bc-4bb9-9b49-5579558bdda2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b73a0ee39e7552f3324ca36518989909546cebd01fce34a1c7841ec6b3c2f893,PodSandboxId:f90a1da129fd7a1e4b6da8d312648f49d551727ef4e76ef728e6db438ccb383c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1726693564769582699,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fd5604f9-f8ae-4012-884d-ff45e1238741,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76b9e08a213468abdd23d7b3998b55d6544e2fbae9205ec6076ef0cd7a80e7be,PodSandboxId:e600aef7eba4f5133343b5df0f6a05822565aae3915bfdf4ba671ae1aaa1579b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726693561167110046,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-dgnw2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 085d5a98-0a61-4678-830f-384780a0d7ef,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38c14df0554157969a97390590d6e9c46765fd6fc3e286f7d2e3094838e0aff5,PodSandboxId:dea5ae06387e7e7335fef18b98fbd9ecf26aa8d0250ef3d89bce58fa3cc9f783,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726693553582994768,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e
110aeb3-e9bc-4bb9-9b49-5579558bdda2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0257280a0d21d52e2a996c2f717880bdb9d9f6fe90a4b82cc62222c941ba7d84,PodSandboxId:ef545994f99627f1b7c7d30e652e55e8b707a237ba033526c962620e35c17cb2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726693553447029115,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hx25w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a26512ff-f695-4452-8974-5774792571
60,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c372970fdf265433e1668377d0214de6608cb39ebfd706cfad9624cba2f9b481,PodSandboxId:3db793cbb33d1fd20a92be6ea777701bea805eed68a056d674fc2f7a4e1f2e5a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726693548794633396,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-331658,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78603579e4c5ca6d26ff1e0ada5894ef,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a913074a00723bc43f97ba625ebbdfded7dca1ce664ace9a13173adb96d2068f,PodSandboxId:a2d6e267a498db1f1beb5218c1d4798baad0571bf780c72bac56f8b49195a86a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726693548758445691,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-331658,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 576bb5b538781d67529bddb3d38e8264,},Annotations:map[string]string{io.kubernetes.contain
er.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:785dc83056153e5eeb22216ac7520f529b71f1b3ae42e9540a5c8930f1fe62c2,PodSandboxId:8bbf25a4d4a9502d446d45a80fb1275ad0ceb9e7985d4141e026cbb4df60f821,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726693548745783870,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-331658,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d87e37c96485dfa050ac6b78af8e1ed7,},Annotations:map[string]string{io.kube
rnetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a70652dce4d80c6fac020d63cff7054a835d183ac4b910157a5bf4eb8cabcaa1,PodSandboxId:1d0624802e3af5e37d5d5a733589ffa76ab0d6e414e88f5bd522eef002ece657,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726693548705226555,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-331658,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5913cf0e3e072753e49007da9d062a7,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2974bc27-f2db-43d1-a556-7eb25b0ce968 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 21:19:17 no-preload-331658 crio[706]: time="2024-09-18 21:19:17.294722361Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c76da8f0-4aa8-4e8d-8766-b9b668a01a3b name=/runtime.v1.RuntimeService/Version
	Sep 18 21:19:17 no-preload-331658 crio[706]: time="2024-09-18 21:19:17.294805467Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c76da8f0-4aa8-4e8d-8766-b9b668a01a3b name=/runtime.v1.RuntimeService/Version
	Sep 18 21:19:17 no-preload-331658 crio[706]: time="2024-09-18 21:19:17.295729791Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c6e4023f-c0b0-470e-9eda-f8c3be7c05aa name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 21:19:17 no-preload-331658 crio[706]: time="2024-09-18 21:19:17.296090438Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726694357296047149,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c6e4023f-c0b0-470e-9eda-f8c3be7c05aa name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 21:19:17 no-preload-331658 crio[706]: time="2024-09-18 21:19:17.296732124Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a4233c87-6762-4637-8710-8ccb9a2593d9 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 21:19:17 no-preload-331658 crio[706]: time="2024-09-18 21:19:17.296790097Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a4233c87-6762-4637-8710-8ccb9a2593d9 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 21:19:17 no-preload-331658 crio[706]: time="2024-09-18 21:19:17.297019800Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b44d6f4b44928aa498c4375f783c165714b4a5959a1fa709712ad39a412d6d9f,PodSandboxId:dea5ae06387e7e7335fef18b98fbd9ecf26aa8d0250ef3d89bce58fa3cc9f783,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726693584268643455,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e110aeb3-e9bc-4bb9-9b49-5579558bdda2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b73a0ee39e7552f3324ca36518989909546cebd01fce34a1c7841ec6b3c2f893,PodSandboxId:f90a1da129fd7a1e4b6da8d312648f49d551727ef4e76ef728e6db438ccb383c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1726693564769582699,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fd5604f9-f8ae-4012-884d-ff45e1238741,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76b9e08a213468abdd23d7b3998b55d6544e2fbae9205ec6076ef0cd7a80e7be,PodSandboxId:e600aef7eba4f5133343b5df0f6a05822565aae3915bfdf4ba671ae1aaa1579b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726693561167110046,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-dgnw2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 085d5a98-0a61-4678-830f-384780a0d7ef,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38c14df0554157969a97390590d6e9c46765fd6fc3e286f7d2e3094838e0aff5,PodSandboxId:dea5ae06387e7e7335fef18b98fbd9ecf26aa8d0250ef3d89bce58fa3cc9f783,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726693553582994768,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e
110aeb3-e9bc-4bb9-9b49-5579558bdda2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0257280a0d21d52e2a996c2f717880bdb9d9f6fe90a4b82cc62222c941ba7d84,PodSandboxId:ef545994f99627f1b7c7d30e652e55e8b707a237ba033526c962620e35c17cb2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726693553447029115,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hx25w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a26512ff-f695-4452-8974-5774792571
60,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c372970fdf265433e1668377d0214de6608cb39ebfd706cfad9624cba2f9b481,PodSandboxId:3db793cbb33d1fd20a92be6ea777701bea805eed68a056d674fc2f7a4e1f2e5a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726693548794633396,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-331658,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78603579e4c5ca6d26ff1e0ada5894ef,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a913074a00723bc43f97ba625ebbdfded7dca1ce664ace9a13173adb96d2068f,PodSandboxId:a2d6e267a498db1f1beb5218c1d4798baad0571bf780c72bac56f8b49195a86a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726693548758445691,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-331658,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 576bb5b538781d67529bddb3d38e8264,},Annotations:map[string]string{io.kubernetes.contain
er.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:785dc83056153e5eeb22216ac7520f529b71f1b3ae42e9540a5c8930f1fe62c2,PodSandboxId:8bbf25a4d4a9502d446d45a80fb1275ad0ceb9e7985d4141e026cbb4df60f821,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726693548745783870,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-331658,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d87e37c96485dfa050ac6b78af8e1ed7,},Annotations:map[string]string{io.kube
rnetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a70652dce4d80c6fac020d63cff7054a835d183ac4b910157a5bf4eb8cabcaa1,PodSandboxId:1d0624802e3af5e37d5d5a733589ffa76ab0d6e414e88f5bd522eef002ece657,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726693548705226555,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-331658,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5913cf0e3e072753e49007da9d062a7,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a4233c87-6762-4637-8710-8ccb9a2593d9 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 21:19:17 no-preload-331658 crio[706]: time="2024-09-18 21:19:17.334991609Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e76aa7b7-bad1-4220-85e6-de28f1bc3ce8 name=/runtime.v1.RuntimeService/Version
	Sep 18 21:19:17 no-preload-331658 crio[706]: time="2024-09-18 21:19:17.335096389Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e76aa7b7-bad1-4220-85e6-de28f1bc3ce8 name=/runtime.v1.RuntimeService/Version
	Sep 18 21:19:17 no-preload-331658 crio[706]: time="2024-09-18 21:19:17.337558434Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=50bda94c-92cb-4ecd-948c-35176616e3f1 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 21:19:17 no-preload-331658 crio[706]: time="2024-09-18 21:19:17.337909935Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726694357337886067,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=50bda94c-92cb-4ecd-948c-35176616e3f1 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 21:19:17 no-preload-331658 crio[706]: time="2024-09-18 21:19:17.338884771Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=73603510-39cc-42d5-bbe0-4a81ab8438e4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 21:19:17 no-preload-331658 crio[706]: time="2024-09-18 21:19:17.338980697Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=73603510-39cc-42d5-bbe0-4a81ab8438e4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 21:19:17 no-preload-331658 crio[706]: time="2024-09-18 21:19:17.339372430Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b44d6f4b44928aa498c4375f783c165714b4a5959a1fa709712ad39a412d6d9f,PodSandboxId:dea5ae06387e7e7335fef18b98fbd9ecf26aa8d0250ef3d89bce58fa3cc9f783,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726693584268643455,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e110aeb3-e9bc-4bb9-9b49-5579558bdda2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b73a0ee39e7552f3324ca36518989909546cebd01fce34a1c7841ec6b3c2f893,PodSandboxId:f90a1da129fd7a1e4b6da8d312648f49d551727ef4e76ef728e6db438ccb383c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1726693564769582699,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fd5604f9-f8ae-4012-884d-ff45e1238741,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76b9e08a213468abdd23d7b3998b55d6544e2fbae9205ec6076ef0cd7a80e7be,PodSandboxId:e600aef7eba4f5133343b5df0f6a05822565aae3915bfdf4ba671ae1aaa1579b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726693561167110046,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-dgnw2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 085d5a98-0a61-4678-830f-384780a0d7ef,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38c14df0554157969a97390590d6e9c46765fd6fc3e286f7d2e3094838e0aff5,PodSandboxId:dea5ae06387e7e7335fef18b98fbd9ecf26aa8d0250ef3d89bce58fa3cc9f783,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726693553582994768,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e
110aeb3-e9bc-4bb9-9b49-5579558bdda2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0257280a0d21d52e2a996c2f717880bdb9d9f6fe90a4b82cc62222c941ba7d84,PodSandboxId:ef545994f99627f1b7c7d30e652e55e8b707a237ba033526c962620e35c17cb2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726693553447029115,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hx25w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a26512ff-f695-4452-8974-5774792571
60,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c372970fdf265433e1668377d0214de6608cb39ebfd706cfad9624cba2f9b481,PodSandboxId:3db793cbb33d1fd20a92be6ea777701bea805eed68a056d674fc2f7a4e1f2e5a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726693548794633396,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-331658,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78603579e4c5ca6d26ff1e0ada5894ef,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a913074a00723bc43f97ba625ebbdfded7dca1ce664ace9a13173adb96d2068f,PodSandboxId:a2d6e267a498db1f1beb5218c1d4798baad0571bf780c72bac56f8b49195a86a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726693548758445691,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-331658,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 576bb5b538781d67529bddb3d38e8264,},Annotations:map[string]string{io.kubernetes.contain
er.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:785dc83056153e5eeb22216ac7520f529b71f1b3ae42e9540a5c8930f1fe62c2,PodSandboxId:8bbf25a4d4a9502d446d45a80fb1275ad0ceb9e7985d4141e026cbb4df60f821,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726693548745783870,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-331658,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d87e37c96485dfa050ac6b78af8e1ed7,},Annotations:map[string]string{io.kube
rnetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a70652dce4d80c6fac020d63cff7054a835d183ac4b910157a5bf4eb8cabcaa1,PodSandboxId:1d0624802e3af5e37d5d5a733589ffa76ab0d6e414e88f5bd522eef002ece657,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726693548705226555,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-331658,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5913cf0e3e072753e49007da9d062a7,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=73603510-39cc-42d5-bbe0-4a81ab8438e4 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b44d6f4b44928       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       2                   dea5ae06387e7       storage-provisioner
	b73a0ee39e755       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   f90a1da129fd7       busybox
	76b9e08a21346       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      13 minutes ago      Running             coredns                   1                   e600aef7eba4f       coredns-7c65d6cfc9-dgnw2
	38c14df055415       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       1                   dea5ae06387e7       storage-provisioner
	0257280a0d21d       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      13 minutes ago      Running             kube-proxy                1                   ef545994f9962       kube-proxy-hx25w
	c372970fdf265       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      13 minutes ago      Running             kube-scheduler            1                   3db793cbb33d1       kube-scheduler-no-preload-331658
	a913074a00723       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      13 minutes ago      Running             etcd                      1                   a2d6e267a498d       etcd-no-preload-331658
	785dc83056153       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      13 minutes ago      Running             kube-controller-manager   1                   8bbf25a4d4a95       kube-controller-manager-no-preload-331658
	a70652dce4d80       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      13 minutes ago      Running             kube-apiserver            1                   1d0624802e3af       kube-apiserver-no-preload-331658
	
	
	==> coredns [76b9e08a213468abdd23d7b3998b55d6544e2fbae9205ec6076ef0cd7a80e7be] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:41940 - 377 "HINFO IN 8387474681266792745.2216001485904418167. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.018101231s
	
	
	==> describe nodes <==
	Name:               no-preload-331658
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-331658
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=85073601a832bd4bbda5d11fa91feafff6ec6b91
	                    minikube.k8s.io/name=no-preload-331658
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_18T20_56_32_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 18 Sep 2024 20:56:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-331658
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 18 Sep 2024 21:19:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 18 Sep 2024 21:16:34 +0000   Wed, 18 Sep 2024 20:56:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 18 Sep 2024 21:16:34 +0000   Wed, 18 Sep 2024 20:56:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 18 Sep 2024 21:16:34 +0000   Wed, 18 Sep 2024 20:56:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 18 Sep 2024 21:16:34 +0000   Wed, 18 Sep 2024 21:06:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.31
	  Hostname:    no-preload-331658
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a80780b722fd4c839ca3d1a0c9a7d0dd
	  System UUID:                a80780b7-22fd-4c83-9ca3-d1a0c9a7d0dd
	  Boot ID:                    58db0881-f0c7-4360-bff4-2e0e33a19d28
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 coredns-7c65d6cfc9-dgnw2                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     22m
	  kube-system                 etcd-no-preload-331658                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         22m
	  kube-system                 kube-apiserver-no-preload-331658             250m (12%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-controller-manager-no-preload-331658    200m (10%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-proxy-hx25w                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-scheduler-no-preload-331658             100m (5%)     0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 metrics-server-6867b74b74-n27vc              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         22m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 22m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  Starting                 22m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  22m (x8 over 22m)  kubelet          Node no-preload-331658 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m (x8 over 22m)  kubelet          Node no-preload-331658 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22m (x7 over 22m)  kubelet          Node no-preload-331658 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    22m                kubelet          Node no-preload-331658 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  22m                kubelet          Node no-preload-331658 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     22m                kubelet          Node no-preload-331658 status is now: NodeHasSufficientPID
	  Normal  Starting                 22m                kubelet          Starting kubelet.
	  Normal  NodeReady                22m                kubelet          Node no-preload-331658 status is now: NodeReady
	  Normal  RegisteredNode           22m                node-controller  Node no-preload-331658 event: Registered Node no-preload-331658 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node no-preload-331658 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node no-preload-331658 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node no-preload-331658 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node no-preload-331658 event: Registered Node no-preload-331658 in Controller
	
	
	==> dmesg <==
	[Sep18 21:05] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.055782] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042065] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.046605] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.030734] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.587582] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.714844] systemd-fstab-generator[629]: Ignoring "noauto" option for root device
	[  +0.064441] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.070450] systemd-fstab-generator[641]: Ignoring "noauto" option for root device
	[  +0.179272] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[  +0.142864] systemd-fstab-generator[667]: Ignoring "noauto" option for root device
	[  +0.304031] systemd-fstab-generator[697]: Ignoring "noauto" option for root device
	[ +15.194098] systemd-fstab-generator[1234]: Ignoring "noauto" option for root device
	[  +0.061533] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.129220] systemd-fstab-generator[1355]: Ignoring "noauto" option for root device
	[  +3.402337] kauditd_printk_skb: 97 callbacks suppressed
	[  +4.199898] systemd-fstab-generator[1981]: Ignoring "noauto" option for root device
	[  +2.741517] kauditd_printk_skb: 61 callbacks suppressed
	[Sep18 21:06] kauditd_printk_skb: 41 callbacks suppressed
	
	
	==> etcd [a913074a00723bc43f97ba625ebbdfded7dca1ce664ace9a13173adb96d2068f] <==
	{"level":"info","ts":"2024-09-18T21:05:49.244728Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-18T21:05:49.252454Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-18T21:05:49.252712Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"b122709e0f96166a","initial-advertise-peer-urls":["https://192.168.61.31:2380"],"listen-peer-urls":["https://192.168.61.31:2380"],"advertise-client-urls":["https://192.168.61.31:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.31:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-18T21:05:49.252763Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-18T21:05:49.252926Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.61.31:2380"}
	{"level":"info","ts":"2024-09-18T21:05:49.252951Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.61.31:2380"}
	{"level":"info","ts":"2024-09-18T21:05:50.961273Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b122709e0f96166a is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-18T21:05:50.961428Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b122709e0f96166a became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-18T21:05:50.961513Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b122709e0f96166a received MsgPreVoteResp from b122709e0f96166a at term 2"}
	{"level":"info","ts":"2024-09-18T21:05:50.961550Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b122709e0f96166a became candidate at term 3"}
	{"level":"info","ts":"2024-09-18T21:05:50.961574Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b122709e0f96166a received MsgVoteResp from b122709e0f96166a at term 3"}
	{"level":"info","ts":"2024-09-18T21:05:50.961621Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b122709e0f96166a became leader at term 3"}
	{"level":"info","ts":"2024-09-18T21:05:50.961647Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b122709e0f96166a elected leader b122709e0f96166a at term 3"}
	{"level":"info","ts":"2024-09-18T21:05:51.004193Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-18T21:05:51.004353Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"b122709e0f96166a","local-member-attributes":"{Name:no-preload-331658 ClientURLs:[https://192.168.61.31:2379]}","request-path":"/0/members/b122709e0f96166a/attributes","cluster-id":"29796e4c48d338ea","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-18T21:05:51.004767Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-18T21:05:51.004964Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-18T21:05:51.004986Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-18T21:05:51.005679Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-18T21:05:51.006553Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.31:2379"}
	{"level":"info","ts":"2024-09-18T21:05:51.007739Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-18T21:05:51.009053Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-18T21:15:51.037345Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":865}
	{"level":"info","ts":"2024-09-18T21:15:51.048647Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":865,"took":"10.379884ms","hash":2940446556,"current-db-size-bytes":2871296,"current-db-size":"2.9 MB","current-db-size-in-use-bytes":2871296,"current-db-size-in-use":"2.9 MB"}
	{"level":"info","ts":"2024-09-18T21:15:51.048748Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2940446556,"revision":865,"compact-revision":-1}
	
	
	==> kernel <==
	 21:19:17 up 14 min,  0 users,  load average: 0.30, 0.12, 0.07
	Linux no-preload-331658 5.10.207 #1 SMP Mon Sep 16 15:00:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [a70652dce4d80c6fac020d63cff7054a835d183ac4b910157a5bf4eb8cabcaa1] <==
	E0918 21:15:53.310230       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0918 21:15:53.310338       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0918 21:15:53.311584       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0918 21:15:53.311699       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0918 21:16:53.312760       1 handler_proxy.go:99] no RequestInfo found in the context
	E0918 21:16:53.312904       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0918 21:16:53.313006       1 handler_proxy.go:99] no RequestInfo found in the context
	E0918 21:16:53.313064       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0918 21:16:53.314082       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0918 21:16:53.314177       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0918 21:18:53.314575       1 handler_proxy.go:99] no RequestInfo found in the context
	W0918 21:18:53.314588       1 handler_proxy.go:99] no RequestInfo found in the context
	E0918 21:18:53.315059       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0918 21:18:53.315153       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0918 21:18:53.316304       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0918 21:18:53.316364       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [785dc83056153e5eeb22216ac7520f529b71f1b3ae42e9540a5c8930f1fe62c2] <==
	E0918 21:13:55.839482       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0918 21:13:56.410332       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0918 21:14:25.844915       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0918 21:14:26.417222       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0918 21:14:55.853219       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0918 21:14:56.427309       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0918 21:15:25.860719       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0918 21:15:26.438307       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0918 21:15:55.867414       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0918 21:15:56.445784       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0918 21:16:25.873808       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0918 21:16:26.452873       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0918 21:16:34.095555       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-331658"
	E0918 21:16:55.881967       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0918 21:16:56.461666       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0918 21:17:07.105400       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="306.049µs"
	I0918 21:17:20.111101       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="121.514µs"
	E0918 21:17:25.888972       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0918 21:17:26.469928       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0918 21:17:55.895622       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0918 21:17:56.477814       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0918 21:18:25.902370       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0918 21:18:26.486233       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0918 21:18:55.909298       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0918 21:18:56.495478       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [0257280a0d21d52e2a996c2f717880bdb9d9f6fe90a4b82cc62222c941ba7d84] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0918 21:05:53.927489       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0918 21:05:53.957651       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.61.31"]
	E0918 21:05:53.957782       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0918 21:05:54.060813       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0918 21:05:54.060858       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0918 21:05:54.060884       1 server_linux.go:169] "Using iptables Proxier"
	I0918 21:05:54.069098       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0918 21:05:54.070239       1 server.go:483] "Version info" version="v1.31.1"
	I0918 21:05:54.070269       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0918 21:05:54.073599       1 config.go:199] "Starting service config controller"
	I0918 21:05:54.074021       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0918 21:05:54.074224       1 config.go:105] "Starting endpoint slice config controller"
	I0918 21:05:54.074257       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0918 21:05:54.075336       1 config.go:328] "Starting node config controller"
	I0918 21:05:54.075378       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0918 21:05:54.175400       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0918 21:05:54.175459       1 shared_informer.go:320] Caches are synced for node config
	I0918 21:05:54.175470       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [c372970fdf265433e1668377d0214de6608cb39ebfd706cfad9624cba2f9b481] <==
	I0918 21:05:49.944448       1 serving.go:386] Generated self-signed cert in-memory
	W0918 21:05:52.253901       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0918 21:05:52.253988       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0918 21:05:52.253999       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0918 21:05:52.254005       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0918 21:05:52.319956       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0918 21:05:52.320041       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0918 21:05:52.324445       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0918 21:05:52.324480       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0918 21:05:52.324867       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0918 21:05:52.324954       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0918 21:05:52.425474       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 18 21:18:08 no-preload-331658 kubelet[1362]: E0918 21:18:08.254275    1362 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726694288253975972,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 21:18:08 no-preload-331658 kubelet[1362]: E0918 21:18:08.254330    1362 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726694288253975972,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 21:18:11 no-preload-331658 kubelet[1362]: E0918 21:18:11.090082    1362 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-n27vc" podUID="b1de76ec-8987-49ce-ae66-eedda2705cde"
	Sep 18 21:18:18 no-preload-331658 kubelet[1362]: E0918 21:18:18.255464    1362 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726694298255063016,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 21:18:18 no-preload-331658 kubelet[1362]: E0918 21:18:18.255746    1362 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726694298255063016,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 21:18:24 no-preload-331658 kubelet[1362]: E0918 21:18:24.091996    1362 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-n27vc" podUID="b1de76ec-8987-49ce-ae66-eedda2705cde"
	Sep 18 21:18:28 no-preload-331658 kubelet[1362]: E0918 21:18:28.258994    1362 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726694308258710258,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 21:18:28 no-preload-331658 kubelet[1362]: E0918 21:18:28.259329    1362 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726694308258710258,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 21:18:36 no-preload-331658 kubelet[1362]: E0918 21:18:36.090457    1362 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-n27vc" podUID="b1de76ec-8987-49ce-ae66-eedda2705cde"
	Sep 18 21:18:38 no-preload-331658 kubelet[1362]: E0918 21:18:38.260882    1362 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726694318260548523,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 21:18:38 no-preload-331658 kubelet[1362]: E0918 21:18:38.261364    1362 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726694318260548523,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 21:18:48 no-preload-331658 kubelet[1362]: E0918 21:18:48.090595    1362 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-n27vc" podUID="b1de76ec-8987-49ce-ae66-eedda2705cde"
	Sep 18 21:18:48 no-preload-331658 kubelet[1362]: E0918 21:18:48.107791    1362 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 18 21:18:48 no-preload-331658 kubelet[1362]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 18 21:18:48 no-preload-331658 kubelet[1362]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 18 21:18:48 no-preload-331658 kubelet[1362]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 18 21:18:48 no-preload-331658 kubelet[1362]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 18 21:18:48 no-preload-331658 kubelet[1362]: E0918 21:18:48.263306    1362 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726694328262916228,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 21:18:48 no-preload-331658 kubelet[1362]: E0918 21:18:48.263336    1362 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726694328262916228,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 21:18:58 no-preload-331658 kubelet[1362]: E0918 21:18:58.264946    1362 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726694338264338952,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 21:18:58 no-preload-331658 kubelet[1362]: E0918 21:18:58.265381    1362 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726694338264338952,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 21:18:59 no-preload-331658 kubelet[1362]: E0918 21:18:59.090795    1362 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-n27vc" podUID="b1de76ec-8987-49ce-ae66-eedda2705cde"
	Sep 18 21:19:08 no-preload-331658 kubelet[1362]: E0918 21:19:08.268391    1362 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726694348266981380,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 21:19:08 no-preload-331658 kubelet[1362]: E0918 21:19:08.270285    1362 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726694348266981380,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 21:19:11 no-preload-331658 kubelet[1362]: E0918 21:19:11.089350    1362 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-n27vc" podUID="b1de76ec-8987-49ce-ae66-eedda2705cde"
	
	
	==> storage-provisioner [38c14df0554157969a97390590d6e9c46765fd6fc3e286f7d2e3094838e0aff5] <==
	I0918 21:05:53.769456       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0918 21:06:23.777330       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [b44d6f4b44928aa498c4375f783c165714b4a5959a1fa709712ad39a412d6d9f] <==
	I0918 21:06:24.350939       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0918 21:06:24.362842       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0918 21:06:24.363212       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0918 21:06:41.764415       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0918 21:06:41.764833       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-331658_73254c79-8930-42ef-942a-b7efbf5cffb6!
	I0918 21:06:41.766195       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"46fb55f8-dea6-41d8-baf3-32c81977d123", APIVersion:"v1", ResourceVersion:"648", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-331658_73254c79-8930-42ef-942a-b7efbf5cffb6 became leader
	I0918 21:06:41.866480       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-331658_73254c79-8930-42ef-942a-b7efbf5cffb6!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-331658 -n no-preload-331658
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-331658 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-n27vc
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-331658 describe pod metrics-server-6867b74b74-n27vc
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-331658 describe pod metrics-server-6867b74b74-n27vc: exit status 1 (64.194188ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-n27vc" not found

** /stderr **
helpers_test.go:279: kubectl --context no-preload-331658 describe pod metrics-server-6867b74b74-n27vc: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.13s)

x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.52s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
[the identical warning above was emitted 103 times in a row while the API server at 192.168.72.53:8443 refused connections]
E0918 21:15:01.286088   14878 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/functional-790989/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
[the identical warning above was emitted a further 71 times in a row; the API server remained unreachable]
E0918 21:16:12.176104   14878 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/addons-815929/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
(last message repeated 86 times)
E0918 21:20:01.286433   14878 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/functional-790989/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
(last message repeated 70 times)
E0918 21:21:12.175270   14878 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/addons-815929/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: [the identical "connection refused" pod-list warning repeated for the remainder of the 9m0s wait]
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-740194 -n old-k8s-version-740194
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-740194 -n old-k8s-version-740194: exit status 2 (232.032151ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-740194" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
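A quick manual re-check of the same condition is sketched below; it assumes the kubectl context carries the profile name old-k8s-version-740194 (which minikube normally sets) and that the apiserver is reachable, which the status output above shows it was not at this point:

	out/minikube-linux-amd64 status -p old-k8s-version-740194
	kubectl --context old-k8s-version-740194 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard

With the apiserver reported as Stopped, the kubectl call fails with the same "connection refused" seen in the warnings above, which is why the post-mortem below falls back to collecting minikube logs instead of running kubectl commands.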
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-740194 -n old-k8s-version-740194
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-740194 -n old-k8s-version-740194: exit status 2 (231.240291ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-740194 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-740194 logs -n 25: (1.68009616s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p cert-options-347585                                 | cert-options-347585          | jenkins | v1.34.0 | 18 Sep 24 20:55 UTC | 18 Sep 24 20:55 UTC |
	| start   | -p no-preload-331658                                   | no-preload-331658            | jenkins | v1.34.0 | 18 Sep 24 20:55 UTC | 18 Sep 24 20:56 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-878094                           | kubernetes-upgrade-878094    | jenkins | v1.34.0 | 18 Sep 24 20:55 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-878094                           | kubernetes-upgrade-878094    | jenkins | v1.34.0 | 18 Sep 24 20:55 UTC | 18 Sep 24 20:56 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p pause-543700                                        | pause-543700                 | jenkins | v1.34.0 | 18 Sep 24 20:56 UTC | 18 Sep 24 20:56 UTC |
	| start   | -p embed-certs-255556                                  | embed-certs-255556           | jenkins | v1.34.0 | 18 Sep 24 20:56 UTC | 18 Sep 24 20:57 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-878094                           | kubernetes-upgrade-878094    | jenkins | v1.34.0 | 18 Sep 24 20:56 UTC | 18 Sep 24 20:56 UTC |
	| delete  | -p                                                     | disable-driver-mounts-335923 | jenkins | v1.34.0 | 18 Sep 24 20:56 UTC | 18 Sep 24 20:56 UTC |
	|         | disable-driver-mounts-335923                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-828868 | jenkins | v1.34.0 | 18 Sep 24 20:56 UTC | 18 Sep 24 20:57 UTC |
	|         | default-k8s-diff-port-828868                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-331658             | no-preload-331658            | jenkins | v1.34.0 | 18 Sep 24 20:56 UTC | 18 Sep 24 20:57 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-331658                                   | no-preload-331658            | jenkins | v1.34.0 | 18 Sep 24 20:57 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-828868  | default-k8s-diff-port-828868 | jenkins | v1.34.0 | 18 Sep 24 20:57 UTC | 18 Sep 24 20:57 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-828868 | jenkins | v1.34.0 | 18 Sep 24 20:57 UTC |                     |
	|         | default-k8s-diff-port-828868                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-255556            | embed-certs-255556           | jenkins | v1.34.0 | 18 Sep 24 20:57 UTC | 18 Sep 24 20:57 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-255556                                  | embed-certs-255556           | jenkins | v1.34.0 | 18 Sep 24 20:57 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-740194        | old-k8s-version-740194       | jenkins | v1.34.0 | 18 Sep 24 20:59 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-331658                  | no-preload-331658            | jenkins | v1.34.0 | 18 Sep 24 20:59 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-331658                                   | no-preload-331658            | jenkins | v1.34.0 | 18 Sep 24 20:59 UTC | 18 Sep 24 21:10 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-828868       | default-k8s-diff-port-828868 | jenkins | v1.34.0 | 18 Sep 24 21:00 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-255556                 | embed-certs-255556           | jenkins | v1.34.0 | 18 Sep 24 21:00 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-828868 | jenkins | v1.34.0 | 18 Sep 24 21:00 UTC | 18 Sep 24 21:09 UTC |
	|         | default-k8s-diff-port-828868                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-255556                                  | embed-certs-255556           | jenkins | v1.34.0 | 18 Sep 24 21:00 UTC | 18 Sep 24 21:10 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-740194                              | old-k8s-version-740194       | jenkins | v1.34.0 | 18 Sep 24 21:00 UTC | 18 Sep 24 21:00 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-740194             | old-k8s-version-740194       | jenkins | v1.34.0 | 18 Sep 24 21:00 UTC | 18 Sep 24 21:00 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-740194                              | old-k8s-version-740194       | jenkins | v1.34.0 | 18 Sep 24 21:00 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/18 21:00:59
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0918 21:00:59.486726   62061 out.go:345] Setting OutFile to fd 1 ...
	I0918 21:00:59.486839   62061 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 21:00:59.486848   62061 out.go:358] Setting ErrFile to fd 2...
	I0918 21:00:59.486854   62061 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 21:00:59.487063   62061 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19667-7671/.minikube/bin
	I0918 21:00:59.487631   62061 out.go:352] Setting JSON to false
	I0918 21:00:59.488570   62061 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6203,"bootTime":1726687056,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0918 21:00:59.488665   62061 start.go:139] virtualization: kvm guest
	I0918 21:00:59.490927   62061 out.go:177] * [old-k8s-version-740194] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0918 21:00:59.492124   62061 out.go:177]   - MINIKUBE_LOCATION=19667
	I0918 21:00:59.492192   62061 notify.go:220] Checking for updates...
	I0918 21:00:59.494436   62061 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 21:00:59.495887   62061 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19667-7671/kubeconfig
	I0918 21:00:59.496929   62061 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19667-7671/.minikube
	I0918 21:00:59.498172   62061 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0918 21:00:59.499384   62061 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0918 21:00:59.500900   62061 config.go:182] Loaded profile config "old-k8s-version-740194": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0918 21:00:59.501295   62061 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:00:59.501375   62061 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:00:59.516448   62061 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45397
	I0918 21:00:59.516855   62061 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:00:59.517347   62061 main.go:141] libmachine: Using API Version  1
	I0918 21:00:59.517362   62061 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:00:59.517673   62061 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:00:59.517848   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .DriverName
	I0918 21:00:59.519750   62061 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0918 21:00:59.521126   62061 driver.go:394] Setting default libvirt URI to qemu:///system
	I0918 21:00:59.521433   62061 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:00:59.521468   62061 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:00:59.537580   62061 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39433
	I0918 21:00:59.537973   62061 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:00:59.538452   62061 main.go:141] libmachine: Using API Version  1
	I0918 21:00:59.538471   62061 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:00:59.538852   62061 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:00:59.539083   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .DriverName
	I0918 21:00:59.575494   62061 out.go:177] * Using the kvm2 driver based on existing profile
	I0918 21:00:59.576555   62061 start.go:297] selected driver: kvm2
	I0918 21:00:59.576572   62061 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-740194 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-740194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.53 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280
h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 21:00:59.576682   62061 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 21:00:59.577339   62061 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 21:00:59.577400   62061 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19667-7671/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0918 21:00:59.593287   62061 install.go:137] /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I0918 21:00:59.593810   62061 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0918 21:00:59.593855   62061 cni.go:84] Creating CNI manager for ""
	I0918 21:00:59.593911   62061 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0918 21:00:59.593972   62061 start.go:340] cluster config:
	{Name:old-k8s-version-740194 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-740194 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.53 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p20
00.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 21:00:59.594104   62061 iso.go:125] acquiring lock: {Name:mkbcf4d84f3091567a39802cd5e773cb10b8c545 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 21:00:59.595936   62061 out.go:177] * Starting "old-k8s-version-740194" primary control-plane node in "old-k8s-version-740194" cluster
	I0918 21:00:59.932315   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:00:59.597210   62061 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0918 21:00:59.597243   62061 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0918 21:00:59.597250   62061 cache.go:56] Caching tarball of preloaded images
	I0918 21:00:59.597325   62061 preload.go:172] Found /home/jenkins/minikube-integration/19667-7671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0918 21:00:59.597338   62061 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0918 21:00:59.597439   62061 profile.go:143] Saving config to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/old-k8s-version-740194/config.json ...
	I0918 21:00:59.597658   62061 start.go:360] acquireMachinesLock for old-k8s-version-740194: {Name:mk1b0d68a0b7d9d4bc8204d5d81cb9eb77222526 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 21:01:03.004316   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	[the same "no route to host" dial error from process 61273 repeats every few seconds from 21:01:09 through 21:04:03]
	I0918 21:04:06.044373   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:04:09.048392   61659 start.go:364] duration metric: took 3m56.738592157s to acquireMachinesLock for "default-k8s-diff-port-828868"
	I0918 21:04:09.048461   61659 start.go:96] Skipping create...Using existing machine configuration
	I0918 21:04:09.048469   61659 fix.go:54] fixHost starting: 
	I0918 21:04:09.048788   61659 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:04:09.048827   61659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:04:09.064428   61659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41181
	I0918 21:04:09.064856   61659 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:04:09.065395   61659 main.go:141] libmachine: Using API Version  1
	I0918 21:04:09.065421   61659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:04:09.065751   61659 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:04:09.065961   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .DriverName
	I0918 21:04:09.066108   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetState
	I0918 21:04:09.067874   61659 fix.go:112] recreateIfNeeded on default-k8s-diff-port-828868: state=Stopped err=<nil>
	I0918 21:04:09.067915   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .DriverName
	W0918 21:04:09.068096   61659 fix.go:138] unexpected machine state, will restart: <nil>
	I0918 21:04:09.069985   61659 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-828868" ...
	I0918 21:04:09.045944   61273 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0918 21:04:09.045978   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetMachineName
	I0918 21:04:09.046314   61273 buildroot.go:166] provisioning hostname "no-preload-331658"
	I0918 21:04:09.046350   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetMachineName
	I0918 21:04:09.046602   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHHostname
	I0918 21:04:09.048253   61273 machine.go:96] duration metric: took 4m37.423609251s to provisionDockerMachine
	I0918 21:04:09.048293   61273 fix.go:56] duration metric: took 4m37.446130108s for fixHost
	I0918 21:04:09.048301   61273 start.go:83] releasing machines lock for "no-preload-331658", held for 4m37.44629145s
	W0918 21:04:09.048329   61273 start.go:714] error starting host: provision: host is not running
	W0918 21:04:09.048451   61273 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0918 21:04:09.048465   61273 start.go:729] Will try again in 5 seconds ...
	I0918 21:04:09.071488   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .Start
	I0918 21:04:09.071699   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Ensuring networks are active...
	I0918 21:04:09.072473   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Ensuring network default is active
	I0918 21:04:09.072816   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Ensuring network mk-default-k8s-diff-port-828868 is active
	I0918 21:04:09.073204   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Getting domain xml...
	I0918 21:04:09.073977   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Creating domain...
	I0918 21:04:10.321507   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Waiting to get IP...
	I0918 21:04:10.322390   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:10.322863   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | unable to find current IP address of domain default-k8s-diff-port-828868 in network mk-default-k8s-diff-port-828868
	I0918 21:04:10.322907   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | I0918 21:04:10.322821   62722 retry.go:31] will retry after 272.805092ms: waiting for machine to come up
	I0918 21:04:10.597434   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:10.597861   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | unable to find current IP address of domain default-k8s-diff-port-828868 in network mk-default-k8s-diff-port-828868
	I0918 21:04:10.597888   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | I0918 21:04:10.597825   62722 retry.go:31] will retry after 302.631333ms: waiting for machine to come up
	I0918 21:04:10.902544   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:10.903002   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | unable to find current IP address of domain default-k8s-diff-port-828868 in network mk-default-k8s-diff-port-828868
	I0918 21:04:10.903030   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | I0918 21:04:10.902943   62722 retry.go:31] will retry after 325.769954ms: waiting for machine to come up
	I0918 21:04:11.230182   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:11.230602   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | unable to find current IP address of domain default-k8s-diff-port-828868 in network mk-default-k8s-diff-port-828868
	I0918 21:04:11.230652   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | I0918 21:04:11.230557   62722 retry.go:31] will retry after 396.395153ms: waiting for machine to come up
	I0918 21:04:11.628135   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:11.628520   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | unable to find current IP address of domain default-k8s-diff-port-828868 in network mk-default-k8s-diff-port-828868
	I0918 21:04:11.628544   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | I0918 21:04:11.628495   62722 retry.go:31] will retry after 578.74167ms: waiting for machine to come up
	I0918 21:04:14.050009   61273 start.go:360] acquireMachinesLock for no-preload-331658: {Name:mk1b0d68a0b7d9d4bc8204d5d81cb9eb77222526 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 21:04:12.209844   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:12.209911   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | unable to find current IP address of domain default-k8s-diff-port-828868 in network mk-default-k8s-diff-port-828868
	I0918 21:04:12.209937   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | I0918 21:04:12.209841   62722 retry.go:31] will retry after 779.0434ms: waiting for machine to come up
	I0918 21:04:12.990688   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:12.991106   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | unable to find current IP address of domain default-k8s-diff-port-828868 in network mk-default-k8s-diff-port-828868
	I0918 21:04:12.991141   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | I0918 21:04:12.991045   62722 retry.go:31] will retry after 772.165771ms: waiting for machine to come up
	I0918 21:04:13.764946   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:13.765460   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | unable to find current IP address of domain default-k8s-diff-port-828868 in network mk-default-k8s-diff-port-828868
	I0918 21:04:13.765493   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | I0918 21:04:13.765404   62722 retry.go:31] will retry after 1.017078101s: waiting for machine to come up
	I0918 21:04:14.783920   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:14.784320   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | unable to find current IP address of domain default-k8s-diff-port-828868 in network mk-default-k8s-diff-port-828868
	I0918 21:04:14.784348   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | I0918 21:04:14.784276   62722 retry.go:31] will retry after 1.775982574s: waiting for machine to come up
	I0918 21:04:16.562037   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:16.562413   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | unable to find current IP address of domain default-k8s-diff-port-828868 in network mk-default-k8s-diff-port-828868
	I0918 21:04:16.562451   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | I0918 21:04:16.562369   62722 retry.go:31] will retry after 1.609664062s: waiting for machine to come up
	I0918 21:04:18.174149   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:18.174759   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | unable to find current IP address of domain default-k8s-diff-port-828868 in network mk-default-k8s-diff-port-828868
	I0918 21:04:18.174788   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | I0918 21:04:18.174710   62722 retry.go:31] will retry after 2.26359536s: waiting for machine to come up
	I0918 21:04:20.440599   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:20.441000   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | unable to find current IP address of domain default-k8s-diff-port-828868 in network mk-default-k8s-diff-port-828868
	I0918 21:04:20.441027   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | I0918 21:04:20.440955   62722 retry.go:31] will retry after 3.387446315s: waiting for machine to come up
	I0918 21:04:23.832623   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:23.833134   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | unable to find current IP address of domain default-k8s-diff-port-828868 in network mk-default-k8s-diff-port-828868
	I0918 21:04:23.833162   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | I0918 21:04:23.833097   62722 retry.go:31] will retry after 3.312983418s: waiting for machine to come up
	I0918 21:04:27.150091   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:27.150658   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Found IP for machine: 192.168.50.109
	I0918 21:04:27.150682   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has current primary IP address 192.168.50.109 and MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:27.150703   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Reserving static IP address...
	I0918 21:04:27.151248   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-828868", mac: "52:54:00:c0:39:06", ip: "192.168.50.109"} in network mk-default-k8s-diff-port-828868: {Iface:virbr4 ExpiryTime:2024-09-18 22:04:19 +0000 UTC Type:0 Mac:52:54:00:c0:39:06 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:default-k8s-diff-port-828868 Clientid:01:52:54:00:c0:39:06}
	I0918 21:04:27.151276   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Reserved static IP address: 192.168.50.109
	I0918 21:04:27.151297   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | skip adding static IP to network mk-default-k8s-diff-port-828868 - found existing host DHCP lease matching {name: "default-k8s-diff-port-828868", mac: "52:54:00:c0:39:06", ip: "192.168.50.109"}
	I0918 21:04:27.151317   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | Getting to WaitForSSH function...
	I0918 21:04:27.151330   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Waiting for SSH to be available...
	I0918 21:04:27.153633   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:27.154006   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:39:06", ip: ""} in network mk-default-k8s-diff-port-828868: {Iface:virbr4 ExpiryTime:2024-09-18 22:04:19 +0000 UTC Type:0 Mac:52:54:00:c0:39:06 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:default-k8s-diff-port-828868 Clientid:01:52:54:00:c0:39:06}
	I0918 21:04:27.154036   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined IP address 192.168.50.109 and MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:27.154127   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | Using SSH client type: external
	I0918 21:04:27.154153   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | Using SSH private key: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/default-k8s-diff-port-828868/id_rsa (-rw-------)
	I0918 21:04:27.154196   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.109 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19667-7671/.minikube/machines/default-k8s-diff-port-828868/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0918 21:04:27.154211   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | About to run SSH command:
	I0918 21:04:27.154225   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | exit 0
	I0918 21:04:28.308967   61740 start.go:364] duration metric: took 4m9.856658805s to acquireMachinesLock for "embed-certs-255556"
	I0918 21:04:28.309052   61740 start.go:96] Skipping create...Using existing machine configuration
	I0918 21:04:28.309066   61740 fix.go:54] fixHost starting: 
	I0918 21:04:28.309548   61740 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:04:28.309609   61740 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:04:28.326972   61740 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41115
	I0918 21:04:28.327375   61740 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:04:28.327941   61740 main.go:141] libmachine: Using API Version  1
	I0918 21:04:28.327974   61740 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:04:28.328300   61740 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:04:28.328538   61740 main.go:141] libmachine: (embed-certs-255556) Calling .DriverName
	I0918 21:04:28.328676   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetState
	I0918 21:04:28.330265   61740 fix.go:112] recreateIfNeeded on embed-certs-255556: state=Stopped err=<nil>
	I0918 21:04:28.330312   61740 main.go:141] libmachine: (embed-certs-255556) Calling .DriverName
	W0918 21:04:28.330482   61740 fix.go:138] unexpected machine state, will restart: <nil>
	I0918 21:04:28.332680   61740 out.go:177] * Restarting existing kvm2 VM for "embed-certs-255556" ...
	I0918 21:04:28.333692   61740 main.go:141] libmachine: (embed-certs-255556) Calling .Start
	I0918 21:04:28.333865   61740 main.go:141] libmachine: (embed-certs-255556) Ensuring networks are active...
	I0918 21:04:28.334536   61740 main.go:141] libmachine: (embed-certs-255556) Ensuring network default is active
	I0918 21:04:28.334987   61740 main.go:141] libmachine: (embed-certs-255556) Ensuring network mk-embed-certs-255556 is active
	I0918 21:04:28.335491   61740 main.go:141] libmachine: (embed-certs-255556) Getting domain xml...
	I0918 21:04:28.336206   61740 main.go:141] libmachine: (embed-certs-255556) Creating domain...
	I0918 21:04:27.280056   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | SSH cmd err, output: <nil>: 
	I0918 21:04:27.280448   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetConfigRaw
	I0918 21:04:27.281097   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetIP
	I0918 21:04:27.283491   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:27.283933   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:39:06", ip: ""} in network mk-default-k8s-diff-port-828868: {Iface:virbr4 ExpiryTime:2024-09-18 22:04:19 +0000 UTC Type:0 Mac:52:54:00:c0:39:06 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:default-k8s-diff-port-828868 Clientid:01:52:54:00:c0:39:06}
	I0918 21:04:27.283968   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined IP address 192.168.50.109 and MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:27.284242   61659 profile.go:143] Saving config to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/default-k8s-diff-port-828868/config.json ...
	I0918 21:04:27.284483   61659 machine.go:93] provisionDockerMachine start ...
	I0918 21:04:27.284527   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .DriverName
	I0918 21:04:27.284740   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHHostname
	I0918 21:04:27.287263   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:27.287640   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:39:06", ip: ""} in network mk-default-k8s-diff-port-828868: {Iface:virbr4 ExpiryTime:2024-09-18 22:04:19 +0000 UTC Type:0 Mac:52:54:00:c0:39:06 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:default-k8s-diff-port-828868 Clientid:01:52:54:00:c0:39:06}
	I0918 21:04:27.287671   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined IP address 192.168.50.109 and MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:27.287831   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHPort
	I0918 21:04:27.288053   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHKeyPath
	I0918 21:04:27.288230   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHKeyPath
	I0918 21:04:27.288348   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHUsername
	I0918 21:04:27.288497   61659 main.go:141] libmachine: Using SSH client type: native
	I0918 21:04:27.288759   61659 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.109 22 <nil> <nil>}
	I0918 21:04:27.288774   61659 main.go:141] libmachine: About to run SSH command:
	hostname
	I0918 21:04:27.396110   61659 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0918 21:04:27.396140   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetMachineName
	I0918 21:04:27.396439   61659 buildroot.go:166] provisioning hostname "default-k8s-diff-port-828868"
	I0918 21:04:27.396472   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetMachineName
	I0918 21:04:27.396655   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHHostname
	I0918 21:04:27.399285   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:27.399629   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:39:06", ip: ""} in network mk-default-k8s-diff-port-828868: {Iface:virbr4 ExpiryTime:2024-09-18 22:04:19 +0000 UTC Type:0 Mac:52:54:00:c0:39:06 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:default-k8s-diff-port-828868 Clientid:01:52:54:00:c0:39:06}
	I0918 21:04:27.399670   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined IP address 192.168.50.109 and MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:27.399746   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHPort
	I0918 21:04:27.399947   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHKeyPath
	I0918 21:04:27.400158   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHKeyPath
	I0918 21:04:27.400295   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHUsername
	I0918 21:04:27.400476   61659 main.go:141] libmachine: Using SSH client type: native
	I0918 21:04:27.400701   61659 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.109 22 <nil> <nil>}
	I0918 21:04:27.400714   61659 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-828868 && echo "default-k8s-diff-port-828868" | sudo tee /etc/hostname
	I0918 21:04:27.518553   61659 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-828868
	
	I0918 21:04:27.518579   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHHostname
	I0918 21:04:27.521274   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:27.521714   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:39:06", ip: ""} in network mk-default-k8s-diff-port-828868: {Iface:virbr4 ExpiryTime:2024-09-18 22:04:19 +0000 UTC Type:0 Mac:52:54:00:c0:39:06 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:default-k8s-diff-port-828868 Clientid:01:52:54:00:c0:39:06}
	I0918 21:04:27.521746   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined IP address 192.168.50.109 and MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:27.521918   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHPort
	I0918 21:04:27.522085   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHKeyPath
	I0918 21:04:27.522298   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHKeyPath
	I0918 21:04:27.522469   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHUsername
	I0918 21:04:27.522689   61659 main.go:141] libmachine: Using SSH client type: native
	I0918 21:04:27.522867   61659 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.109 22 <nil> <nil>}
	I0918 21:04:27.522885   61659 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-828868' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-828868/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-828868' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0918 21:04:27.636264   61659 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0918 21:04:27.636296   61659 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19667-7671/.minikube CaCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19667-7671/.minikube}
	I0918 21:04:27.636325   61659 buildroot.go:174] setting up certificates
	I0918 21:04:27.636335   61659 provision.go:84] configureAuth start
	I0918 21:04:27.636343   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetMachineName
	I0918 21:04:27.636629   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetIP
	I0918 21:04:27.639186   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:27.639622   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:39:06", ip: ""} in network mk-default-k8s-diff-port-828868: {Iface:virbr4 ExpiryTime:2024-09-18 22:04:19 +0000 UTC Type:0 Mac:52:54:00:c0:39:06 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:default-k8s-diff-port-828868 Clientid:01:52:54:00:c0:39:06}
	I0918 21:04:27.639646   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined IP address 192.168.50.109 and MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:27.639858   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHHostname
	I0918 21:04:27.642158   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:27.642421   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:39:06", ip: ""} in network mk-default-k8s-diff-port-828868: {Iface:virbr4 ExpiryTime:2024-09-18 22:04:19 +0000 UTC Type:0 Mac:52:54:00:c0:39:06 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:default-k8s-diff-port-828868 Clientid:01:52:54:00:c0:39:06}
	I0918 21:04:27.642448   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined IP address 192.168.50.109 and MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:27.642626   61659 provision.go:143] copyHostCerts
	I0918 21:04:27.642706   61659 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem, removing ...
	I0918 21:04:27.642869   61659 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem
	I0918 21:04:27.642966   61659 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem (1082 bytes)
	I0918 21:04:27.643099   61659 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem, removing ...
	I0918 21:04:27.643111   61659 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem
	I0918 21:04:27.643150   61659 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem (1123 bytes)
	I0918 21:04:27.643270   61659 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem, removing ...
	I0918 21:04:27.643280   61659 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem
	I0918 21:04:27.643320   61659 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem (1679 bytes)
	I0918 21:04:27.643387   61659 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-828868 san=[127.0.0.1 192.168.50.109 default-k8s-diff-port-828868 localhost minikube]
	I0918 21:04:27.693367   61659 provision.go:177] copyRemoteCerts
	I0918 21:04:27.693426   61659 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0918 21:04:27.693463   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHHostname
	I0918 21:04:27.696331   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:27.696661   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:39:06", ip: ""} in network mk-default-k8s-diff-port-828868: {Iface:virbr4 ExpiryTime:2024-09-18 22:04:19 +0000 UTC Type:0 Mac:52:54:00:c0:39:06 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:default-k8s-diff-port-828868 Clientid:01:52:54:00:c0:39:06}
	I0918 21:04:27.696693   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined IP address 192.168.50.109 and MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:27.696835   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHPort
	I0918 21:04:27.697028   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHKeyPath
	I0918 21:04:27.697212   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHUsername
	I0918 21:04:27.697317   61659 sshutil.go:53] new ssh client: &{IP:192.168.50.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/default-k8s-diff-port-828868/id_rsa Username:docker}
	I0918 21:04:27.777944   61659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0918 21:04:27.801476   61659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0918 21:04:27.825025   61659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0918 21:04:27.848244   61659 provision.go:87] duration metric: took 211.897185ms to configureAuth
	I0918 21:04:27.848274   61659 buildroot.go:189] setting minikube options for container-runtime
	I0918 21:04:27.848434   61659 config.go:182] Loaded profile config "default-k8s-diff-port-828868": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0918 21:04:27.848513   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHHostname
	I0918 21:04:27.851119   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:27.851472   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:39:06", ip: ""} in network mk-default-k8s-diff-port-828868: {Iface:virbr4 ExpiryTime:2024-09-18 22:04:19 +0000 UTC Type:0 Mac:52:54:00:c0:39:06 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:default-k8s-diff-port-828868 Clientid:01:52:54:00:c0:39:06}
	I0918 21:04:27.851509   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined IP address 192.168.50.109 and MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:27.851788   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHPort
	I0918 21:04:27.852007   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHKeyPath
	I0918 21:04:27.852216   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHKeyPath
	I0918 21:04:27.852420   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHUsername
	I0918 21:04:27.852670   61659 main.go:141] libmachine: Using SSH client type: native
	I0918 21:04:27.852852   61659 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.109 22 <nil> <nil>}
	I0918 21:04:27.852870   61659 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0918 21:04:28.072808   61659 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0918 21:04:28.072843   61659 machine.go:96] duration metric: took 788.346091ms to provisionDockerMachine
	I0918 21:04:28.072858   61659 start.go:293] postStartSetup for "default-k8s-diff-port-828868" (driver="kvm2")
	I0918 21:04:28.072874   61659 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0918 21:04:28.072898   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .DriverName
	I0918 21:04:28.073246   61659 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0918 21:04:28.073287   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHHostname
	I0918 21:04:28.075998   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:28.076389   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:39:06", ip: ""} in network mk-default-k8s-diff-port-828868: {Iface:virbr4 ExpiryTime:2024-09-18 22:04:19 +0000 UTC Type:0 Mac:52:54:00:c0:39:06 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:default-k8s-diff-port-828868 Clientid:01:52:54:00:c0:39:06}
	I0918 21:04:28.076416   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined IP address 192.168.50.109 and MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:28.076561   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHPort
	I0918 21:04:28.076780   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHKeyPath
	I0918 21:04:28.076939   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHUsername
	I0918 21:04:28.077063   61659 sshutil.go:53] new ssh client: &{IP:192.168.50.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/default-k8s-diff-port-828868/id_rsa Username:docker}
	I0918 21:04:28.158946   61659 ssh_runner.go:195] Run: cat /etc/os-release
	I0918 21:04:28.163200   61659 info.go:137] Remote host: Buildroot 2023.02.9
	I0918 21:04:28.163231   61659 filesync.go:126] Scanning /home/jenkins/minikube-integration/19667-7671/.minikube/addons for local assets ...
	I0918 21:04:28.163290   61659 filesync.go:126] Scanning /home/jenkins/minikube-integration/19667-7671/.minikube/files for local assets ...
	I0918 21:04:28.163368   61659 filesync.go:149] local asset: /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem -> 148782.pem in /etc/ssl/certs
	I0918 21:04:28.163464   61659 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0918 21:04:28.172987   61659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem --> /etc/ssl/certs/148782.pem (1708 bytes)
	I0918 21:04:28.198647   61659 start.go:296] duration metric: took 125.77566ms for postStartSetup
	I0918 21:04:28.198685   61659 fix.go:56] duration metric: took 19.150217303s for fixHost
	I0918 21:04:28.198704   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHHostname
	I0918 21:04:28.201549   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:28.201904   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:39:06", ip: ""} in network mk-default-k8s-diff-port-828868: {Iface:virbr4 ExpiryTime:2024-09-18 22:04:19 +0000 UTC Type:0 Mac:52:54:00:c0:39:06 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:default-k8s-diff-port-828868 Clientid:01:52:54:00:c0:39:06}
	I0918 21:04:28.201934   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined IP address 192.168.50.109 and MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:28.202093   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHPort
	I0918 21:04:28.202278   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHKeyPath
	I0918 21:04:28.202435   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHKeyPath
	I0918 21:04:28.202588   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHUsername
	I0918 21:04:28.202714   61659 main.go:141] libmachine: Using SSH client type: native
	I0918 21:04:28.202871   61659 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.109 22 <nil> <nil>}
	I0918 21:04:28.202879   61659 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0918 21:04:28.308752   61659 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726693468.285343658
	
	I0918 21:04:28.308778   61659 fix.go:216] guest clock: 1726693468.285343658
	I0918 21:04:28.308786   61659 fix.go:229] Guest: 2024-09-18 21:04:28.285343658 +0000 UTC Remote: 2024-09-18 21:04:28.198688962 +0000 UTC m=+256.035220061 (delta=86.654696ms)
	I0918 21:04:28.308821   61659 fix.go:200] guest clock delta is within tolerance: 86.654696ms
	I0918 21:04:28.308829   61659 start.go:83] releasing machines lock for "default-k8s-diff-port-828868", held for 19.260404228s
	I0918 21:04:28.308857   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .DriverName
	I0918 21:04:28.309175   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetIP
	I0918 21:04:28.312346   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:28.312725   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:39:06", ip: ""} in network mk-default-k8s-diff-port-828868: {Iface:virbr4 ExpiryTime:2024-09-18 22:04:19 +0000 UTC Type:0 Mac:52:54:00:c0:39:06 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:default-k8s-diff-port-828868 Clientid:01:52:54:00:c0:39:06}
	I0918 21:04:28.312753   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined IP address 192.168.50.109 and MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:28.312951   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .DriverName
	I0918 21:04:28.313506   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .DriverName
	I0918 21:04:28.313702   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .DriverName
	I0918 21:04:28.313792   61659 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0918 21:04:28.313849   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHHostname
	I0918 21:04:28.313966   61659 ssh_runner.go:195] Run: cat /version.json
	I0918 21:04:28.314001   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHHostname
	I0918 21:04:28.316698   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:28.316882   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:28.317016   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:39:06", ip: ""} in network mk-default-k8s-diff-port-828868: {Iface:virbr4 ExpiryTime:2024-09-18 22:04:19 +0000 UTC Type:0 Mac:52:54:00:c0:39:06 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:default-k8s-diff-port-828868 Clientid:01:52:54:00:c0:39:06}
	I0918 21:04:28.317038   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined IP address 192.168.50.109 and MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:28.317239   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHPort
	I0918 21:04:28.317357   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:39:06", ip: ""} in network mk-default-k8s-diff-port-828868: {Iface:virbr4 ExpiryTime:2024-09-18 22:04:19 +0000 UTC Type:0 Mac:52:54:00:c0:39:06 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:default-k8s-diff-port-828868 Clientid:01:52:54:00:c0:39:06}
	I0918 21:04:28.317408   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined IP address 192.168.50.109 and MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:28.317410   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHKeyPath
	I0918 21:04:28.317596   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHPort
	I0918 21:04:28.317598   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHUsername
	I0918 21:04:28.317743   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHKeyPath
	I0918 21:04:28.317783   61659 sshutil.go:53] new ssh client: &{IP:192.168.50.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/default-k8s-diff-port-828868/id_rsa Username:docker}
	I0918 21:04:28.317905   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHUsername
	I0918 21:04:28.318060   61659 sshutil.go:53] new ssh client: &{IP:192.168.50.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/default-k8s-diff-port-828868/id_rsa Username:docker}
	I0918 21:04:28.439960   61659 ssh_runner.go:195] Run: systemctl --version
	I0918 21:04:28.446111   61659 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0918 21:04:28.593574   61659 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0918 21:04:28.599542   61659 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0918 21:04:28.599623   61659 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0918 21:04:28.615775   61659 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0918 21:04:28.615802   61659 start.go:495] detecting cgroup driver to use...
	I0918 21:04:28.615965   61659 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0918 21:04:28.636924   61659 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0918 21:04:28.655681   61659 docker.go:217] disabling cri-docker service (if available) ...
	I0918 21:04:28.655775   61659 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0918 21:04:28.670090   61659 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0918 21:04:28.684780   61659 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0918 21:04:28.807355   61659 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0918 21:04:28.941753   61659 docker.go:233] disabling docker service ...
	I0918 21:04:28.941836   61659 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0918 21:04:28.956786   61659 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0918 21:04:28.970301   61659 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0918 21:04:29.119605   61659 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0918 21:04:29.245330   61659 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0918 21:04:29.259626   61659 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0918 21:04:29.278104   61659 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0918 21:04:29.278162   61659 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:04:29.288761   61659 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0918 21:04:29.288837   61659 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:04:29.299631   61659 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:04:29.310244   61659 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:04:29.321220   61659 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0918 21:04:29.332722   61659 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:04:29.343590   61659 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:04:29.366099   61659 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:04:29.381180   61659 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0918 21:04:29.394427   61659 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0918 21:04:29.394494   61659 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0918 21:04:29.410069   61659 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0918 21:04:29.421207   61659 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 21:04:29.543870   61659 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0918 21:04:29.642149   61659 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0918 21:04:29.642205   61659 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0918 21:04:29.647336   61659 start.go:563] Will wait 60s for crictl version
	I0918 21:04:29.647400   61659 ssh_runner.go:195] Run: which crictl
	I0918 21:04:29.651148   61659 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0918 21:04:29.690903   61659 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0918 21:04:29.690992   61659 ssh_runner.go:195] Run: crio --version
	I0918 21:04:29.717176   61659 ssh_runner.go:195] Run: crio --version
	I0918 21:04:29.747416   61659 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0918 21:04:29.748825   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetIP
	I0918 21:04:29.751828   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:29.752238   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:39:06", ip: ""} in network mk-default-k8s-diff-port-828868: {Iface:virbr4 ExpiryTime:2024-09-18 22:04:19 +0000 UTC Type:0 Mac:52:54:00:c0:39:06 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:default-k8s-diff-port-828868 Clientid:01:52:54:00:c0:39:06}
	I0918 21:04:29.752288   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined IP address 192.168.50.109 and MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:29.752533   61659 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0918 21:04:29.756672   61659 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0918 21:04:29.768691   61659 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-828868 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-828868 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.109 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0918 21:04:29.768822   61659 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0918 21:04:29.768867   61659 ssh_runner.go:195] Run: sudo crictl images --output json
	I0918 21:04:29.803885   61659 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0918 21:04:29.803964   61659 ssh_runner.go:195] Run: which lz4
	I0918 21:04:29.808051   61659 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0918 21:04:29.812324   61659 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0918 21:04:29.812363   61659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0918 21:04:31.172721   61659 crio.go:462] duration metric: took 1.364736071s to copy over tarball
	I0918 21:04:31.172837   61659 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0918 21:04:29.637411   61740 main.go:141] libmachine: (embed-certs-255556) Waiting to get IP...
	I0918 21:04:29.638427   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:29.638877   61740 main.go:141] libmachine: (embed-certs-255556) DBG | unable to find current IP address of domain embed-certs-255556 in network mk-embed-certs-255556
	I0918 21:04:29.638973   61740 main.go:141] libmachine: (embed-certs-255556) DBG | I0918 21:04:29.638868   62857 retry.go:31] will retry after 298.087525ms: waiting for machine to come up
	I0918 21:04:29.938543   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:29.938923   61740 main.go:141] libmachine: (embed-certs-255556) DBG | unable to find current IP address of domain embed-certs-255556 in network mk-embed-certs-255556
	I0918 21:04:29.938946   61740 main.go:141] libmachine: (embed-certs-255556) DBG | I0918 21:04:29.938889   62857 retry.go:31] will retry after 362.887862ms: waiting for machine to come up
	I0918 21:04:30.303379   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:30.303867   61740 main.go:141] libmachine: (embed-certs-255556) DBG | unable to find current IP address of domain embed-certs-255556 in network mk-embed-certs-255556
	I0918 21:04:30.303898   61740 main.go:141] libmachine: (embed-certs-255556) DBG | I0918 21:04:30.303820   62857 retry.go:31] will retry after 452.771021ms: waiting for machine to come up
	I0918 21:04:30.758353   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:30.758897   61740 main.go:141] libmachine: (embed-certs-255556) DBG | unable to find current IP address of domain embed-certs-255556 in network mk-embed-certs-255556
	I0918 21:04:30.758928   61740 main.go:141] libmachine: (embed-certs-255556) DBG | I0918 21:04:30.758856   62857 retry.go:31] will retry after 506.010985ms: waiting for machine to come up
	I0918 21:04:31.266443   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:31.266934   61740 main.go:141] libmachine: (embed-certs-255556) DBG | unable to find current IP address of domain embed-certs-255556 in network mk-embed-certs-255556
	I0918 21:04:31.266964   61740 main.go:141] libmachine: (embed-certs-255556) DBG | I0918 21:04:31.266893   62857 retry.go:31] will retry after 584.679329ms: waiting for machine to come up
	I0918 21:04:31.853811   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:31.854371   61740 main.go:141] libmachine: (embed-certs-255556) DBG | unable to find current IP address of domain embed-certs-255556 in network mk-embed-certs-255556
	I0918 21:04:31.854402   61740 main.go:141] libmachine: (embed-certs-255556) DBG | I0918 21:04:31.854309   62857 retry.go:31] will retry after 786.010743ms: waiting for machine to come up
	I0918 21:04:32.642494   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:32.643068   61740 main.go:141] libmachine: (embed-certs-255556) DBG | unable to find current IP address of domain embed-certs-255556 in network mk-embed-certs-255556
	I0918 21:04:32.643100   61740 main.go:141] libmachine: (embed-certs-255556) DBG | I0918 21:04:32.643013   62857 retry.go:31] will retry after 1.010762944s: waiting for machine to come up
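The retry.go entries above show libmachine polling the KVM network for the new VM's DHCP lease, sleeping a little longer after each miss. Purely as an illustration (a sketch, not minikube's actual retry code; the function name and constants are hypothetical), the same growing, jittered backoff pattern in Go:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls lookup until it returns an address or the attempts run out,
// sleeping a little longer (with jitter) between tries, similar in spirit to
// the retry.go lines above. lookup is a hypothetical stand-in for the DHCP
// lease query that libmachine performs.
func waitForIP(lookup func() (string, error), attempts int) (string, error) {
	delay := 200 * time.Millisecond
	for i := 0; i < attempts; i++ {
		if ip, err := lookup(); err == nil {
			return ip, nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay / 2)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay+jitter)
		time.Sleep(delay + jitter)
		delay = delay * 3 / 2 // grow the delay, roughly matching the log's pattern
	}
	return "", errors.New("machine never reported an IP address")
}

func main() {
	start := time.Now()
	ip, err := waitForIP(func() (string, error) {
		if time.Since(start) < 2*time.Second {
			return "", errors.New("no lease yet")
		}
		return "192.168.39.21", nil // the address eventually seen in the log
	}, 10)
	fmt.Println(ip, err)
}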
	I0918 21:04:33.299563   61659 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.126697598s)
	I0918 21:04:33.299596   61659 crio.go:469] duration metric: took 2.126840983s to extract the tarball
	I0918 21:04:33.299602   61659 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0918 21:04:33.336428   61659 ssh_runner.go:195] Run: sudo crictl images --output json
	I0918 21:04:33.377303   61659 crio.go:514] all images are preloaded for cri-o runtime.
	I0918 21:04:33.377342   61659 cache_images.go:84] Images are preloaded, skipping loading
	I0918 21:04:33.377352   61659 kubeadm.go:934] updating node { 192.168.50.109 8444 v1.31.1 crio true true} ...
	I0918 21:04:33.377490   61659 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-828868 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.109
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-828868 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0918 21:04:33.377574   61659 ssh_runner.go:195] Run: crio config
	I0918 21:04:33.423773   61659 cni.go:84] Creating CNI manager for ""
	I0918 21:04:33.423800   61659 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0918 21:04:33.423816   61659 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0918 21:04:33.423835   61659 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.109 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-828868 NodeName:default-k8s-diff-port-828868 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.109"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.109 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0918 21:04:33.423976   61659 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.109
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-828868"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.109
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.109"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0918 21:04:33.424058   61659 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0918 21:04:33.434047   61659 binaries.go:44] Found k8s binaries, skipping transfer
	I0918 21:04:33.434119   61659 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0918 21:04:33.443535   61659 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0918 21:04:33.460116   61659 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0918 21:04:33.475883   61659 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0918 21:04:33.492311   61659 ssh_runner.go:195] Run: grep 192.168.50.109	control-plane.minikube.internal$ /etc/hosts
	I0918 21:04:33.495940   61659 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.109	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0918 21:04:33.507411   61659 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 21:04:33.625104   61659 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0918 21:04:33.641530   61659 certs.go:68] Setting up /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/default-k8s-diff-port-828868 for IP: 192.168.50.109
	I0918 21:04:33.641556   61659 certs.go:194] generating shared ca certs ...
	I0918 21:04:33.641572   61659 certs.go:226] acquiring lock for ca certs: {Name:mkf54e0e71ed884c4372f9d3d4cb308ea4600185 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 21:04:33.641757   61659 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.key
	I0918 21:04:33.641804   61659 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.key
	I0918 21:04:33.641822   61659 certs.go:256] generating profile certs ...
	I0918 21:04:33.641944   61659 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/default-k8s-diff-port-828868/client.key
	I0918 21:04:33.642036   61659 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/default-k8s-diff-port-828868/apiserver.key.df92be3a
	I0918 21:04:33.642087   61659 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/default-k8s-diff-port-828868/proxy-client.key
	I0918 21:04:33.642255   61659 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878.pem (1338 bytes)
	W0918 21:04:33.642297   61659 certs.go:480] ignoring /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878_empty.pem, impossibly tiny 0 bytes
	I0918 21:04:33.642306   61659 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem (1675 bytes)
	I0918 21:04:33.642337   61659 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem (1082 bytes)
	I0918 21:04:33.642370   61659 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem (1123 bytes)
	I0918 21:04:33.642404   61659 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem (1679 bytes)
	I0918 21:04:33.642454   61659 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem (1708 bytes)
	I0918 21:04:33.643116   61659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0918 21:04:33.682428   61659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0918 21:04:33.710444   61659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0918 21:04:33.759078   61659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0918 21:04:33.797727   61659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/default-k8s-diff-port-828868/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0918 21:04:33.821989   61659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/default-k8s-diff-port-828868/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0918 21:04:33.844210   61659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/default-k8s-diff-port-828868/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0918 21:04:33.866843   61659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/default-k8s-diff-port-828868/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0918 21:04:33.896125   61659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem --> /usr/share/ca-certificates/148782.pem (1708 bytes)
	I0918 21:04:33.918667   61659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0918 21:04:33.940790   61659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878.pem --> /usr/share/ca-certificates/14878.pem (1338 bytes)
	I0918 21:04:33.963660   61659 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0918 21:04:33.980348   61659 ssh_runner.go:195] Run: openssl version
	I0918 21:04:33.985856   61659 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148782.pem && ln -fs /usr/share/ca-certificates/148782.pem /etc/ssl/certs/148782.pem"
	I0918 21:04:33.996472   61659 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148782.pem
	I0918 21:04:34.000732   61659 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 18 19:57 /usr/share/ca-certificates/148782.pem
	I0918 21:04:34.000788   61659 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148782.pem
	I0918 21:04:34.006282   61659 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148782.pem /etc/ssl/certs/3ec20f2e.0"
	I0918 21:04:34.016612   61659 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0918 21:04:34.026689   61659 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0918 21:04:34.030650   61659 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 18 19:39 /usr/share/ca-certificates/minikubeCA.pem
	I0918 21:04:34.030705   61659 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0918 21:04:34.035940   61659 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0918 21:04:34.046516   61659 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14878.pem && ln -fs /usr/share/ca-certificates/14878.pem /etc/ssl/certs/14878.pem"
	I0918 21:04:34.056755   61659 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14878.pem
	I0918 21:04:34.061189   61659 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 18 19:57 /usr/share/ca-certificates/14878.pem
	I0918 21:04:34.061264   61659 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14878.pem
	I0918 21:04:34.066973   61659 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14878.pem /etc/ssl/certs/51391683.0"
	I0918 21:04:34.078781   61659 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0918 21:04:34.083129   61659 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0918 21:04:34.089249   61659 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0918 21:04:34.095211   61659 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0918 21:04:34.101350   61659 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0918 21:04:34.107269   61659 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0918 21:04:34.113177   61659 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
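Each of the openssl x509 ... -checkend 86400 runs above asks whether a control-plane certificate will still be valid 86400 seconds (24 hours) from now; a non-zero exit would make minikube regenerate that certificate. As a rough sketch only (not minikube's code), the equivalent check with Go's standard library could look like this; the path in main is simply one of the files probed above, used as an example:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM-encoded certificate at path stops
// being valid within the given window (the openssl -checkend semantics).
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	// Example path; the log checks several certificates under /var/lib/minikube/certs.
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}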
	I0918 21:04:34.119005   61659 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-828868 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-828868 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.109 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 21:04:34.119093   61659 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0918 21:04:34.119147   61659 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0918 21:04:34.162792   61659 cri.go:89] found id: ""
	I0918 21:04:34.162895   61659 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0918 21:04:34.174325   61659 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0918 21:04:34.174358   61659 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0918 21:04:34.174420   61659 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0918 21:04:34.183708   61659 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0918 21:04:34.184680   61659 kubeconfig.go:125] found "default-k8s-diff-port-828868" server: "https://192.168.50.109:8444"
	I0918 21:04:34.186781   61659 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0918 21:04:34.195823   61659 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.109
	I0918 21:04:34.195856   61659 kubeadm.go:1160] stopping kube-system containers ...
	I0918 21:04:34.195866   61659 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0918 21:04:34.195907   61659 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0918 21:04:34.235799   61659 cri.go:89] found id: ""
	I0918 21:04:34.235882   61659 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0918 21:04:34.251412   61659 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0918 21:04:34.261361   61659 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0918 21:04:34.261390   61659 kubeadm.go:157] found existing configuration files:
	
	I0918 21:04:34.261435   61659 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0918 21:04:34.272201   61659 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0918 21:04:34.272272   61659 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0918 21:04:34.283030   61659 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0918 21:04:34.293227   61659 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0918 21:04:34.293321   61659 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0918 21:04:34.303749   61659 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0918 21:04:34.314027   61659 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0918 21:04:34.314116   61659 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0918 21:04:34.324585   61659 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0918 21:04:34.334524   61659 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0918 21:04:34.334594   61659 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0918 21:04:34.344923   61659 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0918 21:04:34.355422   61659 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:04:34.480395   61659 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:04:35.320827   61659 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:04:35.542013   61659 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:04:35.610886   61659 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:04:35.694501   61659 api_server.go:52] waiting for apiserver process to appear ...
	I0918 21:04:35.694610   61659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:04:36.195441   61659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:04:36.694978   61659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:04:37.195220   61659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:04:33.655864   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:33.656375   61740 main.go:141] libmachine: (embed-certs-255556) DBG | unable to find current IP address of domain embed-certs-255556 in network mk-embed-certs-255556
	I0918 21:04:33.656407   61740 main.go:141] libmachine: (embed-certs-255556) DBG | I0918 21:04:33.656347   62857 retry.go:31] will retry after 1.375317123s: waiting for machine to come up
	I0918 21:04:35.033882   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:35.034266   61740 main.go:141] libmachine: (embed-certs-255556) DBG | unable to find current IP address of domain embed-certs-255556 in network mk-embed-certs-255556
	I0918 21:04:35.034293   61740 main.go:141] libmachine: (embed-certs-255556) DBG | I0918 21:04:35.034232   62857 retry.go:31] will retry after 1.142237895s: waiting for machine to come up
	I0918 21:04:36.178371   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:36.178837   61740 main.go:141] libmachine: (embed-certs-255556) DBG | unable to find current IP address of domain embed-certs-255556 in network mk-embed-certs-255556
	I0918 21:04:36.178865   61740 main.go:141] libmachine: (embed-certs-255556) DBG | I0918 21:04:36.178804   62857 retry.go:31] will retry after 1.983853904s: waiting for machine to come up
	I0918 21:04:38.165113   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:38.165662   61740 main.go:141] libmachine: (embed-certs-255556) DBG | unable to find current IP address of domain embed-certs-255556 in network mk-embed-certs-255556
	I0918 21:04:38.165697   61740 main.go:141] libmachine: (embed-certs-255556) DBG | I0918 21:04:38.165601   62857 retry.go:31] will retry after 2.407286782s: waiting for machine to come up
	I0918 21:04:37.694916   61659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:04:37.713724   61659 api_server.go:72] duration metric: took 2.019221095s to wait for apiserver process to appear ...
	I0918 21:04:37.713756   61659 api_server.go:88] waiting for apiserver healthz status ...
	I0918 21:04:37.713782   61659 api_server.go:253] Checking apiserver healthz at https://192.168.50.109:8444/healthz ...
	I0918 21:04:37.714297   61659 api_server.go:269] stopped: https://192.168.50.109:8444/healthz: Get "https://192.168.50.109:8444/healthz": dial tcp 192.168.50.109:8444: connect: connection refused
	I0918 21:04:38.213883   61659 api_server.go:253] Checking apiserver healthz at https://192.168.50.109:8444/healthz ...
	I0918 21:04:40.396513   61659 api_server.go:279] https://192.168.50.109:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0918 21:04:40.396564   61659 api_server.go:103] status: https://192.168.50.109:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0918 21:04:40.396584   61659 api_server.go:253] Checking apiserver healthz at https://192.168.50.109:8444/healthz ...
	I0918 21:04:40.409718   61659 api_server.go:279] https://192.168.50.109:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0918 21:04:40.409750   61659 api_server.go:103] status: https://192.168.50.109:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0918 21:04:40.714176   61659 api_server.go:253] Checking apiserver healthz at https://192.168.50.109:8444/healthz ...
	I0918 21:04:40.719353   61659 api_server.go:279] https://192.168.50.109:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0918 21:04:40.719391   61659 api_server.go:103] status: https://192.168.50.109:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0918 21:04:41.214596   61659 api_server.go:253] Checking apiserver healthz at https://192.168.50.109:8444/healthz ...
	I0918 21:04:41.219579   61659 api_server.go:279] https://192.168.50.109:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0918 21:04:41.219608   61659 api_server.go:103] status: https://192.168.50.109:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0918 21:04:41.713951   61659 api_server.go:253] Checking apiserver healthz at https://192.168.50.109:8444/healthz ...
	I0918 21:04:41.719212   61659 api_server.go:279] https://192.168.50.109:8444/healthz returned 200:
	ok
	I0918 21:04:41.726647   61659 api_server.go:141] control plane version: v1.31.1
	I0918 21:04:41.726679   61659 api_server.go:131] duration metric: took 4.012914861s to wait for apiserver health ...
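The healthz probes above progress from connection refused, to 403 (the unauthenticated probe is rejected until the rbac/bootstrap-roles post-start hook finishes), to 500 (bootstrap hooks still reporting failure), and finally to 200. A minimal sketch of that polling pattern, assuming only the URL from the log, is below; it is not minikube's api_server.go, and TLS verification is skipped because the probe presents no client certificate:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it answers 200 OK or the deadline passes,
// mirroring the probe pattern visible in the log (refused -> 403 -> 500 -> ok).
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver's serving cert is not in the host trust store and this
		// probe only cares about the HTTP status, so skip verification.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		} else {
			fmt.Println("healthz not reachable yet:", err)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s never became healthy", url)
}

func main() {
	// Address and port as seen in the log for default-k8s-diff-port-828868.
	if err := waitForHealthz("https://192.168.50.109:8444/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}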
	I0918 21:04:41.726689   61659 cni.go:84] Creating CNI manager for ""
	I0918 21:04:41.726707   61659 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0918 21:04:41.728312   61659 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0918 21:04:41.729613   61659 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0918 21:04:41.741932   61659 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0918 21:04:41.763195   61659 system_pods.go:43] waiting for kube-system pods to appear ...
	I0918 21:04:41.775167   61659 system_pods.go:59] 8 kube-system pods found
	I0918 21:04:41.775210   61659 system_pods.go:61] "coredns-7c65d6cfc9-xzjd7" [bd8252df-707c-41e6-84b7-cc74480177a8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0918 21:04:41.775219   61659 system_pods.go:61] "etcd-default-k8s-diff-port-828868" [aa8e221d-abba-48a5-8814-246df0776408] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0918 21:04:41.775227   61659 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-828868" [b44966ac-3478-40c4-b67f-1824bff2bec7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0918 21:04:41.775233   61659 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-828868" [7af8fbad-3aa2-497e-90df-33facaee6b17] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0918 21:04:41.775239   61659 system_pods.go:61] "kube-proxy-jz7ls" [f931ae9a-0b9c-4754-8b7b-d52c267b018c] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0918 21:04:41.775247   61659 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-828868" [ee89c713-c689-4de3-b1a5-4e08470ff6fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0918 21:04:41.775252   61659 system_pods.go:61] "metrics-server-6867b74b74-cqp47" [1ccf8c85-183a-4bea-abbc-eb7bcedca7f0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0918 21:04:41.775257   61659 system_pods.go:61] "storage-provisioner" [9744cbfa-6b9a-42f0-aa80-0821b87a33d4] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0918 21:04:41.775270   61659 system_pods.go:74] duration metric: took 12.058758ms to wait for pod list to return data ...
	I0918 21:04:41.775280   61659 node_conditions.go:102] verifying NodePressure condition ...
	I0918 21:04:41.779525   61659 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0918 21:04:41.779559   61659 node_conditions.go:123] node cpu capacity is 2
	I0918 21:04:41.779580   61659 node_conditions.go:105] duration metric: took 4.292138ms to run NodePressure ...
	I0918 21:04:41.779615   61659 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:04:42.079279   61659 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0918 21:04:42.084311   61659 kubeadm.go:739] kubelet initialised
	I0918 21:04:42.084338   61659 kubeadm.go:740] duration metric: took 5.024999ms waiting for restarted kubelet to initialise ...
	I0918 21:04:42.084351   61659 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0918 21:04:42.089113   61659 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-xzjd7" in "kube-system" namespace to be "Ready" ...
	I0918 21:04:42.095539   61659 pod_ready.go:98] node "default-k8s-diff-port-828868" hosting pod "coredns-7c65d6cfc9-xzjd7" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-828868" has status "Ready":"False"
	I0918 21:04:42.095565   61659 pod_ready.go:82] duration metric: took 6.405251ms for pod "coredns-7c65d6cfc9-xzjd7" in "kube-system" namespace to be "Ready" ...
	E0918 21:04:42.095575   61659 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-828868" hosting pod "coredns-7c65d6cfc9-xzjd7" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-828868" has status "Ready":"False"
	I0918 21:04:42.095581   61659 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-828868" in "kube-system" namespace to be "Ready" ...
	I0918 21:04:42.100447   61659 pod_ready.go:98] node "default-k8s-diff-port-828868" hosting pod "etcd-default-k8s-diff-port-828868" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-828868" has status "Ready":"False"
	I0918 21:04:42.100469   61659 pod_ready.go:82] duration metric: took 4.879955ms for pod "etcd-default-k8s-diff-port-828868" in "kube-system" namespace to be "Ready" ...
	E0918 21:04:42.100480   61659 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-828868" hosting pod "etcd-default-k8s-diff-port-828868" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-828868" has status "Ready":"False"
	I0918 21:04:42.100485   61659 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-828868" in "kube-system" namespace to be "Ready" ...
	I0918 21:04:42.104889   61659 pod_ready.go:98] node "default-k8s-diff-port-828868" hosting pod "kube-apiserver-default-k8s-diff-port-828868" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-828868" has status "Ready":"False"
	I0918 21:04:42.104914   61659 pod_ready.go:82] duration metric: took 4.421708ms for pod "kube-apiserver-default-k8s-diff-port-828868" in "kube-system" namespace to be "Ready" ...
	E0918 21:04:42.104926   61659 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-828868" hosting pod "kube-apiserver-default-k8s-diff-port-828868" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-828868" has status "Ready":"False"
	I0918 21:04:42.104934   61659 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-828868" in "kube-system" namespace to be "Ready" ...
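pod_ready.go above skips waiting on pods whose node still reports Ready=False and otherwise polls each system pod's own Ready condition. A hedged client-go sketch of that per-pod check follows; this is not minikube's code, the kubeconfig path is hypothetical, and the pod name is just the etcd pod from the log used as an example:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the named kube-system pod has its Ready condition set to True.
func isPodReady(ctx context.Context, cs *kubernetes.Clientset, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	// Hypothetical kubeconfig path; the test harness uses a per-profile kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	for {
		ready, err := isPodReady(ctx, cs, "etcd-default-k8s-diff-port-828868")
		fmt.Println("ready:", ready, "err:", err)
		if ready || ctx.Err() != nil {
			return
		}
		time.Sleep(2 * time.Second)
	}
}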
	I0918 21:04:40.574813   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:40.575265   61740 main.go:141] libmachine: (embed-certs-255556) DBG | unable to find current IP address of domain embed-certs-255556 in network mk-embed-certs-255556
	I0918 21:04:40.575295   61740 main.go:141] libmachine: (embed-certs-255556) DBG | I0918 21:04:40.575215   62857 retry.go:31] will retry after 2.249084169s: waiting for machine to come up
	I0918 21:04:42.827547   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:42.827966   61740 main.go:141] libmachine: (embed-certs-255556) DBG | unable to find current IP address of domain embed-certs-255556 in network mk-embed-certs-255556
	I0918 21:04:42.828028   61740 main.go:141] libmachine: (embed-certs-255556) DBG | I0918 21:04:42.827923   62857 retry.go:31] will retry after 4.512161859s: waiting for machine to come up
	I0918 21:04:44.113739   61659 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-828868" in "kube-system" namespace has status "Ready":"False"
	I0918 21:04:46.611013   61659 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-828868" in "kube-system" namespace has status "Ready":"False"
	I0918 21:04:47.345046   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:47.345426   61740 main.go:141] libmachine: (embed-certs-255556) Found IP for machine: 192.168.39.21
	I0918 21:04:47.345444   61740 main.go:141] libmachine: (embed-certs-255556) Reserving static IP address...
	I0918 21:04:47.345457   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has current primary IP address 192.168.39.21 and MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:47.345824   61740 main.go:141] libmachine: (embed-certs-255556) DBG | found host DHCP lease matching {name: "embed-certs-255556", mac: "52:54:00:e8:c2:b7", ip: "192.168.39.21"} in network mk-embed-certs-255556: {Iface:virbr1 ExpiryTime:2024-09-18 22:04:39 +0000 UTC Type:0 Mac:52:54:00:e8:c2:b7 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:embed-certs-255556 Clientid:01:52:54:00:e8:c2:b7}
	I0918 21:04:47.345846   61740 main.go:141] libmachine: (embed-certs-255556) DBG | skip adding static IP to network mk-embed-certs-255556 - found existing host DHCP lease matching {name: "embed-certs-255556", mac: "52:54:00:e8:c2:b7", ip: "192.168.39.21"}
	I0918 21:04:47.345856   61740 main.go:141] libmachine: (embed-certs-255556) Reserved static IP address: 192.168.39.21
	I0918 21:04:47.345866   61740 main.go:141] libmachine: (embed-certs-255556) Waiting for SSH to be available...
	I0918 21:04:47.345874   61740 main.go:141] libmachine: (embed-certs-255556) DBG | Getting to WaitForSSH function...
	I0918 21:04:47.347972   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:47.348327   61740 main.go:141] libmachine: (embed-certs-255556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:c2:b7", ip: ""} in network mk-embed-certs-255556: {Iface:virbr1 ExpiryTime:2024-09-18 22:04:39 +0000 UTC Type:0 Mac:52:54:00:e8:c2:b7 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:embed-certs-255556 Clientid:01:52:54:00:e8:c2:b7}
	I0918 21:04:47.348367   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined IP address 192.168.39.21 and MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:47.348437   61740 main.go:141] libmachine: (embed-certs-255556) DBG | Using SSH client type: external
	I0918 21:04:47.348469   61740 main.go:141] libmachine: (embed-certs-255556) DBG | Using SSH private key: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/embed-certs-255556/id_rsa (-rw-------)
	I0918 21:04:47.348511   61740 main.go:141] libmachine: (embed-certs-255556) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.21 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19667-7671/.minikube/machines/embed-certs-255556/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0918 21:04:47.348526   61740 main.go:141] libmachine: (embed-certs-255556) DBG | About to run SSH command:
	I0918 21:04:47.348554   61740 main.go:141] libmachine: (embed-certs-255556) DBG | exit 0
	I0918 21:04:47.476457   61740 main.go:141] libmachine: (embed-certs-255556) DBG | SSH cmd err, output: <nil>: 
	I0918 21:04:47.476858   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetConfigRaw
	I0918 21:04:47.477533   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetIP
	I0918 21:04:47.480221   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:47.480601   61740 main.go:141] libmachine: (embed-certs-255556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:c2:b7", ip: ""} in network mk-embed-certs-255556: {Iface:virbr1 ExpiryTime:2024-09-18 22:04:39 +0000 UTC Type:0 Mac:52:54:00:e8:c2:b7 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:embed-certs-255556 Clientid:01:52:54:00:e8:c2:b7}
	I0918 21:04:47.480644   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined IP address 192.168.39.21 and MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:47.480966   61740 profile.go:143] Saving config to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/embed-certs-255556/config.json ...
	I0918 21:04:47.481172   61740 machine.go:93] provisionDockerMachine start ...
	I0918 21:04:47.481189   61740 main.go:141] libmachine: (embed-certs-255556) Calling .DriverName
	I0918 21:04:47.481440   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHHostname
	I0918 21:04:47.483916   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:47.484299   61740 main.go:141] libmachine: (embed-certs-255556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:c2:b7", ip: ""} in network mk-embed-certs-255556: {Iface:virbr1 ExpiryTime:2024-09-18 22:04:39 +0000 UTC Type:0 Mac:52:54:00:e8:c2:b7 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:embed-certs-255556 Clientid:01:52:54:00:e8:c2:b7}
	I0918 21:04:47.484328   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined IP address 192.168.39.21 and MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:47.484467   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHPort
	I0918 21:04:47.484703   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHKeyPath
	I0918 21:04:47.484898   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHKeyPath
	I0918 21:04:47.485043   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHUsername
	I0918 21:04:47.485185   61740 main.go:141] libmachine: Using SSH client type: native
	I0918 21:04:47.485386   61740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.21 22 <nil> <nil>}
	I0918 21:04:47.485399   61740 main.go:141] libmachine: About to run SSH command:
	hostname
	I0918 21:04:47.596243   61740 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0918 21:04:47.596272   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetMachineName
	I0918 21:04:47.596531   61740 buildroot.go:166] provisioning hostname "embed-certs-255556"
	I0918 21:04:47.596560   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetMachineName
	I0918 21:04:47.596775   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHHostname
	I0918 21:04:47.599159   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:47.599508   61740 main.go:141] libmachine: (embed-certs-255556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:c2:b7", ip: ""} in network mk-embed-certs-255556: {Iface:virbr1 ExpiryTime:2024-09-18 22:04:39 +0000 UTC Type:0 Mac:52:54:00:e8:c2:b7 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:embed-certs-255556 Clientid:01:52:54:00:e8:c2:b7}
	I0918 21:04:47.599532   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined IP address 192.168.39.21 and MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:47.599706   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHPort
	I0918 21:04:47.599888   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHKeyPath
	I0918 21:04:47.600090   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHKeyPath
	I0918 21:04:47.600229   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHUsername
	I0918 21:04:47.600406   61740 main.go:141] libmachine: Using SSH client type: native
	I0918 21:04:47.600589   61740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.21 22 <nil> <nil>}
	I0918 21:04:47.600602   61740 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-255556 && echo "embed-certs-255556" | sudo tee /etc/hostname
	I0918 21:04:47.726173   61740 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-255556
	
	I0918 21:04:47.726213   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHHostname
	I0918 21:04:47.729209   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:47.729575   61740 main.go:141] libmachine: (embed-certs-255556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:c2:b7", ip: ""} in network mk-embed-certs-255556: {Iface:virbr1 ExpiryTime:2024-09-18 22:04:39 +0000 UTC Type:0 Mac:52:54:00:e8:c2:b7 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:embed-certs-255556 Clientid:01:52:54:00:e8:c2:b7}
	I0918 21:04:47.729609   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined IP address 192.168.39.21 and MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:47.729775   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHPort
	I0918 21:04:47.729952   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHKeyPath
	I0918 21:04:47.730212   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHKeyPath
	I0918 21:04:47.730386   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHUsername
	I0918 21:04:47.730583   61740 main.go:141] libmachine: Using SSH client type: native
	I0918 21:04:47.730755   61740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.21 22 <nil> <nil>}
	I0918 21:04:47.730771   61740 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-255556' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-255556/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-255556' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0918 21:04:47.849894   61740 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0918 21:04:47.849928   61740 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19667-7671/.minikube CaCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19667-7671/.minikube}
	I0918 21:04:47.849954   61740 buildroot.go:174] setting up certificates
	I0918 21:04:47.849961   61740 provision.go:84] configureAuth start
	I0918 21:04:47.849971   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetMachineName
	I0918 21:04:47.850307   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetIP
	I0918 21:04:47.852989   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:47.853389   61740 main.go:141] libmachine: (embed-certs-255556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:c2:b7", ip: ""} in network mk-embed-certs-255556: {Iface:virbr1 ExpiryTime:2024-09-18 22:04:39 +0000 UTC Type:0 Mac:52:54:00:e8:c2:b7 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:embed-certs-255556 Clientid:01:52:54:00:e8:c2:b7}
	I0918 21:04:47.853423   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined IP address 192.168.39.21 and MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:47.853555   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHHostname
	I0918 21:04:47.856032   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:47.856389   61740 main.go:141] libmachine: (embed-certs-255556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:c2:b7", ip: ""} in network mk-embed-certs-255556: {Iface:virbr1 ExpiryTime:2024-09-18 22:04:39 +0000 UTC Type:0 Mac:52:54:00:e8:c2:b7 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:embed-certs-255556 Clientid:01:52:54:00:e8:c2:b7}
	I0918 21:04:47.856410   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined IP address 192.168.39.21 and MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:47.856556   61740 provision.go:143] copyHostCerts
	I0918 21:04:47.856617   61740 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem, removing ...
	I0918 21:04:47.856627   61740 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem
	I0918 21:04:47.856686   61740 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem (1082 bytes)
	I0918 21:04:47.856778   61740 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem, removing ...
	I0918 21:04:47.856786   61740 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem
	I0918 21:04:47.856805   61740 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem (1123 bytes)
	I0918 21:04:47.856855   61740 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem, removing ...
	I0918 21:04:47.856862   61740 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem
	I0918 21:04:47.856881   61740 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem (1679 bytes)
	I0918 21:04:47.856929   61740 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem org=jenkins.embed-certs-255556 san=[127.0.0.1 192.168.39.21 embed-certs-255556 localhost minikube]
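For readers tracing the provisioning flow: the step above issues a server certificate from the shared minikube CA with the SAN list shown. minikube does this in Go, but a rough openssl equivalent is sketched below; the file names, key size, and validity period are illustrative, not minikube's internals.

    # Illustrative only: issue a server cert signed by the CA pair from the log, with the same SANs
    openssl req -new -newkey rsa:2048 -nodes \
      -keyout server-key.pem -out server.csr \
      -subj "/O=jenkins.embed-certs-255556"
    openssl x509 -req -in server.csr \
      -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
      -days 365 -out server.pem \
      -extfile <(printf "subjectAltName=IP:127.0.0.1,IP:192.168.39.21,DNS:embed-certs-255556,DNS:localhost,DNS:minikube")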
	I0918 21:04:48.145689   61740 provision.go:177] copyRemoteCerts
	I0918 21:04:48.145750   61740 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0918 21:04:48.145779   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHHostname
	I0918 21:04:48.148420   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:48.148785   61740 main.go:141] libmachine: (embed-certs-255556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:c2:b7", ip: ""} in network mk-embed-certs-255556: {Iface:virbr1 ExpiryTime:2024-09-18 22:04:39 +0000 UTC Type:0 Mac:52:54:00:e8:c2:b7 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:embed-certs-255556 Clientid:01:52:54:00:e8:c2:b7}
	I0918 21:04:48.148812   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined IP address 192.168.39.21 and MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:48.148983   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHPort
	I0918 21:04:48.149194   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHKeyPath
	I0918 21:04:48.149371   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHUsername
	I0918 21:04:48.149486   61740 sshutil.go:53] new ssh client: &{IP:192.168.39.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/embed-certs-255556/id_rsa Username:docker}
	I0918 21:04:48.234451   61740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0918 21:04:48.260660   61740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0918 21:04:48.283305   61740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0918 21:04:48.305919   61740 provision.go:87] duration metric: took 455.946794ms to configureAuth
	I0918 21:04:48.305954   61740 buildroot.go:189] setting minikube options for container-runtime
	I0918 21:04:48.306183   61740 config.go:182] Loaded profile config "embed-certs-255556": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0918 21:04:48.306284   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHHostname
	I0918 21:04:48.308853   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:48.309319   61740 main.go:141] libmachine: (embed-certs-255556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:c2:b7", ip: ""} in network mk-embed-certs-255556: {Iface:virbr1 ExpiryTime:2024-09-18 22:04:39 +0000 UTC Type:0 Mac:52:54:00:e8:c2:b7 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:embed-certs-255556 Clientid:01:52:54:00:e8:c2:b7}
	I0918 21:04:48.309359   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined IP address 192.168.39.21 and MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:48.309488   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHPort
	I0918 21:04:48.309706   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHKeyPath
	I0918 21:04:48.309860   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHKeyPath
	I0918 21:04:48.309976   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHUsername
	I0918 21:04:48.310134   61740 main.go:141] libmachine: Using SSH client type: native
	I0918 21:04:48.310349   61740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.21 22 <nil> <nil>}
	I0918 21:04:48.310372   61740 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0918 21:04:48.782438   62061 start.go:364] duration metric: took 3m49.184727821s to acquireMachinesLock for "old-k8s-version-740194"
	I0918 21:04:48.782503   62061 start.go:96] Skipping create...Using existing machine configuration
	I0918 21:04:48.782514   62061 fix.go:54] fixHost starting: 
	I0918 21:04:48.782993   62061 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:04:48.783053   62061 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:04:48.802299   62061 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44463
	I0918 21:04:48.802787   62061 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:04:48.803286   62061 main.go:141] libmachine: Using API Version  1
	I0918 21:04:48.803313   62061 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:04:48.803681   62061 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:04:48.803873   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .DriverName
	I0918 21:04:48.804007   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetState
	I0918 21:04:48.805714   62061 fix.go:112] recreateIfNeeded on old-k8s-version-740194: state=Stopped err=<nil>
	I0918 21:04:48.805744   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .DriverName
	W0918 21:04:48.805910   62061 fix.go:138] unexpected machine state, will restart: <nil>
	I0918 21:04:48.835402   62061 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-740194" ...
	I0918 21:04:48.836753   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .Start
	I0918 21:04:48.837090   62061 main.go:141] libmachine: (old-k8s-version-740194) Ensuring networks are active...
	I0918 21:04:48.838014   62061 main.go:141] libmachine: (old-k8s-version-740194) Ensuring network default is active
	I0918 21:04:48.838375   62061 main.go:141] libmachine: (old-k8s-version-740194) Ensuring network mk-old-k8s-version-740194 is active
	I0918 21:04:48.839035   62061 main.go:141] libmachine: (old-k8s-version-740194) Getting domain xml...
	I0918 21:04:48.839832   62061 main.go:141] libmachine: (old-k8s-version-740194) Creating domain...
	I0918 21:04:48.532928   61740 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0918 21:04:48.532952   61740 machine.go:96] duration metric: took 1.051769616s to provisionDockerMachine
	I0918 21:04:48.532962   61740 start.go:293] postStartSetup for "embed-certs-255556" (driver="kvm2")
	I0918 21:04:48.532973   61740 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0918 21:04:48.532991   61740 main.go:141] libmachine: (embed-certs-255556) Calling .DriverName
	I0918 21:04:48.533310   61740 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0918 21:04:48.533342   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHHostname
	I0918 21:04:48.536039   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:48.536529   61740 main.go:141] libmachine: (embed-certs-255556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:c2:b7", ip: ""} in network mk-embed-certs-255556: {Iface:virbr1 ExpiryTime:2024-09-18 22:04:39 +0000 UTC Type:0 Mac:52:54:00:e8:c2:b7 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:embed-certs-255556 Clientid:01:52:54:00:e8:c2:b7}
	I0918 21:04:48.536558   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined IP address 192.168.39.21 and MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:48.536631   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHPort
	I0918 21:04:48.536806   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHKeyPath
	I0918 21:04:48.536971   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHUsername
	I0918 21:04:48.537148   61740 sshutil.go:53] new ssh client: &{IP:192.168.39.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/embed-certs-255556/id_rsa Username:docker}
	I0918 21:04:48.623154   61740 ssh_runner.go:195] Run: cat /etc/os-release
	I0918 21:04:48.627520   61740 info.go:137] Remote host: Buildroot 2023.02.9
	I0918 21:04:48.627544   61740 filesync.go:126] Scanning /home/jenkins/minikube-integration/19667-7671/.minikube/addons for local assets ...
	I0918 21:04:48.627617   61740 filesync.go:126] Scanning /home/jenkins/minikube-integration/19667-7671/.minikube/files for local assets ...
	I0918 21:04:48.627711   61740 filesync.go:149] local asset: /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem -> 148782.pem in /etc/ssl/certs
	I0918 21:04:48.627827   61740 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0918 21:04:48.637145   61740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem --> /etc/ssl/certs/148782.pem (1708 bytes)
	I0918 21:04:48.661971   61740 start.go:296] duration metric: took 128.997987ms for postStartSetup
	I0918 21:04:48.662012   61740 fix.go:56] duration metric: took 20.352947161s for fixHost
	I0918 21:04:48.662034   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHHostname
	I0918 21:04:48.665153   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:48.665637   61740 main.go:141] libmachine: (embed-certs-255556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:c2:b7", ip: ""} in network mk-embed-certs-255556: {Iface:virbr1 ExpiryTime:2024-09-18 22:04:39 +0000 UTC Type:0 Mac:52:54:00:e8:c2:b7 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:embed-certs-255556 Clientid:01:52:54:00:e8:c2:b7}
	I0918 21:04:48.665668   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined IP address 192.168.39.21 and MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:48.665853   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHPort
	I0918 21:04:48.666090   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHKeyPath
	I0918 21:04:48.666289   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHKeyPath
	I0918 21:04:48.666607   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHUsername
	I0918 21:04:48.666784   61740 main.go:141] libmachine: Using SSH client type: native
	I0918 21:04:48.667024   61740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.21 22 <nil> <nil>}
	I0918 21:04:48.667040   61740 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0918 21:04:48.782245   61740 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726693488.758182538
	
	I0918 21:04:48.782286   61740 fix.go:216] guest clock: 1726693488.758182538
	I0918 21:04:48.782297   61740 fix.go:229] Guest: 2024-09-18 21:04:48.758182538 +0000 UTC Remote: 2024-09-18 21:04:48.662016609 +0000 UTC m=+270.354724953 (delta=96.165929ms)
	I0918 21:04:48.782322   61740 fix.go:200] guest clock delta is within tolerance: 96.165929ms
	I0918 21:04:48.782329   61740 start.go:83] releasing machines lock for "embed-certs-255556", held for 20.47331123s
	I0918 21:04:48.782358   61740 main.go:141] libmachine: (embed-certs-255556) Calling .DriverName
	I0918 21:04:48.782655   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetIP
	I0918 21:04:48.785572   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:48.785959   61740 main.go:141] libmachine: (embed-certs-255556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:c2:b7", ip: ""} in network mk-embed-certs-255556: {Iface:virbr1 ExpiryTime:2024-09-18 22:04:39 +0000 UTC Type:0 Mac:52:54:00:e8:c2:b7 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:embed-certs-255556 Clientid:01:52:54:00:e8:c2:b7}
	I0918 21:04:48.785988   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined IP address 192.168.39.21 and MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:48.786181   61740 main.go:141] libmachine: (embed-certs-255556) Calling .DriverName
	I0918 21:04:48.786653   61740 main.go:141] libmachine: (embed-certs-255556) Calling .DriverName
	I0918 21:04:48.786859   61740 main.go:141] libmachine: (embed-certs-255556) Calling .DriverName
	I0918 21:04:48.787019   61740 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0918 21:04:48.787083   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHHostname
	I0918 21:04:48.787118   61740 ssh_runner.go:195] Run: cat /version.json
	I0918 21:04:48.787142   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHHostname
	I0918 21:04:48.789834   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:48.790239   61740 main.go:141] libmachine: (embed-certs-255556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:c2:b7", ip: ""} in network mk-embed-certs-255556: {Iface:virbr1 ExpiryTime:2024-09-18 22:04:39 +0000 UTC Type:0 Mac:52:54:00:e8:c2:b7 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:embed-certs-255556 Clientid:01:52:54:00:e8:c2:b7}
	I0918 21:04:48.790290   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined IP address 192.168.39.21 and MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:48.790326   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:48.790625   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHPort
	I0918 21:04:48.790805   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHKeyPath
	I0918 21:04:48.790828   61740 main.go:141] libmachine: (embed-certs-255556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:c2:b7", ip: ""} in network mk-embed-certs-255556: {Iface:virbr1 ExpiryTime:2024-09-18 22:04:39 +0000 UTC Type:0 Mac:52:54:00:e8:c2:b7 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:embed-certs-255556 Clientid:01:52:54:00:e8:c2:b7}
	I0918 21:04:48.790860   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined IP address 192.168.39.21 and MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:48.791012   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHUsername
	I0918 21:04:48.791035   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHPort
	I0918 21:04:48.791172   61740 sshutil.go:53] new ssh client: &{IP:192.168.39.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/embed-certs-255556/id_rsa Username:docker}
	I0918 21:04:48.791251   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHKeyPath
	I0918 21:04:48.791406   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHUsername
	I0918 21:04:48.791537   61740 sshutil.go:53] new ssh client: &{IP:192.168.39.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/embed-certs-255556/id_rsa Username:docker}
	I0918 21:04:48.911282   61740 ssh_runner.go:195] Run: systemctl --version
	I0918 21:04:48.917459   61740 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0918 21:04:49.062272   61740 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0918 21:04:49.068629   61740 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0918 21:04:49.068709   61740 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0918 21:04:49.085575   61740 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0918 21:04:49.085607   61740 start.go:495] detecting cgroup driver to use...
	I0918 21:04:49.085677   61740 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0918 21:04:49.102455   61740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0918 21:04:49.117869   61740 docker.go:217] disabling cri-docker service (if available) ...
	I0918 21:04:49.117958   61740 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0918 21:04:49.135361   61740 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0918 21:04:49.150861   61740 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0918 21:04:49.285901   61740 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0918 21:04:49.438312   61740 docker.go:233] disabling docker service ...
	I0918 21:04:49.438390   61740 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0918 21:04:49.454560   61740 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0918 21:04:49.471109   61740 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0918 21:04:49.631711   61740 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0918 21:04:49.760860   61740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0918 21:04:49.778574   61740 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0918 21:04:49.797293   61740 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0918 21:04:49.797365   61740 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:04:49.808796   61740 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0918 21:04:49.808872   61740 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:04:49.821451   61740 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:04:49.834678   61740 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:04:49.847521   61740 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0918 21:04:49.860918   61740 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:04:49.873942   61740 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:04:49.892983   61740 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:04:49.904925   61740 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0918 21:04:49.916195   61740 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0918 21:04:49.916310   61740 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0918 21:04:49.931084   61740 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0918 21:04:49.942692   61740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 21:04:50.065013   61740 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0918 21:04:50.168347   61740 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0918 21:04:50.168440   61740 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0918 21:04:50.174948   61740 start.go:563] Will wait 60s for crictl version
	I0918 21:04:50.175017   61740 ssh_runner.go:195] Run: which crictl
	I0918 21:04:50.180139   61740 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0918 21:04:50.221578   61740 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0918 21:04:50.221687   61740 ssh_runner.go:195] Run: crio --version
	I0918 21:04:50.251587   61740 ssh_runner.go:195] Run: crio --version
	I0918 21:04:50.282931   61740 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0918 21:04:48.112865   61659 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-828868" in "kube-system" namespace has status "Ready":"True"
	I0918 21:04:48.112895   61659 pod_ready.go:82] duration metric: took 6.007950768s for pod "kube-controller-manager-default-k8s-diff-port-828868" in "kube-system" namespace to be "Ready" ...
	I0918 21:04:48.112909   61659 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-jz7ls" in "kube-system" namespace to be "Ready" ...
	I0918 21:04:48.118606   61659 pod_ready.go:93] pod "kube-proxy-jz7ls" in "kube-system" namespace has status "Ready":"True"
	I0918 21:04:48.118628   61659 pod_ready.go:82] duration metric: took 5.710918ms for pod "kube-proxy-jz7ls" in "kube-system" namespace to be "Ready" ...
	I0918 21:04:48.118647   61659 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-828868" in "kube-system" namespace to be "Ready" ...
	I0918 21:04:49.626081   61659 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-828868" in "kube-system" namespace has status "Ready":"True"
	I0918 21:04:49.626116   61659 pod_ready.go:82] duration metric: took 1.507459822s for pod "kube-scheduler-default-k8s-diff-port-828868" in "kube-system" namespace to be "Ready" ...
	I0918 21:04:49.626130   61659 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace to be "Ready" ...
	I0918 21:04:51.635306   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:04:50.284258   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetIP
	I0918 21:04:50.287321   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:50.287754   61740 main.go:141] libmachine: (embed-certs-255556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:c2:b7", ip: ""} in network mk-embed-certs-255556: {Iface:virbr1 ExpiryTime:2024-09-18 22:04:39 +0000 UTC Type:0 Mac:52:54:00:e8:c2:b7 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:embed-certs-255556 Clientid:01:52:54:00:e8:c2:b7}
	I0918 21:04:50.287782   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined IP address 192.168.39.21 and MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:50.288116   61740 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0918 21:04:50.292221   61740 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0918 21:04:50.304472   61740 kubeadm.go:883] updating cluster {Name:embed-certs-255556 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-255556 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.21 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0918 21:04:50.304604   61740 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0918 21:04:50.304675   61740 ssh_runner.go:195] Run: sudo crictl images --output json
	I0918 21:04:50.343445   61740 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0918 21:04:50.343527   61740 ssh_runner.go:195] Run: which lz4
	I0918 21:04:50.347600   61740 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0918 21:04:50.351647   61740 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0918 21:04:50.351679   61740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0918 21:04:51.665892   61740 crio.go:462] duration metric: took 1.318339658s to copy over tarball
	I0918 21:04:51.665970   61740 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0918 21:04:50.157515   62061 main.go:141] libmachine: (old-k8s-version-740194) Waiting to get IP...
	I0918 21:04:50.158369   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:04:50.158796   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | unable to find current IP address of domain old-k8s-version-740194 in network mk-old-k8s-version-740194
	I0918 21:04:50.158931   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | I0918 21:04:50.158765   63026 retry.go:31] will retry after 214.617426ms: waiting for machine to come up
	I0918 21:04:50.375418   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:04:50.375882   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | unable to find current IP address of domain old-k8s-version-740194 in network mk-old-k8s-version-740194
	I0918 21:04:50.375910   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | I0918 21:04:50.375834   63026 retry.go:31] will retry after 311.080996ms: waiting for machine to come up
	I0918 21:04:50.688569   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:04:50.689151   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | unable to find current IP address of domain old-k8s-version-740194 in network mk-old-k8s-version-740194
	I0918 21:04:50.689182   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | I0918 21:04:50.689112   63026 retry.go:31] will retry after 386.384815ms: waiting for machine to come up
	I0918 21:04:51.076846   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:04:51.077336   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | unable to find current IP address of domain old-k8s-version-740194 in network mk-old-k8s-version-740194
	I0918 21:04:51.077359   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | I0918 21:04:51.077280   63026 retry.go:31] will retry after 556.210887ms: waiting for machine to come up
	I0918 21:04:51.634919   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:04:51.635423   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | unable to find current IP address of domain old-k8s-version-740194 in network mk-old-k8s-version-740194
	I0918 21:04:51.635462   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | I0918 21:04:51.635325   63026 retry.go:31] will retry after 607.960565ms: waiting for machine to come up
	I0918 21:04:52.245179   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:04:52.245741   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | unable to find current IP address of domain old-k8s-version-740194 in network mk-old-k8s-version-740194
	I0918 21:04:52.245774   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | I0918 21:04:52.245692   63026 retry.go:31] will retry after 620.243825ms: waiting for machine to come up
	I0918 21:04:52.867067   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:04:52.867658   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | unable to find current IP address of domain old-k8s-version-740194 in network mk-old-k8s-version-740194
	I0918 21:04:52.867680   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | I0918 21:04:52.867608   63026 retry.go:31] will retry after 1.091814923s: waiting for machine to come up
	I0918 21:04:53.961395   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:04:53.961819   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | unable to find current IP address of domain old-k8s-version-740194 in network mk-old-k8s-version-740194
	I0918 21:04:53.961842   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | I0918 21:04:53.961784   63026 retry.go:31] will retry after 1.130552716s: waiting for machine to come up
	I0918 21:04:54.133598   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:04:56.134938   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:04:53.837558   61740 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.171557505s)
	I0918 21:04:53.837589   61740 crio.go:469] duration metric: took 2.171667234s to extract the tarball
	I0918 21:04:53.837610   61740 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0918 21:04:53.876381   61740 ssh_runner.go:195] Run: sudo crictl images --output json
	I0918 21:04:53.924938   61740 crio.go:514] all images are preloaded for cri-o runtime.
	I0918 21:04:53.924968   61740 cache_images.go:84] Images are preloaded, skipping loading
	I0918 21:04:53.924979   61740 kubeadm.go:934] updating node { 192.168.39.21 8443 v1.31.1 crio true true} ...
	I0918 21:04:53.925115   61740 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-255556 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.21
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-255556 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
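The unit fragment above is written as a systemd drop-in; its destination shows up a few lines later as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. The empty ExecStart= line is deliberate: in a drop-in it clears the ExecStart inherited from the base kubelet.service before the next line redefines the full command. A quick way to confirm what systemd actually merged:

    # Show the effective unit after drop-ins are applied
    systemctl cat kubelet
    sudo systemctl daemon-reload   # required after editing unit files, as the log does below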
	I0918 21:04:53.925203   61740 ssh_runner.go:195] Run: crio config
	I0918 21:04:53.969048   61740 cni.go:84] Creating CNI manager for ""
	I0918 21:04:53.969076   61740 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0918 21:04:53.969086   61740 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0918 21:04:53.969105   61740 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.21 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-255556 NodeName:embed-certs-255556 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.21"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.21 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0918 21:04:53.969240   61740 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.21
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-255556"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.21
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.21"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0918 21:04:53.969298   61740 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0918 21:04:53.978636   61740 binaries.go:44] Found k8s binaries, skipping transfer
	I0918 21:04:53.978702   61740 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0918 21:04:53.988580   61740 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0918 21:04:54.005819   61740 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0918 21:04:54.021564   61740 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0918 21:04:54.038702   61740 ssh_runner.go:195] Run: grep 192.168.39.21	control-plane.minikube.internal$ /etc/hosts
	I0918 21:04:54.042536   61740 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.21	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0918 21:04:54.053896   61740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 21:04:54.180842   61740 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0918 21:04:54.197701   61740 certs.go:68] Setting up /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/embed-certs-255556 for IP: 192.168.39.21
	I0918 21:04:54.197731   61740 certs.go:194] generating shared ca certs ...
	I0918 21:04:54.197754   61740 certs.go:226] acquiring lock for ca certs: {Name:mkf54e0e71ed884c4372f9d3d4cb308ea4600185 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 21:04:54.197953   61740 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.key
	I0918 21:04:54.198020   61740 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.key
	I0918 21:04:54.198034   61740 certs.go:256] generating profile certs ...
	I0918 21:04:54.198129   61740 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/embed-certs-255556/client.key
	I0918 21:04:54.198191   61740 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/embed-certs-255556/apiserver.key.4704fd19
	I0918 21:04:54.198225   61740 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/embed-certs-255556/proxy-client.key
	I0918 21:04:54.198326   61740 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878.pem (1338 bytes)
	W0918 21:04:54.198358   61740 certs.go:480] ignoring /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878_empty.pem, impossibly tiny 0 bytes
	I0918 21:04:54.198370   61740 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem (1675 bytes)
	I0918 21:04:54.198420   61740 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem (1082 bytes)
	I0918 21:04:54.198463   61740 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem (1123 bytes)
	I0918 21:04:54.198498   61740 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem (1679 bytes)
	I0918 21:04:54.198566   61740 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem (1708 bytes)
	I0918 21:04:54.199258   61740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0918 21:04:54.231688   61740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0918 21:04:54.276366   61740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0918 21:04:54.320929   61740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0918 21:04:54.348698   61740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/embed-certs-255556/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0918 21:04:54.375168   61740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/embed-certs-255556/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0918 21:04:54.399159   61740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/embed-certs-255556/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0918 21:04:54.427975   61740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/embed-certs-255556/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0918 21:04:54.454648   61740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878.pem --> /usr/share/ca-certificates/14878.pem (1338 bytes)
	I0918 21:04:54.477518   61740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem --> /usr/share/ca-certificates/148782.pem (1708 bytes)
	I0918 21:04:54.500703   61740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0918 21:04:54.523380   61740 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0918 21:04:54.540053   61740 ssh_runner.go:195] Run: openssl version
	I0918 21:04:54.545818   61740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14878.pem && ln -fs /usr/share/ca-certificates/14878.pem /etc/ssl/certs/14878.pem"
	I0918 21:04:54.557138   61740 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14878.pem
	I0918 21:04:54.561973   61740 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 18 19:57 /usr/share/ca-certificates/14878.pem
	I0918 21:04:54.562030   61740 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14878.pem
	I0918 21:04:54.568133   61740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14878.pem /etc/ssl/certs/51391683.0"
	I0918 21:04:54.578964   61740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148782.pem && ln -fs /usr/share/ca-certificates/148782.pem /etc/ssl/certs/148782.pem"
	I0918 21:04:54.590254   61740 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148782.pem
	I0918 21:04:54.594944   61740 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 18 19:57 /usr/share/ca-certificates/148782.pem
	I0918 21:04:54.595022   61740 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148782.pem
	I0918 21:04:54.600797   61740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148782.pem /etc/ssl/certs/3ec20f2e.0"
	I0918 21:04:54.612078   61740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0918 21:04:54.623280   61740 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0918 21:04:54.628636   61740 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 18 19:39 /usr/share/ca-certificates/minikubeCA.pem
	I0918 21:04:54.628711   61740 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0918 21:04:54.634847   61740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0918 21:04:54.645647   61740 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0918 21:04:54.650004   61740 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0918 21:04:54.656906   61740 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0918 21:04:54.662778   61740 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0918 21:04:54.668744   61740 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0918 21:04:54.674676   61740 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0918 21:04:54.680431   61740 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
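The -checkend 86400 runs are a pre-flight expiry check: openssl exits non-zero if the certificate will expire within the next 86400 seconds (24 hours), which is how the existing control-plane certs are judged reusable before the restart continues. For example:

    # Exit status 0 = still valid for at least 24h; non-zero = expires (or has expired) within 24h
    sudo openssl x509 -noout -checkend 86400 \
      -in /var/lib/minikube/certs/apiserver-kubelet-client.crt && echo "cert ok"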
	I0918 21:04:54.686242   61740 kubeadm.go:392] StartCluster: {Name:embed-certs-255556 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-255556 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.21 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 21:04:54.686364   61740 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0918 21:04:54.686439   61740 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0918 21:04:54.724228   61740 cri.go:89] found id: ""
	I0918 21:04:54.724319   61740 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0918 21:04:54.734427   61740 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0918 21:04:54.734458   61740 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0918 21:04:54.734511   61740 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0918 21:04:54.747453   61740 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0918 21:04:54.748449   61740 kubeconfig.go:125] found "embed-certs-255556" server: "https://192.168.39.21:8443"
	I0918 21:04:54.750481   61740 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0918 21:04:54.760549   61740 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.21
	I0918 21:04:54.760585   61740 kubeadm.go:1160] stopping kube-system containers ...
	I0918 21:04:54.760599   61740 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0918 21:04:54.760659   61740 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0918 21:04:54.796334   61740 cri.go:89] found id: ""
	I0918 21:04:54.796426   61740 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0918 21:04:54.820854   61740 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0918 21:04:54.831959   61740 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0918 21:04:54.831982   61740 kubeadm.go:157] found existing configuration files:
	
	I0918 21:04:54.832075   61740 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0918 21:04:54.841872   61740 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0918 21:04:54.841952   61740 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0918 21:04:54.852032   61740 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0918 21:04:54.862101   61740 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0918 21:04:54.862176   61740 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0918 21:04:54.872575   61740 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0918 21:04:54.882283   61740 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0918 21:04:54.882386   61740 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0918 21:04:54.895907   61740 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0918 21:04:54.905410   61740 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0918 21:04:54.905484   61740 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0918 21:04:54.914938   61740 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0918 21:04:54.924536   61740 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:04:55.035830   61740 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:04:55.975305   61740 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:04:56.227988   61740 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:04:56.304760   61740 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:04:56.375088   61740 api_server.go:52] waiting for apiserver process to appear ...
	I0918 21:04:56.375185   61740 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:04:56.875319   61740 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:04:57.375240   61740 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:04:57.875532   61740 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:04:55.093491   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:04:55.093956   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | unable to find current IP address of domain old-k8s-version-740194 in network mk-old-k8s-version-740194
	I0918 21:04:55.093982   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | I0918 21:04:55.093923   63026 retry.go:31] will retry after 1.824664154s: waiting for machine to come up
	I0918 21:04:56.920959   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:04:56.921371   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | unable to find current IP address of domain old-k8s-version-740194 in network mk-old-k8s-version-740194
	I0918 21:04:56.921422   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | I0918 21:04:56.921354   63026 retry.go:31] will retry after 1.591260677s: waiting for machine to come up
	I0918 21:04:58.514832   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:04:58.515294   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | unable to find current IP address of domain old-k8s-version-740194 in network mk-old-k8s-version-740194
	I0918 21:04:58.515322   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | I0918 21:04:58.515262   63026 retry.go:31] will retry after 1.868763497s: waiting for machine to come up
	I0918 21:04:58.135056   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:00.633540   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:04:58.375400   61740 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:04:58.392935   61740 api_server.go:72] duration metric: took 2.017847705s to wait for apiserver process to appear ...
	I0918 21:04:58.393110   61740 api_server.go:88] waiting for apiserver healthz status ...
	I0918 21:04:58.393152   61740 api_server.go:253] Checking apiserver healthz at https://192.168.39.21:8443/healthz ...
	I0918 21:04:58.393699   61740 api_server.go:269] stopped: https://192.168.39.21:8443/healthz: Get "https://192.168.39.21:8443/healthz": dial tcp 192.168.39.21:8443: connect: connection refused
	I0918 21:04:58.893291   61740 api_server.go:253] Checking apiserver healthz at https://192.168.39.21:8443/healthz ...
	I0918 21:05:01.124915   61740 api_server.go:279] https://192.168.39.21:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0918 21:05:01.124954   61740 api_server.go:103] status: https://192.168.39.21:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0918 21:05:01.124991   61740 api_server.go:253] Checking apiserver healthz at https://192.168.39.21:8443/healthz ...
	I0918 21:05:01.179199   61740 api_server.go:279] https://192.168.39.21:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0918 21:05:01.179225   61740 api_server.go:103] status: https://192.168.39.21:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0918 21:05:01.393537   61740 api_server.go:253] Checking apiserver healthz at https://192.168.39.21:8443/healthz ...
	I0918 21:05:01.399577   61740 api_server.go:279] https://192.168.39.21:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0918 21:05:01.399610   61740 api_server.go:103] status: https://192.168.39.21:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0918 21:05:01.894174   61740 api_server.go:253] Checking apiserver healthz at https://192.168.39.21:8443/healthz ...
	I0918 21:05:01.899086   61740 api_server.go:279] https://192.168.39.21:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0918 21:05:01.899110   61740 api_server.go:103] status: https://192.168.39.21:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0918 21:05:02.393672   61740 api_server.go:253] Checking apiserver healthz at https://192.168.39.21:8443/healthz ...
	I0918 21:05:02.401942   61740 api_server.go:279] https://192.168.39.21:8443/healthz returned 200:
	ok
	I0918 21:05:02.408523   61740 api_server.go:141] control plane version: v1.31.1
	I0918 21:05:02.408553   61740 api_server.go:131] duration metric: took 4.015427901s to wait for apiserver health ...
	I0918 21:05:02.408562   61740 cni.go:84] Creating CNI manager for ""
	I0918 21:05:02.408568   61740 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0918 21:05:02.410199   61740 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0918 21:05:02.411470   61740 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0918 21:05:02.424617   61740 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0918 21:05:02.443819   61740 system_pods.go:43] waiting for kube-system pods to appear ...
	I0918 21:05:02.458892   61740 system_pods.go:59] 8 kube-system pods found
	I0918 21:05:02.458939   61740 system_pods.go:61] "coredns-7c65d6cfc9-xwn8w" [773b9a83-bb43-40d3-b3a3-40603c3b22b2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0918 21:05:02.458949   61740 system_pods.go:61] "etcd-embed-certs-255556" [ee3e7dc9-fb5a-4faa-a0b5-b84b7cd506b6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0918 21:05:02.458961   61740 system_pods.go:61] "kube-apiserver-embed-certs-255556" [c60ce069-c7a0-42d7-a7de-ce3cf91a3d43] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0918 21:05:02.458970   61740 system_pods.go:61] "kube-controller-manager-embed-certs-255556" [ac8f6b42-caa3-4815-9a90-3f7bb1f0060b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0918 21:05:02.458980   61740 system_pods.go:61] "kube-proxy-v8szm" [367f743a-399b-4d04-8604-dcd441999581] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0918 21:05:02.458993   61740 system_pods.go:61] "kube-scheduler-embed-certs-255556" [b5dd211b-7963-41ac-8b43-0a5451e3e848] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0918 21:05:02.459001   61740 system_pods.go:61] "metrics-server-6867b74b74-z8rm7" [d1b6823e-4ac5-4ac6-88ae-7f8eac622fdf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0918 21:05:02.459009   61740 system_pods.go:61] "storage-provisioner" [1575f899-35a7-4eb2-ad5f-660183f75aa6] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0918 21:05:02.459015   61740 system_pods.go:74] duration metric: took 15.172393ms to wait for pod list to return data ...
	I0918 21:05:02.459025   61740 node_conditions.go:102] verifying NodePressure condition ...
	I0918 21:05:02.463140   61740 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0918 21:05:02.463177   61740 node_conditions.go:123] node cpu capacity is 2
	I0918 21:05:02.463192   61740 node_conditions.go:105] duration metric: took 4.162401ms to run NodePressure ...
	I0918 21:05:02.463214   61740 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:05:02.757153   61740 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0918 21:05:02.761949   61740 kubeadm.go:739] kubelet initialised
	I0918 21:05:02.761977   61740 kubeadm.go:740] duration metric: took 4.79396ms waiting for restarted kubelet to initialise ...
	I0918 21:05:02.761985   61740 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0918 21:05:02.767197   61740 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-xwn8w" in "kube-system" namespace to be "Ready" ...
	I0918 21:05:00.385891   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:00.386451   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | unable to find current IP address of domain old-k8s-version-740194 in network mk-old-k8s-version-740194
	I0918 21:05:00.386482   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | I0918 21:05:00.386384   63026 retry.go:31] will retry after 3.274467583s: waiting for machine to come up
	I0918 21:05:03.664788   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:03.665255   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | unable to find current IP address of domain old-k8s-version-740194 in network mk-old-k8s-version-740194
	I0918 21:05:03.665286   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | I0918 21:05:03.665210   63026 retry.go:31] will retry after 4.112908346s: waiting for machine to come up
	I0918 21:05:02.634177   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:05.133431   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:07.133941   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:04.774196   61740 pod_ready.go:103] pod "coredns-7c65d6cfc9-xwn8w" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:07.273045   61740 pod_ready.go:103] pod "coredns-7c65d6cfc9-xwn8w" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:09.245246   61273 start.go:364] duration metric: took 55.195169549s to acquireMachinesLock for "no-preload-331658"
	I0918 21:05:09.245300   61273 start.go:96] Skipping create...Using existing machine configuration
	I0918 21:05:09.245311   61273 fix.go:54] fixHost starting: 
	I0918 21:05:09.245741   61273 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:05:09.245778   61273 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:05:09.263998   61273 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37335
	I0918 21:05:09.264565   61273 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:05:09.265118   61273 main.go:141] libmachine: Using API Version  1
	I0918 21:05:09.265142   61273 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:05:09.265505   61273 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:05:09.265732   61273 main.go:141] libmachine: (no-preload-331658) Calling .DriverName
	I0918 21:05:09.265901   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetState
	I0918 21:05:09.269500   61273 fix.go:112] recreateIfNeeded on no-preload-331658: state=Stopped err=<nil>
	I0918 21:05:09.269525   61273 main.go:141] libmachine: (no-preload-331658) Calling .DriverName
	W0918 21:05:09.269730   61273 fix.go:138] unexpected machine state, will restart: <nil>
	I0918 21:05:09.271448   61273 out.go:177] * Restarting existing kvm2 VM for "no-preload-331658" ...
	I0918 21:05:07.782616   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:07.783108   62061 main.go:141] libmachine: (old-k8s-version-740194) Found IP for machine: 192.168.72.53
	I0918 21:05:07.783125   62061 main.go:141] libmachine: (old-k8s-version-740194) Reserving static IP address...
	I0918 21:05:07.783135   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has current primary IP address 192.168.72.53 and MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:07.783509   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | found host DHCP lease matching {name: "old-k8s-version-740194", mac: "52:54:00:3f:a7:11", ip: "192.168.72.53"} in network mk-old-k8s-version-740194: {Iface:virbr3 ExpiryTime:2024-09-18 22:05:00 +0000 UTC Type:0 Mac:52:54:00:3f:a7:11 Iaid: IPaddr:192.168.72.53 Prefix:24 Hostname:old-k8s-version-740194 Clientid:01:52:54:00:3f:a7:11}
	I0918 21:05:07.783542   62061 main.go:141] libmachine: (old-k8s-version-740194) Reserved static IP address: 192.168.72.53
	I0918 21:05:07.783565   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | skip adding static IP to network mk-old-k8s-version-740194 - found existing host DHCP lease matching {name: "old-k8s-version-740194", mac: "52:54:00:3f:a7:11", ip: "192.168.72.53"}
	I0918 21:05:07.783583   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | Getting to WaitForSSH function...
	I0918 21:05:07.783614   62061 main.go:141] libmachine: (old-k8s-version-740194) Waiting for SSH to be available...
	I0918 21:05:07.785492   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:07.785856   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a7:11", ip: ""} in network mk-old-k8s-version-740194: {Iface:virbr3 ExpiryTime:2024-09-18 22:05:00 +0000 UTC Type:0 Mac:52:54:00:3f:a7:11 Iaid: IPaddr:192.168.72.53 Prefix:24 Hostname:old-k8s-version-740194 Clientid:01:52:54:00:3f:a7:11}
	I0918 21:05:07.785885   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined IP address 192.168.72.53 and MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:07.786000   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | Using SSH client type: external
	I0918 21:05:07.786021   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | Using SSH private key: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/old-k8s-version-740194/id_rsa (-rw-------)
	I0918 21:05:07.786044   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.53 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19667-7671/.minikube/machines/old-k8s-version-740194/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0918 21:05:07.786055   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | About to run SSH command:
	I0918 21:05:07.786065   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | exit 0
	I0918 21:05:07.915953   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | SSH cmd err, output: <nil>: 
	I0918 21:05:07.916454   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetConfigRaw
	I0918 21:05:07.917059   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetIP
	I0918 21:05:07.919749   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:07.920244   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a7:11", ip: ""} in network mk-old-k8s-version-740194: {Iface:virbr3 ExpiryTime:2024-09-18 22:05:00 +0000 UTC Type:0 Mac:52:54:00:3f:a7:11 Iaid: IPaddr:192.168.72.53 Prefix:24 Hostname:old-k8s-version-740194 Clientid:01:52:54:00:3f:a7:11}
	I0918 21:05:07.920280   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined IP address 192.168.72.53 and MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:07.920639   62061 profile.go:143] Saving config to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/old-k8s-version-740194/config.json ...
	I0918 21:05:07.920871   62061 machine.go:93] provisionDockerMachine start ...
	I0918 21:05:07.920893   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .DriverName
	I0918 21:05:07.921100   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHHostname
	I0918 21:05:07.923129   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:07.923434   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a7:11", ip: ""} in network mk-old-k8s-version-740194: {Iface:virbr3 ExpiryTime:2024-09-18 22:05:00 +0000 UTC Type:0 Mac:52:54:00:3f:a7:11 Iaid: IPaddr:192.168.72.53 Prefix:24 Hostname:old-k8s-version-740194 Clientid:01:52:54:00:3f:a7:11}
	I0918 21:05:07.923458   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined IP address 192.168.72.53 and MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:07.923573   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHPort
	I0918 21:05:07.923727   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHKeyPath
	I0918 21:05:07.923889   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHKeyPath
	I0918 21:05:07.924036   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHUsername
	I0918 21:05:07.924185   62061 main.go:141] libmachine: Using SSH client type: native
	I0918 21:05:07.924368   62061 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.53 22 <nil> <nil>}
	I0918 21:05:07.924378   62061 main.go:141] libmachine: About to run SSH command:
	hostname
	I0918 21:05:08.035941   62061 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0918 21:05:08.035972   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetMachineName
	I0918 21:05:08.036242   62061 buildroot.go:166] provisioning hostname "old-k8s-version-740194"
	I0918 21:05:08.036273   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetMachineName
	I0918 21:05:08.036512   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHHostname
	I0918 21:05:08.038902   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:08.039239   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a7:11", ip: ""} in network mk-old-k8s-version-740194: {Iface:virbr3 ExpiryTime:2024-09-18 22:05:00 +0000 UTC Type:0 Mac:52:54:00:3f:a7:11 Iaid: IPaddr:192.168.72.53 Prefix:24 Hostname:old-k8s-version-740194 Clientid:01:52:54:00:3f:a7:11}
	I0918 21:05:08.039266   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined IP address 192.168.72.53 and MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:08.039438   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHPort
	I0918 21:05:08.039632   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHKeyPath
	I0918 21:05:08.039899   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHKeyPath
	I0918 21:05:08.040072   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHUsername
	I0918 21:05:08.040248   62061 main.go:141] libmachine: Using SSH client type: native
	I0918 21:05:08.040415   62061 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.53 22 <nil> <nil>}
	I0918 21:05:08.040428   62061 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-740194 && echo "old-k8s-version-740194" | sudo tee /etc/hostname
	I0918 21:05:08.165391   62061 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-740194
	
	I0918 21:05:08.165424   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHHostname
	I0918 21:05:08.168059   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:08.168396   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a7:11", ip: ""} in network mk-old-k8s-version-740194: {Iface:virbr3 ExpiryTime:2024-09-18 22:05:00 +0000 UTC Type:0 Mac:52:54:00:3f:a7:11 Iaid: IPaddr:192.168.72.53 Prefix:24 Hostname:old-k8s-version-740194 Clientid:01:52:54:00:3f:a7:11}
	I0918 21:05:08.168428   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined IP address 192.168.72.53 and MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:08.168621   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHPort
	I0918 21:05:08.168837   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHKeyPath
	I0918 21:05:08.168988   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHKeyPath
	I0918 21:05:08.169231   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHUsername
	I0918 21:05:08.169413   62061 main.go:141] libmachine: Using SSH client type: native
	I0918 21:05:08.169579   62061 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.53 22 <nil> <nil>}
	I0918 21:05:08.169594   62061 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-740194' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-740194/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-740194' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0918 21:05:08.288591   62061 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0918 21:05:08.288644   62061 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19667-7671/.minikube CaCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19667-7671/.minikube}
	I0918 21:05:08.288671   62061 buildroot.go:174] setting up certificates
	I0918 21:05:08.288682   62061 provision.go:84] configureAuth start
	I0918 21:05:08.288694   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetMachineName
	I0918 21:05:08.289017   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetIP
	I0918 21:05:08.291949   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:08.292358   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a7:11", ip: ""} in network mk-old-k8s-version-740194: {Iface:virbr3 ExpiryTime:2024-09-18 22:05:00 +0000 UTC Type:0 Mac:52:54:00:3f:a7:11 Iaid: IPaddr:192.168.72.53 Prefix:24 Hostname:old-k8s-version-740194 Clientid:01:52:54:00:3f:a7:11}
	I0918 21:05:08.292405   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined IP address 192.168.72.53 and MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:08.292526   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHHostname
	I0918 21:05:08.295013   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:08.295378   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a7:11", ip: ""} in network mk-old-k8s-version-740194: {Iface:virbr3 ExpiryTime:2024-09-18 22:05:00 +0000 UTC Type:0 Mac:52:54:00:3f:a7:11 Iaid: IPaddr:192.168.72.53 Prefix:24 Hostname:old-k8s-version-740194 Clientid:01:52:54:00:3f:a7:11}
	I0918 21:05:08.295399   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined IP address 192.168.72.53 and MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:08.295567   62061 provision.go:143] copyHostCerts
	I0918 21:05:08.295630   62061 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem, removing ...
	I0918 21:05:08.295640   62061 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem
	I0918 21:05:08.295692   62061 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem (1082 bytes)
	I0918 21:05:08.295783   62061 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem, removing ...
	I0918 21:05:08.295790   62061 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem
	I0918 21:05:08.295810   62061 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem (1123 bytes)
	I0918 21:05:08.295917   62061 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem, removing ...
	I0918 21:05:08.295926   62061 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem
	I0918 21:05:08.295949   62061 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem (1679 bytes)
	I0918 21:05:08.296009   62061 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-740194 san=[127.0.0.1 192.168.72.53 localhost minikube old-k8s-version-740194]
	I0918 21:05:08.560726   62061 provision.go:177] copyRemoteCerts
	I0918 21:05:08.560786   62061 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0918 21:05:08.560816   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHHostname
	I0918 21:05:08.563798   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:08.564231   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a7:11", ip: ""} in network mk-old-k8s-version-740194: {Iface:virbr3 ExpiryTime:2024-09-18 22:05:00 +0000 UTC Type:0 Mac:52:54:00:3f:a7:11 Iaid: IPaddr:192.168.72.53 Prefix:24 Hostname:old-k8s-version-740194 Clientid:01:52:54:00:3f:a7:11}
	I0918 21:05:08.564266   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined IP address 192.168.72.53 and MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:08.564473   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHPort
	I0918 21:05:08.564704   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHKeyPath
	I0918 21:05:08.564876   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHUsername
	I0918 21:05:08.565016   62061 sshutil.go:53] new ssh client: &{IP:192.168.72.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/old-k8s-version-740194/id_rsa Username:docker}
	I0918 21:05:08.654357   62061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0918 21:05:08.680551   62061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0918 21:05:08.704324   62061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0918 21:05:08.727966   62061 provision.go:87] duration metric: took 439.269312ms to configureAuth
	I0918 21:05:08.728003   62061 buildroot.go:189] setting minikube options for container-runtime
	I0918 21:05:08.728235   62061 config.go:182] Loaded profile config "old-k8s-version-740194": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0918 21:05:08.728305   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHHostname
	I0918 21:05:08.730895   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:08.731318   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a7:11", ip: ""} in network mk-old-k8s-version-740194: {Iface:virbr3 ExpiryTime:2024-09-18 22:05:00 +0000 UTC Type:0 Mac:52:54:00:3f:a7:11 Iaid: IPaddr:192.168.72.53 Prefix:24 Hostname:old-k8s-version-740194 Clientid:01:52:54:00:3f:a7:11}
	I0918 21:05:08.731348   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined IP address 192.168.72.53 and MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:08.731448   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHPort
	I0918 21:05:08.731668   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHKeyPath
	I0918 21:05:08.731884   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHKeyPath
	I0918 21:05:08.732057   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHUsername
	I0918 21:05:08.732223   62061 main.go:141] libmachine: Using SSH client type: native
	I0918 21:05:08.732396   62061 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.53 22 <nil> <nil>}
	I0918 21:05:08.732411   62061 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0918 21:05:08.978962   62061 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0918 21:05:08.978995   62061 machine.go:96] duration metric: took 1.05811075s to provisionDockerMachine
	I0918 21:05:08.979008   62061 start.go:293] postStartSetup for "old-k8s-version-740194" (driver="kvm2")
	I0918 21:05:08.979020   62061 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0918 21:05:08.979050   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .DriverName
	I0918 21:05:08.979409   62061 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0918 21:05:08.979436   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHHostname
	I0918 21:05:08.982472   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:08.982791   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a7:11", ip: ""} in network mk-old-k8s-version-740194: {Iface:virbr3 ExpiryTime:2024-09-18 22:05:00 +0000 UTC Type:0 Mac:52:54:00:3f:a7:11 Iaid: IPaddr:192.168.72.53 Prefix:24 Hostname:old-k8s-version-740194 Clientid:01:52:54:00:3f:a7:11}
	I0918 21:05:08.982830   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined IP address 192.168.72.53 and MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:08.982996   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHPort
	I0918 21:05:08.983192   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHKeyPath
	I0918 21:05:08.983341   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHUsername
	I0918 21:05:08.983510   62061 sshutil.go:53] new ssh client: &{IP:192.168.72.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/old-k8s-version-740194/id_rsa Username:docker}
	I0918 21:05:09.074995   62061 ssh_runner.go:195] Run: cat /etc/os-release
	I0918 21:05:09.080207   62061 info.go:137] Remote host: Buildroot 2023.02.9
	I0918 21:05:09.080240   62061 filesync.go:126] Scanning /home/jenkins/minikube-integration/19667-7671/.minikube/addons for local assets ...
	I0918 21:05:09.080327   62061 filesync.go:126] Scanning /home/jenkins/minikube-integration/19667-7671/.minikube/files for local assets ...
	I0918 21:05:09.080451   62061 filesync.go:149] local asset: /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem -> 148782.pem in /etc/ssl/certs
	I0918 21:05:09.080583   62061 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0918 21:05:09.091374   62061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem --> /etc/ssl/certs/148782.pem (1708 bytes)
	I0918 21:05:09.119546   62061 start.go:296] duration metric: took 140.521158ms for postStartSetup
	I0918 21:05:09.119593   62061 fix.go:56] duration metric: took 20.337078765s for fixHost
	I0918 21:05:09.119620   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHHostname
	I0918 21:05:09.122534   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:09.123157   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a7:11", ip: ""} in network mk-old-k8s-version-740194: {Iface:virbr3 ExpiryTime:2024-09-18 22:05:00 +0000 UTC Type:0 Mac:52:54:00:3f:a7:11 Iaid: IPaddr:192.168.72.53 Prefix:24 Hostname:old-k8s-version-740194 Clientid:01:52:54:00:3f:a7:11}
	I0918 21:05:09.123187   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined IP address 192.168.72.53 and MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:09.123447   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHPort
	I0918 21:05:09.123695   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHKeyPath
	I0918 21:05:09.123891   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHKeyPath
	I0918 21:05:09.124095   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHUsername
	I0918 21:05:09.124373   62061 main.go:141] libmachine: Using SSH client type: native
	I0918 21:05:09.124582   62061 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.53 22 <nil> <nil>}
	I0918 21:05:09.124595   62061 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0918 21:05:09.245082   62061 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726693509.219779478
	
	I0918 21:05:09.245109   62061 fix.go:216] guest clock: 1726693509.219779478
	I0918 21:05:09.245119   62061 fix.go:229] Guest: 2024-09-18 21:05:09.219779478 +0000 UTC Remote: 2024-09-18 21:05:09.11959842 +0000 UTC m=+249.666759777 (delta=100.181058ms)
	I0918 21:05:09.245139   62061 fix.go:200] guest clock delta is within tolerance: 100.181058ms
	I0918 21:05:09.245146   62061 start.go:83] releasing machines lock for "old-k8s-version-740194", held for 20.462669229s
	I0918 21:05:09.245176   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .DriverName
	I0918 21:05:09.245602   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetIP
	I0918 21:05:09.248653   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:09.249110   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a7:11", ip: ""} in network mk-old-k8s-version-740194: {Iface:virbr3 ExpiryTime:2024-09-18 22:05:00 +0000 UTC Type:0 Mac:52:54:00:3f:a7:11 Iaid: IPaddr:192.168.72.53 Prefix:24 Hostname:old-k8s-version-740194 Clientid:01:52:54:00:3f:a7:11}
	I0918 21:05:09.249156   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined IP address 192.168.72.53 and MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:09.249345   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .DriverName
	I0918 21:05:09.249838   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .DriverName
	I0918 21:05:09.250047   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .DriverName
	I0918 21:05:09.250167   62061 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0918 21:05:09.250230   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHHostname
	I0918 21:05:09.250286   62061 ssh_runner.go:195] Run: cat /version.json
	I0918 21:05:09.250312   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHHostname
	I0918 21:05:09.253242   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:09.253408   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:09.253687   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a7:11", ip: ""} in network mk-old-k8s-version-740194: {Iface:virbr3 ExpiryTime:2024-09-18 22:05:00 +0000 UTC Type:0 Mac:52:54:00:3f:a7:11 Iaid: IPaddr:192.168.72.53 Prefix:24 Hostname:old-k8s-version-740194 Clientid:01:52:54:00:3f:a7:11}
	I0918 21:05:09.253749   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined IP address 192.168.72.53 and MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:09.253882   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a7:11", ip: ""} in network mk-old-k8s-version-740194: {Iface:virbr3 ExpiryTime:2024-09-18 22:05:00 +0000 UTC Type:0 Mac:52:54:00:3f:a7:11 Iaid: IPaddr:192.168.72.53 Prefix:24 Hostname:old-k8s-version-740194 Clientid:01:52:54:00:3f:a7:11}
	I0918 21:05:09.253901   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined IP address 192.168.72.53 and MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:09.253944   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHPort
	I0918 21:05:09.254026   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHPort
	I0918 21:05:09.254221   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHKeyPath
	I0918 21:05:09.254243   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHKeyPath
	I0918 21:05:09.254372   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHUsername
	I0918 21:05:09.254426   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHUsername
	I0918 21:05:09.254533   62061 sshutil.go:53] new ssh client: &{IP:192.168.72.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/old-k8s-version-740194/id_rsa Username:docker}
	I0918 21:05:09.254695   62061 sshutil.go:53] new ssh client: &{IP:192.168.72.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/old-k8s-version-740194/id_rsa Username:docker}
	I0918 21:05:09.374728   62061 ssh_runner.go:195] Run: systemctl --version
	I0918 21:05:09.381117   62061 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0918 21:05:09.532059   62061 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0918 21:05:09.538041   62061 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0918 21:05:09.538124   62061 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0918 21:05:09.553871   62061 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0918 21:05:09.553907   62061 start.go:495] detecting cgroup driver to use...
	I0918 21:05:09.553982   62061 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0918 21:05:09.573554   62061 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0918 21:05:09.591963   62061 docker.go:217] disabling cri-docker service (if available) ...
	I0918 21:05:09.592057   62061 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0918 21:05:09.610813   62061 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0918 21:05:09.627153   62061 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0918 21:05:09.763978   62061 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0918 21:05:09.945185   62061 docker.go:233] disabling docker service ...
	I0918 21:05:09.945257   62061 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0918 21:05:09.961076   62061 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0918 21:05:09.974660   62061 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0918 21:05:10.111191   62061 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0918 21:05:10.235058   62061 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0918 21:05:10.251389   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0918 21:05:10.270949   62061 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0918 21:05:10.271047   62061 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:05:10.284743   62061 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0918 21:05:10.284811   62061 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:05:10.295221   62061 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:05:10.305897   62061 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:05:10.317726   62061 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0918 21:05:10.330136   62061 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0918 21:05:10.339480   62061 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0918 21:05:10.339543   62061 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0918 21:05:10.352954   62061 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0918 21:05:10.370764   62061 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 21:05:10.524315   62061 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0918 21:05:10.617374   62061 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0918 21:05:10.617466   62061 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0918 21:05:10.624164   62061 start.go:563] Will wait 60s for crictl version
	I0918 21:05:10.624222   62061 ssh_runner.go:195] Run: which crictl
	I0918 21:05:10.629583   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0918 21:05:10.673613   62061 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0918 21:05:10.673702   62061 ssh_runner.go:195] Run: crio --version
	I0918 21:05:10.703948   62061 ssh_runner.go:195] Run: crio --version
	I0918 21:05:10.733924   62061 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0918 21:05:09.272840   61273 main.go:141] libmachine: (no-preload-331658) Calling .Start
	I0918 21:05:09.273067   61273 main.go:141] libmachine: (no-preload-331658) Ensuring networks are active...
	I0918 21:05:09.274115   61273 main.go:141] libmachine: (no-preload-331658) Ensuring network default is active
	I0918 21:05:09.274576   61273 main.go:141] libmachine: (no-preload-331658) Ensuring network mk-no-preload-331658 is active
	I0918 21:05:09.275108   61273 main.go:141] libmachine: (no-preload-331658) Getting domain xml...
	I0918 21:05:09.276003   61273 main.go:141] libmachine: (no-preload-331658) Creating domain...
	I0918 21:05:10.665647   61273 main.go:141] libmachine: (no-preload-331658) Waiting to get IP...
	I0918 21:05:10.666710   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:10.667187   61273 main.go:141] libmachine: (no-preload-331658) DBG | unable to find current IP address of domain no-preload-331658 in network mk-no-preload-331658
	I0918 21:05:10.667261   61273 main.go:141] libmachine: (no-preload-331658) DBG | I0918 21:05:10.667162   63200 retry.go:31] will retry after 215.232953ms: waiting for machine to come up
	I0918 21:05:10.883691   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:10.884249   61273 main.go:141] libmachine: (no-preload-331658) DBG | unable to find current IP address of domain no-preload-331658 in network mk-no-preload-331658
	I0918 21:05:10.884283   61273 main.go:141] libmachine: (no-preload-331658) DBG | I0918 21:05:10.884185   63200 retry.go:31] will retry after 289.698979ms: waiting for machine to come up
	I0918 21:05:11.175936   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:11.176656   61273 main.go:141] libmachine: (no-preload-331658) DBG | unable to find current IP address of domain no-preload-331658 in network mk-no-preload-331658
	I0918 21:05:11.176680   61273 main.go:141] libmachine: (no-preload-331658) DBG | I0918 21:05:11.176553   63200 retry.go:31] will retry after 424.473311ms: waiting for machine to come up
	I0918 21:05:09.633671   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:11.634755   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:09.274214   61740 pod_ready.go:103] pod "coredns-7c65d6cfc9-xwn8w" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:11.275099   61740 pod_ready.go:103] pod "coredns-7c65d6cfc9-xwn8w" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:10.735500   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetIP
	I0918 21:05:10.738130   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:10.738488   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a7:11", ip: ""} in network mk-old-k8s-version-740194: {Iface:virbr3 ExpiryTime:2024-09-18 22:05:00 +0000 UTC Type:0 Mac:52:54:00:3f:a7:11 Iaid: IPaddr:192.168.72.53 Prefix:24 Hostname:old-k8s-version-740194 Clientid:01:52:54:00:3f:a7:11}
	I0918 21:05:10.738516   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined IP address 192.168.72.53 and MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:10.738831   62061 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0918 21:05:10.742780   62061 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0918 21:05:10.754785   62061 kubeadm.go:883] updating cluster {Name:old-k8s-version-740194 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-740194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.53 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...

	I0918 21:05:10.754939   62061 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0918 21:05:10.755002   62061 ssh_runner.go:195] Run: sudo crictl images --output json
	I0918 21:05:10.800452   62061 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0918 21:05:10.800526   62061 ssh_runner.go:195] Run: which lz4
	I0918 21:05:10.804522   62061 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0918 21:05:10.809179   62061 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0918 21:05:10.809214   62061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0918 21:05:12.306958   62061 crio.go:462] duration metric: took 1.502481615s to copy over tarball
	I0918 21:05:12.307049   62061 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0918 21:05:11.603153   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:11.603791   61273 main.go:141] libmachine: (no-preload-331658) DBG | unable to find current IP address of domain no-preload-331658 in network mk-no-preload-331658
	I0918 21:05:11.603817   61273 main.go:141] libmachine: (no-preload-331658) DBG | I0918 21:05:11.603742   63200 retry.go:31] will retry after 425.818515ms: waiting for machine to come up
	I0918 21:05:12.031622   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:12.032425   61273 main.go:141] libmachine: (no-preload-331658) DBG | unable to find current IP address of domain no-preload-331658 in network mk-no-preload-331658
	I0918 21:05:12.032458   61273 main.go:141] libmachine: (no-preload-331658) DBG | I0918 21:05:12.032357   63200 retry.go:31] will retry after 701.564015ms: waiting for machine to come up
	I0918 21:05:12.735295   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:12.735852   61273 main.go:141] libmachine: (no-preload-331658) DBG | unable to find current IP address of domain no-preload-331658 in network mk-no-preload-331658
	I0918 21:05:12.735882   61273 main.go:141] libmachine: (no-preload-331658) DBG | I0918 21:05:12.735814   63200 retry.go:31] will retry after 904.737419ms: waiting for machine to come up
	I0918 21:05:13.642383   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:13.642913   61273 main.go:141] libmachine: (no-preload-331658) DBG | unable to find current IP address of domain no-preload-331658 in network mk-no-preload-331658
	I0918 21:05:13.642935   61273 main.go:141] libmachine: (no-preload-331658) DBG | I0918 21:05:13.642872   63200 retry.go:31] will retry after 891.091353ms: waiting for machine to come up
	I0918 21:05:14.536200   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:14.536797   61273 main.go:141] libmachine: (no-preload-331658) DBG | unable to find current IP address of domain no-preload-331658 in network mk-no-preload-331658
	I0918 21:05:14.536849   61273 main.go:141] libmachine: (no-preload-331658) DBG | I0918 21:05:14.536761   63200 retry.go:31] will retry after 1.01795417s: waiting for machine to come up
	I0918 21:05:15.555787   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:15.556287   61273 main.go:141] libmachine: (no-preload-331658) DBG | unable to find current IP address of domain no-preload-331658 in network mk-no-preload-331658
	I0918 21:05:15.556315   61273 main.go:141] libmachine: (no-preload-331658) DBG | I0918 21:05:15.556243   63200 retry.go:31] will retry after 1.598926126s: waiting for machine to come up
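	The libmachine lines above keep polling the KVM network's DHCP leases for the domain's IP address, sleeping a growing, jittered delay between attempts (215ms, 289ms, 424ms, ...). The following is a minimal Go sketch of that pattern; retryWithBackoff, the base delay and the jitter are illustrative assumptions, not the retry.go implementation.

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryWithBackoff calls check until it succeeds or the overall deadline
	// passes, printing the next delay the way the "will retry after ..."
	// lines above do.
	func retryWithBackoff(deadline time.Duration, check func() error) error {
		stop := time.Now().Add(deadline)
		base := 200 * time.Millisecond
		for attempt := 1; time.Now().Before(stop); attempt++ {
			if err := check(); err == nil {
				return nil
			}
			// grow the delay with the attempt count and add random jitter
			delay := base*time.Duration(attempt) + time.Duration(rand.Int63n(int64(base)))
			fmt.Printf("will retry after %s: waiting for machine to come up\n", delay)
			time.Sleep(delay)
		}
		return errors.New("machine did not get an IP before the deadline")
	}

	func main() {
		_ = retryWithBackoff(2*time.Minute, func() error {
			// stand-in for the DHCP lease lookup that keeps failing above
			return errors.New("unable to find current IP address")
		})
	}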
	I0918 21:05:14.132957   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:16.133323   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:13.778274   61740 pod_ready.go:93] pod "coredns-7c65d6cfc9-xwn8w" in "kube-system" namespace has status "Ready":"True"
	I0918 21:05:13.778310   61740 pod_ready.go:82] duration metric: took 11.011085965s for pod "coredns-7c65d6cfc9-xwn8w" in "kube-system" namespace to be "Ready" ...
	I0918 21:05:13.778325   61740 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-255556" in "kube-system" namespace to be "Ready" ...
	I0918 21:05:13.785089   61740 pod_ready.go:93] pod "etcd-embed-certs-255556" in "kube-system" namespace has status "Ready":"True"
	I0918 21:05:13.785121   61740 pod_ready.go:82] duration metric: took 6.787649ms for pod "etcd-embed-certs-255556" in "kube-system" namespace to be "Ready" ...
	I0918 21:05:13.785135   61740 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-255556" in "kube-system" namespace to be "Ready" ...
	I0918 21:05:15.793479   61740 pod_ready.go:103] pod "kube-apiserver-embed-certs-255556" in "kube-system" namespace has status "Ready":"False"
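	The pod_ready lines above repeatedly ask whether a pod's Ready condition has turned True (coredns, etcd, kube-apiserver, metrics-server, ...), giving up after the stated per-pod timeout. Below is a minimal client-go sketch of that single check, assuming a clientset built from the profile's kubeconfig; isPodReady is an illustrative name, not minikube's helper.

	package podready

	import (
		"context"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// isPodReady reports whether the named pod currently has the Ready
	// condition set to True, which is what the polling above waits for.
	func isPodReady(ctx context.Context, c kubernetes.Interface, namespace, name string) (bool, error) {
		pod, err := c.CoreV1().Pods(namespace).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady {
				return cond.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}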
	I0918 21:05:15.357114   62061 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.050037005s)
	I0918 21:05:15.357141   62061 crio.go:469] duration metric: took 3.050151648s to extract the tarball
	I0918 21:05:15.357148   62061 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0918 21:05:15.399373   62061 ssh_runner.go:195] Run: sudo crictl images --output json
	I0918 21:05:15.434204   62061 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0918 21:05:15.434238   62061 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0918 21:05:15.434332   62061 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 21:05:15.434369   62061 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0918 21:05:15.434385   62061 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0918 21:05:15.434398   62061 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0918 21:05:15.434339   62061 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0918 21:05:15.434438   62061 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0918 21:05:15.434443   62061 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0918 21:05:15.434491   62061 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0918 21:05:15.436820   62061 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0918 21:05:15.436824   62061 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0918 21:05:15.436829   62061 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 21:05:15.436856   62061 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0918 21:05:15.436867   62061 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0918 21:05:15.436904   62061 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0918 21:05:15.436952   62061 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0918 21:05:15.437278   62061 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0918 21:05:15.747423   62061 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0918 21:05:15.747705   62061 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0918 21:05:15.748375   62061 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0918 21:05:15.750816   62061 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0918 21:05:15.752244   62061 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0918 21:05:15.754038   62061 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0918 21:05:15.770881   62061 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0918 21:05:15.911654   62061 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0918 21:05:15.911725   62061 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0918 21:05:15.911795   62061 ssh_runner.go:195] Run: which crictl
	I0918 21:05:15.938412   62061 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0918 21:05:15.938464   62061 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0918 21:05:15.938534   62061 ssh_runner.go:195] Run: which crictl
	I0918 21:05:15.944615   62061 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0918 21:05:15.944706   62061 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0918 21:05:15.944736   62061 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0918 21:05:15.944749   62061 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0918 21:05:15.944786   62061 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0918 21:05:15.944809   62061 ssh_runner.go:195] Run: which crictl
	I0918 21:05:15.944820   62061 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0918 21:05:15.944844   62061 ssh_runner.go:195] Run: which crictl
	I0918 21:05:15.944857   62061 ssh_runner.go:195] Run: which crictl
	I0918 21:05:15.944661   62061 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0918 21:05:15.944914   62061 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0918 21:05:15.944950   62061 ssh_runner.go:195] Run: which crictl
	I0918 21:05:15.965316   62061 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0918 21:05:15.965366   62061 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0918 21:05:15.965418   62061 ssh_runner.go:195] Run: which crictl
	I0918 21:05:15.965461   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0918 21:05:15.965419   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0918 21:05:15.965544   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0918 21:05:15.965558   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0918 21:05:15.965613   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0918 21:05:15.965621   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0918 21:05:16.101533   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0918 21:05:16.101536   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0918 21:05:16.105174   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0918 21:05:16.105198   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0918 21:05:16.109044   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0918 21:05:16.109048   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0918 21:05:16.109152   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0918 21:05:16.220653   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0918 21:05:16.255418   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0918 21:05:16.259931   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0918 21:05:16.259986   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0918 21:05:16.260039   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0918 21:05:16.260121   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0918 21:05:16.260337   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0918 21:05:16.352969   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0918 21:05:16.399180   62061 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0918 21:05:16.445003   62061 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0918 21:05:16.445078   62061 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0918 21:05:16.445163   62061 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0918 21:05:16.445266   62061 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0918 21:05:16.445387   62061 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0918 21:05:16.445398   62061 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0918 21:05:16.633221   62061 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 21:05:16.782331   62061 cache_images.go:92] duration metric: took 1.348045359s to LoadCachedImages
	W0918 21:05:16.782445   62061 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	I0918 21:05:16.782464   62061 kubeadm.go:934] updating node { 192.168.72.53 8443 v1.20.0 crio true true} ...
	I0918 21:05:16.782608   62061 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-740194 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.53
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-740194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0918 21:05:16.782679   62061 ssh_runner.go:195] Run: crio config
	I0918 21:05:16.838946   62061 cni.go:84] Creating CNI manager for ""
	I0918 21:05:16.838978   62061 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0918 21:05:16.838990   62061 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0918 21:05:16.839008   62061 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.53 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-740194 NodeName:old-k8s-version-740194 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.53"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.53 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0918 21:05:16.839163   62061 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.53
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-740194"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.53
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.53"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0918 21:05:16.839232   62061 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0918 21:05:16.849868   62061 binaries.go:44] Found k8s binaries, skipping transfer
	I0918 21:05:16.849955   62061 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0918 21:05:16.859716   62061 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0918 21:05:16.877857   62061 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0918 21:05:16.895573   62061 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0918 21:05:16.913545   62061 ssh_runner.go:195] Run: grep 192.168.72.53	control-plane.minikube.internal$ /etc/hosts
	I0918 21:05:16.917476   62061 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.53	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
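	The bash one-liner above removes any stale control-plane.minikube.internal line from /etc/hosts and appends the node IP in its place. A rough Go equivalent follows, assuming the process can write the file (the log does it via sudo cp); ensureHostsEntry is an illustrative name, not a minikube function.

	package hostsfile

	import (
		"fmt"
		"os"
		"strings"
	)

	// ensureHostsEntry drops any existing line ending in "\t<host>" and
	// appends "ip\thost", mirroring the grep -v / echo / cp pipeline above.
	func ensureHostsEntry(path, ip, host string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if strings.HasSuffix(line, "\t"+host) {
				continue // discard the stale entry
			}
			kept = append(kept, line)
		}
		kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
	}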
	I0918 21:05:16.931318   62061 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 21:05:17.057636   62061 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0918 21:05:17.076534   62061 certs.go:68] Setting up /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/old-k8s-version-740194 for IP: 192.168.72.53
	I0918 21:05:17.076571   62061 certs.go:194] generating shared ca certs ...
	I0918 21:05:17.076594   62061 certs.go:226] acquiring lock for ca certs: {Name:mkf54e0e71ed884c4372f9d3d4cb308ea4600185 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 21:05:17.076796   62061 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.key
	I0918 21:05:17.076855   62061 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.key
	I0918 21:05:17.076871   62061 certs.go:256] generating profile certs ...
	I0918 21:05:17.076999   62061 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/old-k8s-version-740194/client.key
	I0918 21:05:17.077083   62061 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/old-k8s-version-740194/apiserver.key.424b07d9
	I0918 21:05:17.077149   62061 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/old-k8s-version-740194/proxy-client.key
	I0918 21:05:17.077321   62061 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878.pem (1338 bytes)
	W0918 21:05:17.077371   62061 certs.go:480] ignoring /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878_empty.pem, impossibly tiny 0 bytes
	I0918 21:05:17.077386   62061 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem (1675 bytes)
	I0918 21:05:17.077421   62061 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem (1082 bytes)
	I0918 21:05:17.077465   62061 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem (1123 bytes)
	I0918 21:05:17.077499   62061 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem (1679 bytes)
	I0918 21:05:17.077585   62061 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem (1708 bytes)
	I0918 21:05:17.078553   62061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0918 21:05:17.116000   62061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0918 21:05:17.150848   62061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0918 21:05:17.190616   62061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0918 21:05:17.222411   62061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/old-k8s-version-740194/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0918 21:05:17.266923   62061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/old-k8s-version-740194/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0918 21:05:17.303973   62061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/old-k8s-version-740194/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0918 21:05:17.334390   62061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/old-k8s-version-740194/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0918 21:05:17.358732   62061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0918 21:05:17.385693   62061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878.pem --> /usr/share/ca-certificates/14878.pem (1338 bytes)
	I0918 21:05:17.411396   62061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem --> /usr/share/ca-certificates/148782.pem (1708 bytes)
	I0918 21:05:17.440205   62061 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0918 21:05:17.459639   62061 ssh_runner.go:195] Run: openssl version
	I0918 21:05:17.465846   62061 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0918 21:05:17.477482   62061 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0918 21:05:17.482579   62061 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 18 19:39 /usr/share/ca-certificates/minikubeCA.pem
	I0918 21:05:17.482650   62061 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0918 21:05:17.488822   62061 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0918 21:05:17.499729   62061 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14878.pem && ln -fs /usr/share/ca-certificates/14878.pem /etc/ssl/certs/14878.pem"
	I0918 21:05:17.511718   62061 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14878.pem
	I0918 21:05:17.516361   62061 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 18 19:57 /usr/share/ca-certificates/14878.pem
	I0918 21:05:17.516438   62061 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14878.pem
	I0918 21:05:17.522542   62061 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14878.pem /etc/ssl/certs/51391683.0"
	I0918 21:05:17.534450   62061 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148782.pem && ln -fs /usr/share/ca-certificates/148782.pem /etc/ssl/certs/148782.pem"
	I0918 21:05:17.545916   62061 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148782.pem
	I0918 21:05:17.550672   62061 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 18 19:57 /usr/share/ca-certificates/148782.pem
	I0918 21:05:17.550737   62061 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148782.pem
	I0918 21:05:17.556802   62061 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148782.pem /etc/ssl/certs/3ec20f2e.0"
	I0918 21:05:17.568991   62061 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0918 21:05:17.574098   62061 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0918 21:05:17.581458   62061 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0918 21:05:17.588242   62061 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0918 21:05:17.594889   62061 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0918 21:05:17.601499   62061 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0918 21:05:17.608193   62061 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
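	The openssl runs above use "-checkend 86400" to confirm each control-plane certificate remains valid for at least 24 hours before it is reused. A small Go sketch of the same check with crypto/x509; validFor is an illustrative name and only the first PEM block in the file is inspected.

	package certcheck

	import (
		"crypto/x509"
		"encoding/pem"
		"errors"
		"os"
		"time"
	)

	// validFor reports whether the PEM certificate at path is still valid
	// for at least d, i.e. now+d is before the certificate's NotAfter.
	func validFor(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, errors.New("no PEM data found in " + path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).Before(cert.NotAfter), nil
	}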
	I0918 21:05:17.614196   62061 kubeadm.go:392] StartCluster: {Name:old-k8s-version-740194 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-740194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.53 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 21:05:17.614298   62061 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0918 21:05:17.614363   62061 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0918 21:05:17.661551   62061 cri.go:89] found id: ""
	I0918 21:05:17.661627   62061 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0918 21:05:17.673087   62061 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0918 21:05:17.673110   62061 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0918 21:05:17.673153   62061 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0918 21:05:17.683213   62061 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0918 21:05:17.684537   62061 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-740194" does not appear in /home/jenkins/minikube-integration/19667-7671/kubeconfig
	I0918 21:05:17.685136   62061 kubeconfig.go:62] /home/jenkins/minikube-integration/19667-7671/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-740194" cluster setting kubeconfig missing "old-k8s-version-740194" context setting]
	I0918 21:05:17.686023   62061 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/kubeconfig: {Name:mk3a3eecadb0164dfdd5d3c4392b5b473a2a9bb0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 21:05:17.711337   62061 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0918 21:05:17.723894   62061 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.53
	I0918 21:05:17.723936   62061 kubeadm.go:1160] stopping kube-system containers ...
	I0918 21:05:17.723949   62061 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0918 21:05:17.724005   62061 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0918 21:05:17.761087   62061 cri.go:89] found id: ""
	I0918 21:05:17.761166   62061 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0918 21:05:17.779689   62061 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0918 21:05:17.791699   62061 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0918 21:05:17.791723   62061 kubeadm.go:157] found existing configuration files:
	
	I0918 21:05:17.791773   62061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0918 21:05:17.801804   62061 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0918 21:05:17.801879   62061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0918 21:05:17.812806   62061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0918 21:05:17.822813   62061 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0918 21:05:17.822902   62061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0918 21:05:17.833267   62061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0918 21:05:17.845148   62061 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0918 21:05:17.845230   62061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0918 21:05:17.855974   62061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0918 21:05:17.866490   62061 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0918 21:05:17.866593   62061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0918 21:05:17.877339   62061 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0918 21:05:17.887825   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:05:18.021691   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:05:18.726750   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:05:18.970635   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:05:19.081869   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:05:19.203071   62061 api_server.go:52] waiting for apiserver process to appear ...
	I0918 21:05:19.203166   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:17.156934   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:17.157481   61273 main.go:141] libmachine: (no-preload-331658) DBG | unable to find current IP address of domain no-preload-331658 in network mk-no-preload-331658
	I0918 21:05:17.157509   61273 main.go:141] libmachine: (no-preload-331658) DBG | I0918 21:05:17.157429   63200 retry.go:31] will retry after 1.586399944s: waiting for machine to come up
	I0918 21:05:18.746155   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:18.746620   61273 main.go:141] libmachine: (no-preload-331658) DBG | unable to find current IP address of domain no-preload-331658 in network mk-no-preload-331658
	I0918 21:05:18.746650   61273 main.go:141] libmachine: (no-preload-331658) DBG | I0918 21:05:18.746571   63200 retry.go:31] will retry after 2.204220189s: waiting for machine to come up
	I0918 21:05:20.953669   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:20.954223   61273 main.go:141] libmachine: (no-preload-331658) DBG | unable to find current IP address of domain no-preload-331658 in network mk-no-preload-331658
	I0918 21:05:20.954287   61273 main.go:141] libmachine: (no-preload-331658) DBG | I0918 21:05:20.954209   63200 retry.go:31] will retry after 2.418479665s: waiting for machine to come up
	I0918 21:05:18.634113   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:21.133516   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:18.365915   61740 pod_ready.go:93] pod "kube-apiserver-embed-certs-255556" in "kube-system" namespace has status "Ready":"True"
	I0918 21:05:18.365943   61740 pod_ready.go:82] duration metric: took 4.580799395s for pod "kube-apiserver-embed-certs-255556" in "kube-system" namespace to be "Ready" ...
	I0918 21:05:18.365956   61740 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-255556" in "kube-system" namespace to be "Ready" ...
	I0918 21:05:18.371010   61740 pod_ready.go:93] pod "kube-controller-manager-embed-certs-255556" in "kube-system" namespace has status "Ready":"True"
	I0918 21:05:18.371035   61740 pod_ready.go:82] duration metric: took 5.070331ms for pod "kube-controller-manager-embed-certs-255556" in "kube-system" namespace to be "Ready" ...
	I0918 21:05:18.371046   61740 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-v8szm" in "kube-system" namespace to be "Ready" ...
	I0918 21:05:18.375632   61740 pod_ready.go:93] pod "kube-proxy-v8szm" in "kube-system" namespace has status "Ready":"True"
	I0918 21:05:18.375658   61740 pod_ready.go:82] duration metric: took 4.603787ms for pod "kube-proxy-v8szm" in "kube-system" namespace to be "Ready" ...
	I0918 21:05:18.375671   61740 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-255556" in "kube-system" namespace to be "Ready" ...
	I0918 21:05:18.380527   61740 pod_ready.go:93] pod "kube-scheduler-embed-certs-255556" in "kube-system" namespace has status "Ready":"True"
	I0918 21:05:18.380551   61740 pod_ready.go:82] duration metric: took 4.872699ms for pod "kube-scheduler-embed-certs-255556" in "kube-system" namespace to be "Ready" ...
	I0918 21:05:18.380563   61740 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace to be "Ready" ...
	I0918 21:05:20.388600   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:22.887122   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:19.704128   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:20.203595   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:20.703244   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:21.204288   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:21.703469   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:22.203224   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:22.703968   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:23.204097   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:23.703933   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:24.204272   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:23.375904   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:23.376450   61273 main.go:141] libmachine: (no-preload-331658) DBG | unable to find current IP address of domain no-preload-331658 in network mk-no-preload-331658
	I0918 21:05:23.376476   61273 main.go:141] libmachine: (no-preload-331658) DBG | I0918 21:05:23.376397   63200 retry.go:31] will retry after 4.431211335s: waiting for machine to come up
	I0918 21:05:23.633093   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:25.633913   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:24.887771   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:27.386891   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:24.704240   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:25.203880   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:25.703983   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:26.204273   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:26.703861   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:27.204064   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:27.703276   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:28.204289   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:28.703701   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:29.203604   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
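	After the "kubeadm init phase" steps, the repeated pgrep runs above poll roughly every 500ms for a kube-apiserver process to appear. A minimal Go sketch of that wait loop; the 4-minute deadline is an assumption taken from the surrounding waits, not a value shown in the log.

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"time"
	)

	func main() {
		// poll pgrep until the apiserver process exists or the deadline passes
		deadline := time.Now().Add(4 * time.Minute)
		for time.Now().Before(deadline) {
			if exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
				fmt.Println("kube-apiserver process found")
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Fprintln(os.Stderr, "timed out waiting for the kube-apiserver process")
		os.Exit(1)
	}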
	I0918 21:05:27.811234   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:27.811698   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has current primary IP address 192.168.61.31 and MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:27.811719   61273 main.go:141] libmachine: (no-preload-331658) Found IP for machine: 192.168.61.31
	I0918 21:05:27.811729   61273 main.go:141] libmachine: (no-preload-331658) Reserving static IP address...
	I0918 21:05:27.812131   61273 main.go:141] libmachine: (no-preload-331658) DBG | found host DHCP lease matching {name: "no-preload-331658", mac: "52:54:00:2a:47:d0", ip: "192.168.61.31"} in network mk-no-preload-331658: {Iface:virbr2 ExpiryTime:2024-09-18 21:55:52 +0000 UTC Type:0 Mac:52:54:00:2a:47:d0 Iaid: IPaddr:192.168.61.31 Prefix:24 Hostname:no-preload-331658 Clientid:01:52:54:00:2a:47:d0}
	I0918 21:05:27.812150   61273 main.go:141] libmachine: (no-preload-331658) Reserved static IP address: 192.168.61.31
	I0918 21:05:27.812163   61273 main.go:141] libmachine: (no-preload-331658) DBG | skip adding static IP to network mk-no-preload-331658 - found existing host DHCP lease matching {name: "no-preload-331658", mac: "52:54:00:2a:47:d0", ip: "192.168.61.31"}
	I0918 21:05:27.812170   61273 main.go:141] libmachine: (no-preload-331658) Waiting for SSH to be available...
	I0918 21:05:27.812178   61273 main.go:141] libmachine: (no-preload-331658) DBG | Getting to WaitForSSH function...
	I0918 21:05:27.814300   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:27.814735   61273 main.go:141] libmachine: (no-preload-331658) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:47:d0", ip: ""} in network mk-no-preload-331658: {Iface:virbr2 ExpiryTime:2024-09-18 21:55:52 +0000 UTC Type:0 Mac:52:54:00:2a:47:d0 Iaid: IPaddr:192.168.61.31 Prefix:24 Hostname:no-preload-331658 Clientid:01:52:54:00:2a:47:d0}
	I0918 21:05:27.814767   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined IP address 192.168.61.31 and MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:27.814891   61273 main.go:141] libmachine: (no-preload-331658) DBG | Using SSH client type: external
	I0918 21:05:27.814922   61273 main.go:141] libmachine: (no-preload-331658) DBG | Using SSH private key: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/no-preload-331658/id_rsa (-rw-------)
	I0918 21:05:27.814945   61273 main.go:141] libmachine: (no-preload-331658) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.31 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19667-7671/.minikube/machines/no-preload-331658/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0918 21:05:27.814972   61273 main.go:141] libmachine: (no-preload-331658) DBG | About to run SSH command:
	I0918 21:05:27.814985   61273 main.go:141] libmachine: (no-preload-331658) DBG | exit 0
	I0918 21:05:27.939949   61273 main.go:141] libmachine: (no-preload-331658) DBG | SSH cmd err, output: <nil>: 
	I0918 21:05:27.940365   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetConfigRaw
	I0918 21:05:27.941187   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetIP
	I0918 21:05:27.943976   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:27.944375   61273 main.go:141] libmachine: (no-preload-331658) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:47:d0", ip: ""} in network mk-no-preload-331658: {Iface:virbr2 ExpiryTime:2024-09-18 21:55:52 +0000 UTC Type:0 Mac:52:54:00:2a:47:d0 Iaid: IPaddr:192.168.61.31 Prefix:24 Hostname:no-preload-331658 Clientid:01:52:54:00:2a:47:d0}
	I0918 21:05:27.944399   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined IP address 192.168.61.31 and MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:27.944670   61273 profile.go:143] Saving config to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/no-preload-331658/config.json ...
	I0918 21:05:27.944942   61273 machine.go:93] provisionDockerMachine start ...
	I0918 21:05:27.944963   61273 main.go:141] libmachine: (no-preload-331658) Calling .DriverName
	I0918 21:05:27.945228   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHHostname
	I0918 21:05:27.947444   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:27.947810   61273 main.go:141] libmachine: (no-preload-331658) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:47:d0", ip: ""} in network mk-no-preload-331658: {Iface:virbr2 ExpiryTime:2024-09-18 21:55:52 +0000 UTC Type:0 Mac:52:54:00:2a:47:d0 Iaid: IPaddr:192.168.61.31 Prefix:24 Hostname:no-preload-331658 Clientid:01:52:54:00:2a:47:d0}
	I0918 21:05:27.947843   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined IP address 192.168.61.31 and MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:27.947974   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHPort
	I0918 21:05:27.948196   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHKeyPath
	I0918 21:05:27.948404   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHKeyPath
	I0918 21:05:27.948664   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHUsername
	I0918 21:05:27.948845   61273 main.go:141] libmachine: Using SSH client type: native
	I0918 21:05:27.949078   61273 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.31 22 <nil> <nil>}
	I0918 21:05:27.949099   61273 main.go:141] libmachine: About to run SSH command:
	hostname
	I0918 21:05:28.052352   61273 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0918 21:05:28.052378   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetMachineName
	I0918 21:05:28.052638   61273 buildroot.go:166] provisioning hostname "no-preload-331658"
	I0918 21:05:28.052668   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetMachineName
	I0918 21:05:28.052923   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHHostname
	I0918 21:05:28.056168   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:28.056599   61273 main.go:141] libmachine: (no-preload-331658) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:47:d0", ip: ""} in network mk-no-preload-331658: {Iface:virbr2 ExpiryTime:2024-09-18 21:55:52 +0000 UTC Type:0 Mac:52:54:00:2a:47:d0 Iaid: IPaddr:192.168.61.31 Prefix:24 Hostname:no-preload-331658 Clientid:01:52:54:00:2a:47:d0}
	I0918 21:05:28.056631   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined IP address 192.168.61.31 and MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:28.056805   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHPort
	I0918 21:05:28.057009   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHKeyPath
	I0918 21:05:28.057168   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHKeyPath
	I0918 21:05:28.057305   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHUsername
	I0918 21:05:28.057478   61273 main.go:141] libmachine: Using SSH client type: native
	I0918 21:05:28.057652   61273 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.31 22 <nil> <nil>}
	I0918 21:05:28.057665   61273 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-331658 && echo "no-preload-331658" | sudo tee /etc/hostname
	I0918 21:05:28.174245   61273 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-331658
	
	I0918 21:05:28.174282   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHHostname
	I0918 21:05:28.177373   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:28.177753   61273 main.go:141] libmachine: (no-preload-331658) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:47:d0", ip: ""} in network mk-no-preload-331658: {Iface:virbr2 ExpiryTime:2024-09-18 21:55:52 +0000 UTC Type:0 Mac:52:54:00:2a:47:d0 Iaid: IPaddr:192.168.61.31 Prefix:24 Hostname:no-preload-331658 Clientid:01:52:54:00:2a:47:d0}
	I0918 21:05:28.177781   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined IP address 192.168.61.31 and MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:28.177981   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHPort
	I0918 21:05:28.178202   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHKeyPath
	I0918 21:05:28.178380   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHKeyPath
	I0918 21:05:28.178523   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHUsername
	I0918 21:05:28.178752   61273 main.go:141] libmachine: Using SSH client type: native
	I0918 21:05:28.178948   61273 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.31 22 <nil> <nil>}
	I0918 21:05:28.178965   61273 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-331658' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-331658/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-331658' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0918 21:05:28.292659   61273 main.go:141] libmachine: SSH cmd err, output: <nil>: 
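	(Editor's note) The hostname and /etc/hosts steps above are shell snippets pushed to the guest by ssh_runner over SSH. Purely as an illustration, and not minikube's actual ssh_runner code, a minimal Go sketch of running one of those remote commands with golang.org/x/crypto/ssh, reusing the key path, user and address seen in the log:

	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		// Key path, user and address taken from the log above; the helper itself is illustrative.
		keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/19667-7671/.minikube/machines/no-preload-331658/id_rsa")
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(keyBytes)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no in the log
		}
		client, err := ssh.Dial("tcp", "192.168.61.31:22", cfg)
		if err != nil {
			panic(err)
		}
		defer client.Close()

		session, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		defer session.Close()

		// Same command the provisioner runs when setting the hostname.
		out, err := session.CombinedOutput(`sudo hostname no-preload-331658 && echo "no-preload-331658" | sudo tee /etc/hostname`)
		fmt.Printf("SSH cmd err, output: %v: %s\n", err, out)
	}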
	I0918 21:05:28.292691   61273 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19667-7671/.minikube CaCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19667-7671/.minikube}
	I0918 21:05:28.292714   61273 buildroot.go:174] setting up certificates
	I0918 21:05:28.292725   61273 provision.go:84] configureAuth start
	I0918 21:05:28.292734   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetMachineName
	I0918 21:05:28.293091   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetIP
	I0918 21:05:28.295792   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:28.296192   61273 main.go:141] libmachine: (no-preload-331658) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:47:d0", ip: ""} in network mk-no-preload-331658: {Iface:virbr2 ExpiryTime:2024-09-18 21:55:52 +0000 UTC Type:0 Mac:52:54:00:2a:47:d0 Iaid: IPaddr:192.168.61.31 Prefix:24 Hostname:no-preload-331658 Clientid:01:52:54:00:2a:47:d0}
	I0918 21:05:28.296219   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined IP address 192.168.61.31 and MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:28.296405   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHHostname
	I0918 21:05:28.298446   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:28.298788   61273 main.go:141] libmachine: (no-preload-331658) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:47:d0", ip: ""} in network mk-no-preload-331658: {Iface:virbr2 ExpiryTime:2024-09-18 21:55:52 +0000 UTC Type:0 Mac:52:54:00:2a:47:d0 Iaid: IPaddr:192.168.61.31 Prefix:24 Hostname:no-preload-331658 Clientid:01:52:54:00:2a:47:d0}
	I0918 21:05:28.298815   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined IP address 192.168.61.31 and MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:28.298938   61273 provision.go:143] copyHostCerts
	I0918 21:05:28.299013   61273 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem, removing ...
	I0918 21:05:28.299026   61273 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem
	I0918 21:05:28.299078   61273 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem (1082 bytes)
	I0918 21:05:28.299170   61273 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem, removing ...
	I0918 21:05:28.299178   61273 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem
	I0918 21:05:28.299199   61273 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem (1123 bytes)
	I0918 21:05:28.299252   61273 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem, removing ...
	I0918 21:05:28.299258   61273 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem
	I0918 21:05:28.299278   61273 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem (1679 bytes)
	I0918 21:05:28.299325   61273 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem org=jenkins.no-preload-331658 san=[127.0.0.1 192.168.61.31 localhost minikube no-preload-331658]
	I0918 21:05:28.606565   61273 provision.go:177] copyRemoteCerts
	I0918 21:05:28.606629   61273 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0918 21:05:28.606653   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHHostname
	I0918 21:05:28.609156   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:28.609533   61273 main.go:141] libmachine: (no-preload-331658) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:47:d0", ip: ""} in network mk-no-preload-331658: {Iface:virbr2 ExpiryTime:2024-09-18 21:55:52 +0000 UTC Type:0 Mac:52:54:00:2a:47:d0 Iaid: IPaddr:192.168.61.31 Prefix:24 Hostname:no-preload-331658 Clientid:01:52:54:00:2a:47:d0}
	I0918 21:05:28.609564   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined IP address 192.168.61.31 and MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:28.609690   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHPort
	I0918 21:05:28.609891   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHKeyPath
	I0918 21:05:28.610102   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHUsername
	I0918 21:05:28.610332   61273 sshutil.go:53] new ssh client: &{IP:192.168.61.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/no-preload-331658/id_rsa Username:docker}
	I0918 21:05:28.690571   61273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0918 21:05:28.719257   61273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0918 21:05:28.744119   61273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0918 21:05:28.768692   61273 provision.go:87] duration metric: took 475.955066ms to configureAuth
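	(Editor's note) configureAuth above (provision.go:117) issues a server certificate signed by the workspace CA with the SANs [127.0.0.1 192.168.61.31 localhost minikube no-preload-331658], then copies ca.pem, server.pem and server-key.pem to /etc/docker on the guest. A minimal sketch of producing such a SAN-bearing certificate from an existing CA with Go's crypto/x509; this is illustrative only, assumes an RSA PKCS#1 CA key, and is not the code path minikube itself uses:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Error handling elided for brevity; file names mirror the log.
		caPEM, _ := os.ReadFile("ca.pem")
		caKeyPEM, _ := os.ReadFile("ca-key.pem")
		caBlock, _ := pem.Decode(caPEM)
		caCert, _ := x509.ParseCertificate(caBlock.Bytes)
		keyBlock, _ := pem.Decode(caKeyPEM)
		caKey, _ := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumes an RSA PKCS#1 CA key

		// Fresh key pair for the server certificate (server-key.pem would be written from this).
		serverKey, _ := rsa.GenerateKey(rand.Reader, 2048)

		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()),
			Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-331658"}},
			// SANs as reported by provision.go:117 above.
			DNSNames:    []string{"localhost", "minikube", "no-preload-331658"},
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.31")},
			NotBefore:   time.Now(),
			NotAfter:    time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)

		_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}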
	I0918 21:05:28.768720   61273 buildroot.go:189] setting minikube options for container-runtime
	I0918 21:05:28.768941   61273 config.go:182] Loaded profile config "no-preload-331658": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0918 21:05:28.769031   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHHostname
	I0918 21:05:28.771437   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:28.771747   61273 main.go:141] libmachine: (no-preload-331658) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:47:d0", ip: ""} in network mk-no-preload-331658: {Iface:virbr2 ExpiryTime:2024-09-18 21:55:52 +0000 UTC Type:0 Mac:52:54:00:2a:47:d0 Iaid: IPaddr:192.168.61.31 Prefix:24 Hostname:no-preload-331658 Clientid:01:52:54:00:2a:47:d0}
	I0918 21:05:28.771786   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined IP address 192.168.61.31 and MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:28.771906   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHPort
	I0918 21:05:28.772127   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHKeyPath
	I0918 21:05:28.772330   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHKeyPath
	I0918 21:05:28.772496   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHUsername
	I0918 21:05:28.772717   61273 main.go:141] libmachine: Using SSH client type: native
	I0918 21:05:28.772886   61273 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.31 22 <nil> <nil>}
	I0918 21:05:28.772902   61273 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0918 21:05:29.001137   61273 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0918 21:05:29.001160   61273 machine.go:96] duration metric: took 1.056205004s to provisionDockerMachine
	I0918 21:05:29.001171   61273 start.go:293] postStartSetup for "no-preload-331658" (driver="kvm2")
	I0918 21:05:29.001181   61273 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0918 21:05:29.001194   61273 main.go:141] libmachine: (no-preload-331658) Calling .DriverName
	I0918 21:05:29.001531   61273 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0918 21:05:29.001556   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHHostname
	I0918 21:05:29.004307   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:29.004656   61273 main.go:141] libmachine: (no-preload-331658) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:47:d0", ip: ""} in network mk-no-preload-331658: {Iface:virbr2 ExpiryTime:2024-09-18 21:55:52 +0000 UTC Type:0 Mac:52:54:00:2a:47:d0 Iaid: IPaddr:192.168.61.31 Prefix:24 Hostname:no-preload-331658 Clientid:01:52:54:00:2a:47:d0}
	I0918 21:05:29.004686   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined IP address 192.168.61.31 and MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:29.004877   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHPort
	I0918 21:05:29.005128   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHKeyPath
	I0918 21:05:29.005379   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHUsername
	I0918 21:05:29.005556   61273 sshutil.go:53] new ssh client: &{IP:192.168.61.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/no-preload-331658/id_rsa Username:docker}
	I0918 21:05:29.087453   61273 ssh_runner.go:195] Run: cat /etc/os-release
	I0918 21:05:29.091329   61273 info.go:137] Remote host: Buildroot 2023.02.9
	I0918 21:05:29.091356   61273 filesync.go:126] Scanning /home/jenkins/minikube-integration/19667-7671/.minikube/addons for local assets ...
	I0918 21:05:29.091422   61273 filesync.go:126] Scanning /home/jenkins/minikube-integration/19667-7671/.minikube/files for local assets ...
	I0918 21:05:29.091493   61273 filesync.go:149] local asset: /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem -> 148782.pem in /etc/ssl/certs
	I0918 21:05:29.091578   61273 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0918 21:05:29.101039   61273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem --> /etc/ssl/certs/148782.pem (1708 bytes)
	I0918 21:05:29.125451   61273 start.go:296] duration metric: took 124.264463ms for postStartSetup
	I0918 21:05:29.125492   61273 fix.go:56] duration metric: took 19.880181743s for fixHost
	I0918 21:05:29.125514   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHHostname
	I0918 21:05:29.128543   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:29.128968   61273 main.go:141] libmachine: (no-preload-331658) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:47:d0", ip: ""} in network mk-no-preload-331658: {Iface:virbr2 ExpiryTime:2024-09-18 21:55:52 +0000 UTC Type:0 Mac:52:54:00:2a:47:d0 Iaid: IPaddr:192.168.61.31 Prefix:24 Hostname:no-preload-331658 Clientid:01:52:54:00:2a:47:d0}
	I0918 21:05:29.129022   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined IP address 192.168.61.31 and MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:29.129185   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHPort
	I0918 21:05:29.129385   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHKeyPath
	I0918 21:05:29.129580   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHKeyPath
	I0918 21:05:29.129739   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHUsername
	I0918 21:05:29.129919   61273 main.go:141] libmachine: Using SSH client type: native
	I0918 21:05:29.130155   61273 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.31 22 <nil> <nil>}
	I0918 21:05:29.130172   61273 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0918 21:05:29.240857   61273 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726693529.214864261
	
	I0918 21:05:29.240886   61273 fix.go:216] guest clock: 1726693529.214864261
	I0918 21:05:29.240897   61273 fix.go:229] Guest: 2024-09-18 21:05:29.214864261 +0000 UTC Remote: 2024-09-18 21:05:29.125495769 +0000 UTC m=+357.666326175 (delta=89.368492ms)
	I0918 21:05:29.240943   61273 fix.go:200] guest clock delta is within tolerance: 89.368492ms
	I0918 21:05:29.240949   61273 start.go:83] releasing machines lock for "no-preload-331658", held for 19.99567651s
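	(Editor's note) fix.go:216-229 above parses the guest's `date +%s.%N` output and only treats the host and guest clocks as skewed when the delta exceeds a tolerance. A small illustrative sketch of that comparison; the 2-second threshold is an assumption for the example, not a value taken from the log:

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// guestClock parses the "seconds.nanoseconds" string returned by `date +%s.%N`.
	func guestClock(out string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		nsec, err := strconv.ParseInt(parts[1], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		return time.Unix(sec, nsec), nil
	}

	func main() {
		guest, _ := guestClock("1726693529.214864261") // value from the log above
		delta := time.Since(guest)
		if delta < 0 {
			delta = -delta
		}
		const tolerance = 2 * time.Second // assumed tolerance, for illustration only
		fmt.Printf("guest clock delta %v within tolerance: %v\n", delta, delta < tolerance)
	}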
	I0918 21:05:29.240969   61273 main.go:141] libmachine: (no-preload-331658) Calling .DriverName
	I0918 21:05:29.241256   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetIP
	I0918 21:05:29.243922   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:29.244347   61273 main.go:141] libmachine: (no-preload-331658) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:47:d0", ip: ""} in network mk-no-preload-331658: {Iface:virbr2 ExpiryTime:2024-09-18 21:55:52 +0000 UTC Type:0 Mac:52:54:00:2a:47:d0 Iaid: IPaddr:192.168.61.31 Prefix:24 Hostname:no-preload-331658 Clientid:01:52:54:00:2a:47:d0}
	I0918 21:05:29.244376   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined IP address 192.168.61.31 and MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:29.244575   61273 main.go:141] libmachine: (no-preload-331658) Calling .DriverName
	I0918 21:05:29.245157   61273 main.go:141] libmachine: (no-preload-331658) Calling .DriverName
	I0918 21:05:29.245380   61273 main.go:141] libmachine: (no-preload-331658) Calling .DriverName
	I0918 21:05:29.245492   61273 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0918 21:05:29.245548   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHHostname
	I0918 21:05:29.245640   61273 ssh_runner.go:195] Run: cat /version.json
	I0918 21:05:29.245665   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHHostname
	I0918 21:05:29.248511   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:29.248927   61273 main.go:141] libmachine: (no-preload-331658) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:47:d0", ip: ""} in network mk-no-preload-331658: {Iface:virbr2 ExpiryTime:2024-09-18 21:55:52 +0000 UTC Type:0 Mac:52:54:00:2a:47:d0 Iaid: IPaddr:192.168.61.31 Prefix:24 Hostname:no-preload-331658 Clientid:01:52:54:00:2a:47:d0}
	I0918 21:05:29.248954   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:29.248984   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined IP address 192.168.61.31 and MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:29.249198   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHPort
	I0918 21:05:29.249423   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHKeyPath
	I0918 21:05:29.249506   61273 main.go:141] libmachine: (no-preload-331658) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:47:d0", ip: ""} in network mk-no-preload-331658: {Iface:virbr2 ExpiryTime:2024-09-18 21:55:52 +0000 UTC Type:0 Mac:52:54:00:2a:47:d0 Iaid: IPaddr:192.168.61.31 Prefix:24 Hostname:no-preload-331658 Clientid:01:52:54:00:2a:47:d0}
	I0918 21:05:29.249538   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined IP address 192.168.61.31 and MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:29.249608   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHUsername
	I0918 21:05:29.249692   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHPort
	I0918 21:05:29.249791   61273 sshutil.go:53] new ssh client: &{IP:192.168.61.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/no-preload-331658/id_rsa Username:docker}
	I0918 21:05:29.249899   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHKeyPath
	I0918 21:05:29.250076   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHUsername
	I0918 21:05:29.250228   61273 sshutil.go:53] new ssh client: &{IP:192.168.61.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/no-preload-331658/id_rsa Username:docker}
	I0918 21:05:29.365104   61273 ssh_runner.go:195] Run: systemctl --version
	I0918 21:05:29.371202   61273 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0918 21:05:29.518067   61273 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0918 21:05:29.524126   61273 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0918 21:05:29.524207   61273 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0918 21:05:29.540977   61273 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0918 21:05:29.541007   61273 start.go:495] detecting cgroup driver to use...
	I0918 21:05:29.541072   61273 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0918 21:05:29.558893   61273 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0918 21:05:29.576084   61273 docker.go:217] disabling cri-docker service (if available) ...
	I0918 21:05:29.576161   61273 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0918 21:05:29.591212   61273 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0918 21:05:29.605765   61273 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0918 21:05:29.734291   61273 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0918 21:05:29.892707   61273 docker.go:233] disabling docker service ...
	I0918 21:05:29.892771   61273 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0918 21:05:29.907575   61273 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0918 21:05:29.920545   61273 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0918 21:05:30.058604   61273 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0918 21:05:30.196896   61273 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0918 21:05:30.211398   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0918 21:05:30.231791   61273 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0918 21:05:30.231917   61273 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:05:30.243369   61273 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0918 21:05:30.243465   61273 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:05:30.254911   61273 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:05:30.266839   61273 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:05:30.278532   61273 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0918 21:05:30.290173   61273 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:05:30.301068   61273 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:05:30.318589   61273 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:05:30.329022   61273 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0918 21:05:30.338645   61273 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0918 21:05:30.338720   61273 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0918 21:05:30.351797   61273 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0918 21:05:30.363412   61273 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 21:05:30.504035   61273 ssh_runner.go:195] Run: sudo systemctl restart crio
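	(Editor's note) Read together, the sed commands above leave the /etc/crio/crio.conf.d/02-crio.conf drop-in with roughly the following settings before crio is restarted; this is reconstructed from the commands in the log, not captured from the VM:

	pause_image = "registry.k8s.io/pause:3.10"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]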
	I0918 21:05:30.606470   61273 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0918 21:05:30.606547   61273 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0918 21:05:30.611499   61273 start.go:563] Will wait 60s for crictl version
	I0918 21:05:30.611559   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:05:30.615485   61273 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0918 21:05:30.659735   61273 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0918 21:05:30.659835   61273 ssh_runner.go:195] Run: crio --version
	I0918 21:05:30.690573   61273 ssh_runner.go:195] Run: crio --version
	I0918 21:05:30.723342   61273 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0918 21:05:30.724604   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetIP
	I0918 21:05:30.727445   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:30.727885   61273 main.go:141] libmachine: (no-preload-331658) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:47:d0", ip: ""} in network mk-no-preload-331658: {Iface:virbr2 ExpiryTime:2024-09-18 21:55:52 +0000 UTC Type:0 Mac:52:54:00:2a:47:d0 Iaid: IPaddr:192.168.61.31 Prefix:24 Hostname:no-preload-331658 Clientid:01:52:54:00:2a:47:d0}
	I0918 21:05:30.727919   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined IP address 192.168.61.31 and MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:30.728132   61273 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0918 21:05:30.732134   61273 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0918 21:05:30.745695   61273 kubeadm.go:883] updating cluster {Name:no-preload-331658 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-331658 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.31 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0918 21:05:30.745813   61273 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0918 21:05:30.745849   61273 ssh_runner.go:195] Run: sudo crictl images --output json
	I0918 21:05:30.788504   61273 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0918 21:05:30.788537   61273 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.1 registry.k8s.io/kube-controller-manager:v1.31.1 registry.k8s.io/kube-scheduler:v1.31.1 registry.k8s.io/kube-proxy:v1.31.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0918 21:05:30.788634   61273 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 21:05:30.788655   61273 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0918 21:05:30.788673   61273 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I0918 21:05:30.788685   61273 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0918 21:05:30.788655   61273 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I0918 21:05:30.788796   61273 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I0918 21:05:30.788804   61273 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0918 21:05:30.788655   61273 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0918 21:05:30.790173   61273 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 21:05:30.790181   61273 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I0918 21:05:30.790199   61273 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0918 21:05:30.790170   61273 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I0918 21:05:30.790222   61273 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0918 21:05:30.790237   61273 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0918 21:05:30.790268   61273 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0918 21:05:30.790542   61273 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I0918 21:05:31.049150   61273 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0918 21:05:31.052046   61273 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.1
	I0918 21:05:31.099660   61273 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I0918 21:05:31.099861   61273 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.1
	I0918 21:05:31.111308   61273 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.1
	I0918 21:05:31.111439   61273 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0918 21:05:31.112293   61273 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.1
	I0918 21:05:31.203873   61273 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.1" needs transfer: "registry.k8s.io/kube-proxy:v1.31.1" does not exist at hash "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561" in container runtime
	I0918 21:05:31.203934   61273 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.1
	I0918 21:05:31.204042   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:05:31.208912   61273 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I0918 21:05:31.208937   61273 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.1" does not exist at hash "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b" in container runtime
	I0918 21:05:31.208968   61273 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.1
	I0918 21:05:31.208960   61273 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I0918 21:05:31.209020   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:05:31.209029   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:05:31.249355   61273 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0918 21:05:31.249408   61273 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0918 21:05:31.249459   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:05:31.253214   61273 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.1" does not exist at hash "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1" in container runtime
	I0918 21:05:31.253244   61273 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.1" does not exist at hash "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee" in container runtime
	I0918 21:05:31.253286   61273 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.1
	I0918 21:05:31.253274   61273 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0918 21:05:31.253335   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:05:31.253339   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:05:31.253351   61273 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0918 21:05:31.253405   61273 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0918 21:05:31.253419   61273 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0918 21:05:31.255163   61273 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0918 21:05:31.330929   61273 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0918 21:05:31.330999   61273 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0918 21:05:31.349540   61273 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0918 21:05:31.349558   61273 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0918 21:05:31.350088   61273 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0918 21:05:31.353763   61273 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0918 21:05:31.447057   61273 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0918 21:05:31.457171   61273 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0918 21:05:31.457239   61273 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0918 21:05:31.483087   61273 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0918 21:05:31.483097   61273 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0918 21:05:31.483210   61273 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0918 21:05:28.131874   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:30.133067   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:32.134557   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:29.389052   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:31.887032   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:29.704274   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:30.203372   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:30.703751   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:31.203670   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:31.704097   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:32.203611   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:32.703968   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:33.203356   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:33.704260   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:34.204082   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:31.573784   61273 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I0918 21:05:31.573906   61273 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1
	I0918 21:05:31.573927   61273 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0918 21:05:31.573951   61273 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I0918 21:05:31.574038   61273 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0918 21:05:31.605972   61273 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I0918 21:05:31.606077   61273 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0918 21:05:31.606086   61273 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I0918 21:05:31.613640   61273 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0918 21:05:31.613769   61273 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0918 21:05:31.641105   61273 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I0918 21:05:31.641109   61273 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.1 (exists)
	I0918 21:05:31.641199   61273 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.1
	I0918 21:05:31.641223   61273 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0918 21:05:31.641244   61273 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1
	I0918 21:05:31.641175   61273 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.1 (exists)
	I0918 21:05:31.666586   61273 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I0918 21:05:31.666661   61273 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I0918 21:05:31.666792   61273 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0918 21:05:31.666821   61273 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.1 (exists)
	I0918 21:05:31.666795   61273 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0918 21:05:32.009797   61273 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 21:05:33.610028   61273 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1: (1.968756977s)
	I0918 21:05:33.610065   61273 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 from cache
	I0918 21:05:33.610080   61273 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1: (1.943261692s)
	I0918 21:05:33.610111   61273 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.1 (exists)
	I0918 21:05:33.610090   61273 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0918 21:05:33.610122   61273 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.600294362s)
	I0918 21:05:33.610161   61273 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0918 21:05:33.610174   61273 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0918 21:05:33.610193   61273 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 21:05:33.610242   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:05:35.571685   61273 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1: (1.96147024s)
	I0918 21:05:35.571722   61273 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 from cache
	I0918 21:05:35.571748   61273 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I0918 21:05:35.571802   61273 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I0918 21:05:35.571802   61273 ssh_runner.go:235] Completed: which crictl: (1.961540517s)
	I0918 21:05:35.571882   61273 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 21:05:34.632853   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:36.633341   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:33.887591   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:36.387534   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:34.703445   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:35.203660   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:35.703374   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:36.203304   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:36.704191   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:37.204129   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:37.703912   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:38.203310   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:38.703616   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:39.203258   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:37.536622   61273 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.96470192s)
	I0918 21:05:37.536666   61273 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (1.96484474s)
	I0918 21:05:37.536690   61273 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I0918 21:05:37.536713   61273 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 21:05:37.536721   61273 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0918 21:05:37.536766   61273 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0918 21:05:39.615751   61273 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1: (2.078954836s)
	I0918 21:05:39.615791   61273 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 from cache
	I0918 21:05:39.615823   61273 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.079084749s)
	I0918 21:05:39.615902   61273 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 21:05:39.615829   61273 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0918 21:05:39.615972   61273 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0918 21:05:39.676258   61273 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0918 21:05:39.676355   61273 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0918 21:05:38.634158   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:40.634292   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:38.888255   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:41.387766   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:39.704105   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:40.204102   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:40.704073   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:41.203654   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:41.703947   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:42.203722   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:42.703303   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:43.203847   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:43.704163   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:44.203216   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:42.909577   61273 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (3.233201912s)
	I0918 21:05:42.909617   61273 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0918 21:05:42.909722   61273 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.293701319s)
	I0918 21:05:42.909748   61273 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0918 21:05:42.909781   61273 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0918 21:05:42.909859   61273 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0918 21:05:44.767646   61273 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1: (1.857764218s)
	I0918 21:05:44.767673   61273 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 from cache
	I0918 21:05:44.767705   61273 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0918 21:05:44.767787   61273 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0918 21:05:45.419210   61273 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0918 21:05:45.419257   61273 cache_images.go:123] Successfully loaded all cached images
	I0918 21:05:45.419265   61273 cache_images.go:92] duration metric: took 14.630712818s to LoadCachedImages
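	(Editor's note) Each image in the LoadCachedImages run above follows the same pattern: podman image inspect to see whether the runtime already has it, crictl rmi to drop a stale tag, then podman load of the tarball that was copied into /var/lib/minikube/images. An illustrative Go sketch of that per-image flow with os/exec; the helper name loadCachedImage is hypothetical, not minikube's:

	package main

	import (
		"fmt"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// loadCachedImage mirrors the flow recorded in the log: check the container
	// runtime for the image, and if it is absent, remove any stale tag and load
	// the cached tarball with podman.
	func loadCachedImage(image, cacheDir string) error {
		// 1. Does the runtime already have the image?
		out, err := exec.Command("sudo", "podman", "image", "inspect",
			"--format", "{{.Id}}", image).Output()
		if err == nil && strings.TrimSpace(string(out)) != "" {
			return nil // already present, nothing to transfer
		}

		// 2. Remove whatever partial or stale tag crictl may still know about.
		_ = exec.Command("sudo", "/usr/bin/crictl", "rmi", image).Run()

		// 3. Load the tarball that was copied into the images directory
		//    (e.g. kube-proxy:v1.31.1 -> kube-proxy_v1.31.1, as in the log).
		tarball := filepath.Join(cacheDir, strings.ReplaceAll(filepath.Base(image), ":", "_"))
		if err := exec.Command("sudo", "podman", "load", "-i", tarball).Run(); err != nil {
			return fmt.Errorf("podman load %s: %w", tarball, err)
		}
		return nil
	}

	func main() {
		err := loadCachedImage("registry.k8s.io/kube-proxy:v1.31.1", "/var/lib/minikube/images")
		fmt.Println("loaded:", err)
	}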
	I0918 21:05:45.419278   61273 kubeadm.go:934] updating node { 192.168.61.31 8443 v1.31.1 crio true true} ...
	I0918 21:05:45.419399   61273 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-331658 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.31
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-331658 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0918 21:05:45.419479   61273 ssh_runner.go:195] Run: crio config
	I0918 21:05:45.468525   61273 cni.go:84] Creating CNI manager for ""
	I0918 21:05:45.468549   61273 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0918 21:05:45.468558   61273 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0918 21:05:45.468579   61273 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.31 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-331658 NodeName:no-preload-331658 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.31"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.31 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0918 21:05:45.468706   61273 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.31
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-331658"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.31
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.31"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0918 21:05:45.468781   61273 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0918 21:05:45.479592   61273 binaries.go:44] Found k8s binaries, skipping transfer
	I0918 21:05:45.479662   61273 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0918 21:05:45.488586   61273 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0918 21:05:45.507027   61273 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0918 21:05:45.525430   61273 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0918 21:05:45.543854   61273 ssh_runner.go:195] Run: grep 192.168.61.31	control-plane.minikube.internal$ /etc/hosts
	I0918 21:05:45.547792   61273 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.31	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0918 21:05:45.559968   61273 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 21:05:45.686602   61273 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0918 21:05:45.702793   61273 certs.go:68] Setting up /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/no-preload-331658 for IP: 192.168.61.31
	I0918 21:05:45.702814   61273 certs.go:194] generating shared ca certs ...
	I0918 21:05:45.702829   61273 certs.go:226] acquiring lock for ca certs: {Name:mkf54e0e71ed884c4372f9d3d4cb308ea4600185 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 21:05:45.703005   61273 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.key
	I0918 21:05:45.703071   61273 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.key
	I0918 21:05:45.703085   61273 certs.go:256] generating profile certs ...
	I0918 21:05:45.703159   61273 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/no-preload-331658/client.key
	I0918 21:05:45.703228   61273 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/no-preload-331658/apiserver.key.1a336b78
	I0918 21:05:45.703263   61273 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/no-preload-331658/proxy-client.key
	I0918 21:05:45.703384   61273 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878.pem (1338 bytes)
	W0918 21:05:45.703417   61273 certs.go:480] ignoring /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878_empty.pem, impossibly tiny 0 bytes
	I0918 21:05:45.703430   61273 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem (1675 bytes)
	I0918 21:05:45.703463   61273 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem (1082 bytes)
	I0918 21:05:45.703493   61273 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem (1123 bytes)
	I0918 21:05:45.703521   61273 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem (1679 bytes)
	I0918 21:05:45.703582   61273 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem (1708 bytes)
	I0918 21:05:45.704338   61273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0918 21:05:45.757217   61273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0918 21:05:45.791588   61273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0918 21:05:45.825543   61273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0918 21:05:45.859322   61273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/no-preload-331658/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0918 21:05:45.892890   61273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/no-preload-331658/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0918 21:05:45.922841   61273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/no-preload-331658/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0918 21:05:45.947670   61273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/no-preload-331658/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0918 21:05:45.973315   61273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem --> /usr/share/ca-certificates/148782.pem (1708 bytes)
	I0918 21:05:45.997699   61273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0918 21:05:46.022802   61273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878.pem --> /usr/share/ca-certificates/14878.pem (1338 bytes)
	I0918 21:05:46.046646   61273 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0918 21:05:46.063329   61273 ssh_runner.go:195] Run: openssl version
	I0918 21:05:46.069432   61273 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148782.pem && ln -fs /usr/share/ca-certificates/148782.pem /etc/ssl/certs/148782.pem"
	I0918 21:05:46.081104   61273 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148782.pem
	I0918 21:05:46.086180   61273 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 18 19:57 /usr/share/ca-certificates/148782.pem
	I0918 21:05:46.086241   61273 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148782.pem
	I0918 21:05:46.092527   61273 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148782.pem /etc/ssl/certs/3ec20f2e.0"
	I0918 21:05:46.103601   61273 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0918 21:05:46.114656   61273 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0918 21:05:46.118788   61273 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 18 19:39 /usr/share/ca-certificates/minikubeCA.pem
	I0918 21:05:46.118855   61273 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0918 21:05:46.124094   61273 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0918 21:05:46.135442   61273 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14878.pem && ln -fs /usr/share/ca-certificates/14878.pem /etc/ssl/certs/14878.pem"
	I0918 21:05:46.146105   61273 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14878.pem
	I0918 21:05:46.150661   61273 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 18 19:57 /usr/share/ca-certificates/14878.pem
	I0918 21:05:46.150714   61273 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14878.pem
	I0918 21:05:46.156247   61273 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14878.pem /etc/ssl/certs/51391683.0"
	I0918 21:05:46.167475   61273 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0918 21:05:46.172172   61273 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0918 21:05:46.178638   61273 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0918 21:05:46.184644   61273 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0918 21:05:46.190704   61273 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0918 21:05:46.196414   61273 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0918 21:05:46.202467   61273 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0918 21:05:46.208306   61273 kubeadm.go:392] StartCluster: {Name:no-preload-331658 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-331658 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.31 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 21:05:46.208405   61273 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0918 21:05:46.208472   61273 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0918 21:05:46.247189   61273 cri.go:89] found id: ""
	I0918 21:05:46.247267   61273 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0918 21:05:46.258228   61273 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0918 21:05:46.258253   61273 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0918 21:05:46.258309   61273 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0918 21:05:46.268703   61273 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0918 21:05:46.269728   61273 kubeconfig.go:125] found "no-preload-331658" server: "https://192.168.61.31:8443"
	I0918 21:05:46.271749   61273 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0918 21:05:46.282051   61273 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.31
	I0918 21:05:46.282105   61273 kubeadm.go:1160] stopping kube-system containers ...
	I0918 21:05:46.282122   61273 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0918 21:05:46.282191   61273 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0918 21:05:46.319805   61273 cri.go:89] found id: ""
	I0918 21:05:46.319880   61273 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0918 21:05:46.336130   61273 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0918 21:05:46.345940   61273 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0918 21:05:46.345962   61273 kubeadm.go:157] found existing configuration files:
	
	I0918 21:05:46.346008   61273 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0918 21:05:46.355577   61273 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0918 21:05:46.355658   61273 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0918 21:05:46.367154   61273 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0918 21:05:46.377062   61273 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0918 21:05:46.377126   61273 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0918 21:05:46.387180   61273 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0918 21:05:46.396578   61273 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0918 21:05:46.396642   61273 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0918 21:05:46.406687   61273 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0918 21:05:46.416545   61273 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0918 21:05:46.416617   61273 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0918 21:05:46.426405   61273 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0918 21:05:46.436343   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:05:43.132484   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:45.132905   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:47.132942   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:43.890245   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:46.386955   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:44.703267   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:45.203924   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:45.703945   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:46.203386   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:46.703674   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:47.203387   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:47.704034   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:48.203348   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:48.703715   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:49.203984   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:46.563094   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:05:47.663823   61273 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.100694645s)
	I0918 21:05:47.663857   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:05:47.895962   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:05:47.978862   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:05:48.095438   61273 api_server.go:52] waiting for apiserver process to appear ...
	I0918 21:05:48.095530   61273 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:48.595581   61273 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:49.095761   61273 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:49.122304   61273 api_server.go:72] duration metric: took 1.026867171s to wait for apiserver process to appear ...
	I0918 21:05:49.122343   61273 api_server.go:88] waiting for apiserver healthz status ...
	I0918 21:05:49.122361   61273 api_server.go:253] Checking apiserver healthz at https://192.168.61.31:8443/healthz ...
	I0918 21:05:49.133503   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:51.133761   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:48.386996   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:50.387697   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:52.886989   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:52.253818   61273 api_server.go:279] https://192.168.61.31:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0918 21:05:52.253850   61273 api_server.go:103] status: https://192.168.61.31:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0918 21:05:52.253864   61273 api_server.go:253] Checking apiserver healthz at https://192.168.61.31:8443/healthz ...
	I0918 21:05:52.290586   61273 api_server.go:279] https://192.168.61.31:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0918 21:05:52.290617   61273 api_server.go:103] status: https://192.168.61.31:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0918 21:05:52.623078   61273 api_server.go:253] Checking apiserver healthz at https://192.168.61.31:8443/healthz ...
	I0918 21:05:52.631774   61273 api_server.go:279] https://192.168.61.31:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0918 21:05:52.631811   61273 api_server.go:103] status: https://192.168.61.31:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0918 21:05:53.123498   61273 api_server.go:253] Checking apiserver healthz at https://192.168.61.31:8443/healthz ...
	I0918 21:05:53.132091   61273 api_server.go:279] https://192.168.61.31:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0918 21:05:53.132120   61273 api_server.go:103] status: https://192.168.61.31:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0918 21:05:53.622597   61273 api_server.go:253] Checking apiserver healthz at https://192.168.61.31:8443/healthz ...
	I0918 21:05:53.628896   61273 api_server.go:279] https://192.168.61.31:8443/healthz returned 200:
	ok
	I0918 21:05:53.638315   61273 api_server.go:141] control plane version: v1.31.1
	I0918 21:05:53.638354   61273 api_server.go:131] duration metric: took 4.516002991s to wait for apiserver health ...
	I0918 21:05:53.638367   61273 cni.go:84] Creating CNI manager for ""
	I0918 21:05:53.638376   61273 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0918 21:05:53.639948   61273 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0918 21:05:49.703565   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:50.204192   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:50.704248   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:51.203335   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:51.703761   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:52.203474   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:52.703901   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:53.203856   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:53.704192   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:54.204243   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:53.641376   61273 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0918 21:05:53.667828   61273 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0918 21:05:53.701667   61273 system_pods.go:43] waiting for kube-system pods to appear ...
	I0918 21:05:53.714053   61273 system_pods.go:59] 8 kube-system pods found
	I0918 21:05:53.714101   61273 system_pods.go:61] "coredns-7c65d6cfc9-dgnw2" [085d5a98-0a61-4678-830f-384780a0d7ef] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0918 21:05:53.714113   61273 system_pods.go:61] "etcd-no-preload-331658" [d6a53ff4-fcf0-46c0-9e77-9af6d0363e08] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0918 21:05:53.714126   61273 system_pods.go:61] "kube-apiserver-no-preload-331658" [1953598f-4c3f-4a98-ab75-b8ca1b8093ad] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0918 21:05:53.714135   61273 system_pods.go:61] "kube-controller-manager-no-preload-331658" [8fc655be-a192-4250-a31d-2e533a2b5e41] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0918 21:05:53.714145   61273 system_pods.go:61] "kube-proxy-hx25w" [a26512ff-f695-4452-8974-577479257160] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0918 21:05:53.714157   61273 system_pods.go:61] "kube-scheduler-no-preload-331658" [64e5347e-e7fd-4a0d-ae41-539c3989c29e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0918 21:05:53.714169   61273 system_pods.go:61] "metrics-server-6867b74b74-n27vc" [b1de76ec-8987-49ce-ae66-eedda2705cde] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0918 21:05:53.714181   61273 system_pods.go:61] "storage-provisioner" [e110aeb3-e9bc-4bb9-9b49-5579558bdda2] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0918 21:05:53.714191   61273 system_pods.go:74] duration metric: took 12.499195ms to wait for pod list to return data ...
	I0918 21:05:53.714206   61273 node_conditions.go:102] verifying NodePressure condition ...
	I0918 21:05:53.720251   61273 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0918 21:05:53.720283   61273 node_conditions.go:123] node cpu capacity is 2
	I0918 21:05:53.720296   61273 node_conditions.go:105] duration metric: took 6.082637ms to run NodePressure ...
	I0918 21:05:53.720317   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:05:54.056981   61273 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0918 21:05:54.062413   61273 kubeadm.go:739] kubelet initialised
	I0918 21:05:54.062436   61273 kubeadm.go:740] duration metric: took 5.424693ms waiting for restarted kubelet to initialise ...
	I0918 21:05:54.062443   61273 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0918 21:05:54.069721   61273 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-dgnw2" in "kube-system" namespace to be "Ready" ...
	I0918 21:05:54.089970   61273 pod_ready.go:98] node "no-preload-331658" hosting pod "coredns-7c65d6cfc9-dgnw2" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-331658" has status "Ready":"False"
	I0918 21:05:54.090005   61273 pod_ready.go:82] duration metric: took 20.250586ms for pod "coredns-7c65d6cfc9-dgnw2" in "kube-system" namespace to be "Ready" ...
	E0918 21:05:54.090017   61273 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-331658" hosting pod "coredns-7c65d6cfc9-dgnw2" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-331658" has status "Ready":"False"
	I0918 21:05:54.090046   61273 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-331658" in "kube-system" namespace to be "Ready" ...
	I0918 21:05:54.105121   61273 pod_ready.go:98] node "no-preload-331658" hosting pod "etcd-no-preload-331658" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-331658" has status "Ready":"False"
	I0918 21:05:54.105156   61273 pod_ready.go:82] duration metric: took 15.097714ms for pod "etcd-no-preload-331658" in "kube-system" namespace to be "Ready" ...
	E0918 21:05:54.105170   61273 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-331658" hosting pod "etcd-no-preload-331658" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-331658" has status "Ready":"False"
	I0918 21:05:54.105180   61273 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-331658" in "kube-system" namespace to be "Ready" ...
	I0918 21:05:54.112687   61273 pod_ready.go:98] node "no-preload-331658" hosting pod "kube-apiserver-no-preload-331658" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-331658" has status "Ready":"False"
	I0918 21:05:54.112711   61273 pod_ready.go:82] duration metric: took 7.523191ms for pod "kube-apiserver-no-preload-331658" in "kube-system" namespace to be "Ready" ...
	E0918 21:05:54.112722   61273 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-331658" hosting pod "kube-apiserver-no-preload-331658" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-331658" has status "Ready":"False"
	I0918 21:05:54.112730   61273 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-331658" in "kube-system" namespace to be "Ready" ...
	I0918 21:05:54.119681   61273 pod_ready.go:98] node "no-preload-331658" hosting pod "kube-controller-manager-no-preload-331658" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-331658" has status "Ready":"False"
	I0918 21:05:54.119707   61273 pod_ready.go:82] duration metric: took 6.967275ms for pod "kube-controller-manager-no-preload-331658" in "kube-system" namespace to be "Ready" ...
	E0918 21:05:54.119716   61273 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-331658" hosting pod "kube-controller-manager-no-preload-331658" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-331658" has status "Ready":"False"
	I0918 21:05:54.119723   61273 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-hx25w" in "kube-system" namespace to be "Ready" ...
	I0918 21:05:54.505099   61273 pod_ready.go:98] node "no-preload-331658" hosting pod "kube-proxy-hx25w" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-331658" has status "Ready":"False"
	I0918 21:05:54.505127   61273 pod_ready.go:82] duration metric: took 385.395528ms for pod "kube-proxy-hx25w" in "kube-system" namespace to be "Ready" ...
	E0918 21:05:54.505140   61273 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-331658" hosting pod "kube-proxy-hx25w" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-331658" has status "Ready":"False"
	I0918 21:05:54.505147   61273 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-331658" in "kube-system" namespace to be "Ready" ...
	I0918 21:05:54.905748   61273 pod_ready.go:98] node "no-preload-331658" hosting pod "kube-scheduler-no-preload-331658" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-331658" has status "Ready":"False"
	I0918 21:05:54.905774   61273 pod_ready.go:82] duration metric: took 400.618175ms for pod "kube-scheduler-no-preload-331658" in "kube-system" namespace to be "Ready" ...
	E0918 21:05:54.905785   61273 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-331658" hosting pod "kube-scheduler-no-preload-331658" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-331658" has status "Ready":"False"
	I0918 21:05:54.905794   61273 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace to be "Ready" ...
	I0918 21:05:55.305077   61273 pod_ready.go:98] node "no-preload-331658" hosting pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-331658" has status "Ready":"False"
	I0918 21:05:55.305106   61273 pod_ready.go:82] duration metric: took 399.301293ms for pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace to be "Ready" ...
	E0918 21:05:55.305118   61273 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-331658" hosting pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-331658" has status "Ready":"False"
	I0918 21:05:55.305126   61273 pod_ready.go:39] duration metric: took 1.242662699s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0918 21:05:55.305150   61273 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0918 21:05:55.317568   61273 ops.go:34] apiserver oom_adj: -16
	I0918 21:05:55.317597   61273 kubeadm.go:597] duration metric: took 9.0593375s to restartPrimaryControlPlane
	I0918 21:05:55.317616   61273 kubeadm.go:394] duration metric: took 9.109322119s to StartCluster
	I0918 21:05:55.317643   61273 settings.go:142] acquiring lock: {Name:mk6ae95a69dcbe00c28846409e9f46945a88de2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 21:05:55.317720   61273 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19667-7671/kubeconfig
	I0918 21:05:55.320228   61273 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/kubeconfig: {Name:mk3a3eecadb0164dfdd5d3c4392b5b473a2a9bb0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 21:05:55.320552   61273 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.31 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0918 21:05:55.320609   61273 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0918 21:05:55.320716   61273 addons.go:69] Setting storage-provisioner=true in profile "no-preload-331658"
	I0918 21:05:55.320725   61273 addons.go:69] Setting default-storageclass=true in profile "no-preload-331658"
	I0918 21:05:55.320739   61273 addons.go:234] Setting addon storage-provisioner=true in "no-preload-331658"
	W0918 21:05:55.320747   61273 addons.go:243] addon storage-provisioner should already be in state true
	I0918 21:05:55.320765   61273 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-331658"
	I0918 21:05:55.320785   61273 host.go:66] Checking if "no-preload-331658" exists ...
	I0918 21:05:55.320769   61273 addons.go:69] Setting metrics-server=true in profile "no-preload-331658"
	I0918 21:05:55.320799   61273 config.go:182] Loaded profile config "no-preload-331658": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0918 21:05:55.320808   61273 addons.go:234] Setting addon metrics-server=true in "no-preload-331658"
	W0918 21:05:55.320863   61273 addons.go:243] addon metrics-server should already be in state true
	I0918 21:05:55.320889   61273 host.go:66] Checking if "no-preload-331658" exists ...
	I0918 21:05:55.321208   61273 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:05:55.321228   61273 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:05:55.321208   61273 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:05:55.321262   61273 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:05:55.321282   61273 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:05:55.321357   61273 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:05:55.323762   61273 out.go:177] * Verifying Kubernetes components...
	I0918 21:05:55.325718   61273 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 21:05:55.348485   61273 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43725
	I0918 21:05:55.349072   61273 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:05:55.349611   61273 main.go:141] libmachine: Using API Version  1
	I0918 21:05:55.349641   61273 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:05:55.349978   61273 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:05:55.350556   61273 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:05:55.350606   61273 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:05:55.368807   61273 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34285
	I0918 21:05:55.369340   61273 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:05:55.369826   61273 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35389
	I0918 21:05:55.369908   61273 main.go:141] libmachine: Using API Version  1
	I0918 21:05:55.369928   61273 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:05:55.369949   61273 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42891
	I0918 21:05:55.370195   61273 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:05:55.370303   61273 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:05:55.370408   61273 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:05:55.370494   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetState
	I0918 21:05:55.370772   61273 main.go:141] libmachine: Using API Version  1
	I0918 21:05:55.370797   61273 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:05:55.370908   61273 main.go:141] libmachine: Using API Version  1
	I0918 21:05:55.370929   61273 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:05:55.371790   61273 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:05:55.371833   61273 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:05:55.371996   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetState
	I0918 21:05:55.372415   61273 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:05:55.372470   61273 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:05:55.372532   61273 main.go:141] libmachine: (no-preload-331658) Calling .DriverName
	I0918 21:05:55.375524   61273 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 21:05:55.375574   61273 addons.go:234] Setting addon default-storageclass=true in "no-preload-331658"
	W0918 21:05:55.375593   61273 addons.go:243] addon default-storageclass should already be in state true
	I0918 21:05:55.375626   61273 host.go:66] Checking if "no-preload-331658" exists ...
	I0918 21:05:55.376008   61273 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:05:55.376097   61273 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:05:55.377828   61273 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0918 21:05:55.377848   61273 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0918 21:05:55.377864   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHHostname
	I0918 21:05:55.381877   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:55.382379   61273 main.go:141] libmachine: (no-preload-331658) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:47:d0", ip: ""} in network mk-no-preload-331658: {Iface:virbr2 ExpiryTime:2024-09-18 21:55:52 +0000 UTC Type:0 Mac:52:54:00:2a:47:d0 Iaid: IPaddr:192.168.61.31 Prefix:24 Hostname:no-preload-331658 Clientid:01:52:54:00:2a:47:d0}
	I0918 21:05:55.382438   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined IP address 192.168.61.31 and MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:55.382767   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHPort
	I0918 21:05:55.384470   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHKeyPath
	I0918 21:05:55.384700   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHUsername
	I0918 21:05:55.384863   61273 sshutil.go:53] new ssh client: &{IP:192.168.61.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/no-preload-331658/id_rsa Username:docker}
	I0918 21:05:55.399531   61273 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38229
	I0918 21:05:55.400009   61273 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:05:55.400532   61273 main.go:141] libmachine: Using API Version  1
	I0918 21:05:55.400552   61273 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:05:55.400918   61273 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:05:55.401097   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetState
	I0918 21:05:55.403124   61273 main.go:141] libmachine: (no-preload-331658) Calling .DriverName
	I0918 21:05:55.404237   61273 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45209
	I0918 21:05:55.404637   61273 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:05:55.405088   61273 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0918 21:05:55.405422   61273 main.go:141] libmachine: Using API Version  1
	I0918 21:05:55.405443   61273 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:05:55.405906   61273 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:05:55.406570   61273 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:05:55.406620   61273 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:05:55.406959   61273 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0918 21:05:55.406973   61273 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0918 21:05:55.407380   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHHostname
	I0918 21:05:55.411410   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:55.411430   61273 main.go:141] libmachine: (no-preload-331658) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:47:d0", ip: ""} in network mk-no-preload-331658: {Iface:virbr2 ExpiryTime:2024-09-18 21:55:52 +0000 UTC Type:0 Mac:52:54:00:2a:47:d0 Iaid: IPaddr:192.168.61.31 Prefix:24 Hostname:no-preload-331658 Clientid:01:52:54:00:2a:47:d0}
	I0918 21:05:55.411440   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined IP address 192.168.61.31 and MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:55.411727   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHPort
	I0918 21:05:55.411965   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHKeyPath
	I0918 21:05:55.412171   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHUsername
	I0918 21:05:55.412377   61273 sshutil.go:53] new ssh client: &{IP:192.168.61.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/no-preload-331658/id_rsa Username:docker}
	I0918 21:05:55.426166   61273 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38277
	I0918 21:05:55.426704   61273 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:05:55.427211   61273 main.go:141] libmachine: Using API Version  1
	I0918 21:05:55.427232   61273 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:05:55.427610   61273 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:05:55.427805   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetState
	I0918 21:05:55.429864   61273 main.go:141] libmachine: (no-preload-331658) Calling .DriverName
	I0918 21:05:55.430238   61273 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0918 21:05:55.430256   61273 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0918 21:05:55.430278   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHHostname
	I0918 21:05:55.433576   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:55.433894   61273 main.go:141] libmachine: (no-preload-331658) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:47:d0", ip: ""} in network mk-no-preload-331658: {Iface:virbr2 ExpiryTime:2024-09-18 21:55:52 +0000 UTC Type:0 Mac:52:54:00:2a:47:d0 Iaid: IPaddr:192.168.61.31 Prefix:24 Hostname:no-preload-331658 Clientid:01:52:54:00:2a:47:d0}
	I0918 21:05:55.433918   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined IP address 192.168.61.31 and MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:55.434411   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHPort
	I0918 21:05:55.434650   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHKeyPath
	I0918 21:05:55.434798   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHUsername
	I0918 21:05:55.434942   61273 sshutil.go:53] new ssh client: &{IP:192.168.61.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/no-preload-331658/id_rsa Username:docker}
	I0918 21:05:55.528033   61273 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0918 21:05:55.545524   61273 node_ready.go:35] waiting up to 6m0s for node "no-preload-331658" to be "Ready" ...
	I0918 21:05:55.606477   61273 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0918 21:05:55.606498   61273 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0918 21:05:55.628256   61273 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0918 21:05:55.636122   61273 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0918 21:05:55.636154   61273 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0918 21:05:55.663081   61273 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0918 21:05:55.663108   61273 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0918 21:05:55.715011   61273 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0918 21:05:55.738192   61273 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0918 21:05:56.247539   61273 main.go:141] libmachine: Making call to close driver server
	I0918 21:05:56.247568   61273 main.go:141] libmachine: (no-preload-331658) Calling .Close
	I0918 21:05:56.247900   61273 main.go:141] libmachine: (no-preload-331658) DBG | Closing plugin on server side
	I0918 21:05:56.247922   61273 main.go:141] libmachine: Successfully made call to close driver server
	I0918 21:05:56.247937   61273 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 21:05:56.247948   61273 main.go:141] libmachine: Making call to close driver server
	I0918 21:05:56.247960   61273 main.go:141] libmachine: (no-preload-331658) Calling .Close
	I0918 21:05:56.248225   61273 main.go:141] libmachine: Successfully made call to close driver server
	I0918 21:05:56.248240   61273 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 21:05:56.248273   61273 main.go:141] libmachine: (no-preload-331658) DBG | Closing plugin on server side
	I0918 21:05:56.261942   61273 main.go:141] libmachine: Making call to close driver server
	I0918 21:05:56.261972   61273 main.go:141] libmachine: (no-preload-331658) Calling .Close
	I0918 21:05:56.262269   61273 main.go:141] libmachine: (no-preload-331658) DBG | Closing plugin on server side
	I0918 21:05:56.262344   61273 main.go:141] libmachine: Successfully made call to close driver server
	I0918 21:05:56.262361   61273 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 21:05:56.944008   61273 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.22895695s)
	I0918 21:05:56.944084   61273 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.205856091s)
	I0918 21:05:56.944121   61273 main.go:141] libmachine: Making call to close driver server
	I0918 21:05:56.944138   61273 main.go:141] libmachine: (no-preload-331658) Calling .Close
	I0918 21:05:56.944087   61273 main.go:141] libmachine: Making call to close driver server
	I0918 21:05:56.944186   61273 main.go:141] libmachine: (no-preload-331658) Calling .Close
	I0918 21:05:56.944489   61273 main.go:141] libmachine: (no-preload-331658) DBG | Closing plugin on server side
	I0918 21:05:56.944539   61273 main.go:141] libmachine: Successfully made call to close driver server
	I0918 21:05:56.944553   61273 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 21:05:56.944561   61273 main.go:141] libmachine: Making call to close driver server
	I0918 21:05:56.944572   61273 main.go:141] libmachine: (no-preload-331658) Calling .Close
	I0918 21:05:56.944559   61273 main.go:141] libmachine: (no-preload-331658) DBG | Closing plugin on server side
	I0918 21:05:56.944570   61273 main.go:141] libmachine: Successfully made call to close driver server
	I0918 21:05:56.944654   61273 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 21:05:56.944669   61273 main.go:141] libmachine: Making call to close driver server
	I0918 21:05:56.944678   61273 main.go:141] libmachine: (no-preload-331658) Calling .Close
	I0918 21:05:56.944794   61273 main.go:141] libmachine: Successfully made call to close driver server
	I0918 21:05:56.944808   61273 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 21:05:56.944823   61273 main.go:141] libmachine: (no-preload-331658) DBG | Closing plugin on server side
	I0918 21:05:56.944965   61273 main.go:141] libmachine: (no-preload-331658) DBG | Closing plugin on server side
	I0918 21:05:56.944988   61273 main.go:141] libmachine: Successfully made call to close driver server
	I0918 21:05:56.944998   61273 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 21:05:56.945010   61273 addons.go:475] Verifying addon metrics-server=true in "no-preload-331658"
	I0918 21:05:56.946962   61273 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0918 21:05:53.135068   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:55.633160   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:55.393859   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:57.888366   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:54.703227   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:55.204057   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:55.704178   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:56.203443   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:56.703517   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:57.203499   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:57.703598   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:58.203660   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:58.703897   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:59.203256   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:56.948595   61273 addons.go:510] duration metric: took 1.627989207s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0918 21:05:57.549092   61273 node_ready.go:53] node "no-preload-331658" has status "Ready":"False"
	I0918 21:06:00.050199   61273 node_ready.go:53] node "no-preload-331658" has status "Ready":"False"
	I0918 21:05:58.134289   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:00.632302   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:00.386644   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:02.387972   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:59.704149   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:00.203356   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:00.703750   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:01.203765   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:01.704295   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:02.203759   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:02.703342   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:03.204083   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:03.703777   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:04.203340   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:02.549111   61273 node_ready.go:49] node "no-preload-331658" has status "Ready":"True"
	I0918 21:06:02.549153   61273 node_ready.go:38] duration metric: took 7.003597589s for node "no-preload-331658" to be "Ready" ...
	I0918 21:06:02.549162   61273 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0918 21:06:02.554487   61273 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-dgnw2" in "kube-system" namespace to be "Ready" ...
	I0918 21:06:02.560130   61273 pod_ready.go:93] pod "coredns-7c65d6cfc9-dgnw2" in "kube-system" namespace has status "Ready":"True"
	I0918 21:06:02.560160   61273 pod_ready.go:82] duration metric: took 5.643145ms for pod "coredns-7c65d6cfc9-dgnw2" in "kube-system" namespace to be "Ready" ...
	I0918 21:06:02.560173   61273 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-331658" in "kube-system" namespace to be "Ready" ...
	I0918 21:06:02.567971   61273 pod_ready.go:93] pod "etcd-no-preload-331658" in "kube-system" namespace has status "Ready":"True"
	I0918 21:06:02.567992   61273 pod_ready.go:82] duration metric: took 7.811385ms for pod "etcd-no-preload-331658" in "kube-system" namespace to be "Ready" ...
	I0918 21:06:02.568001   61273 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-331658" in "kube-system" namespace to be "Ready" ...
	I0918 21:06:02.572606   61273 pod_ready.go:93] pod "kube-apiserver-no-preload-331658" in "kube-system" namespace has status "Ready":"True"
	I0918 21:06:02.572633   61273 pod_ready.go:82] duration metric: took 4.625414ms for pod "kube-apiserver-no-preload-331658" in "kube-system" namespace to be "Ready" ...
	I0918 21:06:02.572644   61273 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-331658" in "kube-system" namespace to be "Ready" ...
	I0918 21:06:02.577222   61273 pod_ready.go:93] pod "kube-controller-manager-no-preload-331658" in "kube-system" namespace has status "Ready":"True"
	I0918 21:06:02.577243   61273 pod_ready.go:82] duration metric: took 4.591499ms for pod "kube-controller-manager-no-preload-331658" in "kube-system" namespace to be "Ready" ...
	I0918 21:06:02.577252   61273 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-hx25w" in "kube-system" namespace to be "Ready" ...
	I0918 21:06:02.949682   61273 pod_ready.go:93] pod "kube-proxy-hx25w" in "kube-system" namespace has status "Ready":"True"
	I0918 21:06:02.949707   61273 pod_ready.go:82] duration metric: took 372.449094ms for pod "kube-proxy-hx25w" in "kube-system" namespace to be "Ready" ...
	I0918 21:06:02.949716   61273 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-331658" in "kube-system" namespace to be "Ready" ...
	I0918 21:06:03.350071   61273 pod_ready.go:93] pod "kube-scheduler-no-preload-331658" in "kube-system" namespace has status "Ready":"True"
	I0918 21:06:03.350104   61273 pod_ready.go:82] duration metric: took 400.380059ms for pod "kube-scheduler-no-preload-331658" in "kube-system" namespace to be "Ready" ...
	I0918 21:06:03.350118   61273 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace to be "Ready" ...
	I0918 21:06:05.357041   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:02.634105   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:05.132860   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:04.887184   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:06.887596   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:04.703762   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:05.203639   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:05.703335   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:06.204156   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:06.703735   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:07.203278   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:07.704072   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:08.203299   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:08.703528   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:09.203725   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:07.857844   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:10.356822   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:07.633985   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:10.133861   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:08.887695   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:11.387735   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:09.703712   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:10.203930   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:10.704216   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:11.203706   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:11.703494   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:12.204098   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:12.703927   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:13.203379   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:13.703248   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:14.204000   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:12.356878   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:14.360285   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:12.631731   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:15.132229   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:17.132802   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:13.887296   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:16.386306   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:14.704159   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:15.203401   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:15.703942   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:16.204043   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:16.703691   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:17.203508   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:17.703445   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:18.203689   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:18.704249   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:19.203290   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:06:19.203401   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:06:19.239033   62061 cri.go:89] found id: ""
	I0918 21:06:19.239065   62061 logs.go:276] 0 containers: []
	W0918 21:06:19.239073   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:06:19.239079   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:06:19.239141   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:06:19.274781   62061 cri.go:89] found id: ""
	I0918 21:06:19.274809   62061 logs.go:276] 0 containers: []
	W0918 21:06:19.274819   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:06:19.274833   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:06:19.274895   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:06:19.307894   62061 cri.go:89] found id: ""
	I0918 21:06:19.307928   62061 logs.go:276] 0 containers: []
	W0918 21:06:19.307940   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:06:19.307948   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:06:19.308002   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:06:19.340572   62061 cri.go:89] found id: ""
	I0918 21:06:19.340602   62061 logs.go:276] 0 containers: []
	W0918 21:06:19.340610   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:06:19.340615   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:06:19.340672   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:06:19.375448   62061 cri.go:89] found id: ""
	I0918 21:06:19.375475   62061 logs.go:276] 0 containers: []
	W0918 21:06:19.375483   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:06:19.375489   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:06:19.375536   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:06:19.413102   62061 cri.go:89] found id: ""
	I0918 21:06:19.413133   62061 logs.go:276] 0 containers: []
	W0918 21:06:19.413158   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:06:19.413166   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:06:19.413238   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:06:19.447497   62061 cri.go:89] found id: ""
	I0918 21:06:19.447526   62061 logs.go:276] 0 containers: []
	W0918 21:06:19.447536   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:06:19.447544   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:06:19.447605   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:06:19.480848   62061 cri.go:89] found id: ""
	I0918 21:06:19.480880   62061 logs.go:276] 0 containers: []
	W0918 21:06:19.480892   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:06:19.480903   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:06:19.480916   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:06:16.856845   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:19.358010   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:19.632608   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:22.132792   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:18.387488   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:20.887832   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:19.533573   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:06:19.533625   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:06:19.546578   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:06:19.546604   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:06:19.667439   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:06:19.667459   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:06:19.667472   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:06:19.739525   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:06:19.739563   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:06:22.280913   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:22.296045   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:06:22.296132   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:06:22.332361   62061 cri.go:89] found id: ""
	I0918 21:06:22.332401   62061 logs.go:276] 0 containers: []
	W0918 21:06:22.332413   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:06:22.332421   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:06:22.332485   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:06:22.392138   62061 cri.go:89] found id: ""
	I0918 21:06:22.392170   62061 logs.go:276] 0 containers: []
	W0918 21:06:22.392178   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:06:22.392184   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:06:22.392237   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:06:22.431265   62061 cri.go:89] found id: ""
	I0918 21:06:22.431296   62061 logs.go:276] 0 containers: []
	W0918 21:06:22.431306   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:06:22.431313   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:06:22.431376   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:06:22.463685   62061 cri.go:89] found id: ""
	I0918 21:06:22.463718   62061 logs.go:276] 0 containers: []
	W0918 21:06:22.463730   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:06:22.463738   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:06:22.463808   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:06:22.498031   62061 cri.go:89] found id: ""
	I0918 21:06:22.498058   62061 logs.go:276] 0 containers: []
	W0918 21:06:22.498069   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:06:22.498076   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:06:22.498140   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:06:22.537701   62061 cri.go:89] found id: ""
	I0918 21:06:22.537732   62061 logs.go:276] 0 containers: []
	W0918 21:06:22.537740   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:06:22.537746   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:06:22.537803   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:06:22.574085   62061 cri.go:89] found id: ""
	I0918 21:06:22.574118   62061 logs.go:276] 0 containers: []
	W0918 21:06:22.574130   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:06:22.574138   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:06:22.574202   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:06:22.606313   62061 cri.go:89] found id: ""
	I0918 21:06:22.606341   62061 logs.go:276] 0 containers: []
	W0918 21:06:22.606349   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:06:22.606359   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:06:22.606372   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:06:22.678484   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:06:22.678511   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:06:22.678524   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:06:22.756730   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:06:22.756767   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:06:22.793537   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:06:22.793573   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:06:22.845987   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:06:22.846021   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:06:21.857010   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:23.857823   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:26.358268   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:24.133063   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:26.632474   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:23.387764   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:25.886548   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:27.887108   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:25.360980   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:25.374930   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:06:25.375010   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:06:25.413100   62061 cri.go:89] found id: ""
	I0918 21:06:25.413135   62061 logs.go:276] 0 containers: []
	W0918 21:06:25.413147   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:06:25.413155   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:06:25.413221   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:06:25.450999   62061 cri.go:89] found id: ""
	I0918 21:06:25.451024   62061 logs.go:276] 0 containers: []
	W0918 21:06:25.451032   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:06:25.451038   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:06:25.451087   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:06:25.496114   62061 cri.go:89] found id: ""
	I0918 21:06:25.496147   62061 logs.go:276] 0 containers: []
	W0918 21:06:25.496155   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:06:25.496161   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:06:25.496218   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:06:25.529186   62061 cri.go:89] found id: ""
	I0918 21:06:25.529211   62061 logs.go:276] 0 containers: []
	W0918 21:06:25.529218   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:06:25.529223   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:06:25.529294   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:06:25.569710   62061 cri.go:89] found id: ""
	I0918 21:06:25.569735   62061 logs.go:276] 0 containers: []
	W0918 21:06:25.569743   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:06:25.569749   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:06:25.569796   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:06:25.606915   62061 cri.go:89] found id: ""
	I0918 21:06:25.606951   62061 logs.go:276] 0 containers: []
	W0918 21:06:25.606964   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:06:25.606972   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:06:25.607034   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:06:25.642705   62061 cri.go:89] found id: ""
	I0918 21:06:25.642736   62061 logs.go:276] 0 containers: []
	W0918 21:06:25.642744   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:06:25.642750   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:06:25.642801   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:06:25.676418   62061 cri.go:89] found id: ""
	I0918 21:06:25.676448   62061 logs.go:276] 0 containers: []
	W0918 21:06:25.676457   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:06:25.676466   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:06:25.676477   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:06:25.689913   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:06:25.689945   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:06:25.771579   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:06:25.771607   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:06:25.771622   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:06:25.850904   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:06:25.850945   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:06:25.890820   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:06:25.890849   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:06:28.438958   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:28.451961   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:06:28.452043   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:06:28.485000   62061 cri.go:89] found id: ""
	I0918 21:06:28.485059   62061 logs.go:276] 0 containers: []
	W0918 21:06:28.485070   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:06:28.485077   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:06:28.485138   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:06:28.517852   62061 cri.go:89] found id: ""
	I0918 21:06:28.517885   62061 logs.go:276] 0 containers: []
	W0918 21:06:28.517895   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:06:28.517903   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:06:28.517957   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:06:28.549914   62061 cri.go:89] found id: ""
	I0918 21:06:28.549940   62061 logs.go:276] 0 containers: []
	W0918 21:06:28.549949   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:06:28.549955   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:06:28.550000   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:06:28.584133   62061 cri.go:89] found id: ""
	I0918 21:06:28.584156   62061 logs.go:276] 0 containers: []
	W0918 21:06:28.584163   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:06:28.584169   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:06:28.584213   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:06:28.620031   62061 cri.go:89] found id: ""
	I0918 21:06:28.620061   62061 logs.go:276] 0 containers: []
	W0918 21:06:28.620071   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:06:28.620078   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:06:28.620143   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:06:28.653393   62061 cri.go:89] found id: ""
	I0918 21:06:28.653424   62061 logs.go:276] 0 containers: []
	W0918 21:06:28.653435   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:06:28.653443   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:06:28.653505   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:06:28.687696   62061 cri.go:89] found id: ""
	I0918 21:06:28.687729   62061 logs.go:276] 0 containers: []
	W0918 21:06:28.687741   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:06:28.687752   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:06:28.687809   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:06:28.724824   62061 cri.go:89] found id: ""
	I0918 21:06:28.724854   62061 logs.go:276] 0 containers: []
	W0918 21:06:28.724864   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:06:28.724875   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:06:28.724890   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:06:28.785489   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:06:28.785529   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:06:28.804170   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:06:28.804211   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:06:28.895508   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:06:28.895538   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:06:28.895554   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:06:28.974643   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:06:28.974685   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:06:28.858259   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:31.356644   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:28.633851   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:31.133612   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:30.392038   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:32.886708   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:31.517305   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:31.530374   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:06:31.530435   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:06:31.568335   62061 cri.go:89] found id: ""
	I0918 21:06:31.568374   62061 logs.go:276] 0 containers: []
	W0918 21:06:31.568385   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:06:31.568393   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:06:31.568462   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:06:31.603597   62061 cri.go:89] found id: ""
	I0918 21:06:31.603624   62061 logs.go:276] 0 containers: []
	W0918 21:06:31.603633   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:06:31.603639   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:06:31.603694   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:06:31.639355   62061 cri.go:89] found id: ""
	I0918 21:06:31.639380   62061 logs.go:276] 0 containers: []
	W0918 21:06:31.639388   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:06:31.639393   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:06:31.639445   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:06:31.674197   62061 cri.go:89] found id: ""
	I0918 21:06:31.674223   62061 logs.go:276] 0 containers: []
	W0918 21:06:31.674231   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:06:31.674237   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:06:31.674286   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:06:31.710563   62061 cri.go:89] found id: ""
	I0918 21:06:31.710592   62061 logs.go:276] 0 containers: []
	W0918 21:06:31.710606   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:06:31.710612   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:06:31.710663   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:06:31.746923   62061 cri.go:89] found id: ""
	I0918 21:06:31.746958   62061 logs.go:276] 0 containers: []
	W0918 21:06:31.746969   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:06:31.746976   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:06:31.747039   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:06:31.781913   62061 cri.go:89] found id: ""
	I0918 21:06:31.781946   62061 logs.go:276] 0 containers: []
	W0918 21:06:31.781961   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:06:31.781969   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:06:31.782023   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:06:31.817293   62061 cri.go:89] found id: ""
	I0918 21:06:31.817318   62061 logs.go:276] 0 containers: []
	W0918 21:06:31.817327   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:06:31.817338   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:06:31.817375   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:06:31.869479   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:06:31.869518   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:06:31.884187   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:06:31.884219   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:06:31.967159   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:06:31.967189   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:06:31.967202   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:06:32.050404   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:06:32.050458   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:06:33.357380   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:35.856960   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:33.633434   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:36.133740   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:34.888738   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:37.386351   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:34.593135   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:34.613917   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:06:34.614001   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:06:34.649649   62061 cri.go:89] found id: ""
	I0918 21:06:34.649676   62061 logs.go:276] 0 containers: []
	W0918 21:06:34.649684   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:06:34.649690   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:06:34.649738   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:06:34.684898   62061 cri.go:89] found id: ""
	I0918 21:06:34.684932   62061 logs.go:276] 0 containers: []
	W0918 21:06:34.684940   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:06:34.684948   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:06:34.685018   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:06:34.717519   62061 cri.go:89] found id: ""
	I0918 21:06:34.717549   62061 logs.go:276] 0 containers: []
	W0918 21:06:34.717559   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:06:34.717573   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:06:34.717626   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:06:34.751242   62061 cri.go:89] found id: ""
	I0918 21:06:34.751275   62061 logs.go:276] 0 containers: []
	W0918 21:06:34.751289   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:06:34.751298   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:06:34.751368   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:06:34.786700   62061 cri.go:89] found id: ""
	I0918 21:06:34.786729   62061 logs.go:276] 0 containers: []
	W0918 21:06:34.786737   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:06:34.786742   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:06:34.786792   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:06:34.822400   62061 cri.go:89] found id: ""
	I0918 21:06:34.822446   62061 logs.go:276] 0 containers: []
	W0918 21:06:34.822459   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:06:34.822468   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:06:34.822537   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:06:34.860509   62061 cri.go:89] found id: ""
	I0918 21:06:34.860540   62061 logs.go:276] 0 containers: []
	W0918 21:06:34.860549   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:06:34.860554   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:06:34.860626   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:06:34.900682   62061 cri.go:89] found id: ""
	I0918 21:06:34.900712   62061 logs.go:276] 0 containers: []
	W0918 21:06:34.900719   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:06:34.900727   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:06:34.900739   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:06:34.975975   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:06:34.976009   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:06:34.976043   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:06:35.058972   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:06:35.059012   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:06:35.101690   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:06:35.101716   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:06:35.154486   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:06:35.154525   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:06:37.669814   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:37.683235   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:06:37.683294   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:06:37.720666   62061 cri.go:89] found id: ""
	I0918 21:06:37.720697   62061 logs.go:276] 0 containers: []
	W0918 21:06:37.720708   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:06:37.720715   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:06:37.720777   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:06:37.758288   62061 cri.go:89] found id: ""
	I0918 21:06:37.758327   62061 logs.go:276] 0 containers: []
	W0918 21:06:37.758335   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:06:37.758341   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:06:37.758436   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:06:37.793394   62061 cri.go:89] found id: ""
	I0918 21:06:37.793421   62061 logs.go:276] 0 containers: []
	W0918 21:06:37.793430   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:06:37.793437   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:06:37.793501   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:06:37.828258   62061 cri.go:89] found id: ""
	I0918 21:06:37.828291   62061 logs.go:276] 0 containers: []
	W0918 21:06:37.828300   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:06:37.828307   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:06:37.828361   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:06:37.863164   62061 cri.go:89] found id: ""
	I0918 21:06:37.863189   62061 logs.go:276] 0 containers: []
	W0918 21:06:37.863197   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:06:37.863203   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:06:37.863251   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:06:37.899585   62061 cri.go:89] found id: ""
	I0918 21:06:37.899614   62061 logs.go:276] 0 containers: []
	W0918 21:06:37.899622   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:06:37.899628   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:06:37.899675   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:06:37.936246   62061 cri.go:89] found id: ""
	I0918 21:06:37.936282   62061 logs.go:276] 0 containers: []
	W0918 21:06:37.936292   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:06:37.936299   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:06:37.936362   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:06:37.969913   62061 cri.go:89] found id: ""
	I0918 21:06:37.969942   62061 logs.go:276] 0 containers: []
	W0918 21:06:37.969950   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:06:37.969958   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:06:37.969968   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:06:38.023055   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:06:38.023094   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:06:38.036006   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:06:38.036047   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:06:38.116926   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:06:38.116957   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:06:38.116972   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:06:38.192632   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:06:38.192668   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:06:37.860654   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:40.357107   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:38.633432   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:41.131957   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:39.387927   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:41.886904   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:40.729602   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:40.743291   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:06:40.743368   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:06:40.778138   62061 cri.go:89] found id: ""
	I0918 21:06:40.778166   62061 logs.go:276] 0 containers: []
	W0918 21:06:40.778176   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:06:40.778184   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:06:40.778248   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:06:40.813026   62061 cri.go:89] found id: ""
	I0918 21:06:40.813062   62061 logs.go:276] 0 containers: []
	W0918 21:06:40.813073   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:06:40.813081   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:06:40.813146   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:06:40.847609   62061 cri.go:89] found id: ""
	I0918 21:06:40.847641   62061 logs.go:276] 0 containers: []
	W0918 21:06:40.847651   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:06:40.847658   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:06:40.847727   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:06:40.885403   62061 cri.go:89] found id: ""
	I0918 21:06:40.885432   62061 logs.go:276] 0 containers: []
	W0918 21:06:40.885441   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:06:40.885448   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:06:40.885515   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:06:40.922679   62061 cri.go:89] found id: ""
	I0918 21:06:40.922705   62061 logs.go:276] 0 containers: []
	W0918 21:06:40.922714   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:06:40.922719   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:06:40.922776   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:06:40.957187   62061 cri.go:89] found id: ""
	I0918 21:06:40.957215   62061 logs.go:276] 0 containers: []
	W0918 21:06:40.957222   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:06:40.957228   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:06:40.957291   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:06:40.991722   62061 cri.go:89] found id: ""
	I0918 21:06:40.991751   62061 logs.go:276] 0 containers: []
	W0918 21:06:40.991762   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:06:40.991769   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:06:40.991835   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:06:41.027210   62061 cri.go:89] found id: ""
	I0918 21:06:41.027234   62061 logs.go:276] 0 containers: []
	W0918 21:06:41.027242   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:06:41.027250   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:06:41.027275   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:06:41.077183   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:06:41.077221   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:06:41.090707   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:06:41.090734   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:06:41.166206   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:06:41.166228   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:06:41.166241   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:06:41.240308   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:06:41.240346   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:06:43.777517   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:43.790901   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:06:43.790970   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:06:43.827127   62061 cri.go:89] found id: ""
	I0918 21:06:43.827156   62061 logs.go:276] 0 containers: []
	W0918 21:06:43.827164   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:06:43.827170   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:06:43.827218   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:06:43.861218   62061 cri.go:89] found id: ""
	I0918 21:06:43.861244   62061 logs.go:276] 0 containers: []
	W0918 21:06:43.861252   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:06:43.861257   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:06:43.861308   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:06:43.899669   62061 cri.go:89] found id: ""
	I0918 21:06:43.899694   62061 logs.go:276] 0 containers: []
	W0918 21:06:43.899701   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:06:43.899707   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:06:43.899755   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:06:43.934698   62061 cri.go:89] found id: ""
	I0918 21:06:43.934731   62061 logs.go:276] 0 containers: []
	W0918 21:06:43.934741   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:06:43.934748   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:06:43.934819   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:06:43.971715   62061 cri.go:89] found id: ""
	I0918 21:06:43.971742   62061 logs.go:276] 0 containers: []
	W0918 21:06:43.971755   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:06:43.971760   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:06:43.971817   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:06:44.005880   62061 cri.go:89] found id: ""
	I0918 21:06:44.005915   62061 logs.go:276] 0 containers: []
	W0918 21:06:44.005927   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:06:44.005935   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:06:44.006003   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:06:44.038144   62061 cri.go:89] found id: ""
	I0918 21:06:44.038171   62061 logs.go:276] 0 containers: []
	W0918 21:06:44.038180   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:06:44.038186   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:06:44.038239   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:06:44.073920   62061 cri.go:89] found id: ""
	I0918 21:06:44.073953   62061 logs.go:276] 0 containers: []
	W0918 21:06:44.073966   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:06:44.073978   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:06:44.073992   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:06:44.142881   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:06:44.142910   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:06:44.142926   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:06:44.222302   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:06:44.222341   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:06:44.265914   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:06:44.265952   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:06:44.316229   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:06:44.316266   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:06:42.856192   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:44.857673   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:43.132992   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:45.134509   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:43.888282   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:45.889414   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:46.830793   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:46.843829   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:06:46.843886   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:06:46.878566   62061 cri.go:89] found id: ""
	I0918 21:06:46.878607   62061 logs.go:276] 0 containers: []
	W0918 21:06:46.878621   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:06:46.878630   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:06:46.878710   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:06:46.915576   62061 cri.go:89] found id: ""
	I0918 21:06:46.915603   62061 logs.go:276] 0 containers: []
	W0918 21:06:46.915611   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:06:46.915616   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:06:46.915663   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:06:46.952982   62061 cri.go:89] found id: ""
	I0918 21:06:46.953014   62061 logs.go:276] 0 containers: []
	W0918 21:06:46.953032   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:06:46.953039   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:06:46.953104   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:06:46.987718   62061 cri.go:89] found id: ""
	I0918 21:06:46.987749   62061 logs.go:276] 0 containers: []
	W0918 21:06:46.987757   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:06:46.987763   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:06:46.987815   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:06:47.022695   62061 cri.go:89] found id: ""
	I0918 21:06:47.022726   62061 logs.go:276] 0 containers: []
	W0918 21:06:47.022735   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:06:47.022741   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:06:47.022799   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:06:47.058058   62061 cri.go:89] found id: ""
	I0918 21:06:47.058086   62061 logs.go:276] 0 containers: []
	W0918 21:06:47.058094   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:06:47.058101   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:06:47.058159   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:06:47.093314   62061 cri.go:89] found id: ""
	I0918 21:06:47.093352   62061 logs.go:276] 0 containers: []
	W0918 21:06:47.093361   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:06:47.093367   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:06:47.093424   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:06:47.126997   62061 cri.go:89] found id: ""
	I0918 21:06:47.127023   62061 logs.go:276] 0 containers: []
	W0918 21:06:47.127033   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:06:47.127041   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:06:47.127053   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:06:47.203239   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:06:47.203282   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:06:47.240982   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:06:47.241013   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:06:47.293553   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:06:47.293591   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:06:47.307462   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:06:47.307493   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:06:47.379500   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:06:47.357136   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:49.359981   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:47.633023   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:49.633350   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:52.134627   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:48.387568   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:50.886679   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:52.887065   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:49.880441   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:49.895816   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:06:49.895903   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:06:49.931916   62061 cri.go:89] found id: ""
	I0918 21:06:49.931941   62061 logs.go:276] 0 containers: []
	W0918 21:06:49.931950   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:06:49.931955   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:06:49.932007   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:06:49.967053   62061 cri.go:89] found id: ""
	I0918 21:06:49.967083   62061 logs.go:276] 0 containers: []
	W0918 21:06:49.967093   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:06:49.967101   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:06:49.967194   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:06:50.002943   62061 cri.go:89] found id: ""
	I0918 21:06:50.003004   62061 logs.go:276] 0 containers: []
	W0918 21:06:50.003015   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:06:50.003023   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:06:50.003085   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:06:50.039445   62061 cri.go:89] found id: ""
	I0918 21:06:50.039478   62061 logs.go:276] 0 containers: []
	W0918 21:06:50.039489   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:06:50.039497   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:06:50.039561   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:06:50.073529   62061 cri.go:89] found id: ""
	I0918 21:06:50.073562   62061 logs.go:276] 0 containers: []
	W0918 21:06:50.073572   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:06:50.073578   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:06:50.073645   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:06:50.109363   62061 cri.go:89] found id: ""
	I0918 21:06:50.109394   62061 logs.go:276] 0 containers: []
	W0918 21:06:50.109406   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:06:50.109413   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:06:50.109485   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:06:50.144387   62061 cri.go:89] found id: ""
	I0918 21:06:50.144418   62061 logs.go:276] 0 containers: []
	W0918 21:06:50.144429   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:06:50.144450   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:06:50.144524   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:06:50.178079   62061 cri.go:89] found id: ""
	I0918 21:06:50.178110   62061 logs.go:276] 0 containers: []
	W0918 21:06:50.178119   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:06:50.178128   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:06:50.178140   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:06:50.228857   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:06:50.228893   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:06:50.242077   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:06:50.242108   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:06:50.312868   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:06:50.312903   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:06:50.312920   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:06:50.392880   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:06:50.392924   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:06:52.931623   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:52.945298   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:06:52.945394   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:06:52.980129   62061 cri.go:89] found id: ""
	I0918 21:06:52.980162   62061 logs.go:276] 0 containers: []
	W0918 21:06:52.980171   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:06:52.980176   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:06:52.980224   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:06:53.017899   62061 cri.go:89] found id: ""
	I0918 21:06:53.017932   62061 logs.go:276] 0 containers: []
	W0918 21:06:53.017941   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:06:53.017947   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:06:53.018015   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:06:53.056594   62061 cri.go:89] found id: ""
	I0918 21:06:53.056619   62061 logs.go:276] 0 containers: []
	W0918 21:06:53.056627   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:06:53.056635   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:06:53.056684   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:06:53.089876   62061 cri.go:89] found id: ""
	I0918 21:06:53.089907   62061 logs.go:276] 0 containers: []
	W0918 21:06:53.089915   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:06:53.089920   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:06:53.089967   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:06:53.122845   62061 cri.go:89] found id: ""
	I0918 21:06:53.122881   62061 logs.go:276] 0 containers: []
	W0918 21:06:53.122890   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:06:53.122904   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:06:53.122956   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:06:53.162103   62061 cri.go:89] found id: ""
	I0918 21:06:53.162137   62061 logs.go:276] 0 containers: []
	W0918 21:06:53.162148   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:06:53.162156   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:06:53.162218   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:06:53.195034   62061 cri.go:89] found id: ""
	I0918 21:06:53.195067   62061 logs.go:276] 0 containers: []
	W0918 21:06:53.195078   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:06:53.195085   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:06:53.195144   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:06:53.229473   62061 cri.go:89] found id: ""
	I0918 21:06:53.229518   62061 logs.go:276] 0 containers: []
	W0918 21:06:53.229542   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:06:53.229556   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:06:53.229577   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:06:53.283929   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:06:53.283973   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:06:53.296932   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:06:53.296965   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:06:53.379339   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:06:53.379369   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:06:53.379410   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:06:53.469608   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:06:53.469652   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:06:51.855788   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:53.857284   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:55.860982   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:54.633423   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:56.633695   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:54.888052   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:57.387393   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:56.009227   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:56.023451   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:06:56.023510   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:06:56.056839   62061 cri.go:89] found id: ""
	I0918 21:06:56.056866   62061 logs.go:276] 0 containers: []
	W0918 21:06:56.056875   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:06:56.056881   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:06:56.056931   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:06:56.091893   62061 cri.go:89] found id: ""
	I0918 21:06:56.091919   62061 logs.go:276] 0 containers: []
	W0918 21:06:56.091928   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:06:56.091933   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:06:56.091980   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:06:56.127432   62061 cri.go:89] found id: ""
	I0918 21:06:56.127456   62061 logs.go:276] 0 containers: []
	W0918 21:06:56.127464   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:06:56.127470   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:06:56.127518   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:06:56.162085   62061 cri.go:89] found id: ""
	I0918 21:06:56.162109   62061 logs.go:276] 0 containers: []
	W0918 21:06:56.162118   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:06:56.162123   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:06:56.162176   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:06:56.195711   62061 cri.go:89] found id: ""
	I0918 21:06:56.195743   62061 logs.go:276] 0 containers: []
	W0918 21:06:56.195753   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:06:56.195759   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:06:56.195809   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:06:56.234481   62061 cri.go:89] found id: ""
	I0918 21:06:56.234507   62061 logs.go:276] 0 containers: []
	W0918 21:06:56.234515   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:06:56.234522   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:06:56.234567   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:06:56.268566   62061 cri.go:89] found id: ""
	I0918 21:06:56.268596   62061 logs.go:276] 0 containers: []
	W0918 21:06:56.268608   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:06:56.268617   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:06:56.268681   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:06:56.309785   62061 cri.go:89] found id: ""
	I0918 21:06:56.309813   62061 logs.go:276] 0 containers: []
	W0918 21:06:56.309824   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:06:56.309835   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:06:56.309874   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:06:56.364834   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:06:56.364880   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:06:56.378424   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:06:56.378451   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:06:56.450482   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:06:56.450511   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:06:56.450522   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:06:56.536261   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:06:56.536305   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:06:59.074494   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:59.087553   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:06:59.087622   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:06:59.128323   62061 cri.go:89] found id: ""
	I0918 21:06:59.128357   62061 logs.go:276] 0 containers: []
	W0918 21:06:59.128368   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:06:59.128375   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:06:59.128436   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:06:59.161135   62061 cri.go:89] found id: ""
	I0918 21:06:59.161161   62061 logs.go:276] 0 containers: []
	W0918 21:06:59.161170   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:06:59.161178   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:06:59.161240   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:06:59.201558   62061 cri.go:89] found id: ""
	I0918 21:06:59.201595   62061 logs.go:276] 0 containers: []
	W0918 21:06:59.201607   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:06:59.201614   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:06:59.201678   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:06:59.235330   62061 cri.go:89] found id: ""
	I0918 21:06:59.235356   62061 logs.go:276] 0 containers: []
	W0918 21:06:59.235378   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:06:59.235385   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:06:59.235450   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:06:59.270957   62061 cri.go:89] found id: ""
	I0918 21:06:59.270994   62061 logs.go:276] 0 containers: []
	W0918 21:06:59.271007   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:06:59.271016   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:06:59.271088   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:06:59.305068   62061 cri.go:89] found id: ""
	I0918 21:06:59.305103   62061 logs.go:276] 0 containers: []
	W0918 21:06:59.305115   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:06:59.305123   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:06:59.305177   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:06:59.338763   62061 cri.go:89] found id: ""
	I0918 21:06:59.338796   62061 logs.go:276] 0 containers: []
	W0918 21:06:59.338809   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:06:59.338818   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:06:59.338887   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:06:59.376557   62061 cri.go:89] found id: ""
	I0918 21:06:59.376585   62061 logs.go:276] 0 containers: []
	W0918 21:06:59.376593   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:06:59.376602   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:06:59.376615   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:06:59.455228   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:06:59.455264   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:06:58.356648   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:00.357253   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:59.133274   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:01.632548   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:59.388183   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:01.886834   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:59.494461   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:06:59.494488   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:06:59.543143   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:06:59.543177   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:06:59.556031   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:06:59.556062   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:06:59.623888   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:02.124743   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:07:02.139392   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:07:02.139451   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:07:02.172176   62061 cri.go:89] found id: ""
	I0918 21:07:02.172201   62061 logs.go:276] 0 containers: []
	W0918 21:07:02.172209   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:07:02.172215   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:07:02.172263   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:07:02.206480   62061 cri.go:89] found id: ""
	I0918 21:07:02.206507   62061 logs.go:276] 0 containers: []
	W0918 21:07:02.206518   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:07:02.206525   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:07:02.206586   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:07:02.240256   62061 cri.go:89] found id: ""
	I0918 21:07:02.240281   62061 logs.go:276] 0 containers: []
	W0918 21:07:02.240289   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:07:02.240295   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:07:02.240353   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:07:02.274001   62061 cri.go:89] found id: ""
	I0918 21:07:02.274034   62061 logs.go:276] 0 containers: []
	W0918 21:07:02.274046   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:07:02.274056   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:07:02.274115   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:07:02.307491   62061 cri.go:89] found id: ""
	I0918 21:07:02.307520   62061 logs.go:276] 0 containers: []
	W0918 21:07:02.307528   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:07:02.307534   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:07:02.307597   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:07:02.344688   62061 cri.go:89] found id: ""
	I0918 21:07:02.344720   62061 logs.go:276] 0 containers: []
	W0918 21:07:02.344731   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:07:02.344739   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:07:02.344805   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:07:02.384046   62061 cri.go:89] found id: ""
	I0918 21:07:02.384077   62061 logs.go:276] 0 containers: []
	W0918 21:07:02.384088   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:07:02.384095   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:07:02.384154   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:07:02.422530   62061 cri.go:89] found id: ""
	I0918 21:07:02.422563   62061 logs.go:276] 0 containers: []
	W0918 21:07:02.422575   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:07:02.422586   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:07:02.422604   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:07:02.461236   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:07:02.461266   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:07:02.512280   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:07:02.512320   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:07:02.525404   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:07:02.525435   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:07:02.599746   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:02.599773   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:07:02.599785   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:07:02.856077   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:04.858098   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:04.133240   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:06.135937   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:03.887306   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:05.888675   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:05.183459   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:07:05.197615   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:07:05.197681   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:07:05.234313   62061 cri.go:89] found id: ""
	I0918 21:07:05.234354   62061 logs.go:276] 0 containers: []
	W0918 21:07:05.234365   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:07:05.234371   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:07:05.234429   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:07:05.271436   62061 cri.go:89] found id: ""
	I0918 21:07:05.271466   62061 logs.go:276] 0 containers: []
	W0918 21:07:05.271477   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:07:05.271484   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:07:05.271554   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:07:05.307007   62061 cri.go:89] found id: ""
	I0918 21:07:05.307038   62061 logs.go:276] 0 containers: []
	W0918 21:07:05.307050   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:07:05.307056   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:07:05.307120   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:07:05.341764   62061 cri.go:89] found id: ""
	I0918 21:07:05.341810   62061 logs.go:276] 0 containers: []
	W0918 21:07:05.341831   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:07:05.341840   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:07:05.341908   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:07:05.381611   62061 cri.go:89] found id: ""
	I0918 21:07:05.381646   62061 logs.go:276] 0 containers: []
	W0918 21:07:05.381658   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:07:05.381666   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:07:05.381747   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:07:05.421468   62061 cri.go:89] found id: ""
	I0918 21:07:05.421499   62061 logs.go:276] 0 containers: []
	W0918 21:07:05.421520   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:07:05.421528   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:07:05.421590   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:07:05.457320   62061 cri.go:89] found id: ""
	I0918 21:07:05.457348   62061 logs.go:276] 0 containers: []
	W0918 21:07:05.457359   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:07:05.457367   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:07:05.457425   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:07:05.493879   62061 cri.go:89] found id: ""
	I0918 21:07:05.493915   62061 logs.go:276] 0 containers: []
	W0918 21:07:05.493928   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:07:05.493943   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:07:05.493955   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:07:05.543825   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:07:05.543867   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:07:05.558254   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:07:05.558285   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:07:05.627065   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:05.627091   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:07:05.627103   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:07:05.703838   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:07:05.703876   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:07:08.244087   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:07:08.257807   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:07:08.257897   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:07:08.292034   62061 cri.go:89] found id: ""
	I0918 21:07:08.292064   62061 logs.go:276] 0 containers: []
	W0918 21:07:08.292076   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:07:08.292084   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:07:08.292154   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:07:08.325858   62061 cri.go:89] found id: ""
	I0918 21:07:08.325897   62061 logs.go:276] 0 containers: []
	W0918 21:07:08.325910   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:07:08.325918   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:07:08.325987   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:07:08.369427   62061 cri.go:89] found id: ""
	I0918 21:07:08.369457   62061 logs.go:276] 0 containers: []
	W0918 21:07:08.369468   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:07:08.369475   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:07:08.369536   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:07:08.409402   62061 cri.go:89] found id: ""
	I0918 21:07:08.409434   62061 logs.go:276] 0 containers: []
	W0918 21:07:08.409444   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:07:08.409451   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:07:08.409515   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:07:08.445530   62061 cri.go:89] found id: ""
	I0918 21:07:08.445565   62061 logs.go:276] 0 containers: []
	W0918 21:07:08.445575   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:07:08.445584   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:07:08.445646   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:07:08.481905   62061 cri.go:89] found id: ""
	I0918 21:07:08.481935   62061 logs.go:276] 0 containers: []
	W0918 21:07:08.481945   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:07:08.481952   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:07:08.482020   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:07:08.516710   62061 cri.go:89] found id: ""
	I0918 21:07:08.516738   62061 logs.go:276] 0 containers: []
	W0918 21:07:08.516746   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:07:08.516752   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:07:08.516802   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:07:08.550707   62061 cri.go:89] found id: ""
	I0918 21:07:08.550740   62061 logs.go:276] 0 containers: []
	W0918 21:07:08.550760   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:07:08.550768   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:07:08.550782   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:07:08.624821   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:08.624843   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:07:08.624854   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:07:08.705347   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:07:08.705383   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:07:08.744394   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:07:08.744425   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:07:08.799336   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:07:08.799375   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:07:07.358154   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:09.857118   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:08.633211   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:11.132676   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:08.388884   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:10.887356   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:11.312920   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:07:11.327524   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:07:11.327598   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:07:11.366177   62061 cri.go:89] found id: ""
	I0918 21:07:11.366209   62061 logs.go:276] 0 containers: []
	W0918 21:07:11.366221   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:07:11.366227   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:07:11.366274   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:07:11.405296   62061 cri.go:89] found id: ""
	I0918 21:07:11.405326   62061 logs.go:276] 0 containers: []
	W0918 21:07:11.405344   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:07:11.405351   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:07:11.405413   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:07:11.441786   62061 cri.go:89] found id: ""
	I0918 21:07:11.441818   62061 logs.go:276] 0 containers: []
	W0918 21:07:11.441829   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:07:11.441837   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:07:11.441891   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:07:11.476144   62061 cri.go:89] found id: ""
	I0918 21:07:11.476167   62061 logs.go:276] 0 containers: []
	W0918 21:07:11.476175   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:07:11.476181   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:07:11.476224   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:07:11.512115   62061 cri.go:89] found id: ""
	I0918 21:07:11.512143   62061 logs.go:276] 0 containers: []
	W0918 21:07:11.512154   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:07:11.512160   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:07:11.512221   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:07:11.545466   62061 cri.go:89] found id: ""
	I0918 21:07:11.545494   62061 logs.go:276] 0 containers: []
	W0918 21:07:11.545504   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:07:11.545512   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:07:11.545575   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:07:11.579940   62061 cri.go:89] found id: ""
	I0918 21:07:11.579965   62061 logs.go:276] 0 containers: []
	W0918 21:07:11.579975   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:07:11.579994   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:07:11.580080   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:07:11.613454   62061 cri.go:89] found id: ""
	I0918 21:07:11.613480   62061 logs.go:276] 0 containers: []
	W0918 21:07:11.613491   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:07:11.613512   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:07:11.613535   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:07:11.669079   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:07:11.669140   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:07:11.682614   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:07:11.682639   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:07:11.756876   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:11.756901   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:07:11.756914   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:07:11.835046   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:07:11.835086   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:07:14.377541   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:07:14.392083   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:07:14.392161   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:07:14.431752   62061 cri.go:89] found id: ""
	I0918 21:07:14.431786   62061 logs.go:276] 0 containers: []
	W0918 21:07:14.431795   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:07:14.431800   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:07:14.431856   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:07:14.468530   62061 cri.go:89] found id: ""
	I0918 21:07:14.468562   62061 logs.go:276] 0 containers: []
	W0918 21:07:14.468573   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:07:14.468580   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:07:14.468643   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:07:11.857763   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:14.357253   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:13.132895   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:15.133426   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:13.386537   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:15.387844   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:17.888743   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:14.506510   62061 cri.go:89] found id: ""
	I0918 21:07:14.506540   62061 logs.go:276] 0 containers: []
	W0918 21:07:14.506550   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:07:14.506557   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:07:14.506625   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:07:14.538986   62061 cri.go:89] found id: ""
	I0918 21:07:14.539021   62061 logs.go:276] 0 containers: []
	W0918 21:07:14.539032   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:07:14.539039   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:07:14.539103   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:07:14.572390   62061 cri.go:89] found id: ""
	I0918 21:07:14.572421   62061 logs.go:276] 0 containers: []
	W0918 21:07:14.572432   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:07:14.572440   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:07:14.572499   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:07:14.607875   62061 cri.go:89] found id: ""
	I0918 21:07:14.607905   62061 logs.go:276] 0 containers: []
	W0918 21:07:14.607917   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:07:14.607924   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:07:14.607988   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:07:14.642584   62061 cri.go:89] found id: ""
	I0918 21:07:14.642616   62061 logs.go:276] 0 containers: []
	W0918 21:07:14.642625   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:07:14.642630   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:07:14.642685   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:07:14.682117   62061 cri.go:89] found id: ""
	I0918 21:07:14.682144   62061 logs.go:276] 0 containers: []
	W0918 21:07:14.682152   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:07:14.682163   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:07:14.682177   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:07:14.694780   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:07:14.694808   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:07:14.767988   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:14.768036   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:07:14.768055   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:07:14.860620   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:07:14.860655   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:07:14.905071   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:07:14.905102   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:07:17.455582   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:07:17.468713   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:07:17.468774   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:07:17.503682   62061 cri.go:89] found id: ""
	I0918 21:07:17.503709   62061 logs.go:276] 0 containers: []
	W0918 21:07:17.503721   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:07:17.503729   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:07:17.503794   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:07:17.536581   62061 cri.go:89] found id: ""
	I0918 21:07:17.536611   62061 logs.go:276] 0 containers: []
	W0918 21:07:17.536619   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:07:17.536625   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:07:17.536673   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:07:17.570483   62061 cri.go:89] found id: ""
	I0918 21:07:17.570510   62061 logs.go:276] 0 containers: []
	W0918 21:07:17.570518   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:07:17.570523   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:07:17.570591   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:07:17.604102   62061 cri.go:89] found id: ""
	I0918 21:07:17.604140   62061 logs.go:276] 0 containers: []
	W0918 21:07:17.604152   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:07:17.604160   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:07:17.604229   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:07:17.638633   62061 cri.go:89] found id: ""
	I0918 21:07:17.638661   62061 logs.go:276] 0 containers: []
	W0918 21:07:17.638672   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:07:17.638678   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:07:17.638725   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:07:17.673016   62061 cri.go:89] found id: ""
	I0918 21:07:17.673048   62061 logs.go:276] 0 containers: []
	W0918 21:07:17.673057   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:07:17.673063   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:07:17.673116   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:07:17.708576   62061 cri.go:89] found id: ""
	I0918 21:07:17.708601   62061 logs.go:276] 0 containers: []
	W0918 21:07:17.708609   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:07:17.708620   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:07:17.708672   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:07:17.742820   62061 cri.go:89] found id: ""
	I0918 21:07:17.742843   62061 logs.go:276] 0 containers: []
	W0918 21:07:17.742851   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:07:17.742859   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:07:17.742870   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:07:17.757091   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:07:17.757120   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:07:17.824116   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:17.824137   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:07:17.824151   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:07:17.911013   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:07:17.911065   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:07:17.950517   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:07:17.950553   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:07:16.857284   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:19.357336   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:17.635033   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:20.134331   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:20.388498   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:22.887115   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:20.502423   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:07:20.529214   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:07:20.529297   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:07:20.583618   62061 cri.go:89] found id: ""
	I0918 21:07:20.583660   62061 logs.go:276] 0 containers: []
	W0918 21:07:20.583672   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:07:20.583682   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:07:20.583751   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:07:20.633051   62061 cri.go:89] found id: ""
	I0918 21:07:20.633079   62061 logs.go:276] 0 containers: []
	W0918 21:07:20.633091   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:07:20.633099   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:07:20.633158   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:07:20.668513   62061 cri.go:89] found id: ""
	I0918 21:07:20.668543   62061 logs.go:276] 0 containers: []
	W0918 21:07:20.668552   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:07:20.668558   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:07:20.668611   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:07:20.702648   62061 cri.go:89] found id: ""
	I0918 21:07:20.702683   62061 logs.go:276] 0 containers: []
	W0918 21:07:20.702698   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:07:20.702706   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:07:20.702767   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:07:20.740639   62061 cri.go:89] found id: ""
	I0918 21:07:20.740666   62061 logs.go:276] 0 containers: []
	W0918 21:07:20.740674   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:07:20.740680   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:07:20.740731   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:07:20.771819   62061 cri.go:89] found id: ""
	I0918 21:07:20.771847   62061 logs.go:276] 0 containers: []
	W0918 21:07:20.771855   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:07:20.771863   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:07:20.771913   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:07:20.803613   62061 cri.go:89] found id: ""
	I0918 21:07:20.803639   62061 logs.go:276] 0 containers: []
	W0918 21:07:20.803647   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:07:20.803653   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:07:20.803708   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:07:20.835671   62061 cri.go:89] found id: ""
	I0918 21:07:20.835695   62061 logs.go:276] 0 containers: []
	W0918 21:07:20.835702   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:07:20.835710   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:07:20.835720   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:07:20.886159   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:07:20.886191   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:07:20.901412   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:07:20.901443   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:07:20.979009   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:20.979034   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:07:20.979045   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:07:21.059895   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:07:21.059930   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:07:23.598133   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:07:23.611038   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:07:23.611102   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:07:23.648037   62061 cri.go:89] found id: ""
	I0918 21:07:23.648067   62061 logs.go:276] 0 containers: []
	W0918 21:07:23.648075   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:07:23.648081   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:07:23.648135   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:07:23.683956   62061 cri.go:89] found id: ""
	I0918 21:07:23.683985   62061 logs.go:276] 0 containers: []
	W0918 21:07:23.683995   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:07:23.684003   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:07:23.684083   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:07:23.719274   62061 cri.go:89] found id: ""
	I0918 21:07:23.719301   62061 logs.go:276] 0 containers: []
	W0918 21:07:23.719309   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:07:23.719315   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:07:23.719378   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:07:23.757101   62061 cri.go:89] found id: ""
	I0918 21:07:23.757133   62061 logs.go:276] 0 containers: []
	W0918 21:07:23.757145   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:07:23.757152   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:07:23.757202   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:07:23.790115   62061 cri.go:89] found id: ""
	I0918 21:07:23.790149   62061 logs.go:276] 0 containers: []
	W0918 21:07:23.790160   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:07:23.790168   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:07:23.790231   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:07:23.823089   62061 cri.go:89] found id: ""
	I0918 21:07:23.823120   62061 logs.go:276] 0 containers: []
	W0918 21:07:23.823131   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:07:23.823140   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:07:23.823192   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:07:23.858566   62061 cri.go:89] found id: ""
	I0918 21:07:23.858591   62061 logs.go:276] 0 containers: []
	W0918 21:07:23.858601   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:07:23.858613   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:07:23.858674   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:07:23.892650   62061 cri.go:89] found id: ""
	I0918 21:07:23.892679   62061 logs.go:276] 0 containers: []
	W0918 21:07:23.892690   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:07:23.892699   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:07:23.892713   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:07:23.941087   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:07:23.941123   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:07:23.955674   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:07:23.955712   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:07:24.026087   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:24.026115   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:07:24.026132   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:07:24.105237   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:07:24.105272   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:07:21.857391   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:23.857954   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:26.356553   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:22.633058   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:25.133773   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:25.387123   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:27.886688   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:26.646822   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:07:26.659978   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:07:26.660087   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:07:26.695776   62061 cri.go:89] found id: ""
	I0918 21:07:26.695804   62061 logs.go:276] 0 containers: []
	W0918 21:07:26.695817   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:07:26.695824   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:07:26.695875   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:07:26.729717   62061 cri.go:89] found id: ""
	I0918 21:07:26.729747   62061 logs.go:276] 0 containers: []
	W0918 21:07:26.729756   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:07:26.729762   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:07:26.729830   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:07:26.763848   62061 cri.go:89] found id: ""
	I0918 21:07:26.763886   62061 logs.go:276] 0 containers: []
	W0918 21:07:26.763907   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:07:26.763915   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:07:26.763974   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:07:26.798670   62061 cri.go:89] found id: ""
	I0918 21:07:26.798703   62061 logs.go:276] 0 containers: []
	W0918 21:07:26.798713   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:07:26.798720   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:07:26.798789   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:07:26.833778   62061 cri.go:89] found id: ""
	I0918 21:07:26.833804   62061 logs.go:276] 0 containers: []
	W0918 21:07:26.833815   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:07:26.833822   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:07:26.833905   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:07:26.869657   62061 cri.go:89] found id: ""
	I0918 21:07:26.869688   62061 logs.go:276] 0 containers: []
	W0918 21:07:26.869699   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:07:26.869707   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:07:26.869772   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:07:26.908163   62061 cri.go:89] found id: ""
	I0918 21:07:26.908194   62061 logs.go:276] 0 containers: []
	W0918 21:07:26.908205   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:07:26.908212   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:07:26.908269   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:07:26.943416   62061 cri.go:89] found id: ""
	I0918 21:07:26.943442   62061 logs.go:276] 0 containers: []
	W0918 21:07:26.943451   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:07:26.943459   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:07:26.943471   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:07:26.993796   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:07:26.993833   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:07:27.007619   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:07:27.007661   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:07:27.072880   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:27.072904   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:07:27.072919   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:07:27.148984   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:07:27.149031   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:07:28.357006   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:30.857527   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:27.632697   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:30.133718   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:29.887981   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:32.387478   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:29.690106   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:07:29.702853   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:07:29.702932   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:07:29.736427   62061 cri.go:89] found id: ""
	I0918 21:07:29.736461   62061 logs.go:276] 0 containers: []
	W0918 21:07:29.736473   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:07:29.736480   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:07:29.736537   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:07:29.771287   62061 cri.go:89] found id: ""
	I0918 21:07:29.771317   62061 logs.go:276] 0 containers: []
	W0918 21:07:29.771328   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:07:29.771334   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:07:29.771398   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:07:29.804826   62061 cri.go:89] found id: ""
	I0918 21:07:29.804861   62061 logs.go:276] 0 containers: []
	W0918 21:07:29.804875   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:07:29.804882   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:07:29.804934   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:07:29.838570   62061 cri.go:89] found id: ""
	I0918 21:07:29.838598   62061 logs.go:276] 0 containers: []
	W0918 21:07:29.838608   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:07:29.838614   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:07:29.838659   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:07:29.871591   62061 cri.go:89] found id: ""
	I0918 21:07:29.871621   62061 logs.go:276] 0 containers: []
	W0918 21:07:29.871631   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:07:29.871638   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:07:29.871699   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:07:29.905789   62061 cri.go:89] found id: ""
	I0918 21:07:29.905824   62061 logs.go:276] 0 containers: []
	W0918 21:07:29.905835   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:07:29.905846   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:07:29.905910   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:07:29.945914   62061 cri.go:89] found id: ""
	I0918 21:07:29.945941   62061 logs.go:276] 0 containers: []
	W0918 21:07:29.945950   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:07:29.945955   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:07:29.946004   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:07:29.979365   62061 cri.go:89] found id: ""
	I0918 21:07:29.979396   62061 logs.go:276] 0 containers: []
	W0918 21:07:29.979405   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:07:29.979413   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:07:29.979425   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:07:30.026925   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:07:30.026956   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:07:30.040589   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:07:30.040623   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:07:30.112900   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:30.112928   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:07:30.112948   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:07:30.195208   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:07:30.195256   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:07:32.735787   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:07:32.748969   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:07:32.749049   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:07:32.782148   62061 cri.go:89] found id: ""
	I0918 21:07:32.782179   62061 logs.go:276] 0 containers: []
	W0918 21:07:32.782189   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:07:32.782196   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:07:32.782262   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:07:32.816119   62061 cri.go:89] found id: ""
	I0918 21:07:32.816144   62061 logs.go:276] 0 containers: []
	W0918 21:07:32.816152   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:07:32.816158   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:07:32.816203   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:07:32.849817   62061 cri.go:89] found id: ""
	I0918 21:07:32.849844   62061 logs.go:276] 0 containers: []
	W0918 21:07:32.849853   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:07:32.849859   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:07:32.849911   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:07:32.884464   62061 cri.go:89] found id: ""
	I0918 21:07:32.884494   62061 logs.go:276] 0 containers: []
	W0918 21:07:32.884506   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:07:32.884513   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:07:32.884576   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:07:32.917942   62061 cri.go:89] found id: ""
	I0918 21:07:32.917973   62061 logs.go:276] 0 containers: []
	W0918 21:07:32.917983   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:07:32.917990   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:07:32.918051   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:07:32.950748   62061 cri.go:89] found id: ""
	I0918 21:07:32.950780   62061 logs.go:276] 0 containers: []
	W0918 21:07:32.950791   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:07:32.950800   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:07:32.950864   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:07:32.985059   62061 cri.go:89] found id: ""
	I0918 21:07:32.985092   62061 logs.go:276] 0 containers: []
	W0918 21:07:32.985100   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:07:32.985106   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:07:32.985167   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:07:33.021496   62061 cri.go:89] found id: ""
	I0918 21:07:33.021526   62061 logs.go:276] 0 containers: []
	W0918 21:07:33.021536   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:07:33.021546   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:07:33.021560   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:07:33.071744   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:07:33.071793   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:07:33.086533   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:07:33.086565   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:07:33.155274   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:33.155307   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:07:33.155326   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:07:33.238301   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:07:33.238342   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:07:33.356874   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:35.357445   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:32.631814   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:34.631954   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:36.633057   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:34.387725   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:36.887031   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:35.777905   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:07:35.792442   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:07:35.792510   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:07:35.827310   62061 cri.go:89] found id: ""
	I0918 21:07:35.827339   62061 logs.go:276] 0 containers: []
	W0918 21:07:35.827350   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:07:35.827357   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:07:35.827422   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:07:35.861873   62061 cri.go:89] found id: ""
	I0918 21:07:35.861897   62061 logs.go:276] 0 containers: []
	W0918 21:07:35.861905   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:07:35.861916   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:07:35.861966   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:07:35.895295   62061 cri.go:89] found id: ""
	I0918 21:07:35.895324   62061 logs.go:276] 0 containers: []
	W0918 21:07:35.895345   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:07:35.895353   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:07:35.895410   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:07:35.927690   62061 cri.go:89] found id: ""
	I0918 21:07:35.927717   62061 logs.go:276] 0 containers: []
	W0918 21:07:35.927728   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:07:35.927736   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:07:35.927788   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:07:35.963954   62061 cri.go:89] found id: ""
	I0918 21:07:35.963979   62061 logs.go:276] 0 containers: []
	W0918 21:07:35.963986   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:07:35.963992   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:07:35.964059   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:07:36.002287   62061 cri.go:89] found id: ""
	I0918 21:07:36.002313   62061 logs.go:276] 0 containers: []
	W0918 21:07:36.002322   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:07:36.002328   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:07:36.002380   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:07:36.036748   62061 cri.go:89] found id: ""
	I0918 21:07:36.036778   62061 logs.go:276] 0 containers: []
	W0918 21:07:36.036790   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:07:36.036797   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:07:36.036861   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:07:36.072616   62061 cri.go:89] found id: ""
	I0918 21:07:36.072647   62061 logs.go:276] 0 containers: []
	W0918 21:07:36.072658   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:07:36.072668   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:07:36.072683   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:07:36.121723   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:07:36.121762   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:07:36.136528   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:07:36.136560   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:07:36.202878   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:36.202911   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:07:36.202926   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:07:36.287882   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:07:36.287924   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:07:38.825888   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:07:38.838437   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:07:38.838510   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:07:38.872312   62061 cri.go:89] found id: ""
	I0918 21:07:38.872348   62061 logs.go:276] 0 containers: []
	W0918 21:07:38.872358   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:07:38.872366   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:07:38.872417   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:07:38.905927   62061 cri.go:89] found id: ""
	I0918 21:07:38.905962   62061 logs.go:276] 0 containers: []
	W0918 21:07:38.905975   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:07:38.905982   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:07:38.906042   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:07:38.938245   62061 cri.go:89] found id: ""
	I0918 21:07:38.938274   62061 logs.go:276] 0 containers: []
	W0918 21:07:38.938286   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:07:38.938293   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:07:38.938358   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:07:38.971497   62061 cri.go:89] found id: ""
	I0918 21:07:38.971536   62061 logs.go:276] 0 containers: []
	W0918 21:07:38.971548   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:07:38.971555   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:07:38.971625   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:07:39.004755   62061 cri.go:89] found id: ""
	I0918 21:07:39.004784   62061 logs.go:276] 0 containers: []
	W0918 21:07:39.004793   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:07:39.004799   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:07:39.004854   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:07:39.036237   62061 cri.go:89] found id: ""
	I0918 21:07:39.036265   62061 logs.go:276] 0 containers: []
	W0918 21:07:39.036273   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:07:39.036279   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:07:39.036328   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:07:39.071504   62061 cri.go:89] found id: ""
	I0918 21:07:39.071537   62061 logs.go:276] 0 containers: []
	W0918 21:07:39.071549   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:07:39.071557   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:07:39.071623   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:07:39.107035   62061 cri.go:89] found id: ""
	I0918 21:07:39.107063   62061 logs.go:276] 0 containers: []
	W0918 21:07:39.107071   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:07:39.107080   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:07:39.107090   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:07:39.158078   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:07:39.158113   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:07:39.172846   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:07:39.172875   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:07:39.240577   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:39.240602   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:07:39.240618   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:07:39.319762   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:07:39.319797   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:07:37.857371   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:40.356710   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:39.133586   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:41.632538   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:38.887485   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:41.386252   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:41.856586   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:07:41.870308   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:07:41.870388   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:07:41.905657   62061 cri.go:89] found id: ""
	I0918 21:07:41.905688   62061 logs.go:276] 0 containers: []
	W0918 21:07:41.905699   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:07:41.905706   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:07:41.905766   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:07:41.944507   62061 cri.go:89] found id: ""
	I0918 21:07:41.944544   62061 logs.go:276] 0 containers: []
	W0918 21:07:41.944555   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:07:41.944566   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:07:41.944634   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:07:41.979217   62061 cri.go:89] found id: ""
	I0918 21:07:41.979252   62061 logs.go:276] 0 containers: []
	W0918 21:07:41.979271   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:07:41.979279   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:07:41.979346   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:07:42.013613   62061 cri.go:89] found id: ""
	I0918 21:07:42.013641   62061 logs.go:276] 0 containers: []
	W0918 21:07:42.013652   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:07:42.013659   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:07:42.013725   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:07:42.049225   62061 cri.go:89] found id: ""
	I0918 21:07:42.049259   62061 logs.go:276] 0 containers: []
	W0918 21:07:42.049271   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:07:42.049279   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:07:42.049364   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:07:42.085737   62061 cri.go:89] found id: ""
	I0918 21:07:42.085763   62061 logs.go:276] 0 containers: []
	W0918 21:07:42.085775   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:07:42.085782   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:07:42.085843   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:07:42.121326   62061 cri.go:89] found id: ""
	I0918 21:07:42.121356   62061 logs.go:276] 0 containers: []
	W0918 21:07:42.121365   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:07:42.121371   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:07:42.121428   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:07:42.157070   62061 cri.go:89] found id: ""
	I0918 21:07:42.157097   62061 logs.go:276] 0 containers: []
	W0918 21:07:42.157107   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:07:42.157118   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:07:42.157133   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:07:42.232110   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:07:42.232162   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:07:42.270478   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:07:42.270517   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:07:42.324545   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:07:42.324586   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:07:42.339928   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:07:42.339962   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:07:42.415124   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:42.356847   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:44.856845   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:43.633029   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:46.134786   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:43.387596   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:45.887071   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:44.915316   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:07:44.928261   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:07:44.928331   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:07:44.959641   62061 cri.go:89] found id: ""
	I0918 21:07:44.959673   62061 logs.go:276] 0 containers: []
	W0918 21:07:44.959682   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:07:44.959689   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:07:44.959791   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:07:44.991744   62061 cri.go:89] found id: ""
	I0918 21:07:44.991778   62061 logs.go:276] 0 containers: []
	W0918 21:07:44.991787   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:07:44.991803   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:07:44.991877   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:07:45.024228   62061 cri.go:89] found id: ""
	I0918 21:07:45.024261   62061 logs.go:276] 0 containers: []
	W0918 21:07:45.024272   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:07:45.024280   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:07:45.024355   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:07:45.061905   62061 cri.go:89] found id: ""
	I0918 21:07:45.061931   62061 logs.go:276] 0 containers: []
	W0918 21:07:45.061940   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:07:45.061946   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:07:45.061994   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:07:45.099224   62061 cri.go:89] found id: ""
	I0918 21:07:45.099251   62061 logs.go:276] 0 containers: []
	W0918 21:07:45.099259   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:07:45.099266   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:07:45.099327   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:07:45.137235   62061 cri.go:89] found id: ""
	I0918 21:07:45.137262   62061 logs.go:276] 0 containers: []
	W0918 21:07:45.137270   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:07:45.137278   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:07:45.137333   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:07:45.174343   62061 cri.go:89] found id: ""
	I0918 21:07:45.174370   62061 logs.go:276] 0 containers: []
	W0918 21:07:45.174379   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:07:45.174385   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:07:45.174432   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:07:45.209278   62061 cri.go:89] found id: ""
	I0918 21:07:45.209306   62061 logs.go:276] 0 containers: []
	W0918 21:07:45.209316   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:07:45.209326   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:07:45.209341   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:07:45.222376   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:07:45.222409   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:07:45.294562   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:45.294595   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:07:45.294607   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:07:45.382582   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:07:45.382631   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:07:45.424743   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:07:45.424780   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:07:47.978171   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:07:47.990485   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:07:47.990554   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:07:48.024465   62061 cri.go:89] found id: ""
	I0918 21:07:48.024494   62061 logs.go:276] 0 containers: []
	W0918 21:07:48.024505   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:07:48.024512   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:07:48.024575   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:07:48.058838   62061 cri.go:89] found id: ""
	I0918 21:07:48.058871   62061 logs.go:276] 0 containers: []
	W0918 21:07:48.058899   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:07:48.058905   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:07:48.058959   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:07:48.094307   62061 cri.go:89] found id: ""
	I0918 21:07:48.094335   62061 logs.go:276] 0 containers: []
	W0918 21:07:48.094343   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:07:48.094349   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:07:48.094398   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:07:48.127497   62061 cri.go:89] found id: ""
	I0918 21:07:48.127529   62061 logs.go:276] 0 containers: []
	W0918 21:07:48.127538   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:07:48.127544   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:07:48.127590   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:07:48.160440   62061 cri.go:89] found id: ""
	I0918 21:07:48.160465   62061 logs.go:276] 0 containers: []
	W0918 21:07:48.160473   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:07:48.160478   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:07:48.160527   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:07:48.195581   62061 cri.go:89] found id: ""
	I0918 21:07:48.195615   62061 logs.go:276] 0 containers: []
	W0918 21:07:48.195624   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:07:48.195631   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:07:48.195675   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:07:48.226478   62061 cri.go:89] found id: ""
	I0918 21:07:48.226506   62061 logs.go:276] 0 containers: []
	W0918 21:07:48.226514   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:07:48.226519   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:07:48.226566   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:07:48.259775   62061 cri.go:89] found id: ""
	I0918 21:07:48.259804   62061 logs.go:276] 0 containers: []
	W0918 21:07:48.259815   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:07:48.259825   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:07:48.259839   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:07:48.274364   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:07:48.274391   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:07:48.347131   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:48.347151   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:07:48.347163   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:07:48.430569   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:07:48.430607   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:07:48.472972   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:07:48.473007   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:07:47.356907   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:49.857984   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:48.633550   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:51.133639   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:48.388136   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:50.888317   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:51.024429   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:07:51.037249   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:07:51.037328   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:07:51.070137   62061 cri.go:89] found id: ""
	I0918 21:07:51.070173   62061 logs.go:276] 0 containers: []
	W0918 21:07:51.070195   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:07:51.070203   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:07:51.070276   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:07:51.105014   62061 cri.go:89] found id: ""
	I0918 21:07:51.105039   62061 logs.go:276] 0 containers: []
	W0918 21:07:51.105048   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:07:51.105054   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:07:51.105101   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:07:51.143287   62061 cri.go:89] found id: ""
	I0918 21:07:51.143310   62061 logs.go:276] 0 containers: []
	W0918 21:07:51.143318   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:07:51.143325   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:07:51.143372   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:07:51.176811   62061 cri.go:89] found id: ""
	I0918 21:07:51.176838   62061 logs.go:276] 0 containers: []
	W0918 21:07:51.176846   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:07:51.176852   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:07:51.176898   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:07:51.210817   62061 cri.go:89] found id: ""
	I0918 21:07:51.210842   62061 logs.go:276] 0 containers: []
	W0918 21:07:51.210850   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:07:51.210856   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:07:51.210916   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:07:51.243968   62061 cri.go:89] found id: ""
	I0918 21:07:51.244002   62061 logs.go:276] 0 containers: []
	W0918 21:07:51.244035   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:07:51.244043   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:07:51.244104   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:07:51.279078   62061 cri.go:89] found id: ""
	I0918 21:07:51.279108   62061 logs.go:276] 0 containers: []
	W0918 21:07:51.279117   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:07:51.279122   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:07:51.279173   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:07:51.315033   62061 cri.go:89] found id: ""
	I0918 21:07:51.315082   62061 logs.go:276] 0 containers: []
	W0918 21:07:51.315091   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:07:51.315100   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:07:51.315111   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:07:51.391927   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:07:51.391976   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:07:51.435515   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:07:51.435542   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:07:51.488404   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:07:51.488442   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:07:51.502019   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:07:51.502065   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:07:51.567893   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:54.068142   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:07:54.082544   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:07:54.082619   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:07:54.118600   62061 cri.go:89] found id: ""
	I0918 21:07:54.118629   62061 logs.go:276] 0 containers: []
	W0918 21:07:54.118638   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:07:54.118644   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:07:54.118695   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:07:54.154086   62061 cri.go:89] found id: ""
	I0918 21:07:54.154115   62061 logs.go:276] 0 containers: []
	W0918 21:07:54.154127   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:07:54.154135   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:07:54.154187   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:07:54.188213   62061 cri.go:89] found id: ""
	I0918 21:07:54.188246   62061 logs.go:276] 0 containers: []
	W0918 21:07:54.188257   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:07:54.188266   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:07:54.188329   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:07:54.221682   62061 cri.go:89] found id: ""
	I0918 21:07:54.221712   62061 logs.go:276] 0 containers: []
	W0918 21:07:54.221721   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:07:54.221728   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:07:54.221791   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:07:54.254746   62061 cri.go:89] found id: ""
	I0918 21:07:54.254770   62061 logs.go:276] 0 containers: []
	W0918 21:07:54.254778   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:07:54.254783   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:07:54.254844   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:07:54.288249   62061 cri.go:89] found id: ""
	I0918 21:07:54.288273   62061 logs.go:276] 0 containers: []
	W0918 21:07:54.288281   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:07:54.288288   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:07:54.288349   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:07:54.323390   62061 cri.go:89] found id: ""
	I0918 21:07:54.323422   62061 logs.go:276] 0 containers: []
	W0918 21:07:54.323433   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:07:54.323440   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:07:54.323507   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:07:54.357680   62061 cri.go:89] found id: ""
	I0918 21:07:54.357709   62061 logs.go:276] 0 containers: []
	W0918 21:07:54.357719   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:07:54.357734   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:07:54.357748   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:07:54.396644   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:07:54.396683   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:07:54.469136   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:54.469175   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:07:54.469191   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:07:52.357187   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:54.857437   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:53.633161   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:56.132554   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:53.386646   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:55.387377   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:57.387524   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:54.547848   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:07:54.547884   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:07:54.585758   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:07:54.585788   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:07:57.139198   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:07:57.152342   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:07:57.152429   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:07:57.186747   62061 cri.go:89] found id: ""
	I0918 21:07:57.186778   62061 logs.go:276] 0 containers: []
	W0918 21:07:57.186789   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:07:57.186796   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:07:57.186855   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:07:57.219211   62061 cri.go:89] found id: ""
	I0918 21:07:57.219239   62061 logs.go:276] 0 containers: []
	W0918 21:07:57.219248   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:07:57.219256   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:07:57.219315   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:07:57.251361   62061 cri.go:89] found id: ""
	I0918 21:07:57.251388   62061 logs.go:276] 0 containers: []
	W0918 21:07:57.251396   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:07:57.251401   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:07:57.251450   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:07:57.285467   62061 cri.go:89] found id: ""
	I0918 21:07:57.285501   62061 logs.go:276] 0 containers: []
	W0918 21:07:57.285512   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:07:57.285519   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:07:57.285580   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:07:57.317973   62061 cri.go:89] found id: ""
	I0918 21:07:57.317999   62061 logs.go:276] 0 containers: []
	W0918 21:07:57.318006   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:07:57.318012   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:07:57.318071   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:07:57.352172   62061 cri.go:89] found id: ""
	I0918 21:07:57.352202   62061 logs.go:276] 0 containers: []
	W0918 21:07:57.352215   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:07:57.352223   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:07:57.352277   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:07:57.388117   62061 cri.go:89] found id: ""
	I0918 21:07:57.388137   62061 logs.go:276] 0 containers: []
	W0918 21:07:57.388144   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:07:57.388150   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:07:57.388205   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:07:57.424814   62061 cri.go:89] found id: ""
	I0918 21:07:57.424846   62061 logs.go:276] 0 containers: []
	W0918 21:07:57.424857   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:07:57.424868   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:07:57.424882   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:07:57.437317   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:07:57.437351   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:07:57.503393   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:57.503417   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:07:57.503429   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:07:57.584167   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:07:57.584203   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:07:57.620761   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:07:57.620792   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:07:57.357989   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:59.856413   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:58.133077   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:00.633233   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:59.886455   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:01.887882   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:00.174933   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:00.188098   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:00.188177   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:00.221852   62061 cri.go:89] found id: ""
	I0918 21:08:00.221877   62061 logs.go:276] 0 containers: []
	W0918 21:08:00.221890   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:00.221895   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:00.221947   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:00.255947   62061 cri.go:89] found id: ""
	I0918 21:08:00.255974   62061 logs.go:276] 0 containers: []
	W0918 21:08:00.255982   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:00.255987   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:00.256056   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:00.290919   62061 cri.go:89] found id: ""
	I0918 21:08:00.290953   62061 logs.go:276] 0 containers: []
	W0918 21:08:00.290961   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:00.290966   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:00.291017   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:00.328175   62061 cri.go:89] found id: ""
	I0918 21:08:00.328200   62061 logs.go:276] 0 containers: []
	W0918 21:08:00.328208   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:00.328214   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:00.328261   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:00.364863   62061 cri.go:89] found id: ""
	I0918 21:08:00.364892   62061 logs.go:276] 0 containers: []
	W0918 21:08:00.364903   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:00.364912   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:00.364967   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:00.400368   62061 cri.go:89] found id: ""
	I0918 21:08:00.400397   62061 logs.go:276] 0 containers: []
	W0918 21:08:00.400408   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:00.400415   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:00.400480   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:00.435341   62061 cri.go:89] found id: ""
	I0918 21:08:00.435375   62061 logs.go:276] 0 containers: []
	W0918 21:08:00.435386   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:00.435394   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:00.435459   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:00.469981   62061 cri.go:89] found id: ""
	I0918 21:08:00.470010   62061 logs.go:276] 0 containers: []
	W0918 21:08:00.470019   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:00.470028   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:00.470041   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:00.538006   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:00.538037   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:00.538053   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:00.618497   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:00.618543   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:00.656884   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:00.656912   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:00.708836   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:00.708870   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:03.222489   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:03.236904   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:03.236972   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:03.270559   62061 cri.go:89] found id: ""
	I0918 21:08:03.270588   62061 logs.go:276] 0 containers: []
	W0918 21:08:03.270596   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:03.270602   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:03.270649   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:03.305908   62061 cri.go:89] found id: ""
	I0918 21:08:03.305933   62061 logs.go:276] 0 containers: []
	W0918 21:08:03.305940   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:03.305946   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:03.306004   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:03.339442   62061 cri.go:89] found id: ""
	I0918 21:08:03.339468   62061 logs.go:276] 0 containers: []
	W0918 21:08:03.339476   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:03.339482   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:03.339550   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:03.377460   62061 cri.go:89] found id: ""
	I0918 21:08:03.377486   62061 logs.go:276] 0 containers: []
	W0918 21:08:03.377495   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:03.377501   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:03.377552   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:03.414815   62061 cri.go:89] found id: ""
	I0918 21:08:03.414850   62061 logs.go:276] 0 containers: []
	W0918 21:08:03.414861   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:03.414869   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:03.414930   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:03.448654   62061 cri.go:89] found id: ""
	I0918 21:08:03.448680   62061 logs.go:276] 0 containers: []
	W0918 21:08:03.448690   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:03.448698   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:03.448759   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:03.483598   62061 cri.go:89] found id: ""
	I0918 21:08:03.483628   62061 logs.go:276] 0 containers: []
	W0918 21:08:03.483639   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:03.483646   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:03.483717   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:03.518557   62061 cri.go:89] found id: ""
	I0918 21:08:03.518585   62061 logs.go:276] 0 containers: []
	W0918 21:08:03.518601   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:03.518612   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:03.518627   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:03.555922   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:03.555958   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:03.608173   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:03.608208   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:03.621251   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:03.621278   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:03.688773   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:03.688796   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:03.688812   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:01.857289   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:03.857768   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:06.356504   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:03.132376   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:05.134169   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:04.386905   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:06.891459   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:06.272727   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:06.286033   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:06.286115   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:06.319058   62061 cri.go:89] found id: ""
	I0918 21:08:06.319084   62061 logs.go:276] 0 containers: []
	W0918 21:08:06.319092   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:06.319099   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:06.319167   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:06.352572   62061 cri.go:89] found id: ""
	I0918 21:08:06.352606   62061 logs.go:276] 0 containers: []
	W0918 21:08:06.352627   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:06.352638   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:06.352709   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:06.389897   62061 cri.go:89] found id: ""
	I0918 21:08:06.389922   62061 logs.go:276] 0 containers: []
	W0918 21:08:06.389929   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:06.389935   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:06.389993   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:06.427265   62061 cri.go:89] found id: ""
	I0918 21:08:06.427294   62061 logs.go:276] 0 containers: []
	W0918 21:08:06.427306   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:06.427314   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:06.427375   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:06.462706   62061 cri.go:89] found id: ""
	I0918 21:08:06.462738   62061 logs.go:276] 0 containers: []
	W0918 21:08:06.462746   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:06.462753   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:06.462816   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:06.499672   62061 cri.go:89] found id: ""
	I0918 21:08:06.499707   62061 logs.go:276] 0 containers: []
	W0918 21:08:06.499719   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:06.499727   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:06.499781   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:06.540386   62061 cri.go:89] found id: ""
	I0918 21:08:06.540415   62061 logs.go:276] 0 containers: []
	W0918 21:08:06.540426   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:06.540433   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:06.540492   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:06.582919   62061 cri.go:89] found id: ""
	I0918 21:08:06.582946   62061 logs.go:276] 0 containers: []
	W0918 21:08:06.582957   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:06.582966   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:06.582982   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:06.637315   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:06.637355   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:06.650626   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:06.650662   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:06.725605   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:06.725641   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:06.725656   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:06.805431   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:06.805471   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:09.343498   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:09.357396   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:09.357472   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:09.391561   62061 cri.go:89] found id: ""
	I0918 21:08:09.391590   62061 logs.go:276] 0 containers: []
	W0918 21:08:09.391600   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:09.391605   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:09.391655   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:09.429154   62061 cri.go:89] found id: ""
	I0918 21:08:09.429181   62061 logs.go:276] 0 containers: []
	W0918 21:08:09.429190   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:09.429196   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:09.429259   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:09.464807   62061 cri.go:89] found id: ""
	I0918 21:08:09.464848   62061 logs.go:276] 0 containers: []
	W0918 21:08:09.464859   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:09.464866   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:09.464927   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:08.856578   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:10.856650   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:07.633438   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:10.132651   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:12.132903   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:09.387482   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:11.886885   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:09.499512   62061 cri.go:89] found id: ""
	I0918 21:08:09.499540   62061 logs.go:276] 0 containers: []
	W0918 21:08:09.499549   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:09.499556   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:09.499619   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:09.534569   62061 cri.go:89] found id: ""
	I0918 21:08:09.534593   62061 logs.go:276] 0 containers: []
	W0918 21:08:09.534601   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:09.534607   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:09.534660   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:09.570387   62061 cri.go:89] found id: ""
	I0918 21:08:09.570414   62061 logs.go:276] 0 containers: []
	W0918 21:08:09.570422   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:09.570428   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:09.570489   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:09.603832   62061 cri.go:89] found id: ""
	I0918 21:08:09.603863   62061 logs.go:276] 0 containers: []
	W0918 21:08:09.603871   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:09.603877   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:09.603923   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:09.638887   62061 cri.go:89] found id: ""
	I0918 21:08:09.638918   62061 logs.go:276] 0 containers: []
	W0918 21:08:09.638930   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:09.638940   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:09.638953   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:09.691197   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:09.691237   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:09.705470   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:09.705500   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:09.776826   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:09.776858   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:09.776871   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:09.851287   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:09.851321   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:12.390895   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:12.406734   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:12.406809   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:12.459302   62061 cri.go:89] found id: ""
	I0918 21:08:12.459331   62061 logs.go:276] 0 containers: []
	W0918 21:08:12.459349   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:12.459360   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:12.459420   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:12.517527   62061 cri.go:89] found id: ""
	I0918 21:08:12.517557   62061 logs.go:276] 0 containers: []
	W0918 21:08:12.517567   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:12.517573   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:12.517628   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:12.549661   62061 cri.go:89] found id: ""
	I0918 21:08:12.549689   62061 logs.go:276] 0 containers: []
	W0918 21:08:12.549696   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:12.549702   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:12.549747   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:12.582970   62061 cri.go:89] found id: ""
	I0918 21:08:12.582996   62061 logs.go:276] 0 containers: []
	W0918 21:08:12.583004   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:12.583009   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:12.583054   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:12.617059   62061 cri.go:89] found id: ""
	I0918 21:08:12.617089   62061 logs.go:276] 0 containers: []
	W0918 21:08:12.617098   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:12.617103   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:12.617153   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:12.651115   62061 cri.go:89] found id: ""
	I0918 21:08:12.651143   62061 logs.go:276] 0 containers: []
	W0918 21:08:12.651156   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:12.651164   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:12.651217   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:12.684727   62061 cri.go:89] found id: ""
	I0918 21:08:12.684758   62061 logs.go:276] 0 containers: []
	W0918 21:08:12.684765   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:12.684771   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:12.684829   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:12.718181   62061 cri.go:89] found id: ""
	I0918 21:08:12.718215   62061 logs.go:276] 0 containers: []
	W0918 21:08:12.718226   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:12.718237   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:12.718252   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:12.760350   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:12.760379   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:12.810028   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:12.810067   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:12.823785   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:12.823815   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:12.898511   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:12.898532   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:12.898545   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:12.856697   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:15.356381   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:14.632694   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:17.131888   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:13.887157   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:15.887190   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:17.890618   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:15.476840   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:15.489386   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:15.489470   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:15.524549   62061 cri.go:89] found id: ""
	I0918 21:08:15.524578   62061 logs.go:276] 0 containers: []
	W0918 21:08:15.524585   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:15.524591   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:15.524642   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:15.557834   62061 cri.go:89] found id: ""
	I0918 21:08:15.557867   62061 logs.go:276] 0 containers: []
	W0918 21:08:15.557876   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:15.557883   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:15.557933   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:15.598774   62061 cri.go:89] found id: ""
	I0918 21:08:15.598805   62061 logs.go:276] 0 containers: []
	W0918 21:08:15.598818   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:15.598828   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:15.598882   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:15.633123   62061 cri.go:89] found id: ""
	I0918 21:08:15.633147   62061 logs.go:276] 0 containers: []
	W0918 21:08:15.633155   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:15.633161   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:15.633208   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:15.667281   62061 cri.go:89] found id: ""
	I0918 21:08:15.667307   62061 logs.go:276] 0 containers: []
	W0918 21:08:15.667317   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:15.667323   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:15.667381   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:15.705945   62061 cri.go:89] found id: ""
	I0918 21:08:15.705977   62061 logs.go:276] 0 containers: []
	W0918 21:08:15.705990   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:15.706015   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:15.706093   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:15.739795   62061 cri.go:89] found id: ""
	I0918 21:08:15.739826   62061 logs.go:276] 0 containers: []
	W0918 21:08:15.739836   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:15.739843   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:15.739904   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:15.778520   62061 cri.go:89] found id: ""
	I0918 21:08:15.778556   62061 logs.go:276] 0 containers: []
	W0918 21:08:15.778567   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:15.778579   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:15.778592   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:15.829357   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:15.829394   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:15.842852   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:15.842882   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:15.922438   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:15.922471   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:15.922483   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:16.001687   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:16.001726   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:18.542067   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:18.554783   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:18.554870   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:18.589555   62061 cri.go:89] found id: ""
	I0918 21:08:18.589581   62061 logs.go:276] 0 containers: []
	W0918 21:08:18.589592   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:18.589604   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:18.589667   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:18.623035   62061 cri.go:89] found id: ""
	I0918 21:08:18.623059   62061 logs.go:276] 0 containers: []
	W0918 21:08:18.623067   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:18.623073   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:18.623127   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:18.655875   62061 cri.go:89] found id: ""
	I0918 21:08:18.655901   62061 logs.go:276] 0 containers: []
	W0918 21:08:18.655909   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:18.655915   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:18.655973   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:18.688964   62061 cri.go:89] found id: ""
	I0918 21:08:18.688997   62061 logs.go:276] 0 containers: []
	W0918 21:08:18.689008   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:18.689016   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:18.689080   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:18.723164   62061 cri.go:89] found id: ""
	I0918 21:08:18.723186   62061 logs.go:276] 0 containers: []
	W0918 21:08:18.723196   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:18.723201   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:18.723246   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:18.755022   62061 cri.go:89] found id: ""
	I0918 21:08:18.755048   62061 logs.go:276] 0 containers: []
	W0918 21:08:18.755057   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:18.755063   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:18.755113   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:18.791614   62061 cri.go:89] found id: ""
	I0918 21:08:18.791645   62061 logs.go:276] 0 containers: []
	W0918 21:08:18.791655   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:18.791663   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:18.791731   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:18.823553   62061 cri.go:89] found id: ""
	I0918 21:08:18.823583   62061 logs.go:276] 0 containers: []
	W0918 21:08:18.823590   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:18.823597   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:18.823609   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:18.864528   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:18.864567   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:18.916590   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:18.916628   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:18.930077   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:18.930107   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:18.999958   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:18.999982   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:18.999997   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:17.358190   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:19.856605   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:19.132382   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:21.634433   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:20.387223   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:22.387374   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:21.573718   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:21.588164   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:21.588252   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:21.630044   62061 cri.go:89] found id: ""
	I0918 21:08:21.630075   62061 logs.go:276] 0 containers: []
	W0918 21:08:21.630086   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:21.630094   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:21.630154   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:21.666992   62061 cri.go:89] found id: ""
	I0918 21:08:21.667021   62061 logs.go:276] 0 containers: []
	W0918 21:08:21.667029   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:21.667035   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:21.667083   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:21.702379   62061 cri.go:89] found id: ""
	I0918 21:08:21.702403   62061 logs.go:276] 0 containers: []
	W0918 21:08:21.702411   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:21.702416   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:21.702463   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:21.739877   62061 cri.go:89] found id: ""
	I0918 21:08:21.739908   62061 logs.go:276] 0 containers: []
	W0918 21:08:21.739918   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:21.739923   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:21.739974   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:21.777536   62061 cri.go:89] found id: ""
	I0918 21:08:21.777573   62061 logs.go:276] 0 containers: []
	W0918 21:08:21.777584   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:21.777592   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:21.777652   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:21.812284   62061 cri.go:89] found id: ""
	I0918 21:08:21.812316   62061 logs.go:276] 0 containers: []
	W0918 21:08:21.812325   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:21.812332   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:21.812401   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:21.848143   62061 cri.go:89] found id: ""
	I0918 21:08:21.848176   62061 logs.go:276] 0 containers: []
	W0918 21:08:21.848185   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:21.848191   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:21.848250   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:21.887151   62061 cri.go:89] found id: ""
	I0918 21:08:21.887177   62061 logs.go:276] 0 containers: []
	W0918 21:08:21.887188   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:21.887199   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:21.887213   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:21.939969   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:21.940008   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:21.954128   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:21.954164   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:22.022827   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:22.022853   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:22.022865   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:22.103131   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:22.103172   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:22.356641   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:24.358204   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:24.133101   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:26.633701   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:24.888715   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:27.386901   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:24.642045   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:24.655273   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:24.655343   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:24.687816   62061 cri.go:89] found id: ""
	I0918 21:08:24.687847   62061 logs.go:276] 0 containers: []
	W0918 21:08:24.687858   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:24.687865   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:24.687927   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:24.721276   62061 cri.go:89] found id: ""
	I0918 21:08:24.721303   62061 logs.go:276] 0 containers: []
	W0918 21:08:24.721311   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:24.721316   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:24.721366   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:24.753874   62061 cri.go:89] found id: ""
	I0918 21:08:24.753904   62061 logs.go:276] 0 containers: []
	W0918 21:08:24.753911   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:24.753917   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:24.753967   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:24.789107   62061 cri.go:89] found id: ""
	I0918 21:08:24.789148   62061 logs.go:276] 0 containers: []
	W0918 21:08:24.789163   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:24.789170   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:24.789219   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:24.826283   62061 cri.go:89] found id: ""
	I0918 21:08:24.826316   62061 logs.go:276] 0 containers: []
	W0918 21:08:24.826329   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:24.826337   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:24.826401   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:24.859878   62061 cri.go:89] found id: ""
	I0918 21:08:24.859907   62061 logs.go:276] 0 containers: []
	W0918 21:08:24.859917   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:24.859924   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:24.859982   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:24.895717   62061 cri.go:89] found id: ""
	I0918 21:08:24.895747   62061 logs.go:276] 0 containers: []
	W0918 21:08:24.895758   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:24.895766   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:24.895830   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:24.930207   62061 cri.go:89] found id: ""
	I0918 21:08:24.930239   62061 logs.go:276] 0 containers: []
	W0918 21:08:24.930250   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:24.930262   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:24.930279   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:24.967939   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:24.967981   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:25.022526   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:25.022569   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:25.036175   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:25.036214   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:25.104251   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:25.104277   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:25.104294   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:27.686943   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:27.700071   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:27.700147   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:27.735245   62061 cri.go:89] found id: ""
	I0918 21:08:27.735277   62061 logs.go:276] 0 containers: []
	W0918 21:08:27.735286   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:27.735291   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:27.735349   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:27.768945   62061 cri.go:89] found id: ""
	I0918 21:08:27.768974   62061 logs.go:276] 0 containers: []
	W0918 21:08:27.768985   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:27.768993   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:27.769055   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:27.805425   62061 cri.go:89] found id: ""
	I0918 21:08:27.805457   62061 logs.go:276] 0 containers: []
	W0918 21:08:27.805468   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:27.805474   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:27.805523   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:27.838049   62061 cri.go:89] found id: ""
	I0918 21:08:27.838081   62061 logs.go:276] 0 containers: []
	W0918 21:08:27.838091   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:27.838098   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:27.838163   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:27.873961   62061 cri.go:89] found id: ""
	I0918 21:08:27.873986   62061 logs.go:276] 0 containers: []
	W0918 21:08:27.873994   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:27.874001   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:27.874064   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:27.908822   62061 cri.go:89] found id: ""
	I0918 21:08:27.908846   62061 logs.go:276] 0 containers: []
	W0918 21:08:27.908854   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:27.908860   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:27.908915   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:27.943758   62061 cri.go:89] found id: ""
	I0918 21:08:27.943794   62061 logs.go:276] 0 containers: []
	W0918 21:08:27.943802   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:27.943808   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:27.943875   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:27.978136   62061 cri.go:89] found id: ""
	I0918 21:08:27.978168   62061 logs.go:276] 0 containers: []
	W0918 21:08:27.978179   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:27.978189   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:27.978202   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:27.992744   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:27.992773   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:28.070339   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:28.070362   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:28.070374   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:28.152405   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:28.152452   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:28.191190   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:28.191220   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:26.857256   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:29.356662   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:29.132577   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:31.133108   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:29.387068   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:31.886962   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:30.742507   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:30.756451   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:30.756553   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:30.790746   62061 cri.go:89] found id: ""
	I0918 21:08:30.790771   62061 logs.go:276] 0 containers: []
	W0918 21:08:30.790781   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:30.790787   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:30.790851   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:30.823633   62061 cri.go:89] found id: ""
	I0918 21:08:30.823670   62061 logs.go:276] 0 containers: []
	W0918 21:08:30.823682   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:30.823689   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:30.823754   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:30.861912   62061 cri.go:89] found id: ""
	I0918 21:08:30.861936   62061 logs.go:276] 0 containers: []
	W0918 21:08:30.861943   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:30.861949   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:30.862000   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:30.899452   62061 cri.go:89] found id: ""
	I0918 21:08:30.899481   62061 logs.go:276] 0 containers: []
	W0918 21:08:30.899489   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:30.899495   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:30.899562   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:30.935871   62061 cri.go:89] found id: ""
	I0918 21:08:30.935898   62061 logs.go:276] 0 containers: []
	W0918 21:08:30.935906   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:30.935912   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:30.935969   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:30.974608   62061 cri.go:89] found id: ""
	I0918 21:08:30.974643   62061 logs.go:276] 0 containers: []
	W0918 21:08:30.974655   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:30.974663   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:30.974722   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:31.008249   62061 cri.go:89] found id: ""
	I0918 21:08:31.008279   62061 logs.go:276] 0 containers: []
	W0918 21:08:31.008290   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:31.008297   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:31.008366   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:31.047042   62061 cri.go:89] found id: ""
	I0918 21:08:31.047075   62061 logs.go:276] 0 containers: []
	W0918 21:08:31.047083   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:31.047093   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:31.047107   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:31.098961   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:31.099001   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:31.113116   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:31.113147   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:31.179609   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:31.179650   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:31.179664   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:31.258299   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:31.258335   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:33.798360   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:33.812105   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:33.812172   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:33.848120   62061 cri.go:89] found id: ""
	I0918 21:08:33.848149   62061 logs.go:276] 0 containers: []
	W0918 21:08:33.848160   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:33.848169   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:33.848231   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:33.884974   62061 cri.go:89] found id: ""
	I0918 21:08:33.885007   62061 logs.go:276] 0 containers: []
	W0918 21:08:33.885019   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:33.885028   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:33.885104   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:33.919613   62061 cri.go:89] found id: ""
	I0918 21:08:33.919658   62061 logs.go:276] 0 containers: []
	W0918 21:08:33.919667   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:33.919673   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:33.919721   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:33.953074   62061 cri.go:89] found id: ""
	I0918 21:08:33.953112   62061 logs.go:276] 0 containers: []
	W0918 21:08:33.953120   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:33.953125   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:33.953185   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:33.985493   62061 cri.go:89] found id: ""
	I0918 21:08:33.985521   62061 logs.go:276] 0 containers: []
	W0918 21:08:33.985532   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:33.985539   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:33.985630   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:34.020938   62061 cri.go:89] found id: ""
	I0918 21:08:34.020962   62061 logs.go:276] 0 containers: []
	W0918 21:08:34.020972   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:34.020981   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:34.021047   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:34.053947   62061 cri.go:89] found id: ""
	I0918 21:08:34.053977   62061 logs.go:276] 0 containers: []
	W0918 21:08:34.053988   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:34.053996   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:34.054060   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:34.090072   62061 cri.go:89] found id: ""
	I0918 21:08:34.090110   62061 logs.go:276] 0 containers: []
	W0918 21:08:34.090123   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:34.090133   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:34.090145   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:34.142069   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:34.142107   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:34.156709   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:34.156740   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:34.232644   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:34.232672   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:34.232685   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:34.311833   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:34.311878   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:31.859360   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:34.357056   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:33.133212   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:35.632885   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:33.888487   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:36.386571   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:36.850811   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:36.864479   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:36.864558   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:36.902595   62061 cri.go:89] found id: ""
	I0918 21:08:36.902628   62061 logs.go:276] 0 containers: []
	W0918 21:08:36.902640   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:36.902649   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:36.902706   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:36.940336   62061 cri.go:89] found id: ""
	I0918 21:08:36.940385   62061 logs.go:276] 0 containers: []
	W0918 21:08:36.940394   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:36.940400   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:36.940465   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:36.973909   62061 cri.go:89] found id: ""
	I0918 21:08:36.973942   62061 logs.go:276] 0 containers: []
	W0918 21:08:36.973952   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:36.973958   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:36.974013   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:37.008766   62061 cri.go:89] found id: ""
	I0918 21:08:37.008791   62061 logs.go:276] 0 containers: []
	W0918 21:08:37.008799   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:37.008805   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:37.008852   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:37.041633   62061 cri.go:89] found id: ""
	I0918 21:08:37.041669   62061 logs.go:276] 0 containers: []
	W0918 21:08:37.041681   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:37.041688   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:37.041750   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:37.075154   62061 cri.go:89] found id: ""
	I0918 21:08:37.075188   62061 logs.go:276] 0 containers: []
	W0918 21:08:37.075197   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:37.075204   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:37.075268   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:37.111083   62061 cri.go:89] found id: ""
	I0918 21:08:37.111119   62061 logs.go:276] 0 containers: []
	W0918 21:08:37.111130   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:37.111138   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:37.111192   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:37.147894   62061 cri.go:89] found id: ""
	I0918 21:08:37.147925   62061 logs.go:276] 0 containers: []
	W0918 21:08:37.147936   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:37.147948   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:37.147962   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:37.200102   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:37.200141   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:37.213511   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:37.213537   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:37.286978   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:37.287012   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:37.287027   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:37.368153   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:37.368191   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:36.857508   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:39.357177   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:41.357329   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:38.134332   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:40.633274   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:38.387121   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:40.387310   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:42.887614   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:39.907600   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:39.922325   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:39.922395   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:39.958150   62061 cri.go:89] found id: ""
	I0918 21:08:39.958175   62061 logs.go:276] 0 containers: []
	W0918 21:08:39.958183   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:39.958189   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:39.958254   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:39.993916   62061 cri.go:89] found id: ""
	I0918 21:08:39.993945   62061 logs.go:276] 0 containers: []
	W0918 21:08:39.993956   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:39.993963   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:39.994026   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:40.031078   62061 cri.go:89] found id: ""
	I0918 21:08:40.031114   62061 logs.go:276] 0 containers: []
	W0918 21:08:40.031126   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:40.031133   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:40.031194   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:40.065028   62061 cri.go:89] found id: ""
	I0918 21:08:40.065054   62061 logs.go:276] 0 containers: []
	W0918 21:08:40.065065   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:40.065072   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:40.065129   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:40.099437   62061 cri.go:89] found id: ""
	I0918 21:08:40.099466   62061 logs.go:276] 0 containers: []
	W0918 21:08:40.099474   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:40.099480   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:40.099544   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:40.139841   62061 cri.go:89] found id: ""
	I0918 21:08:40.139866   62061 logs.go:276] 0 containers: []
	W0918 21:08:40.139874   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:40.139880   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:40.139936   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:40.175287   62061 cri.go:89] found id: ""
	I0918 21:08:40.175316   62061 logs.go:276] 0 containers: []
	W0918 21:08:40.175327   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:40.175334   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:40.175397   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:40.208646   62061 cri.go:89] found id: ""
	I0918 21:08:40.208677   62061 logs.go:276] 0 containers: []
	W0918 21:08:40.208690   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:40.208701   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:40.208712   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:40.262944   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:40.262982   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:40.276192   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:40.276222   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:40.346393   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:40.346414   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:40.346426   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:40.433797   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:40.433848   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:42.976574   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:42.990198   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:42.990263   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:43.031744   62061 cri.go:89] found id: ""
	I0918 21:08:43.031774   62061 logs.go:276] 0 containers: []
	W0918 21:08:43.031784   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:43.031791   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:43.031854   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:43.065198   62061 cri.go:89] found id: ""
	I0918 21:08:43.065240   62061 logs.go:276] 0 containers: []
	W0918 21:08:43.065248   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:43.065261   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:43.065319   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:43.099292   62061 cri.go:89] found id: ""
	I0918 21:08:43.099320   62061 logs.go:276] 0 containers: []
	W0918 21:08:43.099328   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:43.099333   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:43.099388   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:43.135085   62061 cri.go:89] found id: ""
	I0918 21:08:43.135110   62061 logs.go:276] 0 containers: []
	W0918 21:08:43.135119   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:43.135131   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:43.135190   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:43.173255   62061 cri.go:89] found id: ""
	I0918 21:08:43.173312   62061 logs.go:276] 0 containers: []
	W0918 21:08:43.173326   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:43.173335   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:43.173433   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:43.209249   62061 cri.go:89] found id: ""
	I0918 21:08:43.209282   62061 logs.go:276] 0 containers: []
	W0918 21:08:43.209294   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:43.209303   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:43.209377   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:43.242333   62061 cri.go:89] found id: ""
	I0918 21:08:43.242366   62061 logs.go:276] 0 containers: []
	W0918 21:08:43.242376   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:43.242383   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:43.242449   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:43.278099   62061 cri.go:89] found id: ""
	I0918 21:08:43.278128   62061 logs.go:276] 0 containers: []
	W0918 21:08:43.278136   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:43.278146   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:43.278164   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:43.329621   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:43.329661   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:43.343357   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:43.343402   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:43.415392   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:43.415419   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:43.415435   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:43.501634   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:43.501670   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:43.357675   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:45.857212   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:43.133389   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:45.134057   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:44.887763   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:47.387221   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:46.043925   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:46.057457   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:46.057540   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:46.091987   62061 cri.go:89] found id: ""
	I0918 21:08:46.092036   62061 logs.go:276] 0 containers: []
	W0918 21:08:46.092047   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:46.092055   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:46.092118   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:46.131171   62061 cri.go:89] found id: ""
	I0918 21:08:46.131195   62061 logs.go:276] 0 containers: []
	W0918 21:08:46.131203   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:46.131209   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:46.131266   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:46.167324   62061 cri.go:89] found id: ""
	I0918 21:08:46.167360   62061 logs.go:276] 0 containers: []
	W0918 21:08:46.167369   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:46.167375   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:46.167433   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:46.200940   62061 cri.go:89] found id: ""
	I0918 21:08:46.200969   62061 logs.go:276] 0 containers: []
	W0918 21:08:46.200978   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:46.200983   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:46.201042   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:46.233736   62061 cri.go:89] found id: ""
	I0918 21:08:46.233765   62061 logs.go:276] 0 containers: []
	W0918 21:08:46.233773   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:46.233779   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:46.233841   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:46.267223   62061 cri.go:89] found id: ""
	I0918 21:08:46.267250   62061 logs.go:276] 0 containers: []
	W0918 21:08:46.267261   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:46.267268   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:46.267331   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:46.302218   62061 cri.go:89] found id: ""
	I0918 21:08:46.302245   62061 logs.go:276] 0 containers: []
	W0918 21:08:46.302255   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:46.302262   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:46.302326   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:46.334979   62061 cri.go:89] found id: ""
	I0918 21:08:46.335005   62061 logs.go:276] 0 containers: []
	W0918 21:08:46.335015   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:46.335024   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:46.335038   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:46.422905   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:46.422929   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:46.422948   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:46.504831   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:46.504884   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:46.543954   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:46.543981   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:46.595817   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:46.595854   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:49.109984   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:49.124966   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:49.125052   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:49.163272   62061 cri.go:89] found id: ""
	I0918 21:08:49.163302   62061 logs.go:276] 0 containers: []
	W0918 21:08:49.163334   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:49.163342   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:49.163411   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:49.199973   62061 cri.go:89] found id: ""
	I0918 21:08:49.200003   62061 logs.go:276] 0 containers: []
	W0918 21:08:49.200037   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:49.200046   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:49.200111   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:49.235980   62061 cri.go:89] found id: ""
	I0918 21:08:49.236036   62061 logs.go:276] 0 containers: []
	W0918 21:08:49.236050   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:49.236061   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:49.236123   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:49.271162   62061 cri.go:89] found id: ""
	I0918 21:08:49.271386   62061 logs.go:276] 0 containers: []
	W0918 21:08:49.271404   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:49.271413   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:49.271483   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:49.305799   62061 cri.go:89] found id: ""
	I0918 21:08:49.305831   62061 logs.go:276] 0 containers: []
	W0918 21:08:49.305842   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:49.305848   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:49.305899   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:49.342170   62061 cri.go:89] found id: ""
	I0918 21:08:49.342195   62061 logs.go:276] 0 containers: []
	W0918 21:08:49.342202   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:49.342208   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:49.342265   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:49.374071   62061 cri.go:89] found id: ""
	I0918 21:08:49.374098   62061 logs.go:276] 0 containers: []
	W0918 21:08:49.374108   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:49.374116   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:49.374186   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:49.407614   62061 cri.go:89] found id: ""
	I0918 21:08:49.407645   62061 logs.go:276] 0 containers: []
	W0918 21:08:49.407657   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:49.407669   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:49.407685   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:49.458433   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:49.458473   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:49.471490   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:49.471519   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 21:08:47.857798   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:50.355748   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:47.634158   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:49.627085   61659 pod_ready.go:82] duration metric: took 4m0.000936582s for pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace to be "Ready" ...
	E0918 21:08:49.627133   61659 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace to be "Ready" (will not retry!)
	I0918 21:08:49.627156   61659 pod_ready.go:39] duration metric: took 4m7.542795536s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0918 21:08:49.627192   61659 kubeadm.go:597] duration metric: took 4m15.452827752s to restartPrimaryControlPlane
	W0918 21:08:49.627251   61659 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0918 21:08:49.627290   61659 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0918 21:08:49.387560   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:51.887591   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	W0918 21:08:49.533874   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:49.533901   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:49.533916   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:49.611711   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:49.611744   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:52.157839   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:52.170690   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:52.170770   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:52.208737   62061 cri.go:89] found id: ""
	I0918 21:08:52.208767   62061 logs.go:276] 0 containers: []
	W0918 21:08:52.208779   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:52.208787   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:52.208846   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:52.242624   62061 cri.go:89] found id: ""
	I0918 21:08:52.242658   62061 logs.go:276] 0 containers: []
	W0918 21:08:52.242669   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:52.242677   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:52.242742   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:52.280686   62061 cri.go:89] found id: ""
	I0918 21:08:52.280717   62061 logs.go:276] 0 containers: []
	W0918 21:08:52.280728   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:52.280736   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:52.280798   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:52.313748   62061 cri.go:89] found id: ""
	I0918 21:08:52.313777   62061 logs.go:276] 0 containers: []
	W0918 21:08:52.313785   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:52.313791   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:52.313840   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:52.353072   62061 cri.go:89] found id: ""
	I0918 21:08:52.353102   62061 logs.go:276] 0 containers: []
	W0918 21:08:52.353124   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:52.353132   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:52.353195   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:52.390358   62061 cri.go:89] found id: ""
	I0918 21:08:52.390384   62061 logs.go:276] 0 containers: []
	W0918 21:08:52.390392   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:52.390398   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:52.390448   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:52.430039   62061 cri.go:89] found id: ""
	I0918 21:08:52.430068   62061 logs.go:276] 0 containers: []
	W0918 21:08:52.430081   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:52.430088   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:52.430146   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:52.466096   62061 cri.go:89] found id: ""
	I0918 21:08:52.466125   62061 logs.go:276] 0 containers: []
	W0918 21:08:52.466137   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:52.466149   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:52.466162   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:52.518643   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:52.518678   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:52.531768   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:52.531797   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:52.605130   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:52.605163   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:52.605181   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:52.686510   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:52.686553   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:52.356535   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:54.356671   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:54.387306   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:56.887745   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:55.225867   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:55.240537   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:55.240618   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:55.276461   62061 cri.go:89] found id: ""
	I0918 21:08:55.276490   62061 logs.go:276] 0 containers: []
	W0918 21:08:55.276498   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:55.276504   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:55.276564   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:55.313449   62061 cri.go:89] found id: ""
	I0918 21:08:55.313482   62061 logs.go:276] 0 containers: []
	W0918 21:08:55.313493   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:55.313499   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:55.313551   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:55.352436   62061 cri.go:89] found id: ""
	I0918 21:08:55.352475   62061 logs.go:276] 0 containers: []
	W0918 21:08:55.352485   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:55.352492   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:55.352560   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:55.390433   62061 cri.go:89] found id: ""
	I0918 21:08:55.390458   62061 logs.go:276] 0 containers: []
	W0918 21:08:55.390466   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:55.390472   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:55.390529   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:55.428426   62061 cri.go:89] found id: ""
	I0918 21:08:55.428455   62061 logs.go:276] 0 containers: []
	W0918 21:08:55.428465   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:55.428473   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:55.428540   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:55.465587   62061 cri.go:89] found id: ""
	I0918 21:08:55.465622   62061 logs.go:276] 0 containers: []
	W0918 21:08:55.465633   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:55.465641   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:55.465710   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:55.502137   62061 cri.go:89] found id: ""
	I0918 21:08:55.502185   62061 logs.go:276] 0 containers: []
	W0918 21:08:55.502196   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:55.502203   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:55.502266   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:55.535992   62061 cri.go:89] found id: ""
	I0918 21:08:55.536037   62061 logs.go:276] 0 containers: []
	W0918 21:08:55.536050   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:55.536060   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:55.536078   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:55.549267   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:55.549296   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:55.616522   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:55.616556   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:55.616572   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:55.698822   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:55.698874   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:55.740234   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:55.740264   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:58.294876   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:58.307543   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:58.307630   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:58.340435   62061 cri.go:89] found id: ""
	I0918 21:08:58.340467   62061 logs.go:276] 0 containers: []
	W0918 21:08:58.340479   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:58.340486   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:58.340545   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:58.374759   62061 cri.go:89] found id: ""
	I0918 21:08:58.374792   62061 logs.go:276] 0 containers: []
	W0918 21:08:58.374804   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:58.374810   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:58.374863   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:58.411756   62061 cri.go:89] found id: ""
	I0918 21:08:58.411787   62061 logs.go:276] 0 containers: []
	W0918 21:08:58.411797   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:58.411804   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:58.411867   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:58.449787   62061 cri.go:89] found id: ""
	I0918 21:08:58.449820   62061 logs.go:276] 0 containers: []
	W0918 21:08:58.449832   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:58.449839   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:58.449903   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:58.485212   62061 cri.go:89] found id: ""
	I0918 21:08:58.485243   62061 logs.go:276] 0 containers: []
	W0918 21:08:58.485254   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:58.485261   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:58.485331   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:58.528667   62061 cri.go:89] found id: ""
	I0918 21:08:58.528696   62061 logs.go:276] 0 containers: []
	W0918 21:08:58.528706   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:58.528714   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:58.528775   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:58.568201   62061 cri.go:89] found id: ""
	I0918 21:08:58.568231   62061 logs.go:276] 0 containers: []
	W0918 21:08:58.568241   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:58.568247   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:58.568305   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:58.623952   62061 cri.go:89] found id: ""
	I0918 21:08:58.623982   62061 logs.go:276] 0 containers: []
	W0918 21:08:58.623991   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:58.624000   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:58.624011   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:58.665418   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:58.665457   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:58.713464   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:58.713504   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:58.727511   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:58.727552   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:58.799004   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:58.799035   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:58.799050   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:56.856428   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:58.856632   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:00.857301   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:59.386076   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:01.387016   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:01.389457   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:09:01.404658   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:09:01.404734   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:09:01.439139   62061 cri.go:89] found id: ""
	I0918 21:09:01.439170   62061 logs.go:276] 0 containers: []
	W0918 21:09:01.439180   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:09:01.439187   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:09:01.439251   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:09:01.473866   62061 cri.go:89] found id: ""
	I0918 21:09:01.473896   62061 logs.go:276] 0 containers: []
	W0918 21:09:01.473907   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:09:01.473915   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:09:01.473978   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:09:01.506734   62061 cri.go:89] found id: ""
	I0918 21:09:01.506767   62061 logs.go:276] 0 containers: []
	W0918 21:09:01.506777   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:09:01.506783   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:09:01.506836   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:09:01.540123   62061 cri.go:89] found id: ""
	I0918 21:09:01.540152   62061 logs.go:276] 0 containers: []
	W0918 21:09:01.540162   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:09:01.540169   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:09:01.540236   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:09:01.575002   62061 cri.go:89] found id: ""
	I0918 21:09:01.575037   62061 logs.go:276] 0 containers: []
	W0918 21:09:01.575048   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:09:01.575071   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:09:01.575159   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:09:01.612285   62061 cri.go:89] found id: ""
	I0918 21:09:01.612316   62061 logs.go:276] 0 containers: []
	W0918 21:09:01.612327   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:09:01.612335   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:09:01.612399   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:09:01.648276   62061 cri.go:89] found id: ""
	I0918 21:09:01.648304   62061 logs.go:276] 0 containers: []
	W0918 21:09:01.648318   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:09:01.648325   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:09:01.648397   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:09:01.686192   62061 cri.go:89] found id: ""
	I0918 21:09:01.686220   62061 logs.go:276] 0 containers: []
	W0918 21:09:01.686228   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:09:01.686236   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:09:01.686247   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:09:01.738366   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:09:01.738408   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:09:01.752650   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:09:01.752683   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:09:01.825010   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:09:01.825114   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:09:01.825156   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:09:01.907401   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:09:01.907448   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:09:04.448316   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:09:04.461242   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:09:04.461304   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:09:03.357089   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:05.856126   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:03.387563   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:05.389665   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:07.886523   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:04.495951   62061 cri.go:89] found id: ""
	I0918 21:09:04.495984   62061 logs.go:276] 0 containers: []
	W0918 21:09:04.495997   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:09:04.496006   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:09:04.496090   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:09:04.536874   62061 cri.go:89] found id: ""
	I0918 21:09:04.536906   62061 logs.go:276] 0 containers: []
	W0918 21:09:04.536917   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:09:04.536935   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:09:04.537010   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:09:04.572601   62061 cri.go:89] found id: ""
	I0918 21:09:04.572634   62061 logs.go:276] 0 containers: []
	W0918 21:09:04.572646   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:09:04.572653   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:09:04.572716   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:09:04.609783   62061 cri.go:89] found id: ""
	I0918 21:09:04.609817   62061 logs.go:276] 0 containers: []
	W0918 21:09:04.609826   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:09:04.609832   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:09:04.609891   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:09:04.645124   62061 cri.go:89] found id: ""
	I0918 21:09:04.645156   62061 logs.go:276] 0 containers: []
	W0918 21:09:04.645167   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:09:04.645175   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:09:04.645241   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:09:04.680927   62061 cri.go:89] found id: ""
	I0918 21:09:04.680959   62061 logs.go:276] 0 containers: []
	W0918 21:09:04.680971   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:09:04.680978   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:09:04.681038   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:09:04.718920   62061 cri.go:89] found id: ""
	I0918 21:09:04.718954   62061 logs.go:276] 0 containers: []
	W0918 21:09:04.718972   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:09:04.718979   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:09:04.719039   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:09:04.751450   62061 cri.go:89] found id: ""
	I0918 21:09:04.751488   62061 logs.go:276] 0 containers: []
	W0918 21:09:04.751500   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:09:04.751511   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:09:04.751529   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:09:04.788969   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:09:04.789001   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:09:04.837638   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:09:04.837673   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:09:04.853673   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:09:04.853706   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:09:04.923670   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:09:04.923703   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:09:04.923717   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:09:07.507979   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:09:07.521581   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:09:07.521656   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:09:07.556280   62061 cri.go:89] found id: ""
	I0918 21:09:07.556310   62061 logs.go:276] 0 containers: []
	W0918 21:09:07.556321   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:09:07.556329   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:09:07.556397   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:09:07.590775   62061 cri.go:89] found id: ""
	I0918 21:09:07.590802   62061 logs.go:276] 0 containers: []
	W0918 21:09:07.590810   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:09:07.590815   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:09:07.590862   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:09:07.625971   62061 cri.go:89] found id: ""
	I0918 21:09:07.626000   62061 logs.go:276] 0 containers: []
	W0918 21:09:07.626010   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:09:07.626018   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:09:07.626080   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:09:07.660083   62061 cri.go:89] found id: ""
	I0918 21:09:07.660116   62061 logs.go:276] 0 containers: []
	W0918 21:09:07.660128   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:09:07.660136   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:09:07.660201   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:09:07.694165   62061 cri.go:89] found id: ""
	I0918 21:09:07.694195   62061 logs.go:276] 0 containers: []
	W0918 21:09:07.694204   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:09:07.694211   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:09:07.694269   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:09:07.728298   62061 cri.go:89] found id: ""
	I0918 21:09:07.728328   62061 logs.go:276] 0 containers: []
	W0918 21:09:07.728338   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:09:07.728349   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:09:07.728409   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:09:07.762507   62061 cri.go:89] found id: ""
	I0918 21:09:07.762546   62061 logs.go:276] 0 containers: []
	W0918 21:09:07.762555   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:09:07.762565   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:09:07.762745   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:09:07.798006   62061 cri.go:89] found id: ""
	I0918 21:09:07.798038   62061 logs.go:276] 0 containers: []
	W0918 21:09:07.798049   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:09:07.798059   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:09:07.798074   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:09:07.849222   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:09:07.849267   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:09:07.865023   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:09:07.865055   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:09:07.938810   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:09:07.938830   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:09:07.938842   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:09:08.018885   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:09:08.018924   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:09:07.856987   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:10.356244   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:09.886563   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:12.386922   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:10.562565   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:09:10.575854   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:09:10.575941   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:09:10.612853   62061 cri.go:89] found id: ""
	I0918 21:09:10.612884   62061 logs.go:276] 0 containers: []
	W0918 21:09:10.612896   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:09:10.612906   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:09:10.612966   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:09:10.648672   62061 cri.go:89] found id: ""
	I0918 21:09:10.648703   62061 logs.go:276] 0 containers: []
	W0918 21:09:10.648713   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:09:10.648720   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:09:10.648780   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:09:10.682337   62061 cri.go:89] found id: ""
	I0918 21:09:10.682370   62061 logs.go:276] 0 containers: []
	W0918 21:09:10.682388   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:09:10.682394   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:09:10.682445   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:09:10.718221   62061 cri.go:89] found id: ""
	I0918 21:09:10.718257   62061 logs.go:276] 0 containers: []
	W0918 21:09:10.718269   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:09:10.718277   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:09:10.718345   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:09:10.751570   62061 cri.go:89] found id: ""
	I0918 21:09:10.751600   62061 logs.go:276] 0 containers: []
	W0918 21:09:10.751609   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:09:10.751615   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:09:10.751706   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:09:10.792125   62061 cri.go:89] found id: ""
	I0918 21:09:10.792158   62061 logs.go:276] 0 containers: []
	W0918 21:09:10.792170   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:09:10.792178   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:09:10.792245   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:09:10.830699   62061 cri.go:89] found id: ""
	I0918 21:09:10.830733   62061 logs.go:276] 0 containers: []
	W0918 21:09:10.830742   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:09:10.830748   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:09:10.830820   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:09:10.869625   62061 cri.go:89] found id: ""
	I0918 21:09:10.869655   62061 logs.go:276] 0 containers: []
	W0918 21:09:10.869663   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:09:10.869672   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:09:10.869684   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:09:10.921340   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:09:10.921378   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:09:10.937032   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:09:10.937071   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:09:11.006248   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:09:11.006276   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:09:11.006291   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:09:11.086458   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:09:11.086496   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:09:13.628824   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:09:13.642499   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:09:13.642578   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:09:13.677337   62061 cri.go:89] found id: ""
	I0918 21:09:13.677368   62061 logs.go:276] 0 containers: []
	W0918 21:09:13.677378   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:09:13.677385   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:09:13.677449   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:09:13.717317   62061 cri.go:89] found id: ""
	I0918 21:09:13.717341   62061 logs.go:276] 0 containers: []
	W0918 21:09:13.717353   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:09:13.717358   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:09:13.717419   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:09:13.752151   62061 cri.go:89] found id: ""
	I0918 21:09:13.752181   62061 logs.go:276] 0 containers: []
	W0918 21:09:13.752189   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:09:13.752195   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:09:13.752253   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:09:13.791946   62061 cri.go:89] found id: ""
	I0918 21:09:13.791975   62061 logs.go:276] 0 containers: []
	W0918 21:09:13.791983   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:09:13.791989   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:09:13.792069   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:09:13.825173   62061 cri.go:89] found id: ""
	I0918 21:09:13.825198   62061 logs.go:276] 0 containers: []
	W0918 21:09:13.825209   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:09:13.825216   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:09:13.825276   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:09:13.859801   62061 cri.go:89] found id: ""
	I0918 21:09:13.859834   62061 logs.go:276] 0 containers: []
	W0918 21:09:13.859846   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:09:13.859853   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:09:13.859907   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:09:13.895413   62061 cri.go:89] found id: ""
	I0918 21:09:13.895445   62061 logs.go:276] 0 containers: []
	W0918 21:09:13.895456   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:09:13.895463   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:09:13.895515   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:09:13.929048   62061 cri.go:89] found id: ""
	I0918 21:09:13.929075   62061 logs.go:276] 0 containers: []
	W0918 21:09:13.929083   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:09:13.929092   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:09:13.929104   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:09:13.981579   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:09:13.981613   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:09:13.995642   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:09:13.995679   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:09:14.061762   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:09:14.061782   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:09:14.061793   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:09:14.139623   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:09:14.139659   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:09:16.001617   61659 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.374302262s)
	I0918 21:09:16.001692   61659 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0918 21:09:16.019307   61659 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0918 21:09:16.029547   61659 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0918 21:09:16.039132   61659 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0918 21:09:16.039154   61659 kubeadm.go:157] found existing configuration files:
	
	I0918 21:09:16.039196   61659 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0918 21:09:16.048506   61659 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0918 21:09:16.048567   61659 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0918 21:09:16.058120   61659 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0918 21:09:16.067686   61659 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0918 21:09:16.067746   61659 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0918 21:09:16.077707   61659 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0918 21:09:16.087089   61659 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0918 21:09:16.087149   61659 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0918 21:09:16.097040   61659 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0918 21:09:16.106448   61659 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0918 21:09:16.106514   61659 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0918 21:09:16.116060   61659 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0918 21:09:16.159721   61659 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0918 21:09:16.159797   61659 kubeadm.go:310] [preflight] Running pre-flight checks
	I0918 21:09:16.266821   61659 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0918 21:09:16.266968   61659 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0918 21:09:16.267122   61659 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0918 21:09:16.275249   61659 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0918 21:09:12.855996   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:14.857296   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:16.277228   61659 out.go:235]   - Generating certificates and keys ...
	I0918 21:09:16.277333   61659 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0918 21:09:16.277419   61659 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0918 21:09:16.277534   61659 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0918 21:09:16.277617   61659 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0918 21:09:16.277709   61659 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0918 21:09:16.277790   61659 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0918 21:09:16.277904   61659 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0918 21:09:16.278013   61659 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0918 21:09:16.278131   61659 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0918 21:09:16.278265   61659 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0918 21:09:16.278331   61659 kubeadm.go:310] [certs] Using the existing "sa" key
	I0918 21:09:16.278401   61659 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0918 21:09:16.516263   61659 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0918 21:09:16.708220   61659 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0918 21:09:17.009820   61659 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0918 21:09:17.108871   61659 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0918 21:09:17.211014   61659 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0918 21:09:17.211658   61659 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0918 21:09:17.216626   61659 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0918 21:09:14.887068   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:16.888350   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:16.686071   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:09:16.699769   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:09:16.699844   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:09:16.735242   62061 cri.go:89] found id: ""
	I0918 21:09:16.735277   62061 logs.go:276] 0 containers: []
	W0918 21:09:16.735288   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:09:16.735298   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:09:16.735371   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:09:16.770027   62061 cri.go:89] found id: ""
	I0918 21:09:16.770052   62061 logs.go:276] 0 containers: []
	W0918 21:09:16.770060   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:09:16.770066   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:09:16.770114   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:09:16.806525   62061 cri.go:89] found id: ""
	I0918 21:09:16.806555   62061 logs.go:276] 0 containers: []
	W0918 21:09:16.806563   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:09:16.806569   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:09:16.806636   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:09:16.851146   62061 cri.go:89] found id: ""
	I0918 21:09:16.851183   62061 logs.go:276] 0 containers: []
	W0918 21:09:16.851194   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:09:16.851204   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:09:16.851271   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:09:16.890718   62061 cri.go:89] found id: ""
	I0918 21:09:16.890748   62061 logs.go:276] 0 containers: []
	W0918 21:09:16.890760   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:09:16.890767   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:09:16.890824   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:09:16.928971   62061 cri.go:89] found id: ""
	I0918 21:09:16.929002   62061 logs.go:276] 0 containers: []
	W0918 21:09:16.929012   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:09:16.929020   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:09:16.929079   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:09:16.970965   62061 cri.go:89] found id: ""
	I0918 21:09:16.970999   62061 logs.go:276] 0 containers: []
	W0918 21:09:16.971011   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:09:16.971019   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:09:16.971089   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:09:17.006427   62061 cri.go:89] found id: ""
	I0918 21:09:17.006453   62061 logs.go:276] 0 containers: []
	W0918 21:09:17.006461   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:09:17.006468   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:09:17.006480   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:09:17.058690   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:09:17.058733   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:09:17.072593   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:09:17.072623   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:09:17.143046   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:09:17.143071   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:09:17.143082   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:09:17.236943   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:09:17.236989   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:09:17.357978   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:19.858268   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:17.218406   61659 out.go:235]   - Booting up control plane ...
	I0918 21:09:17.218544   61659 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0918 21:09:17.218662   61659 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0918 21:09:17.218765   61659 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0918 21:09:17.238076   61659 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0918 21:09:17.248123   61659 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0918 21:09:17.248226   61659 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0918 21:09:17.379685   61659 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0918 21:09:17.379840   61659 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0918 21:09:18.380791   61659 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001279947s
	I0918 21:09:18.380906   61659 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0918 21:09:18.380783   61740 pod_ready.go:82] duration metric: took 4m0.000205104s for pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace to be "Ready" ...
	E0918 21:09:18.380812   61740 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace to be "Ready" (will not retry!)
	I0918 21:09:18.380832   61740 pod_ready.go:39] duration metric: took 4m15.618837854s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0918 21:09:18.380875   61740 kubeadm.go:597] duration metric: took 4m23.646410044s to restartPrimaryControlPlane
	W0918 21:09:18.380936   61740 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0918 21:09:18.380966   61740 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0918 21:09:23.386705   61659 kubeadm.go:310] [api-check] The API server is healthy after 5.005706581s
	I0918 21:09:23.402316   61659 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0918 21:09:23.422786   61659 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0918 21:09:23.462099   61659 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0918 21:09:23.462373   61659 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-828868 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0918 21:09:23.484276   61659 kubeadm.go:310] [bootstrap-token] Using token: 2vcil8.e13zhc1806da8knq
	I0918 21:09:19.782266   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:09:19.799433   62061 kubeadm.go:597] duration metric: took 4m2.126311085s to restartPrimaryControlPlane
	W0918 21:09:19.799513   62061 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0918 21:09:19.799543   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0918 21:09:20.910192   62061 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.110625463s)
	I0918 21:09:20.910273   62061 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0918 21:09:20.925992   62061 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0918 21:09:20.936876   62061 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0918 21:09:20.947170   62061 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0918 21:09:20.947199   62061 kubeadm.go:157] found existing configuration files:
	
	I0918 21:09:20.947255   62061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0918 21:09:20.958140   62061 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0918 21:09:20.958240   62061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0918 21:09:20.968351   62061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0918 21:09:20.978669   62061 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0918 21:09:20.978735   62061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0918 21:09:20.989765   62061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0918 21:09:20.999842   62061 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0918 21:09:20.999903   62061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0918 21:09:21.009945   62061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0918 21:09:21.020229   62061 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0918 21:09:21.020289   62061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0918 21:09:21.030583   62061 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0918 21:09:21.271399   62061 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0918 21:09:23.485978   61659 out.go:235]   - Configuring RBAC rules ...
	I0918 21:09:23.486112   61659 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0918 21:09:23.499163   61659 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0918 21:09:23.510754   61659 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0918 21:09:23.514794   61659 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0918 21:09:23.519247   61659 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0918 21:09:23.530424   61659 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0918 21:09:23.799778   61659 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0918 21:09:24.223469   61659 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0918 21:09:24.794852   61659 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0918 21:09:24.794886   61659 kubeadm.go:310] 
	I0918 21:09:24.794951   61659 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0918 21:09:24.794963   61659 kubeadm.go:310] 
	I0918 21:09:24.795058   61659 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0918 21:09:24.795073   61659 kubeadm.go:310] 
	I0918 21:09:24.795105   61659 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0918 21:09:24.795192   61659 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0918 21:09:24.795255   61659 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0918 21:09:24.795285   61659 kubeadm.go:310] 
	I0918 21:09:24.795366   61659 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0918 21:09:24.795376   61659 kubeadm.go:310] 
	I0918 21:09:24.795416   61659 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0918 21:09:24.795425   61659 kubeadm.go:310] 
	I0918 21:09:24.795497   61659 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0918 21:09:24.795580   61659 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0918 21:09:24.795678   61659 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0918 21:09:24.795692   61659 kubeadm.go:310] 
	I0918 21:09:24.795779   61659 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0918 21:09:24.795891   61659 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0918 21:09:24.795901   61659 kubeadm.go:310] 
	I0918 21:09:24.796174   61659 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token 2vcil8.e13zhc1806da8knq \
	I0918 21:09:24.796299   61659 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:8ad46bf288e8cc2226f5a929a768e6c95cddecc97ffc4016be27ef0987b1e36f \
	I0918 21:09:24.796350   61659 kubeadm.go:310] 	--control-plane 
	I0918 21:09:24.796367   61659 kubeadm.go:310] 
	I0918 21:09:24.796479   61659 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0918 21:09:24.796487   61659 kubeadm.go:310] 
	I0918 21:09:24.796594   61659 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token 2vcil8.e13zhc1806da8knq \
	I0918 21:09:24.796738   61659 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:8ad46bf288e8cc2226f5a929a768e6c95cddecc97ffc4016be27ef0987b1e36f 
	I0918 21:09:24.797359   61659 kubeadm.go:310] W0918 21:09:16.134048    2561 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0918 21:09:24.797679   61659 kubeadm.go:310] W0918 21:09:16.134873    2561 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0918 21:09:24.797832   61659 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0918 21:09:24.797858   61659 cni.go:84] Creating CNI manager for ""
	I0918 21:09:24.797872   61659 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0918 21:09:24.799953   61659 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0918 21:09:22.357582   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:24.857037   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:24.801259   61659 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0918 21:09:24.812277   61659 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0918 21:09:24.834749   61659 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0918 21:09:24.834855   61659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 21:09:24.834871   61659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-828868 minikube.k8s.io/updated_at=2024_09_18T21_09_24_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=85073601a832bd4bbda5d11fa91feafff6ec6b91 minikube.k8s.io/name=default-k8s-diff-port-828868 minikube.k8s.io/primary=true
	I0918 21:09:25.022861   61659 ops.go:34] apiserver oom_adj: -16
	I0918 21:09:25.022930   61659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 21:09:25.523400   61659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 21:09:26.023075   61659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 21:09:26.523330   61659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 21:09:27.023179   61659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 21:09:27.523363   61659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 21:09:28.023150   61659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 21:09:28.523941   61659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 21:09:29.023542   61659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 21:09:29.143581   61659 kubeadm.go:1113] duration metric: took 4.308796493s to wait for elevateKubeSystemPrivileges
	I0918 21:09:29.143614   61659 kubeadm.go:394] duration metric: took 4m55.024616229s to StartCluster
	I0918 21:09:29.143632   61659 settings.go:142] acquiring lock: {Name:mk6ae95a69dcbe00c28846409e9f46945a88de2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 21:09:29.143727   61659 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19667-7671/kubeconfig
	I0918 21:09:29.145397   61659 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/kubeconfig: {Name:mk3a3eecadb0164dfdd5d3c4392b5b473a2a9bb0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 21:09:29.145680   61659 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.109 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0918 21:09:29.145767   61659 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0918 21:09:29.145851   61659 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-828868"
	I0918 21:09:29.145869   61659 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-828868"
	I0918 21:09:29.145877   61659 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-828868"
	W0918 21:09:29.145885   61659 addons.go:243] addon storage-provisioner should already be in state true
	I0918 21:09:29.145896   61659 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-828868"
	I0918 21:09:29.145898   61659 config.go:182] Loaded profile config "default-k8s-diff-port-828868": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0918 21:09:29.145900   61659 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-828868"
	I0918 21:09:29.145920   61659 host.go:66] Checking if "default-k8s-diff-port-828868" exists ...
	I0918 21:09:29.145932   61659 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-828868"
	W0918 21:09:29.145946   61659 addons.go:243] addon metrics-server should already be in state true
	I0918 21:09:29.145980   61659 host.go:66] Checking if "default-k8s-diff-port-828868" exists ...
	I0918 21:09:29.146234   61659 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:09:29.146238   61659 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:09:29.146282   61659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:09:29.146297   61659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:09:29.146372   61659 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:09:29.146389   61659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:09:29.147645   61659 out.go:177] * Verifying Kubernetes components...
	I0918 21:09:29.149574   61659 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 21:09:29.164779   61659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43359
	I0918 21:09:29.165002   61659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37857
	I0918 21:09:29.165390   61659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44573
	I0918 21:09:29.165682   61659 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:09:29.165749   61659 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:09:29.166233   61659 main.go:141] libmachine: Using API Version  1
	I0918 21:09:29.166254   61659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:09:29.166270   61659 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:09:29.166388   61659 main.go:141] libmachine: Using API Version  1
	I0918 21:09:29.166414   61659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:09:29.166544   61659 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:09:29.166711   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetState
	I0918 21:09:29.166730   61659 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:09:29.166894   61659 main.go:141] libmachine: Using API Version  1
	I0918 21:09:29.166918   61659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:09:29.167381   61659 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:09:29.167425   61659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:09:29.168144   61659 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:09:29.168578   61659 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:09:29.168614   61659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:09:29.171072   61659 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-828868"
	W0918 21:09:29.171101   61659 addons.go:243] addon default-storageclass should already be in state true
	I0918 21:09:29.171133   61659 host.go:66] Checking if "default-k8s-diff-port-828868" exists ...
	I0918 21:09:29.171534   61659 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:09:29.171597   61659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:09:29.186305   61659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42211
	I0918 21:09:29.186318   61659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45153
	I0918 21:09:29.186838   61659 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:09:29.186847   61659 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:09:29.187353   61659 main.go:141] libmachine: Using API Version  1
	I0918 21:09:29.187367   61659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:09:29.187373   61659 main.go:141] libmachine: Using API Version  1
	I0918 21:09:29.187403   61659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:09:29.187840   61659 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:09:29.187855   61659 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:09:29.188085   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetState
	I0918 21:09:29.188106   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetState
	I0918 21:09:29.193453   61659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33471
	I0918 21:09:29.193905   61659 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:09:29.194477   61659 main.go:141] libmachine: Using API Version  1
	I0918 21:09:29.194513   61659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:09:29.194981   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .DriverName
	I0918 21:09:29.195155   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .DriverName
	I0918 21:09:29.195254   61659 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:09:29.195807   61659 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:09:29.195839   61659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:09:29.197102   61659 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0918 21:09:29.197111   61659 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 21:09:29.198425   61659 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0918 21:09:29.198458   61659 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0918 21:09:29.198486   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHHostname
	I0918 21:09:29.198589   61659 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0918 21:09:29.198605   61659 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0918 21:09:29.198622   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHHostname
	I0918 21:09:29.202110   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:09:29.202236   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:09:29.202634   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:39:06", ip: ""} in network mk-default-k8s-diff-port-828868: {Iface:virbr4 ExpiryTime:2024-09-18 22:04:19 +0000 UTC Type:0 Mac:52:54:00:c0:39:06 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:default-k8s-diff-port-828868 Clientid:01:52:54:00:c0:39:06}
	I0918 21:09:29.202656   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:39:06", ip: ""} in network mk-default-k8s-diff-port-828868: {Iface:virbr4 ExpiryTime:2024-09-18 22:04:19 +0000 UTC Type:0 Mac:52:54:00:c0:39:06 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:default-k8s-diff-port-828868 Clientid:01:52:54:00:c0:39:06}
	I0918 21:09:29.202661   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined IP address 192.168.50.109 and MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:09:29.202677   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined IP address 192.168.50.109 and MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:09:29.202895   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHPort
	I0918 21:09:29.202942   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHPort
	I0918 21:09:29.203084   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHKeyPath
	I0918 21:09:29.203129   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHKeyPath
	I0918 21:09:29.203268   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHUsername
	I0918 21:09:29.203275   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHUsername
	I0918 21:09:29.203393   61659 sshutil.go:53] new ssh client: &{IP:192.168.50.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/default-k8s-diff-port-828868/id_rsa Username:docker}
	I0918 21:09:29.203407   61659 sshutil.go:53] new ssh client: &{IP:192.168.50.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/default-k8s-diff-port-828868/id_rsa Username:docker}
	I0918 21:09:29.215178   61659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41565
	I0918 21:09:29.215727   61659 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:09:29.216301   61659 main.go:141] libmachine: Using API Version  1
	I0918 21:09:29.216325   61659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:09:29.216669   61659 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:09:29.216873   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetState
	I0918 21:09:29.218689   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .DriverName
	I0918 21:09:29.218980   61659 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0918 21:09:29.218994   61659 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0918 21:09:29.219009   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHHostname
	I0918 21:09:29.222542   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:09:29.222963   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:39:06", ip: ""} in network mk-default-k8s-diff-port-828868: {Iface:virbr4 ExpiryTime:2024-09-18 22:04:19 +0000 UTC Type:0 Mac:52:54:00:c0:39:06 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:default-k8s-diff-port-828868 Clientid:01:52:54:00:c0:39:06}
	I0918 21:09:29.222985   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined IP address 192.168.50.109 and MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:09:29.223398   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHPort
	I0918 21:09:29.223632   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHKeyPath
	I0918 21:09:29.223820   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHUsername
	I0918 21:09:29.224004   61659 sshutil.go:53] new ssh client: &{IP:192.168.50.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/default-k8s-diff-port-828868/id_rsa Username:docker}
	I0918 21:09:29.360595   61659 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0918 21:09:29.381254   61659 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-828868" to be "Ready" ...
	I0918 21:09:29.390526   61659 node_ready.go:49] node "default-k8s-diff-port-828868" has status "Ready":"True"
	I0918 21:09:29.390554   61659 node_ready.go:38] duration metric: took 9.264338ms for node "default-k8s-diff-port-828868" to be "Ready" ...
	I0918 21:09:29.390565   61659 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0918 21:09:29.395433   61659 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-828868" in "kube-system" namespace to be "Ready" ...
	I0918 21:09:29.468492   61659 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0918 21:09:29.526515   61659 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0918 21:09:29.527137   61659 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0918 21:09:29.527162   61659 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0918 21:09:29.570619   61659 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0918 21:09:29.570651   61659 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0918 21:09:29.631944   61659 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0918 21:09:29.631975   61659 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0918 21:09:29.653905   61659 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0918 21:09:30.402107   61659 main.go:141] libmachine: Making call to close driver server
	I0918 21:09:30.402145   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .Close
	I0918 21:09:30.402142   61659 main.go:141] libmachine: Making call to close driver server
	I0918 21:09:30.402167   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .Close
	I0918 21:09:30.402466   61659 main.go:141] libmachine: Successfully made call to close driver server
	I0918 21:09:30.402480   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | Closing plugin on server side
	I0918 21:09:30.402493   61659 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 21:09:30.402503   61659 main.go:141] libmachine: Making call to close driver server
	I0918 21:09:30.402512   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .Close
	I0918 21:09:30.402537   61659 main.go:141] libmachine: Successfully made call to close driver server
	I0918 21:09:30.402546   61659 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 21:09:30.402555   61659 main.go:141] libmachine: Making call to close driver server
	I0918 21:09:30.402571   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .Close
	I0918 21:09:30.402733   61659 main.go:141] libmachine: Successfully made call to close driver server
	I0918 21:09:30.402773   61659 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 21:09:30.402921   61659 main.go:141] libmachine: Successfully made call to close driver server
	I0918 21:09:30.402941   61659 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 21:09:30.435323   61659 main.go:141] libmachine: Making call to close driver server
	I0918 21:09:30.435366   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .Close
	I0918 21:09:30.435659   61659 main.go:141] libmachine: Successfully made call to close driver server
	I0918 21:09:30.435683   61659 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 21:09:30.975630   61659 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.321677798s)
	I0918 21:09:30.975716   61659 main.go:141] libmachine: Making call to close driver server
	I0918 21:09:30.975733   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .Close
	I0918 21:09:30.976074   61659 main.go:141] libmachine: Successfully made call to close driver server
	I0918 21:09:30.976094   61659 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 21:09:30.976105   61659 main.go:141] libmachine: Making call to close driver server
	I0918 21:09:30.976113   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .Close
	I0918 21:09:30.976369   61659 main.go:141] libmachine: Successfully made call to close driver server
	I0918 21:09:30.976395   61659 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 21:09:30.976406   61659 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-828868"
	I0918 21:09:30.978345   61659 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0918 21:09:26.857486   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:29.356533   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:31.358269   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:30.979731   61659 addons.go:510] duration metric: took 1.833970994s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0918 21:09:31.403620   61659 pod_ready.go:103] pod "etcd-default-k8s-diff-port-828868" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:33.857960   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:36.357454   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:33.902436   61659 pod_ready.go:103] pod "etcd-default-k8s-diff-port-828868" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:36.401889   61659 pod_ready.go:103] pod "etcd-default-k8s-diff-port-828868" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:36.902002   61659 pod_ready.go:93] pod "etcd-default-k8s-diff-port-828868" in "kube-system" namespace has status "Ready":"True"
	I0918 21:09:36.902026   61659 pod_ready.go:82] duration metric: took 7.506563242s for pod "etcd-default-k8s-diff-port-828868" in "kube-system" namespace to be "Ready" ...
	I0918 21:09:36.902035   61659 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-828868" in "kube-system" namespace to be "Ready" ...
	I0918 21:09:36.907689   61659 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-828868" in "kube-system" namespace has status "Ready":"True"
	I0918 21:09:36.907713   61659 pod_ready.go:82] duration metric: took 5.672631ms for pod "kube-apiserver-default-k8s-diff-port-828868" in "kube-system" namespace to be "Ready" ...
	I0918 21:09:36.907722   61659 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-828868" in "kube-system" namespace to be "Ready" ...
	I0918 21:09:38.914521   61659 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-828868" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:39.414168   61659 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-828868" in "kube-system" namespace has status "Ready":"True"
	I0918 21:09:39.414196   61659 pod_ready.go:82] duration metric: took 2.506467297s for pod "kube-controller-manager-default-k8s-diff-port-828868" in "kube-system" namespace to be "Ready" ...
	I0918 21:09:39.414207   61659 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-hf5mm" in "kube-system" namespace to be "Ready" ...
	I0918 21:09:39.419030   61659 pod_ready.go:93] pod "kube-proxy-hf5mm" in "kube-system" namespace has status "Ready":"True"
	I0918 21:09:39.419053   61659 pod_ready.go:82] duration metric: took 4.838856ms for pod "kube-proxy-hf5mm" in "kube-system" namespace to be "Ready" ...
	I0918 21:09:39.419061   61659 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-828868" in "kube-system" namespace to be "Ready" ...
	I0918 21:09:39.423321   61659 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-828868" in "kube-system" namespace has status "Ready":"True"
	I0918 21:09:39.423341   61659 pod_ready.go:82] duration metric: took 4.274601ms for pod "kube-scheduler-default-k8s-diff-port-828868" in "kube-system" namespace to be "Ready" ...
	I0918 21:09:39.423348   61659 pod_ready.go:39] duration metric: took 10.03277208s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0918 21:09:39.423360   61659 api_server.go:52] waiting for apiserver process to appear ...
	I0918 21:09:39.423407   61659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:09:39.438272   61659 api_server.go:72] duration metric: took 10.292559807s to wait for apiserver process to appear ...
	I0918 21:09:39.438297   61659 api_server.go:88] waiting for apiserver healthz status ...
	I0918 21:09:39.438315   61659 api_server.go:253] Checking apiserver healthz at https://192.168.50.109:8444/healthz ...
	I0918 21:09:39.443342   61659 api_server.go:279] https://192.168.50.109:8444/healthz returned 200:
	ok
	I0918 21:09:39.444238   61659 api_server.go:141] control plane version: v1.31.1
	I0918 21:09:39.444262   61659 api_server.go:131] duration metric: took 5.958748ms to wait for apiserver health ...
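The healthz step logged above is an HTTPS GET against the apiserver's /healthz endpoint on port 8444, expecting a 200 with body "ok". Below is a minimal sketch of such a probe; how minikube authenticates the TLS connection is not visible in the log, so the example skips certificate verification purely to stay self-contained, which is an assumption rather than the tool's actual behaviour.

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

// checkHealthz performs the same kind of probe as the log above:
// GET /healthz and require HTTP 200 (body is typically "ok").
func checkHealthz(url string) error {
	client := &http.Client{
		Transport: &http.Transport{
			// Assumption: skip verification to keep the sketch standalone;
			// a real check would trust the cluster CA instead.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
	}
	fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
	return nil
}

func main() {
	_ = checkHealthz("https://192.168.50.109:8444/healthz")
}
```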
	I0918 21:09:39.444270   61659 system_pods.go:43] waiting for kube-system pods to appear ...
	I0918 21:09:39.449914   61659 system_pods.go:59] 9 kube-system pods found
	I0918 21:09:39.449938   61659 system_pods.go:61] "coredns-7c65d6cfc9-8gz5v" [bfcbe99d-9a3d-4da8-a976-58cb9d9bf7ac] Running
	I0918 21:09:39.449942   61659 system_pods.go:61] "coredns-7c65d6cfc9-shx5p" [2d6d25ab-9a90-490a-911b-bf396605fa88] Running
	I0918 21:09:39.449947   61659 system_pods.go:61] "etcd-default-k8s-diff-port-828868" [d9772e35-df33-4b90-af88-7479a81776a2] Running
	I0918 21:09:39.449950   61659 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-828868" [9ca05dc8-1e15-4f6e-9855-1e1b6376fbfc] Running
	I0918 21:09:39.449954   61659 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-828868" [b4196fd3-4882-4e31-b418-2f5f24d2610d] Running
	I0918 21:09:39.449957   61659 system_pods.go:61] "kube-proxy-hf5mm" [3fb0a166-a925-4486-9695-6db05ae704b8] Running
	I0918 21:09:39.449962   61659 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-828868" [161b8693-3400-43a4-b361-cce5749dd2eb] Running
	I0918 21:09:39.449969   61659 system_pods.go:61] "metrics-server-6867b74b74-hdt52" [bf3aa7d0-a121-4a2c-92dd-2c79c6cbaa4a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0918 21:09:39.449976   61659 system_pods.go:61] "storage-provisioner" [8b4e1077-e23f-4262-b83e-989506798531] Running
	I0918 21:09:39.449983   61659 system_pods.go:74] duration metric: took 5.708013ms to wait for pod list to return data ...
	I0918 21:09:39.449992   61659 default_sa.go:34] waiting for default service account to be created ...
	I0918 21:09:39.453256   61659 default_sa.go:45] found service account: "default"
	I0918 21:09:39.453278   61659 default_sa.go:55] duration metric: took 3.281012ms for default service account to be created ...
	I0918 21:09:39.453287   61659 system_pods.go:116] waiting for k8s-apps to be running ...
	I0918 21:09:39.502200   61659 system_pods.go:86] 9 kube-system pods found
	I0918 21:09:39.502231   61659 system_pods.go:89] "coredns-7c65d6cfc9-8gz5v" [bfcbe99d-9a3d-4da8-a976-58cb9d9bf7ac] Running
	I0918 21:09:39.502237   61659 system_pods.go:89] "coredns-7c65d6cfc9-shx5p" [2d6d25ab-9a90-490a-911b-bf396605fa88] Running
	I0918 21:09:39.502241   61659 system_pods.go:89] "etcd-default-k8s-diff-port-828868" [d9772e35-df33-4b90-af88-7479a81776a2] Running
	I0918 21:09:39.502246   61659 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-828868" [9ca05dc8-1e15-4f6e-9855-1e1b6376fbfc] Running
	I0918 21:09:39.502250   61659 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-828868" [b4196fd3-4882-4e31-b418-2f5f24d2610d] Running
	I0918 21:09:39.502253   61659 system_pods.go:89] "kube-proxy-hf5mm" [3fb0a166-a925-4486-9695-6db05ae704b8] Running
	I0918 21:09:39.502256   61659 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-828868" [161b8693-3400-43a4-b361-cce5749dd2eb] Running
	I0918 21:09:39.502262   61659 system_pods.go:89] "metrics-server-6867b74b74-hdt52" [bf3aa7d0-a121-4a2c-92dd-2c79c6cbaa4a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0918 21:09:39.502266   61659 system_pods.go:89] "storage-provisioner" [8b4e1077-e23f-4262-b83e-989506798531] Running
	I0918 21:09:39.502276   61659 system_pods.go:126] duration metric: took 48.981872ms to wait for k8s-apps to be running ...
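The "system_pods" and "k8s-apps running" checks above enumerate the kube-system pods and report their readiness (note the metrics-server pod still Pending, which is what the failing tests later wait on). A hedged client-go sketch of that enumeration follows; the kubeconfig path is illustrative, not necessarily the one minikube's verifier uses.

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: reuse the in-VM kubeconfig path seen elsewhere in the log.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		// Pending pods (e.g. metrics-server above) show up here as not Running.
		fmt.Printf("%s: phase=%s running=%v\n",
			p.Name, p.Status.Phase, p.Status.Phase == corev1.PodRunning)
	}
}
```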
	I0918 21:09:39.502291   61659 system_svc.go:44] waiting for kubelet service to be running ....
	I0918 21:09:39.502367   61659 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0918 21:09:39.517514   61659 system_svc.go:56] duration metric: took 15.213443ms WaitForService to wait for kubelet
	I0918 21:09:39.517549   61659 kubeadm.go:582] duration metric: took 10.37183977s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0918 21:09:39.517573   61659 node_conditions.go:102] verifying NodePressure condition ...
	I0918 21:09:39.700593   61659 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0918 21:09:39.700616   61659 node_conditions.go:123] node cpu capacity is 2
	I0918 21:09:39.700626   61659 node_conditions.go:105] duration metric: took 183.048537ms to run NodePressure ...
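The NodePressure verification above reads node capacity (ephemeral storage 17734596Ki, 2 CPUs) and node conditions. The snippet below is a rough client-go equivalent of that read, not minikube's node_conditions.go; the kubeconfig path is again an assumption.

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // illustrative path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
		// Pressure conditions are what a "NodePressure" style check cares about.
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeMemoryPressure || c.Type == corev1.NodeDiskPressure {
				fmt.Printf("  %s=%s\n", c.Type, c.Status)
			}
		}
	}
}
```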
	I0918 21:09:39.700637   61659 start.go:241] waiting for startup goroutines ...
	I0918 21:09:39.700643   61659 start.go:246] waiting for cluster config update ...
	I0918 21:09:39.700653   61659 start.go:255] writing updated cluster config ...
	I0918 21:09:39.700899   61659 ssh_runner.go:195] Run: rm -f paused
	I0918 21:09:39.750890   61659 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0918 21:09:39.753015   61659 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-828868" cluster and "default" namespace by default
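The final line for this profile reports the client/cluster version comparison (kubectl 1.31.1 against cluster 1.31.1, minor skew 0). A small sketch of querying the cluster side of that comparison via the discovery API is shown below; the kubeconfig path is the integration-run path from the log and the comparison against the local kubectl binary is left out.

```go
package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("",
		"/home/jenkins/minikube-integration/19667-7671/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	v, err := cs.Discovery().ServerVersion()
	if err != nil {
		panic(err)
	}
	// Compare v.Minor against the local kubectl minor version to compute the skew.
	fmt.Printf("cluster: %s (%s.%s)\n", v.GitVersion, v.Major, v.Minor)
}
```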
	I0918 21:09:38.857481   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:41.356307   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:44.581125   61740 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.200138695s)
	I0918 21:09:44.581198   61740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0918 21:09:44.597051   61740 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0918 21:09:44.607195   61740 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0918 21:09:44.617135   61740 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0918 21:09:44.617160   61740 kubeadm.go:157] found existing configuration files:
	
	I0918 21:09:44.617203   61740 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0918 21:09:44.626216   61740 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0918 21:09:44.626278   61740 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0918 21:09:44.635161   61740 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0918 21:09:44.643767   61740 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0918 21:09:44.643828   61740 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0918 21:09:44.652663   61740 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0918 21:09:44.662045   61740 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0918 21:09:44.662107   61740 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0918 21:09:44.671165   61740 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0918 21:09:44.680397   61740 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0918 21:09:44.680469   61740 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
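The stale-config cleanup above greps each of admin.conf, kubelet.conf, controller-manager.conf and scheduler.conf for the control-plane endpoint and removes the file when the grep fails (here the files simply do not exist after the kubeadm reset). The following sketch reproduces that loop locally with os/exec; minikube itself issues the same commands remotely through its ssh_runner, so this is an approximation, not the actual code path.

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	confs := []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"}
	for _, c := range confs {
		path := "/etc/kubernetes/" + c
		// grep exits non-zero when the endpoint is absent or the file is missing,
		// which is exactly the "Process exited with status 2" case in the log.
		if err := exec.Command("sudo", "grep", endpoint, path).Run(); err != nil {
			fmt.Printf("%q not found in %s - removing\n", endpoint, path)
			_ = exec.Command("sudo", "rm", "-f", path).Run()
		}
	}
}
```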
	I0918 21:09:44.689168   61740 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0918 21:09:44.733425   61740 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0918 21:09:44.733528   61740 kubeadm.go:310] [preflight] Running pre-flight checks
	I0918 21:09:44.846369   61740 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0918 21:09:44.846477   61740 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0918 21:09:44.846612   61740 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0918 21:09:44.855581   61740 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0918 21:09:44.857599   61740 out.go:235]   - Generating certificates and keys ...
	I0918 21:09:44.857709   61740 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0918 21:09:44.857777   61740 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0918 21:09:44.857851   61740 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0918 21:09:44.857942   61740 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0918 21:09:44.858061   61740 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0918 21:09:44.858137   61740 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0918 21:09:44.858243   61740 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0918 21:09:44.858339   61740 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0918 21:09:44.858409   61740 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0918 21:09:44.858509   61740 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0918 21:09:44.858547   61740 kubeadm.go:310] [certs] Using the existing "sa" key
	I0918 21:09:44.858615   61740 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0918 21:09:45.048967   61740 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0918 21:09:45.229640   61740 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0918 21:09:45.397078   61740 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0918 21:09:45.722116   61740 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0918 21:09:45.850285   61740 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0918 21:09:45.850902   61740 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0918 21:09:45.853909   61740 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0918 21:09:43.357136   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:45.858056   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:45.855803   61740 out.go:235]   - Booting up control plane ...
	I0918 21:09:45.855931   61740 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0918 21:09:45.857227   61740 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0918 21:09:45.858855   61740 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0918 21:09:45.877299   61740 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0918 21:09:45.883953   61740 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0918 21:09:45.884043   61740 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0918 21:09:46.015368   61740 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0918 21:09:46.015509   61740 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0918 21:09:47.016371   61740 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001062473s
	I0918 21:09:47.016465   61740 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0918 21:09:48.357057   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:50.856124   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:51.518808   61740 kubeadm.go:310] [api-check] The API server is healthy after 4.502250914s
	I0918 21:09:51.532148   61740 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0918 21:09:51.549560   61740 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0918 21:09:51.579801   61740 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0918 21:09:51.580053   61740 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-255556 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0918 21:09:51.598605   61740 kubeadm.go:310] [bootstrap-token] Using token: iilbxo.n0c6mbjmeqehlfso
	I0918 21:09:51.600035   61740 out.go:235]   - Configuring RBAC rules ...
	I0918 21:09:51.600200   61740 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0918 21:09:51.614672   61740 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0918 21:09:51.626186   61740 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0918 21:09:51.629722   61740 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0918 21:09:51.634757   61740 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0918 21:09:51.642778   61740 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0918 21:09:51.931051   61740 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0918 21:09:52.359085   61740 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0918 21:09:52.930191   61740 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0918 21:09:52.931033   61740 kubeadm.go:310] 
	I0918 21:09:52.931100   61740 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0918 21:09:52.931108   61740 kubeadm.go:310] 
	I0918 21:09:52.931178   61740 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0918 21:09:52.931186   61740 kubeadm.go:310] 
	I0918 21:09:52.931208   61740 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0918 21:09:52.931313   61740 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0918 21:09:52.931400   61740 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0918 21:09:52.931435   61740 kubeadm.go:310] 
	I0918 21:09:52.931524   61740 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0918 21:09:52.931537   61740 kubeadm.go:310] 
	I0918 21:09:52.931601   61740 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0918 21:09:52.931627   61740 kubeadm.go:310] 
	I0918 21:09:52.931721   61740 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0918 21:09:52.931825   61740 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0918 21:09:52.931896   61740 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0918 21:09:52.931903   61740 kubeadm.go:310] 
	I0918 21:09:52.931974   61740 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0918 21:09:52.932073   61740 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0918 21:09:52.932081   61740 kubeadm.go:310] 
	I0918 21:09:52.932154   61740 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token iilbxo.n0c6mbjmeqehlfso \
	I0918 21:09:52.932243   61740 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:8ad46bf288e8cc2226f5a929a768e6c95cddecc97ffc4016be27ef0987b1e36f \
	I0918 21:09:52.932289   61740 kubeadm.go:310] 	--control-plane 
	I0918 21:09:52.932296   61740 kubeadm.go:310] 
	I0918 21:09:52.932365   61740 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0918 21:09:52.932372   61740 kubeadm.go:310] 
	I0918 21:09:52.932438   61740 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token iilbxo.n0c6mbjmeqehlfso \
	I0918 21:09:52.932568   61740 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:8ad46bf288e8cc2226f5a929a768e6c95cddecc97ffc4016be27ef0987b1e36f 
	I0918 21:09:52.934280   61740 kubeadm.go:310] W0918 21:09:44.705437    2512 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0918 21:09:52.934656   61740 kubeadm.go:310] W0918 21:09:44.706219    2512 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0918 21:09:52.934841   61740 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0918 21:09:52.934861   61740 cni.go:84] Creating CNI manager for ""
	I0918 21:09:52.934871   61740 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0918 21:09:52.937656   61740 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0918 21:09:52.939150   61740 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0918 21:09:52.950774   61740 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
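The 496-byte payload copied to /etc/cni/net.d/1-k8s.conflist is not reproduced in the log. The sketch below writes a typical bridge-plus-portmap conflist of the kind the bridge CNI plugin expects; the field values (bridge name, subnet, flags) are illustrative and should not be read as minikube's exact file.

```go
package main

import "os"

// Illustrative bridge CNI config, not a byte-for-byte copy of what was scp'd above.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": {"portMappings": true}
    }
  ]
}`

func main() {
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0644); err != nil {
		panic(err)
	}
}
```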
	I0918 21:09:52.973081   61740 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0918 21:09:52.973161   61740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 21:09:52.973210   61740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-255556 minikube.k8s.io/updated_at=2024_09_18T21_09_52_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=85073601a832bd4bbda5d11fa91feafff6ec6b91 minikube.k8s.io/name=embed-certs-255556 minikube.k8s.io/primary=true
	I0918 21:09:53.012402   61740 ops.go:34] apiserver oom_adj: -16
	I0918 21:09:53.180983   61740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 21:09:52.857161   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:55.357515   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:53.681852   61740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 21:09:54.181892   61740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 21:09:54.681768   61740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 21:09:55.181353   61740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 21:09:55.681336   61740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 21:09:56.181389   61740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 21:09:56.681574   61740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 21:09:57.181050   61740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 21:09:57.258766   61740 kubeadm.go:1113] duration metric: took 4.285672952s to wait for elevateKubeSystemPrivileges
	I0918 21:09:57.258809   61740 kubeadm.go:394] duration metric: took 5m2.572577294s to StartCluster
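The repeated "kubectl get sa default" invocations above (roughly every half second until 21:09:57) are a poll for the default service account to appear before the cluster-admin binding can take effect; the log times it at about 4.3s. A simple stand-in for that poll is sketched below; the 2-minute deadline is an assumption, only the kubectl path and 500ms cadence come from the log.

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	kubectl := "/var/lib/minikube/binaries/v1.31.1/kubectl"
	deadline := time.Now().Add(2 * time.Minute) // assumed timeout
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", kubectl, "get", "sa", "default",
			"--kubeconfig=/var/lib/minikube/kubeconfig")
		if err := cmd.Run(); err == nil {
			fmt.Println("default service account is present")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for default service account")
}
```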
	I0918 21:09:57.258831   61740 settings.go:142] acquiring lock: {Name:mk6ae95a69dcbe00c28846409e9f46945a88de2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 21:09:57.258925   61740 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19667-7671/kubeconfig
	I0918 21:09:57.260757   61740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/kubeconfig: {Name:mk3a3eecadb0164dfdd5d3c4392b5b473a2a9bb0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 21:09:57.261072   61740 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.21 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0918 21:09:57.261168   61740 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0918 21:09:57.261275   61740 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-255556"
	I0918 21:09:57.261302   61740 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-255556"
	W0918 21:09:57.261314   61740 addons.go:243] addon storage-provisioner should already be in state true
	I0918 21:09:57.261344   61740 host.go:66] Checking if "embed-certs-255556" exists ...
	I0918 21:09:57.261337   61740 addons.go:69] Setting default-storageclass=true in profile "embed-certs-255556"
	I0918 21:09:57.261366   61740 config.go:182] Loaded profile config "embed-certs-255556": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0918 21:09:57.261363   61740 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-255556"
	I0918 21:09:57.261354   61740 addons.go:69] Setting metrics-server=true in profile "embed-certs-255556"
	I0918 21:09:57.261413   61740 addons.go:234] Setting addon metrics-server=true in "embed-certs-255556"
	W0918 21:09:57.261423   61740 addons.go:243] addon metrics-server should already be in state true
	I0918 21:09:57.261450   61740 host.go:66] Checking if "embed-certs-255556" exists ...
	I0918 21:09:57.261751   61740 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:09:57.261773   61740 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:09:57.261797   61740 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:09:57.261805   61740 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:09:57.261827   61740 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:09:57.261913   61740 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:09:57.263016   61740 out.go:177] * Verifying Kubernetes components...
	I0918 21:09:57.264732   61740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 21:09:57.279143   61740 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41499
	I0918 21:09:57.279741   61740 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38525
	I0918 21:09:57.279948   61740 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:09:57.280150   61740 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:09:57.280518   61740 main.go:141] libmachine: Using API Version  1
	I0918 21:09:57.280536   61740 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:09:57.280662   61740 main.go:141] libmachine: Using API Version  1
	I0918 21:09:57.280699   61740 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:09:57.280899   61740 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:09:57.281014   61740 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:09:57.281224   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetState
	I0918 21:09:57.281401   61740 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45081
	I0918 21:09:57.281609   61740 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:09:57.281669   61740 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:09:57.281824   61740 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:09:57.282291   61740 main.go:141] libmachine: Using API Version  1
	I0918 21:09:57.282316   61740 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:09:57.282655   61740 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:09:57.283166   61740 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:09:57.283198   61740 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:09:57.284993   61740 addons.go:234] Setting addon default-storageclass=true in "embed-certs-255556"
	W0918 21:09:57.285013   61740 addons.go:243] addon default-storageclass should already be in state true
	I0918 21:09:57.285042   61740 host.go:66] Checking if "embed-certs-255556" exists ...
	I0918 21:09:57.285400   61740 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:09:57.285441   61740 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:09:57.298996   61740 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39709
	I0918 21:09:57.299572   61740 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:09:57.300427   61740 main.go:141] libmachine: Using API Version  1
	I0918 21:09:57.300453   61740 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:09:57.300865   61740 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:09:57.301062   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetState
	I0918 21:09:57.301827   61740 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40441
	I0918 21:09:57.302410   61740 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:09:57.302948   61740 main.go:141] libmachine: Using API Version  1
	I0918 21:09:57.302968   61740 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:09:57.303284   61740 main.go:141] libmachine: (embed-certs-255556) Calling .DriverName
	I0918 21:09:57.303333   61740 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:09:57.303512   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetState
	I0918 21:09:57.304409   61740 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43125
	I0918 21:09:57.304836   61740 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:09:57.305379   61740 main.go:141] libmachine: Using API Version  1
	I0918 21:09:57.305393   61740 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:09:57.305423   61740 main.go:141] libmachine: (embed-certs-255556) Calling .DriverName
	I0918 21:09:57.305449   61740 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 21:09:57.305705   61740 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:09:57.306221   61740 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:09:57.306270   61740 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:09:57.306972   61740 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0918 21:09:57.307226   61740 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0918 21:09:57.307247   61740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0918 21:09:57.307261   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHHostname
	I0918 21:09:57.308757   61740 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0918 21:09:57.308778   61740 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0918 21:09:57.308798   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHHostname
	I0918 21:09:57.311608   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:09:57.312311   61740 main.go:141] libmachine: (embed-certs-255556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:c2:b7", ip: ""} in network mk-embed-certs-255556: {Iface:virbr1 ExpiryTime:2024-09-18 22:04:39 +0000 UTC Type:0 Mac:52:54:00:e8:c2:b7 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:embed-certs-255556 Clientid:01:52:54:00:e8:c2:b7}
	I0918 21:09:57.312346   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined IP address 192.168.39.21 and MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:09:57.312529   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHPort
	I0918 21:09:57.313308   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:09:57.313344   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHKeyPath
	I0918 21:09:57.313533   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHUsername
	I0918 21:09:57.313707   61740 sshutil.go:53] new ssh client: &{IP:192.168.39.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/embed-certs-255556/id_rsa Username:docker}
	I0918 21:09:57.313964   61740 main.go:141] libmachine: (embed-certs-255556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:c2:b7", ip: ""} in network mk-embed-certs-255556: {Iface:virbr1 ExpiryTime:2024-09-18 22:04:39 +0000 UTC Type:0 Mac:52:54:00:e8:c2:b7 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:embed-certs-255556 Clientid:01:52:54:00:e8:c2:b7}
	I0918 21:09:57.313991   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined IP address 192.168.39.21 and MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:09:57.314181   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHPort
	I0918 21:09:57.314357   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHKeyPath
	I0918 21:09:57.314517   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHUsername
	I0918 21:09:57.314644   61740 sshutil.go:53] new ssh client: &{IP:192.168.39.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/embed-certs-255556/id_rsa Username:docker}
	I0918 21:09:57.325307   61740 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45641
	I0918 21:09:57.325800   61740 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:09:57.326390   61740 main.go:141] libmachine: Using API Version  1
	I0918 21:09:57.326416   61740 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:09:57.326850   61740 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:09:57.327116   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetState
	I0918 21:09:57.328954   61740 main.go:141] libmachine: (embed-certs-255556) Calling .DriverName
	I0918 21:09:57.329179   61740 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0918 21:09:57.329197   61740 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0918 21:09:57.329216   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHHostname
	I0918 21:09:57.332176   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:09:57.332608   61740 main.go:141] libmachine: (embed-certs-255556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:c2:b7", ip: ""} in network mk-embed-certs-255556: {Iface:virbr1 ExpiryTime:2024-09-18 22:04:39 +0000 UTC Type:0 Mac:52:54:00:e8:c2:b7 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:embed-certs-255556 Clientid:01:52:54:00:e8:c2:b7}
	I0918 21:09:57.332633   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined IP address 192.168.39.21 and MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:09:57.332803   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHPort
	I0918 21:09:57.332991   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHKeyPath
	I0918 21:09:57.333132   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHUsername
	I0918 21:09:57.333254   61740 sshutil.go:53] new ssh client: &{IP:192.168.39.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/embed-certs-255556/id_rsa Username:docker}
	I0918 21:09:57.463767   61740 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0918 21:09:57.480852   61740 node_ready.go:35] waiting up to 6m0s for node "embed-certs-255556" to be "Ready" ...
	I0918 21:09:57.492198   61740 node_ready.go:49] node "embed-certs-255556" has status "Ready":"True"
	I0918 21:09:57.492221   61740 node_ready.go:38] duration metric: took 11.335784ms for node "embed-certs-255556" to be "Ready" ...
	I0918 21:09:57.492229   61740 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0918 21:09:57.496607   61740 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-255556" in "kube-system" namespace to be "Ready" ...
	I0918 21:09:57.627581   61740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0918 21:09:57.631704   61740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0918 21:09:57.647778   61740 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0918 21:09:57.647799   61740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0918 21:09:57.686558   61740 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0918 21:09:57.686589   61740 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0918 21:09:57.726206   61740 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0918 21:09:57.726230   61740 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0918 21:09:57.831932   61740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0918 21:09:58.026530   61740 main.go:141] libmachine: Making call to close driver server
	I0918 21:09:58.026554   61740 main.go:141] libmachine: (embed-certs-255556) Calling .Close
	I0918 21:09:58.026862   61740 main.go:141] libmachine: Successfully made call to close driver server
	I0918 21:09:58.026885   61740 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 21:09:58.026895   61740 main.go:141] libmachine: Making call to close driver server
	I0918 21:09:58.026903   61740 main.go:141] libmachine: (embed-certs-255556) Calling .Close
	I0918 21:09:58.027205   61740 main.go:141] libmachine: Successfully made call to close driver server
	I0918 21:09:58.027260   61740 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 21:09:58.027269   61740 main.go:141] libmachine: (embed-certs-255556) DBG | Closing plugin on server side
	I0918 21:09:58.038140   61740 main.go:141] libmachine: Making call to close driver server
	I0918 21:09:58.038172   61740 main.go:141] libmachine: (embed-certs-255556) Calling .Close
	I0918 21:09:58.038506   61740 main.go:141] libmachine: Successfully made call to close driver server
	I0918 21:09:58.038555   61740 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 21:09:58.038512   61740 main.go:141] libmachine: (embed-certs-255556) DBG | Closing plugin on server side
	I0918 21:09:58.551479   61740 main.go:141] libmachine: Making call to close driver server
	I0918 21:09:58.551518   61740 main.go:141] libmachine: (embed-certs-255556) Calling .Close
	I0918 21:09:58.551851   61740 main.go:141] libmachine: Successfully made call to close driver server
	I0918 21:09:58.551870   61740 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 21:09:58.551885   61740 main.go:141] libmachine: Making call to close driver server
	I0918 21:09:58.551893   61740 main.go:141] libmachine: (embed-certs-255556) Calling .Close
	I0918 21:09:58.552242   61740 main.go:141] libmachine: Successfully made call to close driver server
	I0918 21:09:58.552307   61740 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 21:09:58.552326   61740 main.go:141] libmachine: (embed-certs-255556) DBG | Closing plugin on server side
	I0918 21:09:59.078469   61740 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.246485041s)
	I0918 21:09:59.078532   61740 main.go:141] libmachine: Making call to close driver server
	I0918 21:09:59.078550   61740 main.go:141] libmachine: (embed-certs-255556) Calling .Close
	I0918 21:09:59.078883   61740 main.go:141] libmachine: Successfully made call to close driver server
	I0918 21:09:59.078906   61740 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 21:09:59.078917   61740 main.go:141] libmachine: Making call to close driver server
	I0918 21:09:59.078924   61740 main.go:141] libmachine: (embed-certs-255556) Calling .Close
	I0918 21:09:59.079143   61740 main.go:141] libmachine: Successfully made call to close driver server
	I0918 21:09:59.079157   61740 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 21:09:59.079168   61740 addons.go:475] Verifying addon metrics-server=true in "embed-certs-255556"
	I0918 21:09:59.080861   61740 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0918 21:09:57.357619   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:59.357838   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:59.082145   61740 addons.go:510] duration metric: took 1.82098849s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0918 21:09:59.526424   61740 pod_ready.go:93] pod "etcd-embed-certs-255556" in "kube-system" namespace has status "Ready":"True"
	I0918 21:09:59.526445   61740 pod_ready.go:82] duration metric: took 2.02981732s for pod "etcd-embed-certs-255556" in "kube-system" namespace to be "Ready" ...
	I0918 21:09:59.526455   61740 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-255556" in "kube-system" namespace to be "Ready" ...
	I0918 21:10:00.033589   61740 pod_ready.go:93] pod "kube-apiserver-embed-certs-255556" in "kube-system" namespace has status "Ready":"True"
	I0918 21:10:00.033616   61740 pod_ready.go:82] duration metric: took 507.155125ms for pod "kube-apiserver-embed-certs-255556" in "kube-system" namespace to be "Ready" ...
	I0918 21:10:00.033630   61740 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-255556" in "kube-system" namespace to be "Ready" ...
	I0918 21:10:02.039884   61740 pod_ready.go:103] pod "kube-controller-manager-embed-certs-255556" in "kube-system" namespace has status "Ready":"False"
	I0918 21:10:04.040760   61740 pod_ready.go:103] pod "kube-controller-manager-embed-certs-255556" in "kube-system" namespace has status "Ready":"False"
	I0918 21:10:04.541799   61740 pod_ready.go:93] pod "kube-controller-manager-embed-certs-255556" in "kube-system" namespace has status "Ready":"True"
	I0918 21:10:04.541821   61740 pod_ready.go:82] duration metric: took 4.508184279s for pod "kube-controller-manager-embed-certs-255556" in "kube-system" namespace to be "Ready" ...
	I0918 21:10:04.541830   61740 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-255556" in "kube-system" namespace to be "Ready" ...
	I0918 21:10:04.550008   61740 pod_ready.go:93] pod "kube-scheduler-embed-certs-255556" in "kube-system" namespace has status "Ready":"True"
	I0918 21:10:04.550038   61740 pod_ready.go:82] duration metric: took 8.201765ms for pod "kube-scheduler-embed-certs-255556" in "kube-system" namespace to be "Ready" ...
	I0918 21:10:04.550046   61740 pod_ready.go:39] duration metric: took 7.057808243s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0918 21:10:04.550060   61740 api_server.go:52] waiting for apiserver process to appear ...
	I0918 21:10:04.550110   61740 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:10:04.566882   61740 api_server.go:72] duration metric: took 7.305767858s to wait for apiserver process to appear ...
	I0918 21:10:04.566914   61740 api_server.go:88] waiting for apiserver healthz status ...
	I0918 21:10:04.566937   61740 api_server.go:253] Checking apiserver healthz at https://192.168.39.21:8443/healthz ...
	I0918 21:10:04.571495   61740 api_server.go:279] https://192.168.39.21:8443/healthz returned 200:
	ok
	I0918 21:10:04.572590   61740 api_server.go:141] control plane version: v1.31.1
	I0918 21:10:04.572618   61740 api_server.go:131] duration metric: took 5.69747ms to wait for apiserver health ...
	I0918 21:10:04.572625   61740 system_pods.go:43] waiting for kube-system pods to appear ...
	I0918 21:10:04.578979   61740 system_pods.go:59] 9 kube-system pods found
	I0918 21:10:04.579019   61740 system_pods.go:61] "coredns-7c65d6cfc9-ptxbt" [798665e6-6f4a-4ba5-b4f9-3192d3f76f03] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0918 21:10:04.579030   61740 system_pods.go:61] "coredns-7c65d6cfc9-vgmtd" [1224ebf9-1b24-413a-b779-093acfcfb61e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0918 21:10:04.579039   61740 system_pods.go:61] "etcd-embed-certs-255556" [a796cd6d-5b0c-4f73-9dfe-23c552cb2a43] Running
	I0918 21:10:04.579046   61740 system_pods.go:61] "kube-apiserver-embed-certs-255556" [f1926542-940b-4ba9-94cf-23265d42874c] Running
	I0918 21:10:04.579051   61740 system_pods.go:61] "kube-controller-manager-embed-certs-255556" [cd0bc9bb-ca2c-435f-81d5-2d4ea9f32d85] Running
	I0918 21:10:04.579057   61740 system_pods.go:61] "kube-proxy-m7gxh" [47d72a32-7efc-4155-a890-0ddc620af6e0] Running
	I0918 21:10:04.579067   61740 system_pods.go:61] "kube-scheduler-embed-certs-255556" [091c79cd-bc76-42b3-9635-96fdbe3ecfff] Running
	I0918 21:10:04.579076   61740 system_pods.go:61] "metrics-server-6867b74b74-sr6hq" [8867f8fa-687b-4105-8ace-18af50195726] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0918 21:10:04.579085   61740 system_pods.go:61] "storage-provisioner" [dcc9b789-237c-4d92-96c6-2c23d2c401c0] Running
	I0918 21:10:04.579095   61740 system_pods.go:74] duration metric: took 6.462809ms to wait for pod list to return data ...
	I0918 21:10:04.579106   61740 default_sa.go:34] waiting for default service account to be created ...
	I0918 21:10:04.583020   61740 default_sa.go:45] found service account: "default"
	I0918 21:10:04.583059   61740 default_sa.go:55] duration metric: took 3.946388ms for default service account to be created ...
	I0918 21:10:04.583072   61740 system_pods.go:116] waiting for k8s-apps to be running ...
	I0918 21:10:04.589946   61740 system_pods.go:86] 9 kube-system pods found
	I0918 21:10:04.589991   61740 system_pods.go:89] "coredns-7c65d6cfc9-ptxbt" [798665e6-6f4a-4ba5-b4f9-3192d3f76f03] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0918 21:10:04.590004   61740 system_pods.go:89] "coredns-7c65d6cfc9-vgmtd" [1224ebf9-1b24-413a-b779-093acfcfb61e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0918 21:10:04.590012   61740 system_pods.go:89] "etcd-embed-certs-255556" [a796cd6d-5b0c-4f73-9dfe-23c552cb2a43] Running
	I0918 21:10:04.590019   61740 system_pods.go:89] "kube-apiserver-embed-certs-255556" [f1926542-940b-4ba9-94cf-23265d42874c] Running
	I0918 21:10:04.590025   61740 system_pods.go:89] "kube-controller-manager-embed-certs-255556" [cd0bc9bb-ca2c-435f-81d5-2d4ea9f32d85] Running
	I0918 21:10:04.590030   61740 system_pods.go:89] "kube-proxy-m7gxh" [47d72a32-7efc-4155-a890-0ddc620af6e0] Running
	I0918 21:10:04.590035   61740 system_pods.go:89] "kube-scheduler-embed-certs-255556" [091c79cd-bc76-42b3-9635-96fdbe3ecfff] Running
	I0918 21:10:04.590044   61740 system_pods.go:89] "metrics-server-6867b74b74-sr6hq" [8867f8fa-687b-4105-8ace-18af50195726] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0918 21:10:04.590051   61740 system_pods.go:89] "storage-provisioner" [dcc9b789-237c-4d92-96c6-2c23d2c401c0] Running
	I0918 21:10:04.590061   61740 system_pods.go:126] duration metric: took 6.981726ms to wait for k8s-apps to be running ...
	I0918 21:10:04.590070   61740 system_svc.go:44] waiting for kubelet service to be running ....
	I0918 21:10:04.590127   61740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0918 21:10:04.605893   61740 system_svc.go:56] duration metric: took 15.815591ms WaitForService to wait for kubelet
	I0918 21:10:04.605921   61740 kubeadm.go:582] duration metric: took 7.344815015s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0918 21:10:04.605939   61740 node_conditions.go:102] verifying NodePressure condition ...
	I0918 21:10:04.609551   61740 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0918 21:10:04.609577   61740 node_conditions.go:123] node cpu capacity is 2
	I0918 21:10:04.609588   61740 node_conditions.go:105] duration metric: took 3.645116ms to run NodePressure ...
	I0918 21:10:04.609598   61740 start.go:241] waiting for startup goroutines ...
	I0918 21:10:04.609605   61740 start.go:246] waiting for cluster config update ...
	I0918 21:10:04.609614   61740 start.go:255] writing updated cluster config ...
	I0918 21:10:04.609870   61740 ssh_runner.go:195] Run: rm -f paused
	I0918 21:10:04.664479   61740 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0918 21:10:04.666589   61740 out.go:177] * Done! kubectl is now configured to use "embed-certs-255556" cluster and "default" namespace by default
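	The block above is the standard post-start verification minikube runs for the embed-certs-255556 profile: an apiserver /healthz probe, a kube-system pod inventory, a default service-account check, and a kubelet unit check. A minimal sketch of repeating the same checks by hand, assuming kubectl is already pointed at the embed-certs-255556 cluster as the final log line states:

	    # query the apiserver health endpoint through the configured kubeconfig (the log saw HTTP 200 "ok")
	    kubectl get --raw /healthz
	    # list the kube-system pods the readiness wait inspected
	    kubectl -n kube-system get pods
	    # confirm the default service account exists
	    kubectl -n default get serviceaccount default
	    # confirm the kubelet unit is active on the node (run through minikube ssh)
	    minikube -p embed-certs-255556 ssh -- sudo systemctl is-active kubelet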
	I0918 21:10:01.858109   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:10:03.356912   61273 pod_ready.go:82] duration metric: took 4m0.006778464s for pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace to be "Ready" ...
	E0918 21:10:03.356944   61273 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0918 21:10:03.356952   61273 pod_ready.go:39] duration metric: took 4m0.807781101s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0918 21:10:03.356967   61273 api_server.go:52] waiting for apiserver process to appear ...
	I0918 21:10:03.356994   61273 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:10:03.357047   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:10:03.410066   61273 cri.go:89] found id: "a70652dce4d80c6fac020d63cff7054a835d183ac4b910157a5bf4eb8cabcaa1"
	I0918 21:10:03.410096   61273 cri.go:89] found id: ""
	I0918 21:10:03.410104   61273 logs.go:276] 1 containers: [a70652dce4d80c6fac020d63cff7054a835d183ac4b910157a5bf4eb8cabcaa1]
	I0918 21:10:03.410168   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:03.414236   61273 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:10:03.414309   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:10:03.449405   61273 cri.go:89] found id: "a913074a00723bc43f97ba625ebbdfded7dca1ce664ace9a13173adb96d2068f"
	I0918 21:10:03.449426   61273 cri.go:89] found id: ""
	I0918 21:10:03.449434   61273 logs.go:276] 1 containers: [a913074a00723bc43f97ba625ebbdfded7dca1ce664ace9a13173adb96d2068f]
	I0918 21:10:03.449492   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:03.453335   61273 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:10:03.453403   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:10:03.487057   61273 cri.go:89] found id: "76b9e08a213468abdd23d7b3998b55d6544e2fbae9205ec6076ef0cd7a80e7be"
	I0918 21:10:03.487081   61273 cri.go:89] found id: ""
	I0918 21:10:03.487089   61273 logs.go:276] 1 containers: [76b9e08a213468abdd23d7b3998b55d6544e2fbae9205ec6076ef0cd7a80e7be]
	I0918 21:10:03.487137   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:03.491027   61273 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:10:03.491101   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:10:03.529636   61273 cri.go:89] found id: "c372970fdf265433e1668377d0214de6608cb39ebfd706cfad9624cba2f9b481"
	I0918 21:10:03.529665   61273 cri.go:89] found id: ""
	I0918 21:10:03.529675   61273 logs.go:276] 1 containers: [c372970fdf265433e1668377d0214de6608cb39ebfd706cfad9624cba2f9b481]
	I0918 21:10:03.529738   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:03.535042   61273 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:10:03.535121   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:10:03.572913   61273 cri.go:89] found id: "0257280a0d21d52e2a996c2f717880bdb9d9f6fe90a4b82cc62222c941ba7d84"
	I0918 21:10:03.572942   61273 cri.go:89] found id: ""
	I0918 21:10:03.572952   61273 logs.go:276] 1 containers: [0257280a0d21d52e2a996c2f717880bdb9d9f6fe90a4b82cc62222c941ba7d84]
	I0918 21:10:03.573012   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:03.576945   61273 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:10:03.577021   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:10:03.612785   61273 cri.go:89] found id: "785dc83056153e5eeb22216ac7520f529b71f1b3ae42e9540a5c8930f1fe62c2"
	I0918 21:10:03.612805   61273 cri.go:89] found id: ""
	I0918 21:10:03.612812   61273 logs.go:276] 1 containers: [785dc83056153e5eeb22216ac7520f529b71f1b3ae42e9540a5c8930f1fe62c2]
	I0918 21:10:03.612868   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:03.616855   61273 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:10:03.616924   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:10:03.650330   61273 cri.go:89] found id: ""
	I0918 21:10:03.650359   61273 logs.go:276] 0 containers: []
	W0918 21:10:03.650370   61273 logs.go:278] No container was found matching "kindnet"
	I0918 21:10:03.650378   61273 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0918 21:10:03.650446   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0918 21:10:03.698078   61273 cri.go:89] found id: "b44d6f4b44928aa498c4375f783c165714b4a5959a1fa709712ad39a412d6d9f"
	I0918 21:10:03.698106   61273 cri.go:89] found id: "38c14df0554157969a97390590d6e9c46765fd6fc3e286f7d2e3094838e0aff5"
	I0918 21:10:03.698113   61273 cri.go:89] found id: ""
	I0918 21:10:03.698122   61273 logs.go:276] 2 containers: [b44d6f4b44928aa498c4375f783c165714b4a5959a1fa709712ad39a412d6d9f 38c14df0554157969a97390590d6e9c46765fd6fc3e286f7d2e3094838e0aff5]
	I0918 21:10:03.698184   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:03.702311   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:03.705974   61273 logs.go:123] Gathering logs for kubelet ...
	I0918 21:10:03.705996   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:10:03.771043   61273 logs.go:123] Gathering logs for kube-scheduler [c372970fdf265433e1668377d0214de6608cb39ebfd706cfad9624cba2f9b481] ...
	I0918 21:10:03.771097   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c372970fdf265433e1668377d0214de6608cb39ebfd706cfad9624cba2f9b481"
	I0918 21:10:03.813148   61273 logs.go:123] Gathering logs for storage-provisioner [b44d6f4b44928aa498c4375f783c165714b4a5959a1fa709712ad39a412d6d9f] ...
	I0918 21:10:03.813175   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b44d6f4b44928aa498c4375f783c165714b4a5959a1fa709712ad39a412d6d9f"
	I0918 21:10:03.864553   61273 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:10:03.864580   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:10:04.345484   61273 logs.go:123] Gathering logs for container status ...
	I0918 21:10:04.345531   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:10:04.390777   61273 logs.go:123] Gathering logs for dmesg ...
	I0918 21:10:04.390818   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:10:04.409877   61273 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:10:04.409918   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 21:10:04.536579   61273 logs.go:123] Gathering logs for kube-apiserver [a70652dce4d80c6fac020d63cff7054a835d183ac4b910157a5bf4eb8cabcaa1] ...
	I0918 21:10:04.536609   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a70652dce4d80c6fac020d63cff7054a835d183ac4b910157a5bf4eb8cabcaa1"
	I0918 21:10:04.595640   61273 logs.go:123] Gathering logs for etcd [a913074a00723bc43f97ba625ebbdfded7dca1ce664ace9a13173adb96d2068f] ...
	I0918 21:10:04.595680   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a913074a00723bc43f97ba625ebbdfded7dca1ce664ace9a13173adb96d2068f"
	I0918 21:10:04.642332   61273 logs.go:123] Gathering logs for coredns [76b9e08a213468abdd23d7b3998b55d6544e2fbae9205ec6076ef0cd7a80e7be] ...
	I0918 21:10:04.642377   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 76b9e08a213468abdd23d7b3998b55d6544e2fbae9205ec6076ef0cd7a80e7be"
	I0918 21:10:04.679525   61273 logs.go:123] Gathering logs for kube-proxy [0257280a0d21d52e2a996c2f717880bdb9d9f6fe90a4b82cc62222c941ba7d84] ...
	I0918 21:10:04.679551   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0257280a0d21d52e2a996c2f717880bdb9d9f6fe90a4b82cc62222c941ba7d84"
	I0918 21:10:04.721130   61273 logs.go:123] Gathering logs for kube-controller-manager [785dc83056153e5eeb22216ac7520f529b71f1b3ae42e9540a5c8930f1fe62c2] ...
	I0918 21:10:04.721164   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 785dc83056153e5eeb22216ac7520f529b71f1b3ae42e9540a5c8930f1fe62c2"
	I0918 21:10:04.789527   61273 logs.go:123] Gathering logs for storage-provisioner [38c14df0554157969a97390590d6e9c46765fd6fc3e286f7d2e3094838e0aff5] ...
	I0918 21:10:04.789558   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38c14df0554157969a97390590d6e9c46765fd6fc3e286f7d2e3094838e0aff5"
	I0918 21:10:07.334989   61273 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:10:07.352382   61273 api_server.go:72] duration metric: took 4m12.031791528s to wait for apiserver process to appear ...
	I0918 21:10:07.352411   61273 api_server.go:88] waiting for apiserver healthz status ...
	I0918 21:10:07.352446   61273 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:10:07.352494   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:10:07.404709   61273 cri.go:89] found id: "a70652dce4d80c6fac020d63cff7054a835d183ac4b910157a5bf4eb8cabcaa1"
	I0918 21:10:07.404739   61273 cri.go:89] found id: ""
	I0918 21:10:07.404748   61273 logs.go:276] 1 containers: [a70652dce4d80c6fac020d63cff7054a835d183ac4b910157a5bf4eb8cabcaa1]
	I0918 21:10:07.404815   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:07.409205   61273 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:10:07.409273   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:10:07.450409   61273 cri.go:89] found id: "a913074a00723bc43f97ba625ebbdfded7dca1ce664ace9a13173adb96d2068f"
	I0918 21:10:07.450429   61273 cri.go:89] found id: ""
	I0918 21:10:07.450438   61273 logs.go:276] 1 containers: [a913074a00723bc43f97ba625ebbdfded7dca1ce664ace9a13173adb96d2068f]
	I0918 21:10:07.450498   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:07.454623   61273 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:10:07.454692   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:10:07.498344   61273 cri.go:89] found id: "76b9e08a213468abdd23d7b3998b55d6544e2fbae9205ec6076ef0cd7a80e7be"
	I0918 21:10:07.498370   61273 cri.go:89] found id: ""
	I0918 21:10:07.498379   61273 logs.go:276] 1 containers: [76b9e08a213468abdd23d7b3998b55d6544e2fbae9205ec6076ef0cd7a80e7be]
	I0918 21:10:07.498443   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:07.503900   61273 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:10:07.503986   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:10:07.543438   61273 cri.go:89] found id: "c372970fdf265433e1668377d0214de6608cb39ebfd706cfad9624cba2f9b481"
	I0918 21:10:07.543469   61273 cri.go:89] found id: ""
	I0918 21:10:07.543478   61273 logs.go:276] 1 containers: [c372970fdf265433e1668377d0214de6608cb39ebfd706cfad9624cba2f9b481]
	I0918 21:10:07.543538   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:07.548439   61273 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:10:07.548518   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:10:07.592109   61273 cri.go:89] found id: "0257280a0d21d52e2a996c2f717880bdb9d9f6fe90a4b82cc62222c941ba7d84"
	I0918 21:10:07.592140   61273 cri.go:89] found id: ""
	I0918 21:10:07.592150   61273 logs.go:276] 1 containers: [0257280a0d21d52e2a996c2f717880bdb9d9f6fe90a4b82cc62222c941ba7d84]
	I0918 21:10:07.592202   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:07.596127   61273 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:10:07.596200   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:10:07.630588   61273 cri.go:89] found id: "785dc83056153e5eeb22216ac7520f529b71f1b3ae42e9540a5c8930f1fe62c2"
	I0918 21:10:07.630623   61273 cri.go:89] found id: ""
	I0918 21:10:07.630633   61273 logs.go:276] 1 containers: [785dc83056153e5eeb22216ac7520f529b71f1b3ae42e9540a5c8930f1fe62c2]
	I0918 21:10:07.630699   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:07.635130   61273 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:10:07.635214   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:10:07.672446   61273 cri.go:89] found id: ""
	I0918 21:10:07.672475   61273 logs.go:276] 0 containers: []
	W0918 21:10:07.672487   61273 logs.go:278] No container was found matching "kindnet"
	I0918 21:10:07.672494   61273 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0918 21:10:07.672554   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0918 21:10:07.710660   61273 cri.go:89] found id: "b44d6f4b44928aa498c4375f783c165714b4a5959a1fa709712ad39a412d6d9f"
	I0918 21:10:07.710693   61273 cri.go:89] found id: "38c14df0554157969a97390590d6e9c46765fd6fc3e286f7d2e3094838e0aff5"
	I0918 21:10:07.710700   61273 cri.go:89] found id: ""
	I0918 21:10:07.710709   61273 logs.go:276] 2 containers: [b44d6f4b44928aa498c4375f783c165714b4a5959a1fa709712ad39a412d6d9f 38c14df0554157969a97390590d6e9c46765fd6fc3e286f7d2e3094838e0aff5]
	I0918 21:10:07.710761   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:07.714772   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:07.718402   61273 logs.go:123] Gathering logs for etcd [a913074a00723bc43f97ba625ebbdfded7dca1ce664ace9a13173adb96d2068f] ...
	I0918 21:10:07.718423   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a913074a00723bc43f97ba625ebbdfded7dca1ce664ace9a13173adb96d2068f"
	I0918 21:10:07.756682   61273 logs.go:123] Gathering logs for coredns [76b9e08a213468abdd23d7b3998b55d6544e2fbae9205ec6076ef0cd7a80e7be] ...
	I0918 21:10:07.756717   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 76b9e08a213468abdd23d7b3998b55d6544e2fbae9205ec6076ef0cd7a80e7be"
	I0918 21:10:07.792784   61273 logs.go:123] Gathering logs for kube-proxy [0257280a0d21d52e2a996c2f717880bdb9d9f6fe90a4b82cc62222c941ba7d84] ...
	I0918 21:10:07.792813   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0257280a0d21d52e2a996c2f717880bdb9d9f6fe90a4b82cc62222c941ba7d84"
	I0918 21:10:07.829746   61273 logs.go:123] Gathering logs for kube-controller-manager [785dc83056153e5eeb22216ac7520f529b71f1b3ae42e9540a5c8930f1fe62c2] ...
	I0918 21:10:07.829779   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 785dc83056153e5eeb22216ac7520f529b71f1b3ae42e9540a5c8930f1fe62c2"
	I0918 21:10:07.882151   61273 logs.go:123] Gathering logs for storage-provisioner [38c14df0554157969a97390590d6e9c46765fd6fc3e286f7d2e3094838e0aff5] ...
	I0918 21:10:07.882190   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38c14df0554157969a97390590d6e9c46765fd6fc3e286f7d2e3094838e0aff5"
	I0918 21:10:07.921948   61273 logs.go:123] Gathering logs for container status ...
	I0918 21:10:07.921973   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:10:07.969080   61273 logs.go:123] Gathering logs for kubelet ...
	I0918 21:10:07.969110   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:10:08.036341   61273 logs.go:123] Gathering logs for dmesg ...
	I0918 21:10:08.036376   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:10:08.050690   61273 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:10:08.050722   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 21:10:08.177111   61273 logs.go:123] Gathering logs for kube-apiserver [a70652dce4d80c6fac020d63cff7054a835d183ac4b910157a5bf4eb8cabcaa1] ...
	I0918 21:10:08.177154   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a70652dce4d80c6fac020d63cff7054a835d183ac4b910157a5bf4eb8cabcaa1"
	I0918 21:10:08.224169   61273 logs.go:123] Gathering logs for kube-scheduler [c372970fdf265433e1668377d0214de6608cb39ebfd706cfad9624cba2f9b481] ...
	I0918 21:10:08.224203   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c372970fdf265433e1668377d0214de6608cb39ebfd706cfad9624cba2f9b481"
	I0918 21:10:08.264412   61273 logs.go:123] Gathering logs for storage-provisioner [b44d6f4b44928aa498c4375f783c165714b4a5959a1fa709712ad39a412d6d9f] ...
	I0918 21:10:08.264437   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b44d6f4b44928aa498c4375f783c165714b4a5959a1fa709712ad39a412d6d9f"
	I0918 21:10:08.309190   61273 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:10:08.309215   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:10:11.209439   61273 api_server.go:253] Checking apiserver healthz at https://192.168.61.31:8443/healthz ...
	I0918 21:10:11.214345   61273 api_server.go:279] https://192.168.61.31:8443/healthz returned 200:
	ok
	I0918 21:10:11.215424   61273 api_server.go:141] control plane version: v1.31.1
	I0918 21:10:11.215446   61273 api_server.go:131] duration metric: took 3.863027585s to wait for apiserver health ...
	I0918 21:10:11.215456   61273 system_pods.go:43] waiting for kube-system pods to appear ...
	I0918 21:10:11.215485   61273 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:10:11.215545   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:10:11.251158   61273 cri.go:89] found id: "a70652dce4d80c6fac020d63cff7054a835d183ac4b910157a5bf4eb8cabcaa1"
	I0918 21:10:11.251182   61273 cri.go:89] found id: ""
	I0918 21:10:11.251190   61273 logs.go:276] 1 containers: [a70652dce4d80c6fac020d63cff7054a835d183ac4b910157a5bf4eb8cabcaa1]
	I0918 21:10:11.251246   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:11.255090   61273 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:10:11.255177   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:10:11.290504   61273 cri.go:89] found id: "a913074a00723bc43f97ba625ebbdfded7dca1ce664ace9a13173adb96d2068f"
	I0918 21:10:11.290526   61273 cri.go:89] found id: ""
	I0918 21:10:11.290534   61273 logs.go:276] 1 containers: [a913074a00723bc43f97ba625ebbdfded7dca1ce664ace9a13173adb96d2068f]
	I0918 21:10:11.290593   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:11.295141   61273 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:10:11.295224   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:10:11.340273   61273 cri.go:89] found id: "76b9e08a213468abdd23d7b3998b55d6544e2fbae9205ec6076ef0cd7a80e7be"
	I0918 21:10:11.340300   61273 cri.go:89] found id: ""
	I0918 21:10:11.340310   61273 logs.go:276] 1 containers: [76b9e08a213468abdd23d7b3998b55d6544e2fbae9205ec6076ef0cd7a80e7be]
	I0918 21:10:11.340362   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:11.344823   61273 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:10:11.344903   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:10:11.384145   61273 cri.go:89] found id: "c372970fdf265433e1668377d0214de6608cb39ebfd706cfad9624cba2f9b481"
	I0918 21:10:11.384172   61273 cri.go:89] found id: ""
	I0918 21:10:11.384187   61273 logs.go:276] 1 containers: [c372970fdf265433e1668377d0214de6608cb39ebfd706cfad9624cba2f9b481]
	I0918 21:10:11.384251   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:11.388594   61273 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:10:11.388673   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:10:11.434881   61273 cri.go:89] found id: "0257280a0d21d52e2a996c2f717880bdb9d9f6fe90a4b82cc62222c941ba7d84"
	I0918 21:10:11.434915   61273 cri.go:89] found id: ""
	I0918 21:10:11.434925   61273 logs.go:276] 1 containers: [0257280a0d21d52e2a996c2f717880bdb9d9f6fe90a4b82cc62222c941ba7d84]
	I0918 21:10:11.434984   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:11.439048   61273 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:10:11.439124   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:10:11.474786   61273 cri.go:89] found id: "785dc83056153e5eeb22216ac7520f529b71f1b3ae42e9540a5c8930f1fe62c2"
	I0918 21:10:11.474812   61273 cri.go:89] found id: ""
	I0918 21:10:11.474820   61273 logs.go:276] 1 containers: [785dc83056153e5eeb22216ac7520f529b71f1b3ae42e9540a5c8930f1fe62c2]
	I0918 21:10:11.474871   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:11.478907   61273 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:10:11.478961   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:10:11.521522   61273 cri.go:89] found id: ""
	I0918 21:10:11.521550   61273 logs.go:276] 0 containers: []
	W0918 21:10:11.521561   61273 logs.go:278] No container was found matching "kindnet"
	I0918 21:10:11.521568   61273 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0918 21:10:11.521642   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0918 21:10:11.560406   61273 cri.go:89] found id: "b44d6f4b44928aa498c4375f783c165714b4a5959a1fa709712ad39a412d6d9f"
	I0918 21:10:11.560428   61273 cri.go:89] found id: "38c14df0554157969a97390590d6e9c46765fd6fc3e286f7d2e3094838e0aff5"
	I0918 21:10:11.560432   61273 cri.go:89] found id: ""
	I0918 21:10:11.560439   61273 logs.go:276] 2 containers: [b44d6f4b44928aa498c4375f783c165714b4a5959a1fa709712ad39a412d6d9f 38c14df0554157969a97390590d6e9c46765fd6fc3e286f7d2e3094838e0aff5]
	I0918 21:10:11.560489   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:11.564559   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:11.568380   61273 logs.go:123] Gathering logs for kube-proxy [0257280a0d21d52e2a996c2f717880bdb9d9f6fe90a4b82cc62222c941ba7d84] ...
	I0918 21:10:11.568405   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0257280a0d21d52e2a996c2f717880bdb9d9f6fe90a4b82cc62222c941ba7d84"
	I0918 21:10:11.614927   61273 logs.go:123] Gathering logs for kube-controller-manager [785dc83056153e5eeb22216ac7520f529b71f1b3ae42e9540a5c8930f1fe62c2] ...
	I0918 21:10:11.614959   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 785dc83056153e5eeb22216ac7520f529b71f1b3ae42e9540a5c8930f1fe62c2"
	I0918 21:10:11.668337   61273 logs.go:123] Gathering logs for storage-provisioner [b44d6f4b44928aa498c4375f783c165714b4a5959a1fa709712ad39a412d6d9f] ...
	I0918 21:10:11.668372   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b44d6f4b44928aa498c4375f783c165714b4a5959a1fa709712ad39a412d6d9f"
	I0918 21:10:11.705574   61273 logs.go:123] Gathering logs for kubelet ...
	I0918 21:10:11.705604   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:10:11.772691   61273 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:10:11.772731   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 21:10:11.885001   61273 logs.go:123] Gathering logs for etcd [a913074a00723bc43f97ba625ebbdfded7dca1ce664ace9a13173adb96d2068f] ...
	I0918 21:10:11.885043   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a913074a00723bc43f97ba625ebbdfded7dca1ce664ace9a13173adb96d2068f"
	I0918 21:10:11.929585   61273 logs.go:123] Gathering logs for coredns [76b9e08a213468abdd23d7b3998b55d6544e2fbae9205ec6076ef0cd7a80e7be] ...
	I0918 21:10:11.929623   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 76b9e08a213468abdd23d7b3998b55d6544e2fbae9205ec6076ef0cd7a80e7be"
	I0918 21:10:11.967540   61273 logs.go:123] Gathering logs for kube-scheduler [c372970fdf265433e1668377d0214de6608cb39ebfd706cfad9624cba2f9b481] ...
	I0918 21:10:11.967566   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c372970fdf265433e1668377d0214de6608cb39ebfd706cfad9624cba2f9b481"
	I0918 21:10:12.007037   61273 logs.go:123] Gathering logs for storage-provisioner [38c14df0554157969a97390590d6e9c46765fd6fc3e286f7d2e3094838e0aff5] ...
	I0918 21:10:12.007076   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38c14df0554157969a97390590d6e9c46765fd6fc3e286f7d2e3094838e0aff5"
	I0918 21:10:12.045764   61273 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:10:12.045805   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:10:12.434993   61273 logs.go:123] Gathering logs for dmesg ...
	I0918 21:10:12.435042   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:10:12.449422   61273 logs.go:123] Gathering logs for kube-apiserver [a70652dce4d80c6fac020d63cff7054a835d183ac4b910157a5bf4eb8cabcaa1] ...
	I0918 21:10:12.449453   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a70652dce4d80c6fac020d63cff7054a835d183ac4b910157a5bf4eb8cabcaa1"
	I0918 21:10:12.500491   61273 logs.go:123] Gathering logs for container status ...
	I0918 21:10:12.500522   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:10:15.053164   61273 system_pods.go:59] 8 kube-system pods found
	I0918 21:10:15.053203   61273 system_pods.go:61] "coredns-7c65d6cfc9-dgnw2" [085d5a98-0a61-4678-830f-384780a0d7ef] Running
	I0918 21:10:15.053211   61273 system_pods.go:61] "etcd-no-preload-331658" [d6a53ff4-fcf0-46c0-9e77-9af6d0363e08] Running
	I0918 21:10:15.053218   61273 system_pods.go:61] "kube-apiserver-no-preload-331658" [1953598f-4c3f-4a98-ab75-b8ca1b8093ad] Running
	I0918 21:10:15.053223   61273 system_pods.go:61] "kube-controller-manager-no-preload-331658" [8fc655be-a192-4250-a31d-2e533a2b5e41] Running
	I0918 21:10:15.053228   61273 system_pods.go:61] "kube-proxy-hx25w" [a26512ff-f695-4452-8974-577479257160] Running
	I0918 21:10:15.053232   61273 system_pods.go:61] "kube-scheduler-no-preload-331658" [64e5347e-e7fd-4a0d-ae41-539c3989c29e] Running
	I0918 21:10:15.053243   61273 system_pods.go:61] "metrics-server-6867b74b74-n27vc" [b1de76ec-8987-49ce-ae66-eedda2705cde] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0918 21:10:15.053254   61273 system_pods.go:61] "storage-provisioner" [e110aeb3-e9bc-4bb9-9b49-5579558bdda2] Running
	I0918 21:10:15.053264   61273 system_pods.go:74] duration metric: took 3.837800115s to wait for pod list to return data ...
	I0918 21:10:15.053273   61273 default_sa.go:34] waiting for default service account to be created ...
	I0918 21:10:15.056865   61273 default_sa.go:45] found service account: "default"
	I0918 21:10:15.056900   61273 default_sa.go:55] duration metric: took 3.619144ms for default service account to be created ...
	I0918 21:10:15.056912   61273 system_pods.go:116] waiting for k8s-apps to be running ...
	I0918 21:10:15.061835   61273 system_pods.go:86] 8 kube-system pods found
	I0918 21:10:15.061864   61273 system_pods.go:89] "coredns-7c65d6cfc9-dgnw2" [085d5a98-0a61-4678-830f-384780a0d7ef] Running
	I0918 21:10:15.061870   61273 system_pods.go:89] "etcd-no-preload-331658" [d6a53ff4-fcf0-46c0-9e77-9af6d0363e08] Running
	I0918 21:10:15.061875   61273 system_pods.go:89] "kube-apiserver-no-preload-331658" [1953598f-4c3f-4a98-ab75-b8ca1b8093ad] Running
	I0918 21:10:15.061880   61273 system_pods.go:89] "kube-controller-manager-no-preload-331658" [8fc655be-a192-4250-a31d-2e533a2b5e41] Running
	I0918 21:10:15.061884   61273 system_pods.go:89] "kube-proxy-hx25w" [a26512ff-f695-4452-8974-577479257160] Running
	I0918 21:10:15.061888   61273 system_pods.go:89] "kube-scheduler-no-preload-331658" [64e5347e-e7fd-4a0d-ae41-539c3989c29e] Running
	I0918 21:10:15.061894   61273 system_pods.go:89] "metrics-server-6867b74b74-n27vc" [b1de76ec-8987-49ce-ae66-eedda2705cde] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0918 21:10:15.061898   61273 system_pods.go:89] "storage-provisioner" [e110aeb3-e9bc-4bb9-9b49-5579558bdda2] Running
	I0918 21:10:15.061906   61273 system_pods.go:126] duration metric: took 4.987508ms to wait for k8s-apps to be running ...
	I0918 21:10:15.061912   61273 system_svc.go:44] waiting for kubelet service to be running ....
	I0918 21:10:15.061966   61273 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0918 21:10:15.079834   61273 system_svc.go:56] duration metric: took 17.908997ms WaitForService to wait for kubelet
	I0918 21:10:15.079875   61273 kubeadm.go:582] duration metric: took 4m19.759287892s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0918 21:10:15.079897   61273 node_conditions.go:102] verifying NodePressure condition ...
	I0918 21:10:15.083307   61273 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0918 21:10:15.083390   61273 node_conditions.go:123] node cpu capacity is 2
	I0918 21:10:15.083407   61273 node_conditions.go:105] duration metric: took 3.503352ms to run NodePressure ...
	I0918 21:10:15.083421   61273 start.go:241] waiting for startup goroutines ...
	I0918 21:10:15.083431   61273 start.go:246] waiting for cluster config update ...
	I0918 21:10:15.083444   61273 start.go:255] writing updated cluster config ...
	I0918 21:10:15.083788   61273 ssh_runner.go:195] Run: rm -f paused
	I0918 21:10:15.139144   61273 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0918 21:10:15.141198   61273 out.go:177] * Done! kubectl is now configured to use "no-preload-331658" cluster and "default" namespace by default
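	In the no-preload-331658 run above, the four-minute extra wait expired because metrics-server-6867b74b74-n27vc never left Pending / ContainersNotReady. A hedged sketch of first diagnostic steps for that symptom; the k8s-app=metrics-server label selector and the deployment name are assumptions taken from the stock metrics-server manifest, not from this log:

	    # show scheduling and image-pull events for the stuck pod (label selector is an assumption)
	    kubectl -n kube-system describe pod -l k8s-app=metrics-server
	    # check the container's own output once it has started at least once
	    kubectl -n kube-system logs -l k8s-app=metrics-server --tail=100
	    # readiness often hinges on the addon's flags, visible in the deployment spec
	    kubectl -n kube-system get deployment metrics-server -o yaml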
	I0918 21:11:17.441368   62061 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0918 21:11:17.441607   62061 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0918 21:11:17.442921   62061 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0918 21:11:17.443036   62061 kubeadm.go:310] [preflight] Running pre-flight checks
	I0918 21:11:17.443221   62061 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0918 21:11:17.443500   62061 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0918 21:11:17.443818   62061 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0918 21:11:17.444099   62061 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0918 21:11:17.446858   62061 out.go:235]   - Generating certificates and keys ...
	I0918 21:11:17.446965   62061 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0918 21:11:17.447048   62061 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0918 21:11:17.447160   62061 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0918 21:11:17.447248   62061 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0918 21:11:17.447349   62061 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0918 21:11:17.447425   62061 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0918 21:11:17.447507   62061 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0918 21:11:17.447587   62061 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0918 21:11:17.447742   62061 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0918 21:11:17.447847   62061 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0918 21:11:17.447911   62061 kubeadm.go:310] [certs] Using the existing "sa" key
	I0918 21:11:17.447984   62061 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0918 21:11:17.448085   62061 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0918 21:11:17.448163   62061 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0918 21:11:17.448255   62061 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0918 21:11:17.448339   62061 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0918 21:11:17.448486   62061 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0918 21:11:17.448590   62061 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0918 21:11:17.448645   62061 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0918 21:11:17.448733   62061 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0918 21:11:17.450203   62061 out.go:235]   - Booting up control plane ...
	I0918 21:11:17.450299   62061 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0918 21:11:17.450434   62061 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0918 21:11:17.450506   62061 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0918 21:11:17.450578   62061 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0918 21:11:17.450752   62061 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0918 21:11:17.450805   62061 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0918 21:11:17.450863   62061 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0918 21:11:17.451034   62061 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0918 21:11:17.451090   62061 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0918 21:11:17.451259   62061 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0918 21:11:17.451325   62061 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0918 21:11:17.451498   62061 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0918 21:11:17.451562   62061 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0918 21:11:17.451765   62061 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0918 21:11:17.451845   62061 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0918 21:11:17.452027   62061 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0918 21:11:17.452041   62061 kubeadm.go:310] 
	I0918 21:11:17.452083   62061 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0918 21:11:17.452118   62061 kubeadm.go:310] 		timed out waiting for the condition
	I0918 21:11:17.452127   62061 kubeadm.go:310] 
	I0918 21:11:17.452160   62061 kubeadm.go:310] 	This error is likely caused by:
	I0918 21:11:17.452189   62061 kubeadm.go:310] 		- The kubelet is not running
	I0918 21:11:17.452282   62061 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0918 21:11:17.452289   62061 kubeadm.go:310] 
	I0918 21:11:17.452385   62061 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0918 21:11:17.452425   62061 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0918 21:11:17.452473   62061 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0918 21:11:17.452482   62061 kubeadm.go:310] 
	I0918 21:11:17.452578   62061 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0918 21:11:17.452649   62061 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0918 21:11:17.452656   62061 kubeadm.go:310] 
	I0918 21:11:17.452770   62061 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0918 21:11:17.452849   62061 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0918 21:11:17.452920   62061 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0918 21:11:17.452983   62061 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0918 21:11:17.453029   62061 kubeadm.go:310] 
	W0918 21:11:17.453100   62061 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0918 21:11:17.453139   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0918 21:11:17.917211   62061 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0918 21:11:17.933265   62061 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0918 21:11:17.943909   62061 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0918 21:11:17.943937   62061 kubeadm.go:157] found existing configuration files:
	
	I0918 21:11:17.943980   62061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0918 21:11:17.953745   62061 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0918 21:11:17.953817   62061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0918 21:11:17.964411   62061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0918 21:11:17.974601   62061 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0918 21:11:17.974661   62061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0918 21:11:17.984631   62061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0918 21:11:17.994280   62061 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0918 21:11:17.994341   62061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0918 21:11:18.004341   62061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0918 21:11:18.014066   62061 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0918 21:11:18.014127   62061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
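	The stale-config pass above follows one pattern per kubeconfig under /etc/kubernetes: grep for the expected control-plane endpoint and remove the file when the endpoint is absent (here every grep fails because the files do not exist, so each rm is effectively a no-op). A compact sketch of the same loop, using only the commands that appear in the log:

	    # remove any kubeconfig that does not reference the expected control-plane endpoint
	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
	        || sudo rm -f "/etc/kubernetes/$f"
	    done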
	I0918 21:11:18.023592   62061 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0918 21:11:18.098831   62061 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0918 21:11:18.098908   62061 kubeadm.go:310] [preflight] Running pre-flight checks
	I0918 21:11:18.254904   62061 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0918 21:11:18.255012   62061 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0918 21:11:18.255102   62061 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0918 21:11:18.444152   62061 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0918 21:11:18.445967   62061 out.go:235]   - Generating certificates and keys ...
	I0918 21:11:18.446082   62061 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0918 21:11:18.446185   62061 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0918 21:11:18.446316   62061 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0918 21:11:18.446403   62061 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0918 21:11:18.446508   62061 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0918 21:11:18.446600   62061 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0918 21:11:18.446706   62061 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0918 21:11:18.446767   62061 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0918 21:11:18.446852   62061 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0918 21:11:18.446936   62061 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0918 21:11:18.446971   62061 kubeadm.go:310] [certs] Using the existing "sa" key
	I0918 21:11:18.447025   62061 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0918 21:11:18.621414   62061 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0918 21:11:18.973691   62061 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0918 21:11:19.177522   62061 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0918 21:11:19.351312   62061 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0918 21:11:19.375169   62061 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0918 21:11:19.376274   62061 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0918 21:11:19.376351   62061 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0918 21:11:19.505030   62061 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0918 21:11:19.507065   62061 out.go:235]   - Booting up control plane ...
	I0918 21:11:19.507197   62061 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0918 21:11:19.519523   62061 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0918 21:11:19.520747   62061 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0918 21:11:19.522723   62061 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0918 21:11:19.524851   62061 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0918 21:11:59.527084   62061 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0918 21:11:59.527421   62061 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0918 21:11:59.527674   62061 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0918 21:12:04.528288   62061 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0918 21:12:04.528530   62061 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0918 21:12:14.529180   62061 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0918 21:12:14.529367   62061 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0918 21:12:34.530066   62061 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0918 21:12:34.530335   62061 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0918 21:13:14.529030   62061 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0918 21:13:14.529305   62061 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0918 21:13:14.529326   62061 kubeadm.go:310] 
	I0918 21:13:14.529374   62061 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0918 21:13:14.529447   62061 kubeadm.go:310] 		timed out waiting for the condition
	I0918 21:13:14.529466   62061 kubeadm.go:310] 
	I0918 21:13:14.529521   62061 kubeadm.go:310] 	This error is likely caused by:
	I0918 21:13:14.529569   62061 kubeadm.go:310] 		- The kubelet is not running
	I0918 21:13:14.529716   62061 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0918 21:13:14.529726   62061 kubeadm.go:310] 
	I0918 21:13:14.529866   62061 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0918 21:13:14.529917   62061 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0918 21:13:14.529967   62061 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0918 21:13:14.529976   62061 kubeadm.go:310] 
	I0918 21:13:14.530120   62061 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0918 21:13:14.530220   62061 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0918 21:13:14.530230   62061 kubeadm.go:310] 
	I0918 21:13:14.530352   62061 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0918 21:13:14.530480   62061 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0918 21:13:14.530579   62061 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0918 21:13:14.530678   62061 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0918 21:13:14.530688   62061 kubeadm.go:310] 
	I0918 21:13:14.531356   62061 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0918 21:13:14.531463   62061 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0918 21:13:14.531520   62061 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0918 21:13:14.531592   62061 kubeadm.go:394] duration metric: took 7m56.917405378s to StartCluster
	I0918 21:13:14.531633   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:13:14.531689   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:13:14.578851   62061 cri.go:89] found id: ""
	I0918 21:13:14.578883   62061 logs.go:276] 0 containers: []
	W0918 21:13:14.578893   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:13:14.578901   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:13:14.578960   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:13:14.627137   62061 cri.go:89] found id: ""
	I0918 21:13:14.627168   62061 logs.go:276] 0 containers: []
	W0918 21:13:14.627179   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:13:14.627187   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:13:14.627245   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:13:14.670678   62061 cri.go:89] found id: ""
	I0918 21:13:14.670707   62061 logs.go:276] 0 containers: []
	W0918 21:13:14.670717   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:13:14.670724   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:13:14.670788   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:13:14.709610   62061 cri.go:89] found id: ""
	I0918 21:13:14.709641   62061 logs.go:276] 0 containers: []
	W0918 21:13:14.709651   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:13:14.709658   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:13:14.709722   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:13:14.743492   62061 cri.go:89] found id: ""
	I0918 21:13:14.743522   62061 logs.go:276] 0 containers: []
	W0918 21:13:14.743534   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:13:14.743540   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:13:14.743601   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:13:14.777577   62061 cri.go:89] found id: ""
	I0918 21:13:14.777602   62061 logs.go:276] 0 containers: []
	W0918 21:13:14.777610   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:13:14.777616   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:13:14.777679   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:13:14.815884   62061 cri.go:89] found id: ""
	I0918 21:13:14.815913   62061 logs.go:276] 0 containers: []
	W0918 21:13:14.815922   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:13:14.815937   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:13:14.815997   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:13:14.855426   62061 cri.go:89] found id: ""
	I0918 21:13:14.855456   62061 logs.go:276] 0 containers: []
	W0918 21:13:14.855464   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:13:14.855473   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:13:14.855484   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:13:14.868812   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:13:14.868843   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:13:14.955093   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:13:14.955120   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:13:14.955151   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:13:15.064722   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:13:15.064760   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:13:15.105430   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:13:15.105466   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0918 21:13:15.157878   62061 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0918 21:13:15.157956   62061 out.go:270] * 
	W0918 21:13:15.158036   62061 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0918 21:13:15.158052   62061 out.go:270] * 
	W0918 21:13:15.158934   62061 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0918 21:13:15.161604   62061 out.go:201] 
	W0918 21:13:15.162606   62061 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0918 21:13:15.162664   62061 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0918 21:13:15.162685   62061 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0918 21:13:15.163831   62061 out.go:201] 
	
	
	==> CRI-O <==
	Sep 18 21:22:20 old-k8s-version-740194 crio[636]: time="2024-09-18 21:22:20.567344631Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726694540567312021,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f81def22-3fec-49ef-84c4-141723050525 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 21:22:20 old-k8s-version-740194 crio[636]: time="2024-09-18 21:22:20.568081707Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7663308b-b927-4f1c-b843-0e2227d12e05 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 21:22:20 old-k8s-version-740194 crio[636]: time="2024-09-18 21:22:20.568166710Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7663308b-b927-4f1c-b843-0e2227d12e05 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 21:22:20 old-k8s-version-740194 crio[636]: time="2024-09-18 21:22:20.568237138Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=7663308b-b927-4f1c-b843-0e2227d12e05 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 21:22:20 old-k8s-version-740194 crio[636]: time="2024-09-18 21:22:20.602394890Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8016cbac-76a1-4bc1-b12b-bf40ce0330ea name=/runtime.v1.RuntimeService/Version
	Sep 18 21:22:20 old-k8s-version-740194 crio[636]: time="2024-09-18 21:22:20.602502594Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8016cbac-76a1-4bc1-b12b-bf40ce0330ea name=/runtime.v1.RuntimeService/Version
	Sep 18 21:22:20 old-k8s-version-740194 crio[636]: time="2024-09-18 21:22:20.603767493Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0fb013a2-d81a-4826-894d-9d5013ab2415 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 21:22:20 old-k8s-version-740194 crio[636]: time="2024-09-18 21:22:20.604411042Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726694540604362404,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0fb013a2-d81a-4826-894d-9d5013ab2415 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 21:22:20 old-k8s-version-740194 crio[636]: time="2024-09-18 21:22:20.605243240Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=888efdeb-5ddd-42c4-9d82-214b1ea3bef6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 21:22:20 old-k8s-version-740194 crio[636]: time="2024-09-18 21:22:20.605320519Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=888efdeb-5ddd-42c4-9d82-214b1ea3bef6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 21:22:20 old-k8s-version-740194 crio[636]: time="2024-09-18 21:22:20.605371448Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=888efdeb-5ddd-42c4-9d82-214b1ea3bef6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 21:22:20 old-k8s-version-740194 crio[636]: time="2024-09-18 21:22:20.639775182Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b6e14780-08a1-4674-92fa-8f61d1afb08d name=/runtime.v1.RuntimeService/Version
	Sep 18 21:22:20 old-k8s-version-740194 crio[636]: time="2024-09-18 21:22:20.639868308Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b6e14780-08a1-4674-92fa-8f61d1afb08d name=/runtime.v1.RuntimeService/Version
	Sep 18 21:22:20 old-k8s-version-740194 crio[636]: time="2024-09-18 21:22:20.641020890Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=922ca7e3-79bb-4035-8b02-108e590a6fb7 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 21:22:20 old-k8s-version-740194 crio[636]: time="2024-09-18 21:22:20.641445032Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726694540641423630,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=922ca7e3-79bb-4035-8b02-108e590a6fb7 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 21:22:20 old-k8s-version-740194 crio[636]: time="2024-09-18 21:22:20.641898615Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0c77d699-44bd-49f6-9bb2-2f672cd8fbf8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 21:22:20 old-k8s-version-740194 crio[636]: time="2024-09-18 21:22:20.641946155Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0c77d699-44bd-49f6-9bb2-2f672cd8fbf8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 21:22:20 old-k8s-version-740194 crio[636]: time="2024-09-18 21:22:20.642029060Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=0c77d699-44bd-49f6-9bb2-2f672cd8fbf8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 21:22:20 old-k8s-version-740194 crio[636]: time="2024-09-18 21:22:20.678448556Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c4d73f4c-5ddf-4531-98c3-61cae3ddfb58 name=/runtime.v1.RuntimeService/Version
	Sep 18 21:22:20 old-k8s-version-740194 crio[636]: time="2024-09-18 21:22:20.678527458Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c4d73f4c-5ddf-4531-98c3-61cae3ddfb58 name=/runtime.v1.RuntimeService/Version
	Sep 18 21:22:20 old-k8s-version-740194 crio[636]: time="2024-09-18 21:22:20.679713850Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7b00dcc4-e1aa-4fc4-ba13-64e7045bdaa5 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 21:22:20 old-k8s-version-740194 crio[636]: time="2024-09-18 21:22:20.680361938Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726694540680315168,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7b00dcc4-e1aa-4fc4-ba13-64e7045bdaa5 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 21:22:20 old-k8s-version-740194 crio[636]: time="2024-09-18 21:22:20.680950863Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=95d725ab-fd1b-4eac-91e3-7c5a8331db97 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 21:22:20 old-k8s-version-740194 crio[636]: time="2024-09-18 21:22:20.681070367Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=95d725ab-fd1b-4eac-91e3-7c5a8331db97 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 21:22:20 old-k8s-version-740194 crio[636]: time="2024-09-18 21:22:20.681117414Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=95d725ab-fd1b-4eac-91e3-7c5a8331db97 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Sep18 21:04] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052758] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039829] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.960759] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.971252] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[Sep18 21:05] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.490123] systemd-fstab-generator[559]: Ignoring "noauto" option for root device
	[  +0.066663] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.070792] systemd-fstab-generator[571]: Ignoring "noauto" option for root device
	[  +0.218400] systemd-fstab-generator[585]: Ignoring "noauto" option for root device
	[  +0.116531] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.277213] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +6.543535] systemd-fstab-generator[885]: Ignoring "noauto" option for root device
	[  +0.067666] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.830893] systemd-fstab-generator[1009]: Ignoring "noauto" option for root device
	[ +11.620626] kauditd_printk_skb: 46 callbacks suppressed
	[Sep18 21:09] systemd-fstab-generator[5030]: Ignoring "noauto" option for root device
	[Sep18 21:11] systemd-fstab-generator[5310]: Ignoring "noauto" option for root device
	[  +0.067380] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 21:22:20 up 17 min,  0 users,  load average: 0.13, 0.07, 0.06
	Linux old-k8s-version-740194 5.10.207 #1 SMP Mon Sep 16 15:00:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Sep 18 21:22:20 old-k8s-version-740194 kubelet[6496]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:1265 +0x179
	Sep 18 21:22:20 old-k8s-version-740194 kubelet[6496]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Sep 18 21:22:20 old-k8s-version-740194 kubelet[6496]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:300 +0xd31
	Sep 18 21:22:20 old-k8s-version-740194 kubelet[6496]: goroutine 128 [select]:
	Sep 18 21:22:20 old-k8s-version-740194 kubelet[6496]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*controlBuffer).get(0xc000113db0, 0x1, 0x0, 0x0, 0x0, 0x0)
	Sep 18 21:22:20 old-k8s-version-740194 kubelet[6496]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:395 +0x125
	Sep 18 21:22:20 old-k8s-version-740194 kubelet[6496]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*loopyWriter).run(0xc0001d4780, 0x0, 0x0)
	Sep 18 21:22:20 old-k8s-version-740194 kubelet[6496]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:513 +0x1d3
	Sep 18 21:22:20 old-k8s-version-740194 kubelet[6496]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client.func3(0xc0009348c0)
	Sep 18 21:22:20 old-k8s-version-740194 kubelet[6496]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:346 +0x7b
	Sep 18 21:22:20 old-k8s-version-740194 kubelet[6496]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Sep 18 21:22:20 old-k8s-version-740194 kubelet[6496]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:344 +0xefc
	Sep 18 21:22:20 old-k8s-version-740194 kubelet[6496]: goroutine 129 [syscall]:
	Sep 18 21:22:20 old-k8s-version-740194 kubelet[6496]: syscall.Syscall6(0xe8, 0xe, 0xc000c8fb6c, 0x7, 0xffffffffffffffff, 0x0, 0x0, 0x0, 0x0, 0x0)
	Sep 18 21:22:20 old-k8s-version-740194 kubelet[6496]:         /usr/local/go/src/syscall/asm_linux_amd64.s:41 +0x5
	Sep 18 21:22:20 old-k8s-version-740194 kubelet[6496]: k8s.io/kubernetes/vendor/golang.org/x/sys/unix.EpollWait(0xe, 0xc000c8fb6c, 0x7, 0x7, 0xffffffffffffffff, 0x0, 0x0, 0x0)
	Sep 18 21:22:20 old-k8s-version-740194 kubelet[6496]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/sys/unix/zsyscall_linux_amd64.go:76 +0x72
	Sep 18 21:22:20 old-k8s-version-740194 kubelet[6496]: k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify.(*fdPoller).wait(0xc0008db1e0, 0x0, 0x0, 0x0)
	Sep 18 21:22:20 old-k8s-version-740194 kubelet[6496]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify/inotify_poller.go:86 +0x91
	Sep 18 21:22:20 old-k8s-version-740194 kubelet[6496]: k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify.(*Watcher).readEvents(0xc00096c050)
	Sep 18 21:22:20 old-k8s-version-740194 kubelet[6496]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify/inotify.go:192 +0x206
	Sep 18 21:22:20 old-k8s-version-740194 kubelet[6496]: created by k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify.NewWatcher
	Sep 18 21:22:20 old-k8s-version-740194 kubelet[6496]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify/inotify.go:59 +0x1a8
	Sep 18 21:22:20 old-k8s-version-740194 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Sep 18 21:22:20 old-k8s-version-740194 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-740194 -n old-k8s-version-740194
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-740194 -n old-k8s-version-740194: exit status 2 (226.682926ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-740194" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.52s)
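
The failure above follows the pattern kubeadm itself describes in the log: the kubelet never answers on http://localhost:10248/healthz, so wait-control-plane times out and no control-plane containers are ever created (every crictl listing returns an empty list). Below is a minimal troubleshooting sequence assembled only from the commands the logs above already suggest; the profile name old-k8s-version-740194 and the choice to retry with the suggested cgroup-driver flag are assumptions specific to this run, not a confirmed fix.

	# inspect the kubelet service and its recent journal entries on the node
	systemctl status kubelet
	journalctl -xeu kubelet -n 200
	# confirm whether any control-plane containers were created at all
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# retry the start with the cgroup driver minikube's own suggestion names
	minikube start -p old-k8s-version-740194 --extra-config=kubelet.cgroup-driver=systemd

If the kubelet journal shows a cgroup-driver mismatch, the last command is the path minikube points at (see issue 4172 referenced above); otherwise the crio journal captured earlier in this entry is the next place to look.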

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (467.25s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-828868 -n default-k8s-diff-port-828868
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-09-18 21:26:29.756599281 +0000 UTC m=+6512.526940522
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-828868 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-828868 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.717µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-828868 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-828868 -n default-k8s-diff-port-828868
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-828868 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-828868 logs -n 25: (1.124926826s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable dashboard -p no-preload-331658                  | no-preload-331658            | jenkins | v1.34.0 | 18 Sep 24 20:59 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-331658                                   | no-preload-331658            | jenkins | v1.34.0 | 18 Sep 24 20:59 UTC | 18 Sep 24 21:10 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-828868       | default-k8s-diff-port-828868 | jenkins | v1.34.0 | 18 Sep 24 21:00 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-255556                 | embed-certs-255556           | jenkins | v1.34.0 | 18 Sep 24 21:00 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-828868 | jenkins | v1.34.0 | 18 Sep 24 21:00 UTC | 18 Sep 24 21:09 UTC |
	|         | default-k8s-diff-port-828868                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-255556                                  | embed-certs-255556           | jenkins | v1.34.0 | 18 Sep 24 21:00 UTC | 18 Sep 24 21:10 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-740194                              | old-k8s-version-740194       | jenkins | v1.34.0 | 18 Sep 24 21:00 UTC | 18 Sep 24 21:00 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-740194             | old-k8s-version-740194       | jenkins | v1.34.0 | 18 Sep 24 21:00 UTC | 18 Sep 24 21:00 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-740194                              | old-k8s-version-740194       | jenkins | v1.34.0 | 18 Sep 24 21:00 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-740194                              | old-k8s-version-740194       | jenkins | v1.34.0 | 18 Sep 24 21:24 UTC | 18 Sep 24 21:24 UTC |
	| start   | -p newest-cni-560575 --memory=2200 --alsologtostderr   | newest-cni-560575            | jenkins | v1.34.0 | 18 Sep 24 21:24 UTC | 18 Sep 24 21:25 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| delete  | -p no-preload-331658                                   | no-preload-331658            | jenkins | v1.34.0 | 18 Sep 24 21:25 UTC | 18 Sep 24 21:25 UTC |
	| start   | -p auto-543581 --memory=3072                           | auto-543581                  | jenkins | v1.34.0 | 18 Sep 24 21:25 UTC |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-560575             | newest-cni-560575            | jenkins | v1.34.0 | 18 Sep 24 21:25 UTC | 18 Sep 24 21:25 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-560575                                   | newest-cni-560575            | jenkins | v1.34.0 | 18 Sep 24 21:25 UTC | 18 Sep 24 21:25 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-560575                  | newest-cni-560575            | jenkins | v1.34.0 | 18 Sep 24 21:25 UTC | 18 Sep 24 21:25 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-560575 --memory=2200 --alsologtostderr   | newest-cni-560575            | jenkins | v1.34.0 | 18 Sep 24 21:25 UTC | 18 Sep 24 21:26 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| delete  | -p embed-certs-255556                                  | embed-certs-255556           | jenkins | v1.34.0 | 18 Sep 24 21:26 UTC | 18 Sep 24 21:26 UTC |
	| start   | -p kindnet-543581                                      | kindnet-543581               | jenkins | v1.34.0 | 18 Sep 24 21:26 UTC |                     |
	|         | --memory=3072                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --cni=kindnet --driver=kvm2                            |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| image   | newest-cni-560575 image list                           | newest-cni-560575            | jenkins | v1.34.0 | 18 Sep 24 21:26 UTC | 18 Sep 24 21:26 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-560575                                   | newest-cni-560575            | jenkins | v1.34.0 | 18 Sep 24 21:26 UTC | 18 Sep 24 21:26 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-560575                                   | newest-cni-560575            | jenkins | v1.34.0 | 18 Sep 24 21:26 UTC | 18 Sep 24 21:26 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-560575                                   | newest-cni-560575            | jenkins | v1.34.0 | 18 Sep 24 21:26 UTC | 18 Sep 24 21:26 UTC |
	| delete  | -p newest-cni-560575                                   | newest-cni-560575            | jenkins | v1.34.0 | 18 Sep 24 21:26 UTC | 18 Sep 24 21:26 UTC |
	| start   | -p calico-543581 --memory=3072                         | calico-543581                | jenkins | v1.34.0 | 18 Sep 24 21:26 UTC |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --cni=calico --driver=kvm2                             |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
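	For readability, the final start command recorded in the table above (profile calico-543581), reassembled onto a single line. The flags are exactly as logged; the bare "minikube" binary name is a stand-in for the job's own build (MINIKUBE_BIN=out/minikube-linux-amd64 in the environment dump below):

	    minikube start -p calico-543581 --memory=3072 --alsologtostderr --wait=true \
	      --wait-timeout=15m --cni=calico --driver=kvm2 --container-runtime=crio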
	
	
	==> Last Start <==
	Log file created at: 2024/09/18 21:26:14
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0918 21:26:14.305033   70733 out.go:345] Setting OutFile to fd 1 ...
	I0918 21:26:14.305335   70733 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 21:26:14.305345   70733 out.go:358] Setting ErrFile to fd 2...
	I0918 21:26:14.305350   70733 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 21:26:14.305577   70733 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19667-7671/.minikube/bin
	I0918 21:26:14.306198   70733 out.go:352] Setting JSON to false
	I0918 21:26:14.307235   70733 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7718,"bootTime":1726687056,"procs":235,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0918 21:26:14.307340   70733 start.go:139] virtualization: kvm guest
	I0918 21:26:14.309797   70733 out.go:177] * [calico-543581] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0918 21:26:14.311472   70733 notify.go:220] Checking for updates...
	I0918 21:26:14.311510   70733 out.go:177]   - MINIKUBE_LOCATION=19667
	I0918 21:26:14.313223   70733 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 21:26:14.314846   70733 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19667-7671/kubeconfig
	I0918 21:26:14.316368   70733 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19667-7671/.minikube
	I0918 21:26:14.317763   70733 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0918 21:26:14.319074   70733 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0918 21:26:14.320832   70733 config.go:182] Loaded profile config "auto-543581": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0918 21:26:14.320993   70733 config.go:182] Loaded profile config "default-k8s-diff-port-828868": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0918 21:26:14.321116   70733 config.go:182] Loaded profile config "kindnet-543581": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0918 21:26:14.321237   70733 driver.go:394] Setting default libvirt URI to qemu:///system
	I0918 21:26:14.361366   70733 out.go:177] * Using the kvm2 driver based on user configuration
	I0918 21:26:14.362887   70733 start.go:297] selected driver: kvm2
	I0918 21:26:14.362910   70733 start.go:901] validating driver "kvm2" against <nil>
	I0918 21:26:14.362930   70733 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 21:26:14.364168   70733 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 21:26:14.364291   70733 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19667-7671/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0918 21:26:14.380823   70733 install.go:137] /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I0918 21:26:14.380889   70733 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0918 21:26:14.381259   70733 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0918 21:26:14.381301   70733 cni.go:84] Creating CNI manager for "calico"
	I0918 21:26:14.381316   70733 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0918 21:26:14.381389   70733 start.go:340] cluster config:
	{Name:calico-543581 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:calico-543581 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock
: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 21:26:14.381530   70733 iso.go:125] acquiring lock: {Name:mkbcf4d84f3091567a39802cd5e773cb10b8c545 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 21:26:14.383562   70733 out.go:177] * Starting "calico-543581" primary control-plane node in "calico-543581" cluster
	I0918 21:26:13.665191   69269 pod_ready.go:103] pod "coredns-7c65d6cfc9-5cwr8" in "kube-system" namespace has status "Ready":"False"
	I0918 21:26:16.165936   69269 pod_ready.go:103] pod "coredns-7c65d6cfc9-5cwr8" in "kube-system" namespace has status "Ready":"False"
	I0918 21:26:12.444874   70197 main.go:141] libmachine: (kindnet-543581) DBG | domain kindnet-543581 has defined MAC address 52:54:00:a0:3f:87 in network mk-kindnet-543581
	I0918 21:26:12.445452   70197 main.go:141] libmachine: (kindnet-543581) DBG | unable to find current IP address of domain kindnet-543581 in network mk-kindnet-543581
	I0918 21:26:12.445497   70197 main.go:141] libmachine: (kindnet-543581) DBG | I0918 21:26:12.445430   70220 retry.go:31] will retry after 992.57491ms: waiting for machine to come up
	I0918 21:26:13.837695   70197 main.go:141] libmachine: (kindnet-543581) DBG | domain kindnet-543581 has defined MAC address 52:54:00:a0:3f:87 in network mk-kindnet-543581
	I0918 21:26:13.838208   70197 main.go:141] libmachine: (kindnet-543581) DBG | unable to find current IP address of domain kindnet-543581 in network mk-kindnet-543581
	I0918 21:26:13.838236   70197 main.go:141] libmachine: (kindnet-543581) DBG | I0918 21:26:13.838161   70220 retry.go:31] will retry after 1.310018051s: waiting for machine to come up
	I0918 21:26:15.150441   70197 main.go:141] libmachine: (kindnet-543581) DBG | domain kindnet-543581 has defined MAC address 52:54:00:a0:3f:87 in network mk-kindnet-543581
	I0918 21:26:15.150962   70197 main.go:141] libmachine: (kindnet-543581) DBG | unable to find current IP address of domain kindnet-543581 in network mk-kindnet-543581
	I0918 21:26:15.150989   70197 main.go:141] libmachine: (kindnet-543581) DBG | I0918 21:26:15.150926   70220 retry.go:31] will retry after 1.569605288s: waiting for machine to come up
	I0918 21:26:16.722190   70197 main.go:141] libmachine: (kindnet-543581) DBG | domain kindnet-543581 has defined MAC address 52:54:00:a0:3f:87 in network mk-kindnet-543581
	I0918 21:26:16.722680   70197 main.go:141] libmachine: (kindnet-543581) DBG | unable to find current IP address of domain kindnet-543581 in network mk-kindnet-543581
	I0918 21:26:16.722702   70197 main.go:141] libmachine: (kindnet-543581) DBG | I0918 21:26:16.722637   70220 retry.go:31] will retry after 1.498589017s: waiting for machine to come up
	I0918 21:26:14.385029   70733 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0918 21:26:14.385089   70733 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0918 21:26:14.385105   70733 cache.go:56] Caching tarball of preloaded images
	I0918 21:26:14.385218   70733 preload.go:172] Found /home/jenkins/minikube-integration/19667-7671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0918 21:26:14.385235   70733 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0918 21:26:14.385367   70733 profile.go:143] Saving config to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/calico-543581/config.json ...
	I0918 21:26:14.385404   70733 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/calico-543581/config.json: {Name:mkb97309b2319f0929407f719da5256bbfe52adf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 21:26:14.385676   70733 start.go:360] acquireMachinesLock for calico-543581: {Name:mk1b0d68a0b7d9d4bc8204d5d81cb9eb77222526 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 21:26:18.664398   69269 pod_ready.go:103] pod "coredns-7c65d6cfc9-5cwr8" in "kube-system" namespace has status "Ready":"False"
	I0918 21:26:21.164615   69269 pod_ready.go:103] pod "coredns-7c65d6cfc9-5cwr8" in "kube-system" namespace has status "Ready":"False"
	I0918 21:26:18.223558   70197 main.go:141] libmachine: (kindnet-543581) DBG | domain kindnet-543581 has defined MAC address 52:54:00:a0:3f:87 in network mk-kindnet-543581
	I0918 21:26:18.224120   70197 main.go:141] libmachine: (kindnet-543581) DBG | unable to find current IP address of domain kindnet-543581 in network mk-kindnet-543581
	I0918 21:26:18.224150   70197 main.go:141] libmachine: (kindnet-543581) DBG | I0918 21:26:18.224065   70220 retry.go:31] will retry after 2.358615143s: waiting for machine to come up
	I0918 21:26:20.585405   70197 main.go:141] libmachine: (kindnet-543581) DBG | domain kindnet-543581 has defined MAC address 52:54:00:a0:3f:87 in network mk-kindnet-543581
	I0918 21:26:20.585936   70197 main.go:141] libmachine: (kindnet-543581) DBG | unable to find current IP address of domain kindnet-543581 in network mk-kindnet-543581
	I0918 21:26:20.585956   70197 main.go:141] libmachine: (kindnet-543581) DBG | I0918 21:26:20.585895   70220 retry.go:31] will retry after 2.786124378s: waiting for machine to come up
	I0918 21:26:23.166254   69269 pod_ready.go:103] pod "coredns-7c65d6cfc9-5cwr8" in "kube-system" namespace has status "Ready":"False"
	I0918 21:26:25.664375   69269 pod_ready.go:103] pod "coredns-7c65d6cfc9-5cwr8" in "kube-system" namespace has status "Ready":"False"
	I0918 21:26:23.372992   70197 main.go:141] libmachine: (kindnet-543581) DBG | domain kindnet-543581 has defined MAC address 52:54:00:a0:3f:87 in network mk-kindnet-543581
	I0918 21:26:23.373537   70197 main.go:141] libmachine: (kindnet-543581) DBG | unable to find current IP address of domain kindnet-543581 in network mk-kindnet-543581
	I0918 21:26:23.373578   70197 main.go:141] libmachine: (kindnet-543581) DBG | I0918 21:26:23.373503   70220 retry.go:31] will retry after 3.858407435s: waiting for machine to come up
	I0918 21:26:27.236591   70197 main.go:141] libmachine: (kindnet-543581) DBG | domain kindnet-543581 has defined MAC address 52:54:00:a0:3f:87 in network mk-kindnet-543581
	I0918 21:26:27.237018   70197 main.go:141] libmachine: (kindnet-543581) DBG | unable to find current IP address of domain kindnet-543581 in network mk-kindnet-543581
	I0918 21:26:27.237046   70197 main.go:141] libmachine: (kindnet-543581) DBG | I0918 21:26:27.236970   70220 retry.go:31] will retry after 4.951501617s: waiting for machine to come up
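	The retry.go:31 lines above show libmachine re-polling with a growing delay while the kindnet-543581 guest waits for an IP address on the mk-kindnet-543581 network, and the CRI-O section that follows is the runtime's own debug log of the CRI calls (Version, ImageFsInfo, ListContainers) it answered on default-k8s-diff-port-828868. Both can be inspected by hand while a run is in progress; the commands below are only a sketch and assume the host has libvirt access to qemu:///system and that the profile is still up:

	    # DHCP leases on the network named in the log; empty output is what keeps the retry loop going
	    virsh --connect qemu:///system net-dhcp-leases mk-kindnet-543581

	    # The same CRI queries the CRI-O log below is answering, issued via crictl inside the guest
	    minikube ssh -p default-k8s-diff-port-828868 "sudo crictl version"
	    minikube ssh -p default-k8s-diff-port-828868 "sudo crictl ps -a"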
	
	
	==> CRI-O <==
	Sep 18 21:26:30 default-k8s-diff-port-828868 crio[712]: time="2024-09-18 21:26:30.308740205Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726694790308715301,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8afba1c9-d0eb-46b6-87c6-ed1cfdba5273 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 21:26:30 default-k8s-diff-port-828868 crio[712]: time="2024-09-18 21:26:30.309398435Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c5820183-02d8-4c46-b183-67f4bbabff63 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 21:26:30 default-k8s-diff-port-828868 crio[712]: time="2024-09-18 21:26:30.309467733Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c5820183-02d8-4c46-b183-67f4bbabff63 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 21:26:30 default-k8s-diff-port-828868 crio[712]: time="2024-09-18 21:26:30.309686773Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0d0b97e9f72af5a0bd052549df0f2c3ea4d140c9ef1e8b1ccc76c58a461ee37c,PodSandboxId:72eef0ad95d538297fb3c4789da4a985ab3d8807bf345d5e2fd68178031ef278,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726693771060281006,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b4e1077-e23f-4262-b83e-989506798531,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2ac9232270efdf1abd0fec72f0128462216dc1b45d80b9b4c8a4f56a4261da5,PodSandboxId:972edd043ce5eb380ceea6fbe077b618e479c6f3231ef4aab9475bd38bb924c6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726693770591622237,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8gz5v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfcbe99d-9a3d-4da8-a976-58cb9d9bf7ac,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25cde4223682100325c2aeb2b6938730f12051a03dcf6e0748c2a6c85a43436a,PodSandboxId:7023205c094dab04e074a6d2623a1004e56c54d2e075ae154796ac3ae3987b87,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726693770596500921,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-shx5p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 2d6d25ab-9a90-490a-911b-bf396605fa88,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d92ded6c9bd3df5db2c65359fa2fe855e949921242337824b7bdb69266eeefcd,PodSandboxId:b4fa714267183c158aee6efcab88236b7c0eb21f45034638f59bc4a6463c4aca,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING
,CreatedAt:1726693769792140827,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hf5mm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3fb0a166-a925-4486-9695-6db05ae704b8,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7acfe06c0ec767f7534be911cae0c5ac2ad444bc6e8fd03889aed6d2739d0fc8,PodSandboxId:54cd361ac0647b81bc37e551cbfef36f826f2868d111e67f6c82fe039925e9d2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:172669375897931982
7,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-828868,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04e4144c885b49a7b621930470176aaa,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74b07ca92709a7cfcc8dc0aa3398a6cf6f8e91026dbe1f565981b6fe699b75f0,PodSandboxId:f2e650fe9f8b28118c175e9708adadc8e8933663f2617a1ea8ba42b4693dd884,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:17266937589
84442846,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-828868,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a43ac7bdaae7ab7a4bac05d31bb8c2d3,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f727b8fd80c86645115bb17a6c04d85dd1ea62b00858983c116acf36270ea847,PodSandboxId:653b77cd43b7fe8a6d8c314791a5374edcbc889446d46788069772c965b7b4ca,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,Creat
edAt:1726693758899147724,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-828868,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40d24393f8fb0592258907503c39ea41,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0b240f30eafde9982d8ce7201787f2c29c207a40d9ce60bf72ff9d7eb9afb55,PodSandboxId:b349872526b54819d3acc551db5561db1345f1208b9cdd7544e8ab7b29e8bc55,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726693
758906888361,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-828868,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a793cbd41f245f3396e69fa0dc64ce00,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63709198b2a1bec977af0e16d4f231ed7e5caf103e797daf118541e61d8b03f3,PodSandboxId:6835a506addfd99bea4ffb2e34f01bf1a0bb7009e61282906162de7cb128ade5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726693477357010195,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-828868,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40d24393f8fb0592258907503c39ea41,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c5820183-02d8-4c46-b183-67f4bbabff63 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 21:26:30 default-k8s-diff-port-828868 crio[712]: time="2024-09-18 21:26:30.345782466Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dfc5c5c3-5938-4413-a142-5444580cfcd7 name=/runtime.v1.RuntimeService/Version
	Sep 18 21:26:30 default-k8s-diff-port-828868 crio[712]: time="2024-09-18 21:26:30.345855306Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dfc5c5c3-5938-4413-a142-5444580cfcd7 name=/runtime.v1.RuntimeService/Version
	Sep 18 21:26:30 default-k8s-diff-port-828868 crio[712]: time="2024-09-18 21:26:30.346854025Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=17277014-7930-43b8-997d-3edf0bd623d7 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 21:26:30 default-k8s-diff-port-828868 crio[712]: time="2024-09-18 21:26:30.347496435Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726694790347465178,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=17277014-7930-43b8-997d-3edf0bd623d7 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 21:26:30 default-k8s-diff-port-828868 crio[712]: time="2024-09-18 21:26:30.348006420Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2daf6d8a-c8fc-4e7e-90ee-9813f5b7ca04 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 21:26:30 default-k8s-diff-port-828868 crio[712]: time="2024-09-18 21:26:30.348074753Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2daf6d8a-c8fc-4e7e-90ee-9813f5b7ca04 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 21:26:30 default-k8s-diff-port-828868 crio[712]: time="2024-09-18 21:26:30.348341600Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0d0b97e9f72af5a0bd052549df0f2c3ea4d140c9ef1e8b1ccc76c58a461ee37c,PodSandboxId:72eef0ad95d538297fb3c4789da4a985ab3d8807bf345d5e2fd68178031ef278,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726693771060281006,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b4e1077-e23f-4262-b83e-989506798531,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2ac9232270efdf1abd0fec72f0128462216dc1b45d80b9b4c8a4f56a4261da5,PodSandboxId:972edd043ce5eb380ceea6fbe077b618e479c6f3231ef4aab9475bd38bb924c6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726693770591622237,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8gz5v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfcbe99d-9a3d-4da8-a976-58cb9d9bf7ac,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25cde4223682100325c2aeb2b6938730f12051a03dcf6e0748c2a6c85a43436a,PodSandboxId:7023205c094dab04e074a6d2623a1004e56c54d2e075ae154796ac3ae3987b87,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726693770596500921,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-shx5p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 2d6d25ab-9a90-490a-911b-bf396605fa88,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d92ded6c9bd3df5db2c65359fa2fe855e949921242337824b7bdb69266eeefcd,PodSandboxId:b4fa714267183c158aee6efcab88236b7c0eb21f45034638f59bc4a6463c4aca,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING
,CreatedAt:1726693769792140827,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hf5mm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3fb0a166-a925-4486-9695-6db05ae704b8,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7acfe06c0ec767f7534be911cae0c5ac2ad444bc6e8fd03889aed6d2739d0fc8,PodSandboxId:54cd361ac0647b81bc37e551cbfef36f826f2868d111e67f6c82fe039925e9d2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:172669375897931982
7,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-828868,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04e4144c885b49a7b621930470176aaa,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74b07ca92709a7cfcc8dc0aa3398a6cf6f8e91026dbe1f565981b6fe699b75f0,PodSandboxId:f2e650fe9f8b28118c175e9708adadc8e8933663f2617a1ea8ba42b4693dd884,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:17266937589
84442846,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-828868,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a43ac7bdaae7ab7a4bac05d31bb8c2d3,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f727b8fd80c86645115bb17a6c04d85dd1ea62b00858983c116acf36270ea847,PodSandboxId:653b77cd43b7fe8a6d8c314791a5374edcbc889446d46788069772c965b7b4ca,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,Creat
edAt:1726693758899147724,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-828868,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40d24393f8fb0592258907503c39ea41,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0b240f30eafde9982d8ce7201787f2c29c207a40d9ce60bf72ff9d7eb9afb55,PodSandboxId:b349872526b54819d3acc551db5561db1345f1208b9cdd7544e8ab7b29e8bc55,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726693
758906888361,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-828868,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a793cbd41f245f3396e69fa0dc64ce00,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63709198b2a1bec977af0e16d4f231ed7e5caf103e797daf118541e61d8b03f3,PodSandboxId:6835a506addfd99bea4ffb2e34f01bf1a0bb7009e61282906162de7cb128ade5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726693477357010195,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-828868,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40d24393f8fb0592258907503c39ea41,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2daf6d8a-c8fc-4e7e-90ee-9813f5b7ca04 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 21:26:30 default-k8s-diff-port-828868 crio[712]: time="2024-09-18 21:26:30.383849187Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dd5c76a1-b0ab-48b0-b2b4-77e4bac8164f name=/runtime.v1.RuntimeService/Version
	Sep 18 21:26:30 default-k8s-diff-port-828868 crio[712]: time="2024-09-18 21:26:30.383924359Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dd5c76a1-b0ab-48b0-b2b4-77e4bac8164f name=/runtime.v1.RuntimeService/Version
	Sep 18 21:26:30 default-k8s-diff-port-828868 crio[712]: time="2024-09-18 21:26:30.384701179Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=454cb048-d4ee-431e-a74a-7ee1642d083e name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 21:26:30 default-k8s-diff-port-828868 crio[712]: time="2024-09-18 21:26:30.385104333Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726694790385081022,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=454cb048-d4ee-431e-a74a-7ee1642d083e name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 21:26:30 default-k8s-diff-port-828868 crio[712]: time="2024-09-18 21:26:30.385600994Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7f59a283-e538-4bda-a36b-b5c80ed967ff name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 21:26:30 default-k8s-diff-port-828868 crio[712]: time="2024-09-18 21:26:30.385660957Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7f59a283-e538-4bda-a36b-b5c80ed967ff name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 21:26:30 default-k8s-diff-port-828868 crio[712]: time="2024-09-18 21:26:30.385850180Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0d0b97e9f72af5a0bd052549df0f2c3ea4d140c9ef1e8b1ccc76c58a461ee37c,PodSandboxId:72eef0ad95d538297fb3c4789da4a985ab3d8807bf345d5e2fd68178031ef278,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726693771060281006,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b4e1077-e23f-4262-b83e-989506798531,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2ac9232270efdf1abd0fec72f0128462216dc1b45d80b9b4c8a4f56a4261da5,PodSandboxId:972edd043ce5eb380ceea6fbe077b618e479c6f3231ef4aab9475bd38bb924c6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726693770591622237,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8gz5v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfcbe99d-9a3d-4da8-a976-58cb9d9bf7ac,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25cde4223682100325c2aeb2b6938730f12051a03dcf6e0748c2a6c85a43436a,PodSandboxId:7023205c094dab04e074a6d2623a1004e56c54d2e075ae154796ac3ae3987b87,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726693770596500921,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-shx5p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 2d6d25ab-9a90-490a-911b-bf396605fa88,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d92ded6c9bd3df5db2c65359fa2fe855e949921242337824b7bdb69266eeefcd,PodSandboxId:b4fa714267183c158aee6efcab88236b7c0eb21f45034638f59bc4a6463c4aca,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING
,CreatedAt:1726693769792140827,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hf5mm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3fb0a166-a925-4486-9695-6db05ae704b8,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7acfe06c0ec767f7534be911cae0c5ac2ad444bc6e8fd03889aed6d2739d0fc8,PodSandboxId:54cd361ac0647b81bc37e551cbfef36f826f2868d111e67f6c82fe039925e9d2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:172669375897931982
7,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-828868,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04e4144c885b49a7b621930470176aaa,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74b07ca92709a7cfcc8dc0aa3398a6cf6f8e91026dbe1f565981b6fe699b75f0,PodSandboxId:f2e650fe9f8b28118c175e9708adadc8e8933663f2617a1ea8ba42b4693dd884,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:17266937589
84442846,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-828868,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a43ac7bdaae7ab7a4bac05d31bb8c2d3,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f727b8fd80c86645115bb17a6c04d85dd1ea62b00858983c116acf36270ea847,PodSandboxId:653b77cd43b7fe8a6d8c314791a5374edcbc889446d46788069772c965b7b4ca,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,Creat
edAt:1726693758899147724,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-828868,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40d24393f8fb0592258907503c39ea41,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0b240f30eafde9982d8ce7201787f2c29c207a40d9ce60bf72ff9d7eb9afb55,PodSandboxId:b349872526b54819d3acc551db5561db1345f1208b9cdd7544e8ab7b29e8bc55,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726693
758906888361,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-828868,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a793cbd41f245f3396e69fa0dc64ce00,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63709198b2a1bec977af0e16d4f231ed7e5caf103e797daf118541e61d8b03f3,PodSandboxId:6835a506addfd99bea4ffb2e34f01bf1a0bb7009e61282906162de7cb128ade5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726693477357010195,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-828868,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40d24393f8fb0592258907503c39ea41,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7f59a283-e538-4bda-a36b-b5c80ed967ff name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 21:26:30 default-k8s-diff-port-828868 crio[712]: time="2024-09-18 21:26:30.416531792Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c3a0a692-6d8b-432a-9600-0be2fc09e7d1 name=/runtime.v1.RuntimeService/Version
	Sep 18 21:26:30 default-k8s-diff-port-828868 crio[712]: time="2024-09-18 21:26:30.416608848Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c3a0a692-6d8b-432a-9600-0be2fc09e7d1 name=/runtime.v1.RuntimeService/Version
	Sep 18 21:26:30 default-k8s-diff-port-828868 crio[712]: time="2024-09-18 21:26:30.417712969Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7c0ef285-7af5-401d-9acf-159da95ebbdb name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 21:26:30 default-k8s-diff-port-828868 crio[712]: time="2024-09-18 21:26:30.418346154Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726694790418321641,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7c0ef285-7af5-401d-9acf-159da95ebbdb name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 21:26:30 default-k8s-diff-port-828868 crio[712]: time="2024-09-18 21:26:30.418956455Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=949db6d0-56ea-40b4-aa58-1ef401676c7b name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 21:26:30 default-k8s-diff-port-828868 crio[712]: time="2024-09-18 21:26:30.419007742Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=949db6d0-56ea-40b4-aa58-1ef401676c7b name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 21:26:30 default-k8s-diff-port-828868 crio[712]: time="2024-09-18 21:26:30.419279014Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0d0b97e9f72af5a0bd052549df0f2c3ea4d140c9ef1e8b1ccc76c58a461ee37c,PodSandboxId:72eef0ad95d538297fb3c4789da4a985ab3d8807bf345d5e2fd68178031ef278,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726693771060281006,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b4e1077-e23f-4262-b83e-989506798531,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2ac9232270efdf1abd0fec72f0128462216dc1b45d80b9b4c8a4f56a4261da5,PodSandboxId:972edd043ce5eb380ceea6fbe077b618e479c6f3231ef4aab9475bd38bb924c6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726693770591622237,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8gz5v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfcbe99d-9a3d-4da8-a976-58cb9d9bf7ac,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25cde4223682100325c2aeb2b6938730f12051a03dcf6e0748c2a6c85a43436a,PodSandboxId:7023205c094dab04e074a6d2623a1004e56c54d2e075ae154796ac3ae3987b87,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726693770596500921,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-shx5p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 2d6d25ab-9a90-490a-911b-bf396605fa88,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d92ded6c9bd3df5db2c65359fa2fe855e949921242337824b7bdb69266eeefcd,PodSandboxId:b4fa714267183c158aee6efcab88236b7c0eb21f45034638f59bc4a6463c4aca,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING
,CreatedAt:1726693769792140827,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hf5mm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3fb0a166-a925-4486-9695-6db05ae704b8,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7acfe06c0ec767f7534be911cae0c5ac2ad444bc6e8fd03889aed6d2739d0fc8,PodSandboxId:54cd361ac0647b81bc37e551cbfef36f826f2868d111e67f6c82fe039925e9d2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:172669375897931982
7,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-828868,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04e4144c885b49a7b621930470176aaa,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74b07ca92709a7cfcc8dc0aa3398a6cf6f8e91026dbe1f565981b6fe699b75f0,PodSandboxId:f2e650fe9f8b28118c175e9708adadc8e8933663f2617a1ea8ba42b4693dd884,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:17266937589
84442846,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-828868,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a43ac7bdaae7ab7a4bac05d31bb8c2d3,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f727b8fd80c86645115bb17a6c04d85dd1ea62b00858983c116acf36270ea847,PodSandboxId:653b77cd43b7fe8a6d8c314791a5374edcbc889446d46788069772c965b7b4ca,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,Creat
edAt:1726693758899147724,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-828868,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40d24393f8fb0592258907503c39ea41,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0b240f30eafde9982d8ce7201787f2c29c207a40d9ce60bf72ff9d7eb9afb55,PodSandboxId:b349872526b54819d3acc551db5561db1345f1208b9cdd7544e8ab7b29e8bc55,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726693
758906888361,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-828868,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a793cbd41f245f3396e69fa0dc64ce00,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63709198b2a1bec977af0e16d4f231ed7e5caf103e797daf118541e61d8b03f3,PodSandboxId:6835a506addfd99bea4ffb2e34f01bf1a0bb7009e61282906162de7cb128ade5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726693477357010195,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-828868,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40d24393f8fb0592258907503c39ea41,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=949db6d0-56ea-40b4-aa58-1ef401676c7b name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	0d0b97e9f72af       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   16 minutes ago      Running             storage-provisioner       0                   72eef0ad95d53       storage-provisioner
	25cde42236821       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   16 minutes ago      Running             coredns                   0                   7023205c094da       coredns-7c65d6cfc9-shx5p
	e2ac9232270ef       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   16 minutes ago      Running             coredns                   0                   972edd043ce5e       coredns-7c65d6cfc9-8gz5v
	d92ded6c9bd3d       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   17 minutes ago      Running             kube-proxy                0                   b4fa714267183       kube-proxy-hf5mm
	74b07ca92709a       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   17 minutes ago      Running             kube-controller-manager   2                   f2e650fe9f8b2       kube-controller-manager-default-k8s-diff-port-828868
	7acfe06c0ec76       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   17 minutes ago      Running             kube-scheduler            2                   54cd361ac0647       kube-scheduler-default-k8s-diff-port-828868
	c0b240f30eafd       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   17 minutes ago      Running             etcd                      2                   b349872526b54       etcd-default-k8s-diff-port-828868
	f727b8fd80c86       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   17 minutes ago      Running             kube-apiserver            2                   653b77cd43b7f       kube-apiserver-default-k8s-diff-port-828868
	63709198b2a1b       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   21 minutes ago      Exited              kube-apiserver            1                   6835a506addfd       kube-apiserver-default-k8s-diff-port-828868
	
	
	==> coredns [25cde4223682100325c2aeb2b6938730f12051a03dcf6e0748c2a6c85a43436a] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [e2ac9232270efdf1abd0fec72f0128462216dc1b45d80b9b4c8a4f56a4261da5] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-828868
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-828868
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=85073601a832bd4bbda5d11fa91feafff6ec6b91
	                    minikube.k8s.io/name=default-k8s-diff-port-828868
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_18T21_09_24_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 18 Sep 2024 21:09:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-828868
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 18 Sep 2024 21:26:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 18 Sep 2024 21:24:52 +0000   Wed, 18 Sep 2024 21:09:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 18 Sep 2024 21:24:52 +0000   Wed, 18 Sep 2024 21:09:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 18 Sep 2024 21:24:52 +0000   Wed, 18 Sep 2024 21:09:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 18 Sep 2024 21:24:52 +0000   Wed, 18 Sep 2024 21:09:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.109
	  Hostname:    default-k8s-diff-port-828868
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 fecab61126ae4306b71ea4ef8286345b
	  System UUID:                fecab611-26ae-4306-b71e-a4ef8286345b
	  Boot ID:                    183f01c1-2271-4ea7-bca0-7c5ddeafec3c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-8gz5v                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     17m
	  kube-system                 coredns-7c65d6cfc9-shx5p                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     17m
	  kube-system                 etcd-default-k8s-diff-port-828868                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         17m
	  kube-system                 kube-apiserver-default-k8s-diff-port-828868             250m (12%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-828868    200m (10%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-proxy-hf5mm                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-scheduler-default-k8s-diff-port-828868             100m (5%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 metrics-server-6867b74b74-hdt52                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         17m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 17m                kube-proxy       
	  Normal  Starting                 17m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  17m (x8 over 17m)  kubelet          Node default-k8s-diff-port-828868 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    17m (x8 over 17m)  kubelet          Node default-k8s-diff-port-828868 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     17m (x7 over 17m)  kubelet          Node default-k8s-diff-port-828868 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  17m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 17m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  17m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  17m                kubelet          Node default-k8s-diff-port-828868 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    17m                kubelet          Node default-k8s-diff-port-828868 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     17m                kubelet          Node default-k8s-diff-port-828868 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           17m                node-controller  Node default-k8s-diff-port-828868 event: Registered Node default-k8s-diff-port-828868 in Controller
	
	
	==> dmesg <==
	[  +0.051364] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037672] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.769921] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.843305] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.528209] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.094296] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.059733] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.057010] systemd-fstab-generator[647]: Ignoring "noauto" option for root device
	[  +0.171312] systemd-fstab-generator[661]: Ignoring "noauto" option for root device
	[  +0.149503] systemd-fstab-generator[673]: Ignoring "noauto" option for root device
	[  +0.295214] systemd-fstab-generator[702]: Ignoring "noauto" option for root device
	[  +4.085393] systemd-fstab-generator[795]: Ignoring "noauto" option for root device
	[  +1.889464] systemd-fstab-generator[915]: Ignoring "noauto" option for root device
	[  +0.071826] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.600889] kauditd_printk_skb: 69 callbacks suppressed
	[  +7.850084] kauditd_printk_skb: 85 callbacks suppressed
	[Sep18 21:09] kauditd_printk_skb: 7 callbacks suppressed
	[  +1.309185] systemd-fstab-generator[2588]: Ignoring "noauto" option for root device
	[  +5.033554] kauditd_printk_skb: 54 callbacks suppressed
	[  +1.523447] systemd-fstab-generator[2912]: Ignoring "noauto" option for root device
	[  +5.384123] systemd-fstab-generator[3042]: Ignoring "noauto" option for root device
	[  +0.114623] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.273292] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [c0b240f30eafde9982d8ce7201787f2c29c207a40d9ce60bf72ff9d7eb9afb55] <==
	{"level":"info","ts":"2024-09-18T21:09:19.772874Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-18T21:09:19.781324Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-18T21:09:19.782575Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-18T21:09:19.783422Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-18T21:09:19.786056Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-18T21:09:19.790931Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.109:2379"}
	{"level":"info","ts":"2024-09-18T21:09:19.796243Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-18T21:09:19.796349Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-18T21:09:19.798329Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"d0e6cadbc325cfac","local-member-id":"46a65bd61cd538c0","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-18T21:09:19.798452Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-18T21:09:19.798506Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-18T21:19:19.938529Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":723}
	{"level":"info","ts":"2024-09-18T21:19:19.947573Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":723,"took":"8.508321ms","hash":4092190148,"current-db-size-bytes":2236416,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":2236416,"current-db-size-in-use":"2.2 MB"}
	{"level":"info","ts":"2024-09-18T21:19:19.947712Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4092190148,"revision":723,"compact-revision":-1}
	{"level":"info","ts":"2024-09-18T21:24:19.946527Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":966}
	{"level":"info","ts":"2024-09-18T21:24:19.950654Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":966,"took":"3.709912ms","hash":2522330644,"current-db-size-bytes":2236416,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":1548288,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2024-09-18T21:24:19.950714Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2522330644,"revision":966,"compact-revision":723}
	{"level":"warn","ts":"2024-09-18T21:24:58.823796Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"192.376288ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-18T21:24:58.824247Z","caller":"traceutil/trace.go:171","msg":"trace[369877692] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1243; }","duration":"192.863319ms","start":"2024-09-18T21:24:58.631347Z","end":"2024-09-18T21:24:58.824210Z","steps":["trace[369877692] 'range keys from in-memory index tree'  (duration: 192.362657ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-18T21:24:58.824027Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"262.080809ms","expected-duration":"100ms","prefix":"","request":"header:<ID:4089429020278050819 > lease_revoke:<id:38c09206f7d8cfa6>","response":"size:28"}
	{"level":"info","ts":"2024-09-18T21:24:58.824492Z","caller":"traceutil/trace.go:171","msg":"trace[525311430] linearizableReadLoop","detail":"{readStateIndex:1447; appliedIndex:1446; }","duration":"280.894275ms","start":"2024-09-18T21:24:58.543587Z","end":"2024-09-18T21:24:58.824481Z","steps":["trace[525311430] 'read index received'  (duration: 18.124545ms)","trace[525311430] 'applied index is now lower than readState.Index'  (duration: 262.76865ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-18T21:24:58.824623Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"281.027436ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-18T21:24:58.824659Z","caller":"traceutil/trace.go:171","msg":"trace[624371352] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1243; }","duration":"281.083813ms","start":"2024-09-18T21:24:58.543569Z","end":"2024-09-18T21:24:58.824653Z","steps":["trace[624371352] 'agreement among raft nodes before linearized reading'  (duration: 281.003871ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-18T21:25:40.766525Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"134.812722ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-18T21:25:40.766637Z","caller":"traceutil/trace.go:171","msg":"trace[2001130741] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1276; }","duration":"134.943666ms","start":"2024-09-18T21:25:40.631679Z","end":"2024-09-18T21:25:40.766622Z","steps":["trace[2001130741] 'range keys from in-memory index tree'  (duration: 134.797582ms)"],"step_count":1}
	
	
	==> kernel <==
	 21:26:30 up 22 min,  0 users,  load average: 0.21, 0.20, 0.18
	Linux default-k8s-diff-port-828868 5.10.207 #1 SMP Mon Sep 16 15:00:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [63709198b2a1bec977af0e16d4f231ed7e5caf103e797daf118541e61d8b03f3] <==
	W0918 21:09:14.758386       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0918 21:09:14.769873       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0918 21:09:14.787455       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0918 21:09:14.815093       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0918 21:09:14.832344       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0918 21:09:14.851696       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0918 21:09:14.887491       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0918 21:09:14.888925       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0918 21:09:14.904028       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0918 21:09:14.905114       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0918 21:09:15.019554       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0918 21:09:15.031999       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0918 21:09:15.144876       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0918 21:09:15.170847       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0918 21:09:15.235722       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0918 21:09:15.328707       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0918 21:09:15.446491       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0918 21:09:15.464481       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0918 21:09:15.480365       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0918 21:09:15.531666       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0918 21:09:15.557053       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0918 21:09:15.577907       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0918 21:09:15.633495       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0918 21:09:15.728705       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0918 21:09:15.748949       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [f727b8fd80c86645115bb17a6c04d85dd1ea62b00858983c116acf36270ea847] <==
	I0918 21:22:22.547623       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0918 21:22:22.547673       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0918 21:24:21.546145       1 handler_proxy.go:99] no RequestInfo found in the context
	E0918 21:24:21.546475       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0918 21:24:22.548723       1 handler_proxy.go:99] no RequestInfo found in the context
	E0918 21:24:22.548835       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0918 21:24:22.548752       1 handler_proxy.go:99] no RequestInfo found in the context
	E0918 21:24:22.548937       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0918 21:24:22.549923       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0918 21:24:22.549980       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0918 21:25:22.550969       1 handler_proxy.go:99] no RequestInfo found in the context
	E0918 21:25:22.551325       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0918 21:25:22.550973       1 handler_proxy.go:99] no RequestInfo found in the context
	E0918 21:25:22.551529       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0918 21:25:22.552524       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0918 21:25:22.552636       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [74b07ca92709a7cfcc8dc0aa3398a6cf6f8e91026dbe1f565981b6fe699b75f0] <==
	E0918 21:21:28.669060       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0918 21:21:29.116227       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0918 21:21:58.674883       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0918 21:21:59.124290       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0918 21:22:28.682250       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0918 21:22:29.134213       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0918 21:22:58.689507       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0918 21:22:59.144027       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0918 21:23:28.696071       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0918 21:23:29.153483       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0918 21:23:58.702361       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0918 21:23:59.162007       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0918 21:24:28.710962       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0918 21:24:29.172087       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0918 21:24:52.553529       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-828868"
	E0918 21:24:58.717512       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0918 21:24:59.180914       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0918 21:25:28.723378       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0918 21:25:29.189784       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0918 21:25:58.130331       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="440.258µs"
	E0918 21:25:58.730979       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0918 21:25:59.199037       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0918 21:26:11.124349       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="71.748µs"
	E0918 21:26:28.738221       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0918 21:26:29.206573       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [d92ded6c9bd3df5db2c65359fa2fe855e949921242337824b7bdb69266eeefcd] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0918 21:09:30.248981       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0918 21:09:30.258900       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.109"]
	E0918 21:09:30.259002       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0918 21:09:30.360943       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0918 21:09:30.361036       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0918 21:09:30.361077       1 server_linux.go:169] "Using iptables Proxier"
	I0918 21:09:30.364320       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0918 21:09:30.364730       1 server.go:483] "Version info" version="v1.31.1"
	I0918 21:09:30.364755       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0918 21:09:30.368663       1 config.go:199] "Starting service config controller"
	I0918 21:09:30.368802       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0918 21:09:30.368832       1 config.go:105] "Starting endpoint slice config controller"
	I0918 21:09:30.368848       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0918 21:09:30.371322       1 config.go:328] "Starting node config controller"
	I0918 21:09:30.371336       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0918 21:09:30.472561       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0918 21:09:30.475752       1 shared_informer.go:320] Caches are synced for node config
	I0918 21:09:30.475823       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [7acfe06c0ec767f7534be911cae0c5ac2ad444bc6e8fd03889aed6d2739d0fc8] <==
	W0918 21:09:21.606798       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0918 21:09:21.606830       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0918 21:09:21.606893       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0918 21:09:21.606917       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0918 21:09:21.606973       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0918 21:09:21.606996       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0918 21:09:21.607748       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0918 21:09:21.607864       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0918 21:09:22.554481       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0918 21:09:22.554559       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0918 21:09:22.598071       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0918 21:09:22.598223       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0918 21:09:22.653296       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0918 21:09:22.653424       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0918 21:09:22.754523       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0918 21:09:22.754630       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0918 21:09:22.811701       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0918 21:09:22.811775       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0918 21:09:22.812841       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0918 21:09:22.812968       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0918 21:09:22.908487       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0918 21:09:22.908663       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0918 21:09:23.008646       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0918 21:09:23.009074       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0918 21:09:24.797417       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 18 21:25:31 default-k8s-diff-port-828868 kubelet[2919]: E0918 21:25:31.107003    2919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-hdt52" podUID="bf3aa7d0-a121-4a2c-92dd-2c79c6cbaa4a"
	Sep 18 21:25:34 default-k8s-diff-port-828868 kubelet[2919]: E0918 21:25:34.376724    2919 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726694734375856343,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 21:25:34 default-k8s-diff-port-828868 kubelet[2919]: E0918 21:25:34.376823    2919 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726694734375856343,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 21:25:44 default-k8s-diff-port-828868 kubelet[2919]: E0918 21:25:44.133109    2919 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Sep 18 21:25:44 default-k8s-diff-port-828868 kubelet[2919]: E0918 21:25:44.133625    2919 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Sep 18 21:25:44 default-k8s-diff-port-828868 kubelet[2919]: E0918 21:25:44.133959    2919 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zfscs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountP
ropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile
:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-6867b74b74-hdt52_kube-system(bf3aa7d0-a121-4a2c-92dd-2c79c6cbaa4a): ErrImagePull: pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" logger="UnhandledError"
	Sep 18 21:25:44 default-k8s-diff-port-828868 kubelet[2919]: E0918 21:25:44.135299    2919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-6867b74b74-hdt52" podUID="bf3aa7d0-a121-4a2c-92dd-2c79c6cbaa4a"
	Sep 18 21:25:44 default-k8s-diff-port-828868 kubelet[2919]: E0918 21:25:44.378110    2919 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726694744377695070,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 21:25:44 default-k8s-diff-port-828868 kubelet[2919]: E0918 21:25:44.378197    2919 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726694744377695070,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 21:25:54 default-k8s-diff-port-828868 kubelet[2919]: E0918 21:25:54.379932    2919 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726694754379634729,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 21:25:54 default-k8s-diff-port-828868 kubelet[2919]: E0918 21:25:54.380001    2919 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726694754379634729,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 21:25:58 default-k8s-diff-port-828868 kubelet[2919]: E0918 21:25:58.108046    2919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-hdt52" podUID="bf3aa7d0-a121-4a2c-92dd-2c79c6cbaa4a"
	Sep 18 21:26:04 default-k8s-diff-port-828868 kubelet[2919]: E0918 21:26:04.381133    2919 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726694764380926059,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 21:26:04 default-k8s-diff-port-828868 kubelet[2919]: E0918 21:26:04.381224    2919 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726694764380926059,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 21:26:11 default-k8s-diff-port-828868 kubelet[2919]: E0918 21:26:11.107805    2919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-hdt52" podUID="bf3aa7d0-a121-4a2c-92dd-2c79c6cbaa4a"
	Sep 18 21:26:14 default-k8s-diff-port-828868 kubelet[2919]: E0918 21:26:14.382849    2919 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726694774382578251,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 21:26:14 default-k8s-diff-port-828868 kubelet[2919]: E0918 21:26:14.382889    2919 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726694774382578251,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 21:26:24 default-k8s-diff-port-828868 kubelet[2919]: E0918 21:26:24.149037    2919 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 18 21:26:24 default-k8s-diff-port-828868 kubelet[2919]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 18 21:26:24 default-k8s-diff-port-828868 kubelet[2919]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 18 21:26:24 default-k8s-diff-port-828868 kubelet[2919]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 18 21:26:24 default-k8s-diff-port-828868 kubelet[2919]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 18 21:26:24 default-k8s-diff-port-828868 kubelet[2919]: E0918 21:26:24.384360    2919 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726694784383935137,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 21:26:24 default-k8s-diff-port-828868 kubelet[2919]: E0918 21:26:24.384399    2919 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726694784383935137,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 21:26:25 default-k8s-diff-port-828868 kubelet[2919]: E0918 21:26:25.108185    2919 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-hdt52" podUID="bf3aa7d0-a121-4a2c-92dd-2c79c6cbaa4a"
	
	
	==> storage-provisioner [0d0b97e9f72af5a0bd052549df0f2c3ea4d140c9ef1e8b1ccc76c58a461ee37c] <==
	I0918 21:09:31.173059       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0918 21:09:31.186322       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0918 21:09:31.186718       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0918 21:09:31.203351       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0918 21:09:31.203606       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-828868_0e992c30-dd4d-4bb2-9c51-0fe02b0d69ee!
	I0918 21:09:31.205746       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d95df135-fbfe-405c-94e8-b9c522473029", APIVersion:"v1", ResourceVersion:"430", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-828868_0e992c30-dd4d-4bb2-9c51-0fe02b0d69ee became leader
	I0918 21:09:31.306831       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-828868_0e992c30-dd4d-4bb2-9c51-0fe02b0d69ee!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-828868 -n default-k8s-diff-port-828868
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-828868 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-hdt52
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-828868 describe pod metrics-server-6867b74b74-hdt52
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-828868 describe pod metrics-server-6867b74b74-hdt52: exit status 1 (58.921252ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-hdt52" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-828868 describe pod metrics-server-6867b74b74-hdt52: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (467.25s)
E0918 21:28:50.417873   14878 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/default-k8s-diff-port-828868/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:28:59.739388   14878 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/old-k8s-version-740194/client.crt: no such file or directory" logger="UnhandledError"

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (416.94s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-255556 -n embed-certs-255556
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-09-18 21:26:04.087561236 +0000 UTC m=+6486.857902478
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-255556 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-255556 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.799µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-255556 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-255556 -n embed-certs-255556
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-255556 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-255556 logs -n 25: (1.266587268s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p                                                     | default-k8s-diff-port-828868 | jenkins | v1.34.0 | 18 Sep 24 20:56 UTC | 18 Sep 24 20:57 UTC |
	|         | default-k8s-diff-port-828868                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-331658             | no-preload-331658            | jenkins | v1.34.0 | 18 Sep 24 20:56 UTC | 18 Sep 24 20:57 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-331658                                   | no-preload-331658            | jenkins | v1.34.0 | 18 Sep 24 20:57 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-828868  | default-k8s-diff-port-828868 | jenkins | v1.34.0 | 18 Sep 24 20:57 UTC | 18 Sep 24 20:57 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-828868 | jenkins | v1.34.0 | 18 Sep 24 20:57 UTC |                     |
	|         | default-k8s-diff-port-828868                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-255556            | embed-certs-255556           | jenkins | v1.34.0 | 18 Sep 24 20:57 UTC | 18 Sep 24 20:57 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-255556                                  | embed-certs-255556           | jenkins | v1.34.0 | 18 Sep 24 20:57 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-740194        | old-k8s-version-740194       | jenkins | v1.34.0 | 18 Sep 24 20:59 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-331658                  | no-preload-331658            | jenkins | v1.34.0 | 18 Sep 24 20:59 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-331658                                   | no-preload-331658            | jenkins | v1.34.0 | 18 Sep 24 20:59 UTC | 18 Sep 24 21:10 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-828868       | default-k8s-diff-port-828868 | jenkins | v1.34.0 | 18 Sep 24 21:00 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-255556                 | embed-certs-255556           | jenkins | v1.34.0 | 18 Sep 24 21:00 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-828868 | jenkins | v1.34.0 | 18 Sep 24 21:00 UTC | 18 Sep 24 21:09 UTC |
	|         | default-k8s-diff-port-828868                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-255556                                  | embed-certs-255556           | jenkins | v1.34.0 | 18 Sep 24 21:00 UTC | 18 Sep 24 21:10 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-740194                              | old-k8s-version-740194       | jenkins | v1.34.0 | 18 Sep 24 21:00 UTC | 18 Sep 24 21:00 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-740194             | old-k8s-version-740194       | jenkins | v1.34.0 | 18 Sep 24 21:00 UTC | 18 Sep 24 21:00 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-740194                              | old-k8s-version-740194       | jenkins | v1.34.0 | 18 Sep 24 21:00 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-740194                              | old-k8s-version-740194       | jenkins | v1.34.0 | 18 Sep 24 21:24 UTC | 18 Sep 24 21:24 UTC |
	| start   | -p newest-cni-560575 --memory=2200 --alsologtostderr   | newest-cni-560575            | jenkins | v1.34.0 | 18 Sep 24 21:24 UTC | 18 Sep 24 21:25 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| delete  | -p no-preload-331658                                   | no-preload-331658            | jenkins | v1.34.0 | 18 Sep 24 21:25 UTC | 18 Sep 24 21:25 UTC |
	| start   | -p auto-543581 --memory=3072                           | auto-543581                  | jenkins | v1.34.0 | 18 Sep 24 21:25 UTC |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-560575             | newest-cni-560575            | jenkins | v1.34.0 | 18 Sep 24 21:25 UTC | 18 Sep 24 21:25 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-560575                                   | newest-cni-560575            | jenkins | v1.34.0 | 18 Sep 24 21:25 UTC | 18 Sep 24 21:25 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-560575                  | newest-cni-560575            | jenkins | v1.34.0 | 18 Sep 24 21:25 UTC | 18 Sep 24 21:25 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-560575 --memory=2200 --alsologtostderr   | newest-cni-560575            | jenkins | v1.34.0 | 18 Sep 24 21:25 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/18 21:25:25
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0918 21:25:25.198559   69636 out.go:345] Setting OutFile to fd 1 ...
	I0918 21:25:25.198677   69636 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 21:25:25.198686   69636 out.go:358] Setting ErrFile to fd 2...
	I0918 21:25:25.198691   69636 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 21:25:25.198884   69636 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19667-7671/.minikube/bin
	I0918 21:25:25.199426   69636 out.go:352] Setting JSON to false
	I0918 21:25:25.200370   69636 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7669,"bootTime":1726687056,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0918 21:25:25.200474   69636 start.go:139] virtualization: kvm guest
	I0918 21:25:25.202545   69636 out.go:177] * [newest-cni-560575] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0918 21:25:25.204304   69636 out.go:177]   - MINIKUBE_LOCATION=19667
	I0918 21:25:25.204305   69636 notify.go:220] Checking for updates...
	I0918 21:25:25.206462   69636 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 21:25:25.207677   69636 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19667-7671/kubeconfig
	I0918 21:25:25.208968   69636 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19667-7671/.minikube
	I0918 21:25:25.210049   69636 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0918 21:25:25.211402   69636 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0918 21:25:25.212917   69636 config.go:182] Loaded profile config "newest-cni-560575": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0918 21:25:25.213371   69636 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:25:25.213443   69636 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:25:25.229388   69636 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41547
	I0918 21:25:25.229859   69636 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:25:25.230405   69636 main.go:141] libmachine: Using API Version  1
	I0918 21:25:25.230427   69636 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:25:25.230839   69636 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:25:25.231079   69636 main.go:141] libmachine: (newest-cni-560575) Calling .DriverName
	I0918 21:25:25.231349   69636 driver.go:394] Setting default libvirt URI to qemu:///system
	I0918 21:25:25.231750   69636 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:25:25.231792   69636 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:25:25.247561   69636 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45993
	I0918 21:25:25.248082   69636 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:25:25.248578   69636 main.go:141] libmachine: Using API Version  1
	I0918 21:25:25.248603   69636 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:25:25.248992   69636 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:25:25.249177   69636 main.go:141] libmachine: (newest-cni-560575) Calling .DriverName
	I0918 21:25:25.288216   69636 out.go:177] * Using the kvm2 driver based on existing profile
	I0918 21:25:25.289592   69636 start.go:297] selected driver: kvm2
	I0918 21:25:25.289609   69636 start.go:901] validating driver "kvm2" against &{Name:newest-cni-560575 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:newest-cni-560575 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.106 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] St
artHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 21:25:25.289765   69636 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 21:25:25.290646   69636 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 21:25:25.290731   69636 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19667-7671/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0918 21:25:25.306669   69636 install.go:137] /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I0918 21:25:25.307135   69636 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0918 21:25:25.307181   69636 cni.go:84] Creating CNI manager for ""
	I0918 21:25:25.307235   69636 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0918 21:25:25.307286   69636 start.go:340] cluster config:
	{Name:newest-cni-560575 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-560575 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.106 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network
: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 21:25:25.307395   69636 iso.go:125] acquiring lock: {Name:mkbcf4d84f3091567a39802cd5e773cb10b8c545 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 21:25:25.309682   69636 out.go:177] * Starting "newest-cni-560575" primary control-plane node in "newest-cni-560575" cluster
	I0918 21:25:22.695454   69269 main.go:141] libmachine: (auto-543581) DBG | domain auto-543581 has defined MAC address 52:54:00:35:a4:04 in network mk-auto-543581
	I0918 21:25:22.695979   69269 main.go:141] libmachine: (auto-543581) DBG | unable to find current IP address of domain auto-543581 in network mk-auto-543581
	I0918 21:25:22.696006   69269 main.go:141] libmachine: (auto-543581) DBG | I0918 21:25:22.695925   69292 retry.go:31] will retry after 3.359994696s: waiting for machine to come up
	I0918 21:25:26.057213   69269 main.go:141] libmachine: (auto-543581) DBG | domain auto-543581 has defined MAC address 52:54:00:35:a4:04 in network mk-auto-543581
	I0918 21:25:26.057652   69269 main.go:141] libmachine: (auto-543581) DBG | unable to find current IP address of domain auto-543581 in network mk-auto-543581
	I0918 21:25:26.057673   69269 main.go:141] libmachine: (auto-543581) DBG | I0918 21:25:26.057625   69292 retry.go:31] will retry after 5.247460161s: waiting for machine to come up
	I0918 21:25:25.311264   69636 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0918 21:25:25.311306   69636 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0918 21:25:25.311314   69636 cache.go:56] Caching tarball of preloaded images
	I0918 21:25:25.311403   69636 preload.go:172] Found /home/jenkins/minikube-integration/19667-7671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0918 21:25:25.311418   69636 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0918 21:25:25.311553   69636 profile.go:143] Saving config to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/newest-cni-560575/config.json ...
	I0918 21:25:25.311809   69636 start.go:360] acquireMachinesLock for newest-cni-560575: {Name:mk1b0d68a0b7d9d4bc8204d5d81cb9eb77222526 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 21:25:31.310429   69269 main.go:141] libmachine: (auto-543581) DBG | domain auto-543581 has defined MAC address 52:54:00:35:a4:04 in network mk-auto-543581
	I0918 21:25:31.310984   69269 main.go:141] libmachine: (auto-543581) Found IP for machine: 192.168.61.181
	I0918 21:25:31.311008   69269 main.go:141] libmachine: (auto-543581) DBG | domain auto-543581 has current primary IP address 192.168.61.181 and MAC address 52:54:00:35:a4:04 in network mk-auto-543581
	I0918 21:25:31.311014   69269 main.go:141] libmachine: (auto-543581) Reserving static IP address...
	I0918 21:25:31.311714   69269 main.go:141] libmachine: (auto-543581) DBG | unable to find host DHCP lease matching {name: "auto-543581", mac: "52:54:00:35:a4:04", ip: "192.168.61.181"} in network mk-auto-543581
	I0918 21:25:31.390117   69269 main.go:141] libmachine: (auto-543581) DBG | Getting to WaitForSSH function...
	I0918 21:25:31.390162   69269 main.go:141] libmachine: (auto-543581) Reserved static IP address: 192.168.61.181
	I0918 21:25:31.390175   69269 main.go:141] libmachine: (auto-543581) Waiting for SSH to be available...
	I0918 21:25:31.393144   69269 main.go:141] libmachine: (auto-543581) DBG | domain auto-543581 has defined MAC address 52:54:00:35:a4:04 in network mk-auto-543581
	I0918 21:25:31.393701   69269 main.go:141] libmachine: (auto-543581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a4:04", ip: ""} in network mk-auto-543581: {Iface:virbr2 ExpiryTime:2024-09-18 22:25:21 +0000 UTC Type:0 Mac:52:54:00:35:a4:04 Iaid: IPaddr:192.168.61.181 Prefix:24 Hostname:minikube Clientid:01:52:54:00:35:a4:04}
	I0918 21:25:31.393736   69269 main.go:141] libmachine: (auto-543581) DBG | domain auto-543581 has defined IP address 192.168.61.181 and MAC address 52:54:00:35:a4:04 in network mk-auto-543581
	I0918 21:25:31.393881   69269 main.go:141] libmachine: (auto-543581) DBG | Using SSH client type: external
	I0918 21:25:31.393909   69269 main.go:141] libmachine: (auto-543581) DBG | Using SSH private key: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/auto-543581/id_rsa (-rw-------)
	I0918 21:25:31.393935   69269 main.go:141] libmachine: (auto-543581) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.181 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19667-7671/.minikube/machines/auto-543581/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0918 21:25:31.393946   69269 main.go:141] libmachine: (auto-543581) DBG | About to run SSH command:
	I0918 21:25:31.393958   69269 main.go:141] libmachine: (auto-543581) DBG | exit 0
	I0918 21:25:31.516041   69269 main.go:141] libmachine: (auto-543581) DBG | SSH cmd err, output: <nil>: 
	I0918 21:25:31.516332   69269 main.go:141] libmachine: (auto-543581) KVM machine creation complete!
	I0918 21:25:31.516652   69269 main.go:141] libmachine: (auto-543581) Calling .GetConfigRaw
	I0918 21:25:31.517221   69269 main.go:141] libmachine: (auto-543581) Calling .DriverName
	I0918 21:25:31.517406   69269 main.go:141] libmachine: (auto-543581) Calling .DriverName
	I0918 21:25:31.517578   69269 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0918 21:25:31.517594   69269 main.go:141] libmachine: (auto-543581) Calling .GetState
	I0918 21:25:31.518978   69269 main.go:141] libmachine: Detecting operating system of created instance...
	I0918 21:25:31.518996   69269 main.go:141] libmachine: Waiting for SSH to be available...
	I0918 21:25:31.519002   69269 main.go:141] libmachine: Getting to WaitForSSH function...
	I0918 21:25:31.519010   69269 main.go:141] libmachine: (auto-543581) Calling .GetSSHHostname
	I0918 21:25:31.521634   69269 main.go:141] libmachine: (auto-543581) DBG | domain auto-543581 has defined MAC address 52:54:00:35:a4:04 in network mk-auto-543581
	I0918 21:25:31.521985   69269 main.go:141] libmachine: (auto-543581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a4:04", ip: ""} in network mk-auto-543581: {Iface:virbr2 ExpiryTime:2024-09-18 22:25:21 +0000 UTC Type:0 Mac:52:54:00:35:a4:04 Iaid: IPaddr:192.168.61.181 Prefix:24 Hostname:auto-543581 Clientid:01:52:54:00:35:a4:04}
	I0918 21:25:31.522018   69269 main.go:141] libmachine: (auto-543581) DBG | domain auto-543581 has defined IP address 192.168.61.181 and MAC address 52:54:00:35:a4:04 in network mk-auto-543581
	I0918 21:25:31.522139   69269 main.go:141] libmachine: (auto-543581) Calling .GetSSHPort
	I0918 21:25:31.522309   69269 main.go:141] libmachine: (auto-543581) Calling .GetSSHKeyPath
	I0918 21:25:31.522492   69269 main.go:141] libmachine: (auto-543581) Calling .GetSSHKeyPath
	I0918 21:25:31.522655   69269 main.go:141] libmachine: (auto-543581) Calling .GetSSHUsername
	I0918 21:25:31.522783   69269 main.go:141] libmachine: Using SSH client type: native
	I0918 21:25:31.522990   69269 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.181 22 <nil> <nil>}
	I0918 21:25:31.523005   69269 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0918 21:25:31.619257   69269 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0918 21:25:31.619281   69269 main.go:141] libmachine: Detecting the provisioner...
	I0918 21:25:31.619293   69269 main.go:141] libmachine: (auto-543581) Calling .GetSSHHostname
	I0918 21:25:31.622156   69269 main.go:141] libmachine: (auto-543581) DBG | domain auto-543581 has defined MAC address 52:54:00:35:a4:04 in network mk-auto-543581
	I0918 21:25:31.622523   69269 main.go:141] libmachine: (auto-543581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a4:04", ip: ""} in network mk-auto-543581: {Iface:virbr2 ExpiryTime:2024-09-18 22:25:21 +0000 UTC Type:0 Mac:52:54:00:35:a4:04 Iaid: IPaddr:192.168.61.181 Prefix:24 Hostname:auto-543581 Clientid:01:52:54:00:35:a4:04}
	I0918 21:25:31.622549   69269 main.go:141] libmachine: (auto-543581) DBG | domain auto-543581 has defined IP address 192.168.61.181 and MAC address 52:54:00:35:a4:04 in network mk-auto-543581
	I0918 21:25:31.622702   69269 main.go:141] libmachine: (auto-543581) Calling .GetSSHPort
	I0918 21:25:31.622901   69269 main.go:141] libmachine: (auto-543581) Calling .GetSSHKeyPath
	I0918 21:25:31.623079   69269 main.go:141] libmachine: (auto-543581) Calling .GetSSHKeyPath
	I0918 21:25:31.623224   69269 main.go:141] libmachine: (auto-543581) Calling .GetSSHUsername
	I0918 21:25:31.623388   69269 main.go:141] libmachine: Using SSH client type: native
	I0918 21:25:31.623550   69269 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.181 22 <nil> <nil>}
	I0918 21:25:31.623560   69269 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0918 21:25:31.720615   69269 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0918 21:25:31.720722   69269 main.go:141] libmachine: found compatible host: buildroot
	I0918 21:25:31.720737   69269 main.go:141] libmachine: Provisioning with buildroot...
	I0918 21:25:31.720748   69269 main.go:141] libmachine: (auto-543581) Calling .GetMachineName
	I0918 21:25:31.721028   69269 buildroot.go:166] provisioning hostname "auto-543581"
	I0918 21:25:31.721051   69269 main.go:141] libmachine: (auto-543581) Calling .GetMachineName
	I0918 21:25:31.721255   69269 main.go:141] libmachine: (auto-543581) Calling .GetSSHHostname
	I0918 21:25:31.724073   69269 main.go:141] libmachine: (auto-543581) DBG | domain auto-543581 has defined MAC address 52:54:00:35:a4:04 in network mk-auto-543581
	I0918 21:25:31.724535   69269 main.go:141] libmachine: (auto-543581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a4:04", ip: ""} in network mk-auto-543581: {Iface:virbr2 ExpiryTime:2024-09-18 22:25:21 +0000 UTC Type:0 Mac:52:54:00:35:a4:04 Iaid: IPaddr:192.168.61.181 Prefix:24 Hostname:auto-543581 Clientid:01:52:54:00:35:a4:04}
	I0918 21:25:31.724576   69269 main.go:141] libmachine: (auto-543581) DBG | domain auto-543581 has defined IP address 192.168.61.181 and MAC address 52:54:00:35:a4:04 in network mk-auto-543581
	I0918 21:25:31.724750   69269 main.go:141] libmachine: (auto-543581) Calling .GetSSHPort
	I0918 21:25:31.724948   69269 main.go:141] libmachine: (auto-543581) Calling .GetSSHKeyPath
	I0918 21:25:31.725106   69269 main.go:141] libmachine: (auto-543581) Calling .GetSSHKeyPath
	I0918 21:25:31.725251   69269 main.go:141] libmachine: (auto-543581) Calling .GetSSHUsername
	I0918 21:25:31.725438   69269 main.go:141] libmachine: Using SSH client type: native
	I0918 21:25:31.725649   69269 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.181 22 <nil> <nil>}
	I0918 21:25:31.725664   69269 main.go:141] libmachine: About to run SSH command:
	sudo hostname auto-543581 && echo "auto-543581" | sudo tee /etc/hostname
	I0918 21:25:32.752685   69636 start.go:364] duration metric: took 7.440826864s to acquireMachinesLock for "newest-cni-560575"
	I0918 21:25:32.752738   69636 start.go:96] Skipping create...Using existing machine configuration
	I0918 21:25:32.752748   69636 fix.go:54] fixHost starting: 
	I0918 21:25:32.753190   69636 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:25:32.753243   69636 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:25:32.771934   69636 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38437
	I0918 21:25:32.772404   69636 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:25:32.772926   69636 main.go:141] libmachine: Using API Version  1
	I0918 21:25:32.772950   69636 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:25:32.773291   69636 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:25:32.773510   69636 main.go:141] libmachine: (newest-cni-560575) Calling .DriverName
	I0918 21:25:32.773636   69636 main.go:141] libmachine: (newest-cni-560575) Calling .GetState
	I0918 21:25:32.775447   69636 fix.go:112] recreateIfNeeded on newest-cni-560575: state=Stopped err=<nil>
	I0918 21:25:32.775483   69636 main.go:141] libmachine: (newest-cni-560575) Calling .DriverName
	W0918 21:25:32.775655   69636 fix.go:138] unexpected machine state, will restart: <nil>
	I0918 21:25:32.778256   69636 out.go:177] * Restarting existing kvm2 VM for "newest-cni-560575" ...
	I0918 21:25:31.838332   69269 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-543581
	
	I0918 21:25:31.838383   69269 main.go:141] libmachine: (auto-543581) Calling .GetSSHHostname
	I0918 21:25:31.841227   69269 main.go:141] libmachine: (auto-543581) DBG | domain auto-543581 has defined MAC address 52:54:00:35:a4:04 in network mk-auto-543581
	I0918 21:25:31.841653   69269 main.go:141] libmachine: (auto-543581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a4:04", ip: ""} in network mk-auto-543581: {Iface:virbr2 ExpiryTime:2024-09-18 22:25:21 +0000 UTC Type:0 Mac:52:54:00:35:a4:04 Iaid: IPaddr:192.168.61.181 Prefix:24 Hostname:auto-543581 Clientid:01:52:54:00:35:a4:04}
	I0918 21:25:31.841688   69269 main.go:141] libmachine: (auto-543581) DBG | domain auto-543581 has defined IP address 192.168.61.181 and MAC address 52:54:00:35:a4:04 in network mk-auto-543581
	I0918 21:25:31.841840   69269 main.go:141] libmachine: (auto-543581) Calling .GetSSHPort
	I0918 21:25:31.842044   69269 main.go:141] libmachine: (auto-543581) Calling .GetSSHKeyPath
	I0918 21:25:31.842218   69269 main.go:141] libmachine: (auto-543581) Calling .GetSSHKeyPath
	I0918 21:25:31.842363   69269 main.go:141] libmachine: (auto-543581) Calling .GetSSHUsername
	I0918 21:25:31.842514   69269 main.go:141] libmachine: Using SSH client type: native
	I0918 21:25:31.842745   69269 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.181 22 <nil> <nil>}
	I0918 21:25:31.842767   69269 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-543581' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-543581/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-543581' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0918 21:25:31.948879   69269 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0918 21:25:31.948910   69269 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19667-7671/.minikube CaCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19667-7671/.minikube}
	I0918 21:25:31.948938   69269 buildroot.go:174] setting up certificates
	I0918 21:25:31.948951   69269 provision.go:84] configureAuth start
	I0918 21:25:31.948960   69269 main.go:141] libmachine: (auto-543581) Calling .GetMachineName
	I0918 21:25:31.949249   69269 main.go:141] libmachine: (auto-543581) Calling .GetIP
	I0918 21:25:31.952144   69269 main.go:141] libmachine: (auto-543581) DBG | domain auto-543581 has defined MAC address 52:54:00:35:a4:04 in network mk-auto-543581
	I0918 21:25:31.952638   69269 main.go:141] libmachine: (auto-543581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a4:04", ip: ""} in network mk-auto-543581: {Iface:virbr2 ExpiryTime:2024-09-18 22:25:21 +0000 UTC Type:0 Mac:52:54:00:35:a4:04 Iaid: IPaddr:192.168.61.181 Prefix:24 Hostname:auto-543581 Clientid:01:52:54:00:35:a4:04}
	I0918 21:25:31.952667   69269 main.go:141] libmachine: (auto-543581) DBG | domain auto-543581 has defined IP address 192.168.61.181 and MAC address 52:54:00:35:a4:04 in network mk-auto-543581
	I0918 21:25:31.952874   69269 main.go:141] libmachine: (auto-543581) Calling .GetSSHHostname
	I0918 21:25:31.955387   69269 main.go:141] libmachine: (auto-543581) DBG | domain auto-543581 has defined MAC address 52:54:00:35:a4:04 in network mk-auto-543581
	I0918 21:25:31.955793   69269 main.go:141] libmachine: (auto-543581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a4:04", ip: ""} in network mk-auto-543581: {Iface:virbr2 ExpiryTime:2024-09-18 22:25:21 +0000 UTC Type:0 Mac:52:54:00:35:a4:04 Iaid: IPaddr:192.168.61.181 Prefix:24 Hostname:auto-543581 Clientid:01:52:54:00:35:a4:04}
	I0918 21:25:31.955822   69269 main.go:141] libmachine: (auto-543581) DBG | domain auto-543581 has defined IP address 192.168.61.181 and MAC address 52:54:00:35:a4:04 in network mk-auto-543581
	I0918 21:25:31.956062   69269 provision.go:143] copyHostCerts
	I0918 21:25:31.956129   69269 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem, removing ...
	I0918 21:25:31.956140   69269 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem
	I0918 21:25:31.956223   69269 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem (1082 bytes)
	I0918 21:25:31.956344   69269 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem, removing ...
	I0918 21:25:31.956354   69269 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem
	I0918 21:25:31.956382   69269 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem (1123 bytes)
	I0918 21:25:31.956460   69269 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem, removing ...
	I0918 21:25:31.956477   69269 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem
	I0918 21:25:31.956513   69269 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem (1679 bytes)
	I0918 21:25:31.956648   69269 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem org=jenkins.auto-543581 san=[127.0.0.1 192.168.61.181 auto-543581 localhost minikube]
	I0918 21:25:32.147309   69269 provision.go:177] copyRemoteCerts
	I0918 21:25:32.147370   69269 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0918 21:25:32.147394   69269 main.go:141] libmachine: (auto-543581) Calling .GetSSHHostname
	I0918 21:25:32.149856   69269 main.go:141] libmachine: (auto-543581) DBG | domain auto-543581 has defined MAC address 52:54:00:35:a4:04 in network mk-auto-543581
	I0918 21:25:32.150114   69269 main.go:141] libmachine: (auto-543581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a4:04", ip: ""} in network mk-auto-543581: {Iface:virbr2 ExpiryTime:2024-09-18 22:25:21 +0000 UTC Type:0 Mac:52:54:00:35:a4:04 Iaid: IPaddr:192.168.61.181 Prefix:24 Hostname:auto-543581 Clientid:01:52:54:00:35:a4:04}
	I0918 21:25:32.150160   69269 main.go:141] libmachine: (auto-543581) DBG | domain auto-543581 has defined IP address 192.168.61.181 and MAC address 52:54:00:35:a4:04 in network mk-auto-543581
	I0918 21:25:32.150265   69269 main.go:141] libmachine: (auto-543581) Calling .GetSSHPort
	I0918 21:25:32.150478   69269 main.go:141] libmachine: (auto-543581) Calling .GetSSHKeyPath
	I0918 21:25:32.150628   69269 main.go:141] libmachine: (auto-543581) Calling .GetSSHUsername
	I0918 21:25:32.150752   69269 sshutil.go:53] new ssh client: &{IP:192.168.61.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/auto-543581/id_rsa Username:docker}
	I0918 21:25:32.233635   69269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0918 21:25:32.258061   69269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I0918 21:25:32.280264   69269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0918 21:25:32.302780   69269 provision.go:87] duration metric: took 353.817234ms to configureAuth
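The configureAuth pass above copies the host CA material and then mints a server certificate whose SAN list covers 127.0.0.1, 192.168.61.181, auto-543581, localhost and minikube before scp-ing it into /etc/docker on the guest. A minimal Go sketch of issuing a certificate with that SAN list (self-signed here for brevity; the real provision step signs with the ca-key.pem shown in the log):

// Sketch only: issue a server certificate carrying the SAN list from the
// log above. Self-signed for brevity; minikube signs with its CA key instead.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.auto-543581"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"auto-543581", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.181")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}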
	I0918 21:25:32.302823   69269 buildroot.go:189] setting minikube options for container-runtime
	I0918 21:25:32.302999   69269 config.go:182] Loaded profile config "auto-543581": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0918 21:25:32.303091   69269 main.go:141] libmachine: (auto-543581) Calling .GetSSHHostname
	I0918 21:25:32.305537   69269 main.go:141] libmachine: (auto-543581) DBG | domain auto-543581 has defined MAC address 52:54:00:35:a4:04 in network mk-auto-543581
	I0918 21:25:32.305854   69269 main.go:141] libmachine: (auto-543581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a4:04", ip: ""} in network mk-auto-543581: {Iface:virbr2 ExpiryTime:2024-09-18 22:25:21 +0000 UTC Type:0 Mac:52:54:00:35:a4:04 Iaid: IPaddr:192.168.61.181 Prefix:24 Hostname:auto-543581 Clientid:01:52:54:00:35:a4:04}
	I0918 21:25:32.305878   69269 main.go:141] libmachine: (auto-543581) DBG | domain auto-543581 has defined IP address 192.168.61.181 and MAC address 52:54:00:35:a4:04 in network mk-auto-543581
	I0918 21:25:32.306043   69269 main.go:141] libmachine: (auto-543581) Calling .GetSSHPort
	I0918 21:25:32.306249   69269 main.go:141] libmachine: (auto-543581) Calling .GetSSHKeyPath
	I0918 21:25:32.306434   69269 main.go:141] libmachine: (auto-543581) Calling .GetSSHKeyPath
	I0918 21:25:32.306615   69269 main.go:141] libmachine: (auto-543581) Calling .GetSSHUsername
	I0918 21:25:32.306803   69269 main.go:141] libmachine: Using SSH client type: native
	I0918 21:25:32.307005   69269 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.181 22 <nil> <nil>}
	I0918 21:25:32.307022   69269 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0918 21:25:32.521179   69269 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0918 21:25:32.521206   69269 main.go:141] libmachine: Checking connection to Docker...
	I0918 21:25:32.521219   69269 main.go:141] libmachine: (auto-543581) Calling .GetURL
	I0918 21:25:32.522339   69269 main.go:141] libmachine: (auto-543581) DBG | Using libvirt version 6000000
	I0918 21:25:32.524357   69269 main.go:141] libmachine: (auto-543581) DBG | domain auto-543581 has defined MAC address 52:54:00:35:a4:04 in network mk-auto-543581
	I0918 21:25:32.524750   69269 main.go:141] libmachine: (auto-543581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a4:04", ip: ""} in network mk-auto-543581: {Iface:virbr2 ExpiryTime:2024-09-18 22:25:21 +0000 UTC Type:0 Mac:52:54:00:35:a4:04 Iaid: IPaddr:192.168.61.181 Prefix:24 Hostname:auto-543581 Clientid:01:52:54:00:35:a4:04}
	I0918 21:25:32.524779   69269 main.go:141] libmachine: (auto-543581) DBG | domain auto-543581 has defined IP address 192.168.61.181 and MAC address 52:54:00:35:a4:04 in network mk-auto-543581
	I0918 21:25:32.524979   69269 main.go:141] libmachine: Docker is up and running!
	I0918 21:25:32.525001   69269 main.go:141] libmachine: Reticulating splines...
	I0918 21:25:32.525009   69269 client.go:171] duration metric: took 25.586003558s to LocalClient.Create
	I0918 21:25:32.525028   69269 start.go:167] duration metric: took 25.586080698s to libmachine.API.Create "auto-543581"
	I0918 21:25:32.525037   69269 start.go:293] postStartSetup for "auto-543581" (driver="kvm2")
	I0918 21:25:32.525047   69269 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0918 21:25:32.525063   69269 main.go:141] libmachine: (auto-543581) Calling .DriverName
	I0918 21:25:32.525308   69269 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0918 21:25:32.525337   69269 main.go:141] libmachine: (auto-543581) Calling .GetSSHHostname
	I0918 21:25:32.527325   69269 main.go:141] libmachine: (auto-543581) DBG | domain auto-543581 has defined MAC address 52:54:00:35:a4:04 in network mk-auto-543581
	I0918 21:25:32.527646   69269 main.go:141] libmachine: (auto-543581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a4:04", ip: ""} in network mk-auto-543581: {Iface:virbr2 ExpiryTime:2024-09-18 22:25:21 +0000 UTC Type:0 Mac:52:54:00:35:a4:04 Iaid: IPaddr:192.168.61.181 Prefix:24 Hostname:auto-543581 Clientid:01:52:54:00:35:a4:04}
	I0918 21:25:32.527674   69269 main.go:141] libmachine: (auto-543581) DBG | domain auto-543581 has defined IP address 192.168.61.181 and MAC address 52:54:00:35:a4:04 in network mk-auto-543581
	I0918 21:25:32.527761   69269 main.go:141] libmachine: (auto-543581) Calling .GetSSHPort
	I0918 21:25:32.527948   69269 main.go:141] libmachine: (auto-543581) Calling .GetSSHKeyPath
	I0918 21:25:32.528167   69269 main.go:141] libmachine: (auto-543581) Calling .GetSSHUsername
	I0918 21:25:32.528337   69269 sshutil.go:53] new ssh client: &{IP:192.168.61.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/auto-543581/id_rsa Username:docker}
	I0918 21:25:32.606931   69269 ssh_runner.go:195] Run: cat /etc/os-release
	I0918 21:25:32.611192   69269 info.go:137] Remote host: Buildroot 2023.02.9
	I0918 21:25:32.611221   69269 filesync.go:126] Scanning /home/jenkins/minikube-integration/19667-7671/.minikube/addons for local assets ...
	I0918 21:25:32.611280   69269 filesync.go:126] Scanning /home/jenkins/minikube-integration/19667-7671/.minikube/files for local assets ...
	I0918 21:25:32.611372   69269 filesync.go:149] local asset: /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem -> 148782.pem in /etc/ssl/certs
	I0918 21:25:32.611461   69269 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0918 21:25:32.622407   69269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem --> /etc/ssl/certs/148782.pem (1708 bytes)
	I0918 21:25:32.645783   69269 start.go:296] duration metric: took 120.73431ms for postStartSetup
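The postStartSetup pass above scans .minikube/addons and .minikube/files and mirrors anything it finds into the guest (here 148782.pem into /etc/ssl/certs). A rough sketch of that scan, using the path from the log and only printing the local-to-guest mapping instead of copying:

// Sketch of the filesync scan: walk .minikube/files and map each file to
// its in-VM destination. Printing only; the real step scp's the file over.
package main

import (
	"fmt"
	"io/fs"
	"path/filepath"
)

func main() {
	root := "/home/jenkins/minikube-integration/19667-7671/.minikube/files" // path from the log above
	filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
		if err != nil || d.IsDir() {
			return err
		}
		rel, _ := filepath.Rel(root, path)
		// e.g. .../files/etc/ssl/certs/148782.pem -> /etc/ssl/certs/148782.pem
		fmt.Printf("local asset: %s -> /%s\n", path, rel)
		return nil
	})
}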
	I0918 21:25:32.645829   69269 main.go:141] libmachine: (auto-543581) Calling .GetConfigRaw
	I0918 21:25:32.646464   69269 main.go:141] libmachine: (auto-543581) Calling .GetIP
	I0918 21:25:32.649109   69269 main.go:141] libmachine: (auto-543581) DBG | domain auto-543581 has defined MAC address 52:54:00:35:a4:04 in network mk-auto-543581
	I0918 21:25:32.649537   69269 main.go:141] libmachine: (auto-543581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a4:04", ip: ""} in network mk-auto-543581: {Iface:virbr2 ExpiryTime:2024-09-18 22:25:21 +0000 UTC Type:0 Mac:52:54:00:35:a4:04 Iaid: IPaddr:192.168.61.181 Prefix:24 Hostname:auto-543581 Clientid:01:52:54:00:35:a4:04}
	I0918 21:25:32.649562   69269 main.go:141] libmachine: (auto-543581) DBG | domain auto-543581 has defined IP address 192.168.61.181 and MAC address 52:54:00:35:a4:04 in network mk-auto-543581
	I0918 21:25:32.649816   69269 profile.go:143] Saving config to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/auto-543581/config.json ...
	I0918 21:25:32.650006   69269 start.go:128] duration metric: took 25.731749917s to createHost
	I0918 21:25:32.650028   69269 main.go:141] libmachine: (auto-543581) Calling .GetSSHHostname
	I0918 21:25:32.652234   69269 main.go:141] libmachine: (auto-543581) DBG | domain auto-543581 has defined MAC address 52:54:00:35:a4:04 in network mk-auto-543581
	I0918 21:25:32.652559   69269 main.go:141] libmachine: (auto-543581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a4:04", ip: ""} in network mk-auto-543581: {Iface:virbr2 ExpiryTime:2024-09-18 22:25:21 +0000 UTC Type:0 Mac:52:54:00:35:a4:04 Iaid: IPaddr:192.168.61.181 Prefix:24 Hostname:auto-543581 Clientid:01:52:54:00:35:a4:04}
	I0918 21:25:32.652588   69269 main.go:141] libmachine: (auto-543581) DBG | domain auto-543581 has defined IP address 192.168.61.181 and MAC address 52:54:00:35:a4:04 in network mk-auto-543581
	I0918 21:25:32.652746   69269 main.go:141] libmachine: (auto-543581) Calling .GetSSHPort
	I0918 21:25:32.652965   69269 main.go:141] libmachine: (auto-543581) Calling .GetSSHKeyPath
	I0918 21:25:32.653114   69269 main.go:141] libmachine: (auto-543581) Calling .GetSSHKeyPath
	I0918 21:25:32.653271   69269 main.go:141] libmachine: (auto-543581) Calling .GetSSHUsername
	I0918 21:25:32.653432   69269 main.go:141] libmachine: Using SSH client type: native
	I0918 21:25:32.653671   69269 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.181 22 <nil> <nil>}
	I0918 21:25:32.653689   69269 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0918 21:25:32.752501   69269 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726694732.730404198
	
	I0918 21:25:32.752527   69269 fix.go:216] guest clock: 1726694732.730404198
	I0918 21:25:32.752537   69269 fix.go:229] Guest: 2024-09-18 21:25:32.730404198 +0000 UTC Remote: 2024-09-18 21:25:32.650016387 +0000 UTC m=+25.855529473 (delta=80.387811ms)
	I0918 21:25:32.752585   69269 fix.go:200] guest clock delta is within tolerance: 80.387811ms
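The fix.go lines above read the guest's clock over SSH (`date +%s.%N`), compare it with the host-side timestamp, and skip a resync when the delta is small. A short sketch of that comparison; the 2s tolerance used here is an assumption, not necessarily minikube's threshold:

// Sketch of the guest/host clock comparison; the tolerance value is assumed.
package main

import (
	"fmt"
	"time"
)

func main() {
	guest := time.Unix(1726694732, 730404198) // parsed from the `date +%s.%N` output above
	host := time.Now()
	delta := host.Sub(guest)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second
	if delta <= tolerance {
		fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, would sync the clock\n", delta)
	}
}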
	I0918 21:25:32.752590   69269 start.go:83] releasing machines lock for "auto-543581", held for 25.834425331s
	I0918 21:25:32.752623   69269 main.go:141] libmachine: (auto-543581) Calling .DriverName
	I0918 21:25:32.752887   69269 main.go:141] libmachine: (auto-543581) Calling .GetIP
	I0918 21:25:32.755779   69269 main.go:141] libmachine: (auto-543581) DBG | domain auto-543581 has defined MAC address 52:54:00:35:a4:04 in network mk-auto-543581
	I0918 21:25:32.756214   69269 main.go:141] libmachine: (auto-543581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a4:04", ip: ""} in network mk-auto-543581: {Iface:virbr2 ExpiryTime:2024-09-18 22:25:21 +0000 UTC Type:0 Mac:52:54:00:35:a4:04 Iaid: IPaddr:192.168.61.181 Prefix:24 Hostname:auto-543581 Clientid:01:52:54:00:35:a4:04}
	I0918 21:25:32.756241   69269 main.go:141] libmachine: (auto-543581) DBG | domain auto-543581 has defined IP address 192.168.61.181 and MAC address 52:54:00:35:a4:04 in network mk-auto-543581
	I0918 21:25:32.756472   69269 main.go:141] libmachine: (auto-543581) Calling .DriverName
	I0918 21:25:32.757020   69269 main.go:141] libmachine: (auto-543581) Calling .DriverName
	I0918 21:25:32.757195   69269 main.go:141] libmachine: (auto-543581) Calling .DriverName
	I0918 21:25:32.757295   69269 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0918 21:25:32.757347   69269 main.go:141] libmachine: (auto-543581) Calling .GetSSHHostname
	I0918 21:25:32.757407   69269 ssh_runner.go:195] Run: cat /version.json
	I0918 21:25:32.757429   69269 main.go:141] libmachine: (auto-543581) Calling .GetSSHHostname
	I0918 21:25:32.760975   69269 main.go:141] libmachine: (auto-543581) DBG | domain auto-543581 has defined MAC address 52:54:00:35:a4:04 in network mk-auto-543581
	I0918 21:25:32.761019   69269 main.go:141] libmachine: (auto-543581) DBG | domain auto-543581 has defined MAC address 52:54:00:35:a4:04 in network mk-auto-543581
	I0918 21:25:32.761433   69269 main.go:141] libmachine: (auto-543581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a4:04", ip: ""} in network mk-auto-543581: {Iface:virbr2 ExpiryTime:2024-09-18 22:25:21 +0000 UTC Type:0 Mac:52:54:00:35:a4:04 Iaid: IPaddr:192.168.61.181 Prefix:24 Hostname:auto-543581 Clientid:01:52:54:00:35:a4:04}
	I0918 21:25:32.761512   69269 main.go:141] libmachine: (auto-543581) DBG | domain auto-543581 has defined IP address 192.168.61.181 and MAC address 52:54:00:35:a4:04 in network mk-auto-543581
	I0918 21:25:32.761853   69269 main.go:141] libmachine: (auto-543581) Calling .GetSSHPort
	I0918 21:25:32.761908   69269 main.go:141] libmachine: (auto-543581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a4:04", ip: ""} in network mk-auto-543581: {Iface:virbr2 ExpiryTime:2024-09-18 22:25:21 +0000 UTC Type:0 Mac:52:54:00:35:a4:04 Iaid: IPaddr:192.168.61.181 Prefix:24 Hostname:auto-543581 Clientid:01:52:54:00:35:a4:04}
	I0918 21:25:32.761974   69269 main.go:141] libmachine: (auto-543581) DBG | domain auto-543581 has defined IP address 192.168.61.181 and MAC address 52:54:00:35:a4:04 in network mk-auto-543581
	I0918 21:25:32.762112   69269 main.go:141] libmachine: (auto-543581) Calling .GetSSHPort
	I0918 21:25:32.762137   69269 main.go:141] libmachine: (auto-543581) Calling .GetSSHKeyPath
	I0918 21:25:32.762382   69269 main.go:141] libmachine: (auto-543581) Calling .GetSSHKeyPath
	I0918 21:25:32.762395   69269 main.go:141] libmachine: (auto-543581) Calling .GetSSHUsername
	I0918 21:25:32.762594   69269 main.go:141] libmachine: (auto-543581) Calling .GetSSHUsername
	I0918 21:25:32.762672   69269 sshutil.go:53] new ssh client: &{IP:192.168.61.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/auto-543581/id_rsa Username:docker}
	I0918 21:25:32.762760   69269 sshutil.go:53] new ssh client: &{IP:192.168.61.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/auto-543581/id_rsa Username:docker}
	I0918 21:25:32.837177   69269 ssh_runner.go:195] Run: systemctl --version
	I0918 21:25:32.877306   69269 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0918 21:25:33.036792   69269 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0918 21:25:33.042683   69269 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0918 21:25:33.042761   69269 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0918 21:25:33.062795   69269 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0918 21:25:33.062832   69269 start.go:495] detecting cgroup driver to use...
	I0918 21:25:33.062901   69269 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0918 21:25:33.079204   69269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0918 21:25:33.093295   69269 docker.go:217] disabling cri-docker service (if available) ...
	I0918 21:25:33.093353   69269 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0918 21:25:33.106915   69269 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0918 21:25:33.121059   69269 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0918 21:25:33.237885   69269 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0918 21:25:33.384307   69269 docker.go:233] disabling docker service ...
	I0918 21:25:33.384400   69269 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0918 21:25:33.403621   69269 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0918 21:25:33.421425   69269 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0918 21:25:33.573141   69269 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0918 21:25:33.713042   69269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0918 21:25:33.726831   69269 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0918 21:25:33.748413   69269 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0918 21:25:33.748473   69269 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:25:33.758702   69269 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0918 21:25:33.758786   69269 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:25:33.769871   69269 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:25:33.780908   69269 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:25:33.792385   69269 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0918 21:25:33.805064   69269 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:25:33.819753   69269 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:25:33.842825   69269 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:25:33.858110   69269 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0918 21:25:33.868152   69269 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0918 21:25:33.868218   69269 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0918 21:25:33.881807   69269 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0918 21:25:33.894738   69269 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 21:25:34.037875   69269 ssh_runner.go:195] Run: sudo systemctl restart crio
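The sed commands above pin the pause image and switch CRI-O to the cgroupfs cgroup manager in /etc/crio/crio.conf.d/02-crio.conf before the daemon-reload and restart. The same two rewrites expressed as a Go sketch with regexp instead of sed (illustrative only, run as root on the guest):

// Sketch: rewrite pause_image and cgroup_manager in the CRI-O drop-in,
// mirroring the sed edits logged above.
package main

import (
	"os"
	"regexp"
)

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf" // path taken from the log above
	data, err := os.ReadFile(conf)
	if err != nil {
		panic(err)
	}
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(conf, out, 0o644); err != nil {
		panic(err)
	}
}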
	I0918 21:25:34.144077   69269 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0918 21:25:34.144197   69269 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0918 21:25:34.149459   69269 start.go:563] Will wait 60s for crictl version
	I0918 21:25:34.149527   69269 ssh_runner.go:195] Run: which crictl
	I0918 21:25:34.153583   69269 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0918 21:25:34.199364   69269 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0918 21:25:34.199456   69269 ssh_runner.go:195] Run: crio --version
	I0918 21:25:34.229837   69269 ssh_runner.go:195] Run: crio --version
	I0918 21:25:34.263869   69269 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0918 21:25:32.780072   69636 main.go:141] libmachine: (newest-cni-560575) Calling .Start
	I0918 21:25:32.780287   69636 main.go:141] libmachine: (newest-cni-560575) Ensuring networks are active...
	I0918 21:25:32.781241   69636 main.go:141] libmachine: (newest-cni-560575) Ensuring network default is active
	I0918 21:25:32.781673   69636 main.go:141] libmachine: (newest-cni-560575) Ensuring network mk-newest-cni-560575 is active
	I0918 21:25:32.782217   69636 main.go:141] libmachine: (newest-cni-560575) Getting domain xml...
	I0918 21:25:32.783496   69636 main.go:141] libmachine: (newest-cni-560575) Creating domain...
	I0918 21:25:34.159288   69636 main.go:141] libmachine: (newest-cni-560575) Waiting to get IP...
	I0918 21:25:34.160378   69636 main.go:141] libmachine: (newest-cni-560575) DBG | domain newest-cni-560575 has defined MAC address 52:54:00:35:4b:9c in network mk-newest-cni-560575
	I0918 21:25:34.160981   69636 main.go:141] libmachine: (newest-cni-560575) DBG | unable to find current IP address of domain newest-cni-560575 in network mk-newest-cni-560575
	I0918 21:25:34.161058   69636 main.go:141] libmachine: (newest-cni-560575) DBG | I0918 21:25:34.160948   69715 retry.go:31] will retry after 203.369938ms: waiting for machine to come up
	I0918 21:25:34.366539   69636 main.go:141] libmachine: (newest-cni-560575) DBG | domain newest-cni-560575 has defined MAC address 52:54:00:35:4b:9c in network mk-newest-cni-560575
	I0918 21:25:34.367033   69636 main.go:141] libmachine: (newest-cni-560575) DBG | unable to find current IP address of domain newest-cni-560575 in network mk-newest-cni-560575
	I0918 21:25:34.367062   69636 main.go:141] libmachine: (newest-cni-560575) DBG | I0918 21:25:34.366993   69715 retry.go:31] will retry after 289.045775ms: waiting for machine to come up
	I0918 21:25:34.657637   69636 main.go:141] libmachine: (newest-cni-560575) DBG | domain newest-cni-560575 has defined MAC address 52:54:00:35:4b:9c in network mk-newest-cni-560575
	I0918 21:25:34.658292   69636 main.go:141] libmachine: (newest-cni-560575) DBG | unable to find current IP address of domain newest-cni-560575 in network mk-newest-cni-560575
	I0918 21:25:34.658315   69636 main.go:141] libmachine: (newest-cni-560575) DBG | I0918 21:25:34.658243   69715 retry.go:31] will retry after 387.239433ms: waiting for machine to come up
	I0918 21:25:35.046888   69636 main.go:141] libmachine: (newest-cni-560575) DBG | domain newest-cni-560575 has defined MAC address 52:54:00:35:4b:9c in network mk-newest-cni-560575
	I0918 21:25:35.047453   69636 main.go:141] libmachine: (newest-cni-560575) DBG | unable to find current IP address of domain newest-cni-560575 in network mk-newest-cni-560575
	I0918 21:25:35.047481   69636 main.go:141] libmachine: (newest-cni-560575) DBG | I0918 21:25:35.047414   69715 retry.go:31] will retry after 577.71104ms: waiting for machine to come up
	I0918 21:25:34.265792   69269 main.go:141] libmachine: (auto-543581) Calling .GetIP
	I0918 21:25:34.270117   69269 main.go:141] libmachine: (auto-543581) DBG | domain auto-543581 has defined MAC address 52:54:00:35:a4:04 in network mk-auto-543581
	I0918 21:25:34.270682   69269 main.go:141] libmachine: (auto-543581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a4:04", ip: ""} in network mk-auto-543581: {Iface:virbr2 ExpiryTime:2024-09-18 22:25:21 +0000 UTC Type:0 Mac:52:54:00:35:a4:04 Iaid: IPaddr:192.168.61.181 Prefix:24 Hostname:auto-543581 Clientid:01:52:54:00:35:a4:04}
	I0918 21:25:34.270730   69269 main.go:141] libmachine: (auto-543581) DBG | domain auto-543581 has defined IP address 192.168.61.181 and MAC address 52:54:00:35:a4:04 in network mk-auto-543581
	I0918 21:25:34.271031   69269 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0918 21:25:34.275600   69269 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0918 21:25:34.288647   69269 kubeadm.go:883] updating cluster {Name:auto-543581 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1
ClusterName:auto-543581 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.181 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0918 21:25:34.288803   69269 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0918 21:25:34.288860   69269 ssh_runner.go:195] Run: sudo crictl images --output json
	I0918 21:25:34.320923   69269 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0918 21:25:34.320992   69269 ssh_runner.go:195] Run: which lz4
	I0918 21:25:34.324925   69269 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0918 21:25:34.329236   69269 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0918 21:25:34.329275   69269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0918 21:25:35.720660   69269 crio.go:462] duration metric: took 1.395766044s to copy over tarball
	I0918 21:25:35.720743   69269 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0918 21:25:38.090806   69269 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.370032871s)
	I0918 21:25:38.090844   69269 crio.go:469] duration metric: took 2.370153804s to extract the tarball
	I0918 21:25:38.090852   69269 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0918 21:25:38.129428   69269 ssh_runner.go:195] Run: sudo crictl images --output json
	I0918 21:25:38.172254   69269 crio.go:514] all images are preloaded for cri-o runtime.
	I0918 21:25:38.172278   69269 cache_images.go:84] Images are preloaded, skipping loading
	I0918 21:25:38.172285   69269 kubeadm.go:934] updating node { 192.168.61.181 8443 v1.31.1 crio true true} ...
	I0918 21:25:38.172408   69269 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=auto-543581 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.181
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:auto-543581 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
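The kubelet drop-in above is rendered from the node settings (Kubernetes version, hostname override, node IP). A sketch of producing such a drop-in with text/template; the data map and field names here are illustrative, not minikube's own types:

// Sketch: render a kubelet systemd drop-in like the one logged above.
package main

import (
	"os"
	"text/template"
)

const unit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unit))
	t.Execute(os.Stdout, map[string]string{
		"KubernetesVersion": "v1.31.1",
		"NodeName":          "auto-543581",
		"NodeIP":            "192.168.61.181",
	})
}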
	I0918 21:25:38.172487   69269 ssh_runner.go:195] Run: crio config
	I0918 21:25:38.233357   69269 cni.go:84] Creating CNI manager for ""
	I0918 21:25:38.233386   69269 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0918 21:25:38.233400   69269 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0918 21:25:38.233429   69269 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.181 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-543581 NodeName:auto-543581 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.181"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.181 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuber
netes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0918 21:25:38.233611   69269 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.181
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-543581"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.181
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.181"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0918 21:25:38.233685   69269 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0918 21:25:38.243808   69269 binaries.go:44] Found k8s binaries, skipping transfer
	I0918 21:25:38.243885   69269 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0918 21:25:38.253417   69269 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0918 21:25:38.270946   69269 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0918 21:25:38.287737   69269 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2155 bytes)
	I0918 21:25:38.307970   69269 ssh_runner.go:195] Run: grep 192.168.61.181	control-plane.minikube.internal$ /etc/hosts
	I0918 21:25:38.312880   69269 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.181	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0918 21:25:38.328812   69269 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 21:25:38.469618   69269 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0918 21:25:38.491493   69269 certs.go:68] Setting up /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/auto-543581 for IP: 192.168.61.181
	I0918 21:25:38.491518   69269 certs.go:194] generating shared ca certs ...
	I0918 21:25:38.491542   69269 certs.go:226] acquiring lock for ca certs: {Name:mkf54e0e71ed884c4372f9d3d4cb308ea4600185 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 21:25:38.491758   69269 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.key
	I0918 21:25:38.491820   69269 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.key
	I0918 21:25:38.491834   69269 certs.go:256] generating profile certs ...
	I0918 21:25:38.491910   69269 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/auto-543581/client.key
	I0918 21:25:38.491932   69269 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/auto-543581/client.crt with IP's: []
	I0918 21:25:38.608335   69269 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/auto-543581/client.crt ...
	I0918 21:25:38.608375   69269 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/auto-543581/client.crt: {Name:mkb9b0957d4b51eebf3353745f9fbccce5b64bdd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 21:25:38.608590   69269 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/auto-543581/client.key ...
	I0918 21:25:38.608604   69269 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/auto-543581/client.key: {Name:mk3c41c4fed0a9312fac8091f55c9e3efe2772bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 21:25:38.608719   69269 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/auto-543581/apiserver.key.0ee0a943
	I0918 21:25:38.608747   69269 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/auto-543581/apiserver.crt.0ee0a943 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.181]
	I0918 21:25:38.987128   69269 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/auto-543581/apiserver.crt.0ee0a943 ...
	I0918 21:25:38.987158   69269 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/auto-543581/apiserver.crt.0ee0a943: {Name:mkf3b23455f7cd1b3277f825627499a372ce99e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 21:25:38.987323   69269 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/auto-543581/apiserver.key.0ee0a943 ...
	I0918 21:25:38.987335   69269 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/auto-543581/apiserver.key.0ee0a943: {Name:mk9093cc61ec25d6e5a384fd0d9cf0a105a56c29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 21:25:38.987428   69269 certs.go:381] copying /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/auto-543581/apiserver.crt.0ee0a943 -> /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/auto-543581/apiserver.crt
	I0918 21:25:38.987500   69269 certs.go:385] copying /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/auto-543581/apiserver.key.0ee0a943 -> /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/auto-543581/apiserver.key
	I0918 21:25:38.987545   69269 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/auto-543581/proxy-client.key
	I0918 21:25:38.987560   69269 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/auto-543581/proxy-client.crt with IP's: []
	I0918 21:25:39.168513   69269 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/auto-543581/proxy-client.crt ...
	I0918 21:25:39.168542   69269 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/auto-543581/proxy-client.crt: {Name:mk18fea494019d12051b594f2892dd2f2b599280 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 21:25:39.168746   69269 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/auto-543581/proxy-client.key ...
	I0918 21:25:39.168763   69269 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/auto-543581/proxy-client.key: {Name:mk0bbc382fb051ff7b0fab6cfe53ca67a516331a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 21:25:39.168981   69269 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878.pem (1338 bytes)
	W0918 21:25:39.169025   69269 certs.go:480] ignoring /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878_empty.pem, impossibly tiny 0 bytes
	I0918 21:25:39.169040   69269 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem (1675 bytes)
	I0918 21:25:39.169077   69269 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem (1082 bytes)
	I0918 21:25:39.169117   69269 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem (1123 bytes)
	I0918 21:25:39.169175   69269 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem (1679 bytes)
	I0918 21:25:39.169249   69269 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem (1708 bytes)
	I0918 21:25:39.169891   69269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0918 21:25:39.198333   69269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0918 21:25:39.239399   69269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0918 21:25:39.267814   69269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0918 21:25:39.295927   69269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/auto-543581/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I0918 21:25:39.323814   69269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/auto-543581/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0918 21:25:39.349463   69269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/auto-543581/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0918 21:25:39.376580   69269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/auto-543581/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0918 21:25:39.405774   69269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem --> /usr/share/ca-certificates/148782.pem (1708 bytes)
	I0918 21:25:39.431090   69269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0918 21:25:39.456699   69269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878.pem --> /usr/share/ca-certificates/14878.pem (1338 bytes)
	I0918 21:25:39.483870   69269 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0918 21:25:39.501991   69269 ssh_runner.go:195] Run: openssl version
	I0918 21:25:39.507725   69269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0918 21:25:39.518469   69269 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0918 21:25:39.523004   69269 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 18 19:39 /usr/share/ca-certificates/minikubeCA.pem
	I0918 21:25:39.523060   69269 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0918 21:25:39.528670   69269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0918 21:25:39.539433   69269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14878.pem && ln -fs /usr/share/ca-certificates/14878.pem /etc/ssl/certs/14878.pem"
	I0918 21:25:39.554492   69269 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14878.pem
	I0918 21:25:39.559409   69269 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 18 19:57 /usr/share/ca-certificates/14878.pem
	I0918 21:25:39.559489   69269 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14878.pem
	I0918 21:25:39.565799   69269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14878.pem /etc/ssl/certs/51391683.0"
	I0918 21:25:39.577165   69269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148782.pem && ln -fs /usr/share/ca-certificates/148782.pem /etc/ssl/certs/148782.pem"
	I0918 21:25:39.588547   69269 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148782.pem
	I0918 21:25:39.593149   69269 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 18 19:57 /usr/share/ca-certificates/148782.pem
	I0918 21:25:39.593227   69269 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148782.pem
	I0918 21:25:39.599067   69269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148782.pem /etc/ssl/certs/3ec20f2e.0"
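The three blocks above install each CA into /usr/share/ca-certificates and then link /etc/ssl/certs/<subject-hash>.0 to it, using `openssl x509 -hash -noout` to compute the hash (b5213941, 51391683 and 3ec20f2e in this run). A sketch of that step for the minikubeCA file, with error handling trimmed:

// Sketch: compute the OpenSSL subject hash of a CA file and create the
// /etc/ssl/certs/<hash>.0 symlink if it does not already exist.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	pem := "/usr/share/ca-certificates/minikubeCA.pem" // path from the log above
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. b5213941, as seen above
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	if _, err := os.Lstat(link); os.IsNotExist(err) {
		if err := os.Symlink(pem, link); err != nil {
			panic(err)
		}
	}
	fmt.Println("linked", link, "->", pem)
}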
	I0918 21:25:39.610001   69269 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0918 21:25:39.614406   69269 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0918 21:25:39.614471   69269 kubeadm.go:392] StartCluster: {Name:auto-543581 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Clu
sterName:auto-543581 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.181 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p
MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 21:25:39.614564   69269 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0918 21:25:39.614629   69269 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0918 21:25:39.657586   69269 cri.go:89] found id: ""
	I0918 21:25:39.657691   69269 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0918 21:25:39.667829   69269 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0918 21:25:39.678046   69269 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0918 21:25:39.688281   69269 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0918 21:25:39.688308   69269 kubeadm.go:157] found existing configuration files:
	
	I0918 21:25:39.688362   69269 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0918 21:25:39.697868   69269 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0918 21:25:39.697935   69269 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0918 21:25:39.707576   69269 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0918 21:25:39.716822   69269 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0918 21:25:39.716893   69269 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0918 21:25:39.726769   69269 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0918 21:25:39.736274   69269 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0918 21:25:39.736345   69269 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0918 21:25:39.747647   69269 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0918 21:25:39.757154   69269 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0918 21:25:39.757233   69269 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0918 21:25:39.767389   69269 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0918 21:25:39.825366   69269 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0918 21:25:39.825455   69269 kubeadm.go:310] [preflight] Running pre-flight checks
	I0918 21:25:39.935111   69269 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0918 21:25:39.935280   69269 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0918 21:25:39.935438   69269 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0918 21:25:39.943944   69269 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0918 21:25:35.627399   69636 main.go:141] libmachine: (newest-cni-560575) DBG | domain newest-cni-560575 has defined MAC address 52:54:00:35:4b:9c in network mk-newest-cni-560575
	I0918 21:25:35.628007   69636 main.go:141] libmachine: (newest-cni-560575) DBG | unable to find current IP address of domain newest-cni-560575 in network mk-newest-cni-560575
	I0918 21:25:35.628048   69636 main.go:141] libmachine: (newest-cni-560575) DBG | I0918 21:25:35.627953   69715 retry.go:31] will retry after 461.908428ms: waiting for machine to come up
	I0918 21:25:36.091799   69636 main.go:141] libmachine: (newest-cni-560575) DBG | domain newest-cni-560575 has defined MAC address 52:54:00:35:4b:9c in network mk-newest-cni-560575
	I0918 21:25:36.092594   69636 main.go:141] libmachine: (newest-cni-560575) DBG | unable to find current IP address of domain newest-cni-560575 in network mk-newest-cni-560575
	I0918 21:25:36.092622   69636 main.go:141] libmachine: (newest-cni-560575) DBG | I0918 21:25:36.092508   69715 retry.go:31] will retry after 846.381502ms: waiting for machine to come up
	I0918 21:25:36.941170   69636 main.go:141] libmachine: (newest-cni-560575) DBG | domain newest-cni-560575 has defined MAC address 52:54:00:35:4b:9c in network mk-newest-cni-560575
	I0918 21:25:36.941745   69636 main.go:141] libmachine: (newest-cni-560575) DBG | unable to find current IP address of domain newest-cni-560575 in network mk-newest-cni-560575
	I0918 21:25:36.941768   69636 main.go:141] libmachine: (newest-cni-560575) DBG | I0918 21:25:36.941694   69715 retry.go:31] will retry after 971.312697ms: waiting for machine to come up
	I0918 21:25:37.914899   69636 main.go:141] libmachine: (newest-cni-560575) DBG | domain newest-cni-560575 has defined MAC address 52:54:00:35:4b:9c in network mk-newest-cni-560575
	I0918 21:25:37.915580   69636 main.go:141] libmachine: (newest-cni-560575) DBG | unable to find current IP address of domain newest-cni-560575 in network mk-newest-cni-560575
	I0918 21:25:37.915608   69636 main.go:141] libmachine: (newest-cni-560575) DBG | I0918 21:25:37.915504   69715 retry.go:31] will retry after 1.110638964s: waiting for machine to come up
	I0918 21:25:39.027815   69636 main.go:141] libmachine: (newest-cni-560575) DBG | domain newest-cni-560575 has defined MAC address 52:54:00:35:4b:9c in network mk-newest-cni-560575
	I0918 21:25:39.028308   69636 main.go:141] libmachine: (newest-cni-560575) DBG | unable to find current IP address of domain newest-cni-560575 in network mk-newest-cni-560575
	I0918 21:25:39.028329   69636 main.go:141] libmachine: (newest-cni-560575) DBG | I0918 21:25:39.028227   69715 retry.go:31] will retry after 1.646957156s: waiting for machine to come up
	I0918 21:25:40.120471   69269 out.go:235]   - Generating certificates and keys ...
	I0918 21:25:40.120645   69269 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0918 21:25:40.120745   69269 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0918 21:25:40.204820   69269 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0918 21:25:40.309397   69269 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0918 21:25:40.644283   69269 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0918 21:25:40.724238   69269 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0918 21:25:41.024098   69269 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0918 21:25:41.024302   69269 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [auto-543581 localhost] and IPs [192.168.61.181 127.0.0.1 ::1]
	I0918 21:25:41.390985   69269 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0918 21:25:41.391148   69269 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [auto-543581 localhost] and IPs [192.168.61.181 127.0.0.1 ::1]
	I0918 21:25:41.502395   69269 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0918 21:25:42.079948   69269 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0918 21:25:42.289076   69269 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0918 21:25:42.289294   69269 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0918 21:25:42.485884   69269 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0918 21:25:42.733073   69269 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0918 21:25:42.918152   69269 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0918 21:25:43.163923   69269 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0918 21:25:43.578168   69269 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0918 21:25:43.578822   69269 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0918 21:25:43.584638   69269 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
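	(Note: the SAN lists that kubeadm reports above, e.g. "[auto-543581 localhost] and IPs [192.168.61.181 127.0.0.1 ::1]", can be verified directly against the generated PEM files. The following is a minimal, illustrative Go sketch, not part of minikube or kubeadm; the certificate path passed on the command line is up to the reader, with /etc/kubernetes/pki/etcd/server.crt being the usual kubeadm location.)

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
	)

	// Print the DNS and IP SANs of a PEM-encoded certificate, mirroring the
	// "serving cert is signed for DNS names [...] and IPs [...]" log lines above.
	func main() {
		if len(os.Args) != 2 {
			log.Fatalf("usage: %s <cert.pem>", os.Args[0])
		}
		data, err := os.ReadFile(os.Args[1])
		if err != nil {
			log.Fatal(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			log.Fatal("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("serving cert is signed for DNS names %v and IPs %v\n", cert.DNSNames, cert.IPAddresses)
	}

	(Example usage, assuming the default kubeadm layout: go run sancheck.go /etc/kubernetes/pki/etcd/server.crt)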
	I0918 21:25:40.676700   69636 main.go:141] libmachine: (newest-cni-560575) DBG | domain newest-cni-560575 has defined MAC address 52:54:00:35:4b:9c in network mk-newest-cni-560575
	I0918 21:25:40.677298   69636 main.go:141] libmachine: (newest-cni-560575) DBG | unable to find current IP address of domain newest-cni-560575 in network mk-newest-cni-560575
	I0918 21:25:40.677327   69636 main.go:141] libmachine: (newest-cni-560575) DBG | I0918 21:25:40.677251   69715 retry.go:31] will retry after 1.679246697s: waiting for machine to come up
	I0918 21:25:42.357846   69636 main.go:141] libmachine: (newest-cni-560575) DBG | domain newest-cni-560575 has defined MAC address 52:54:00:35:4b:9c in network mk-newest-cni-560575
	I0918 21:25:42.358549   69636 main.go:141] libmachine: (newest-cni-560575) DBG | unable to find current IP address of domain newest-cni-560575 in network mk-newest-cni-560575
	I0918 21:25:42.358579   69636 main.go:141] libmachine: (newest-cni-560575) DBG | I0918 21:25:42.358499   69715 retry.go:31] will retry after 2.135269406s: waiting for machine to come up
	I0918 21:25:44.495014   69636 main.go:141] libmachine: (newest-cni-560575) DBG | domain newest-cni-560575 has defined MAC address 52:54:00:35:4b:9c in network mk-newest-cni-560575
	I0918 21:25:44.495568   69636 main.go:141] libmachine: (newest-cni-560575) DBG | unable to find current IP address of domain newest-cni-560575 in network mk-newest-cni-560575
	I0918 21:25:44.495613   69636 main.go:141] libmachine: (newest-cni-560575) DBG | I0918 21:25:44.495537   69715 retry.go:31] will retry after 3.079388257s: waiting for machine to come up
	I0918 21:25:43.587073   69269 out.go:235]   - Booting up control plane ...
	I0918 21:25:43.587194   69269 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0918 21:25:43.587329   69269 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0918 21:25:43.587435   69269 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0918 21:25:43.604661   69269 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0918 21:25:43.613043   69269 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0918 21:25:43.613141   69269 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0918 21:25:43.759936   69269 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0918 21:25:43.760067   69269 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0918 21:25:44.262161   69269 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.575793ms
	I0918 21:25:44.262305   69269 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
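	(Note: the [kubelet-check] and [api-check] phases above simply poll a health endpoint until it returns 200 OK or a deadline expires. A minimal standalone sketch of that polling pattern follows; the endpoint and the 4m0s budget are taken from the log lines above, while the 500ms poll interval is an assumption for illustration.)

	package main

	import (
		"fmt"
		"log"
		"net/http"
		"time"
	)

	// waitHealthy polls url until it answers 200 OK or the timeout elapses.
	func waitHealthy(url string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := http.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("%s not healthy after %s", url, timeout)
	}

	func main() {
		if err := waitHealthy("http://127.0.0.1:10248/healthz", 4*time.Minute); err != nil {
			log.Fatal(err)
		}
		fmt.Println("kubelet is healthy")
	}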
	I0918 21:25:47.577075   69636 main.go:141] libmachine: (newest-cni-560575) DBG | domain newest-cni-560575 has defined MAC address 52:54:00:35:4b:9c in network mk-newest-cni-560575
	I0918 21:25:47.577593   69636 main.go:141] libmachine: (newest-cni-560575) DBG | unable to find current IP address of domain newest-cni-560575 in network mk-newest-cni-560575
	I0918 21:25:47.577618   69636 main.go:141] libmachine: (newest-cni-560575) DBG | I0918 21:25:47.577557   69715 retry.go:31] will retry after 3.851719007s: waiting for machine to come up
	I0918 21:25:50.263949   69269 kubeadm.go:310] [api-check] The API server is healthy after 6.002596681s
	I0918 21:25:50.275035   69269 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0918 21:25:50.297822   69269 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0918 21:25:50.328953   69269 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0918 21:25:50.329196   69269 kubeadm.go:310] [mark-control-plane] Marking the node auto-543581 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0918 21:25:50.346604   69269 kubeadm.go:310] [bootstrap-token] Using token: j9mpfe.zmuyflf183c04j10
	I0918 21:25:50.347827   69269 out.go:235]   - Configuring RBAC rules ...
	I0918 21:25:50.347938   69269 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0918 21:25:50.357759   69269 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0918 21:25:50.371169   69269 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0918 21:25:50.375677   69269 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0918 21:25:50.380563   69269 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0918 21:25:50.385525   69269 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0918 21:25:50.671751   69269 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0918 21:25:51.111248   69269 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0918 21:25:51.670858   69269 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0918 21:25:51.670896   69269 kubeadm.go:310] 
	I0918 21:25:51.670970   69269 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0918 21:25:51.670983   69269 kubeadm.go:310] 
	I0918 21:25:51.671088   69269 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0918 21:25:51.671099   69269 kubeadm.go:310] 
	I0918 21:25:51.671134   69269 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0918 21:25:51.671192   69269 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0918 21:25:51.671237   69269 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0918 21:25:51.671262   69269 kubeadm.go:310] 
	I0918 21:25:51.671349   69269 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0918 21:25:51.671362   69269 kubeadm.go:310] 
	I0918 21:25:51.671444   69269 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0918 21:25:51.671477   69269 kubeadm.go:310] 
	I0918 21:25:51.671554   69269 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0918 21:25:51.671662   69269 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0918 21:25:51.671770   69269 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0918 21:25:51.671780   69269 kubeadm.go:310] 
	I0918 21:25:51.671891   69269 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0918 21:25:51.671988   69269 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0918 21:25:51.671996   69269 kubeadm.go:310] 
	I0918 21:25:51.672126   69269 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token j9mpfe.zmuyflf183c04j10 \
	I0918 21:25:51.672270   69269 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:8ad46bf288e8cc2226f5a929a768e6c95cddecc97ffc4016be27ef0987b1e36f \
	I0918 21:25:51.672302   69269 kubeadm.go:310] 	--control-plane 
	I0918 21:25:51.672309   69269 kubeadm.go:310] 
	I0918 21:25:51.672450   69269 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0918 21:25:51.672465   69269 kubeadm.go:310] 
	I0918 21:25:51.672579   69269 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token j9mpfe.zmuyflf183c04j10 \
	I0918 21:25:51.672724   69269 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:8ad46bf288e8cc2226f5a929a768e6c95cddecc97ffc4016be27ef0987b1e36f 
	I0918 21:25:51.673365   69269 kubeadm.go:310] W0918 21:25:39.804498     833 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0918 21:25:51.673713   69269 kubeadm.go:310] W0918 21:25:39.807075     833 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0918 21:25:51.673858   69269 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0918 21:25:51.673886   69269 cni.go:84] Creating CNI manager for ""
	I0918 21:25:51.673894   69269 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0918 21:25:51.675938   69269 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0918 21:25:51.677200   69269 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0918 21:25:51.689001   69269 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0918 21:25:51.708125   69269 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0918 21:25:51.708187   69269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 21:25:51.708254   69269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-543581 minikube.k8s.io/updated_at=2024_09_18T21_25_51_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=85073601a832bd4bbda5d11fa91feafff6ec6b91 minikube.k8s.io/name=auto-543581 minikube.k8s.io/primary=true
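	(Note: the 496-byte payload copied to /etc/cni/net.d/1-k8s.conflist above is minikube's bridge CNI configuration; its exact contents are not shown in this log. The Go sketch below only illustrates the general shape of a bridge conflist. The subnet, plugin fields, and portmap entry are assumptions for illustration, not values taken from this run.)

	package main

	import (
		"encoding/json"
		"fmt"
		"log"
	)

	// Render an example bridge CNI conflist of the kind written to
	// /etc/cni/net.d/1-k8s.conflist. Field values are illustrative only.
	func main() {
		conflist := map[string]any{
			"cniVersion": "0.3.1",
			"name":       "bridge",
			"plugins": []map[string]any{
				{
					"type":             "bridge",
					"bridge":           "bridge",
					"isDefaultGateway": true,
					"ipMasq":           true,
					"hairpinMode":      true,
					"ipam": map[string]any{
						"type":   "host-local",
						"subnet": "10.244.0.0/16", // assumed pod CIDR, not from this log
					},
				},
				{"type": "portmap", "capabilities": map[string]bool{"portMappings": true}},
			},
		}
		out, err := json.MarshalIndent(conflist, "", "  ")
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println(string(out))
	}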
	I0918 21:25:51.433185   69636 main.go:141] libmachine: (newest-cni-560575) DBG | domain newest-cni-560575 has defined MAC address 52:54:00:35:4b:9c in network mk-newest-cni-560575
	I0918 21:25:51.433817   69636 main.go:141] libmachine: (newest-cni-560575) DBG | domain newest-cni-560575 has current primary IP address 192.168.72.106 and MAC address 52:54:00:35:4b:9c in network mk-newest-cni-560575
	I0918 21:25:51.433843   69636 main.go:141] libmachine: (newest-cni-560575) Found IP for machine: 192.168.72.106
	I0918 21:25:51.433855   69636 main.go:141] libmachine: (newest-cni-560575) Reserving static IP address...
	I0918 21:25:51.434495   69636 main.go:141] libmachine: (newest-cni-560575) DBG | found host DHCP lease matching {name: "newest-cni-560575", mac: "52:54:00:35:4b:9c", ip: "192.168.72.106"} in network mk-newest-cni-560575: {Iface:virbr3 ExpiryTime:2024-09-18 22:25:44 +0000 UTC Type:0 Mac:52:54:00:35:4b:9c Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:newest-cni-560575 Clientid:01:52:54:00:35:4b:9c}
	I0918 21:25:51.434545   69636 main.go:141] libmachine: (newest-cni-560575) DBG | skip adding static IP to network mk-newest-cni-560575 - found existing host DHCP lease matching {name: "newest-cni-560575", mac: "52:54:00:35:4b:9c", ip: "192.168.72.106"}
	I0918 21:25:51.434562   69636 main.go:141] libmachine: (newest-cni-560575) Reserved static IP address: 192.168.72.106
	I0918 21:25:51.434579   69636 main.go:141] libmachine: (newest-cni-560575) Waiting for SSH to be available...
	I0918 21:25:51.434593   69636 main.go:141] libmachine: (newest-cni-560575) DBG | Getting to WaitForSSH function...
	I0918 21:25:51.437216   69636 main.go:141] libmachine: (newest-cni-560575) DBG | domain newest-cni-560575 has defined MAC address 52:54:00:35:4b:9c in network mk-newest-cni-560575
	I0918 21:25:51.437604   69636 main.go:141] libmachine: (newest-cni-560575) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:4b:9c", ip: ""} in network mk-newest-cni-560575: {Iface:virbr3 ExpiryTime:2024-09-18 22:25:44 +0000 UTC Type:0 Mac:52:54:00:35:4b:9c Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:newest-cni-560575 Clientid:01:52:54:00:35:4b:9c}
	I0918 21:25:51.437640   69636 main.go:141] libmachine: (newest-cni-560575) DBG | domain newest-cni-560575 has defined IP address 192.168.72.106 and MAC address 52:54:00:35:4b:9c in network mk-newest-cni-560575
	I0918 21:25:51.437759   69636 main.go:141] libmachine: (newest-cni-560575) DBG | Using SSH client type: external
	I0918 21:25:51.437780   69636 main.go:141] libmachine: (newest-cni-560575) DBG | Using SSH private key: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/newest-cni-560575/id_rsa (-rw-------)
	I0918 21:25:51.437823   69636 main.go:141] libmachine: (newest-cni-560575) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.106 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19667-7671/.minikube/machines/newest-cni-560575/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0918 21:25:51.437837   69636 main.go:141] libmachine: (newest-cni-560575) DBG | About to run SSH command:
	I0918 21:25:51.437867   69636 main.go:141] libmachine: (newest-cni-560575) DBG | exit 0
	I0918 21:25:51.560470   69636 main.go:141] libmachine: (newest-cni-560575) DBG | SSH cmd err, output: <nil>: 
	I0918 21:25:51.560852   69636 main.go:141] libmachine: (newest-cni-560575) Calling .GetConfigRaw
	I0918 21:25:51.561694   69636 main.go:141] libmachine: (newest-cni-560575) Calling .GetIP
	I0918 21:25:51.564517   69636 main.go:141] libmachine: (newest-cni-560575) DBG | domain newest-cni-560575 has defined MAC address 52:54:00:35:4b:9c in network mk-newest-cni-560575
	I0918 21:25:51.564942   69636 main.go:141] libmachine: (newest-cni-560575) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:4b:9c", ip: ""} in network mk-newest-cni-560575: {Iface:virbr3 ExpiryTime:2024-09-18 22:25:44 +0000 UTC Type:0 Mac:52:54:00:35:4b:9c Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:newest-cni-560575 Clientid:01:52:54:00:35:4b:9c}
	I0918 21:25:51.564964   69636 main.go:141] libmachine: (newest-cni-560575) DBG | domain newest-cni-560575 has defined IP address 192.168.72.106 and MAC address 52:54:00:35:4b:9c in network mk-newest-cni-560575
	I0918 21:25:51.565210   69636 profile.go:143] Saving config to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/newest-cni-560575/config.json ...
	I0918 21:25:51.565463   69636 machine.go:93] provisionDockerMachine start ...
	I0918 21:25:51.565488   69636 main.go:141] libmachine: (newest-cni-560575) Calling .DriverName
	I0918 21:25:51.565709   69636 main.go:141] libmachine: (newest-cni-560575) Calling .GetSSHHostname
	I0918 21:25:51.567884   69636 main.go:141] libmachine: (newest-cni-560575) DBG | domain newest-cni-560575 has defined MAC address 52:54:00:35:4b:9c in network mk-newest-cni-560575
	I0918 21:25:51.568229   69636 main.go:141] libmachine: (newest-cni-560575) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:4b:9c", ip: ""} in network mk-newest-cni-560575: {Iface:virbr3 ExpiryTime:2024-09-18 22:25:44 +0000 UTC Type:0 Mac:52:54:00:35:4b:9c Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:newest-cni-560575 Clientid:01:52:54:00:35:4b:9c}
	I0918 21:25:51.568257   69636 main.go:141] libmachine: (newest-cni-560575) DBG | domain newest-cni-560575 has defined IP address 192.168.72.106 and MAC address 52:54:00:35:4b:9c in network mk-newest-cni-560575
	I0918 21:25:51.568566   69636 main.go:141] libmachine: (newest-cni-560575) Calling .GetSSHPort
	I0918 21:25:51.568726   69636 main.go:141] libmachine: (newest-cni-560575) Calling .GetSSHKeyPath
	I0918 21:25:51.568891   69636 main.go:141] libmachine: (newest-cni-560575) Calling .GetSSHKeyPath
	I0918 21:25:51.569018   69636 main.go:141] libmachine: (newest-cni-560575) Calling .GetSSHUsername
	I0918 21:25:51.569158   69636 main.go:141] libmachine: Using SSH client type: native
	I0918 21:25:51.569349   69636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.106 22 <nil> <nil>}
	I0918 21:25:51.569359   69636 main.go:141] libmachine: About to run SSH command:
	hostname
	I0918 21:25:51.672806   69636 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0918 21:25:51.672833   69636 main.go:141] libmachine: (newest-cni-560575) Calling .GetMachineName
	I0918 21:25:51.673108   69636 buildroot.go:166] provisioning hostname "newest-cni-560575"
	I0918 21:25:51.673135   69636 main.go:141] libmachine: (newest-cni-560575) Calling .GetMachineName
	I0918 21:25:51.673322   69636 main.go:141] libmachine: (newest-cni-560575) Calling .GetSSHHostname
	I0918 21:25:51.676639   69636 main.go:141] libmachine: (newest-cni-560575) DBG | domain newest-cni-560575 has defined MAC address 52:54:00:35:4b:9c in network mk-newest-cni-560575
	I0918 21:25:51.677143   69636 main.go:141] libmachine: (newest-cni-560575) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:4b:9c", ip: ""} in network mk-newest-cni-560575: {Iface:virbr3 ExpiryTime:2024-09-18 22:25:44 +0000 UTC Type:0 Mac:52:54:00:35:4b:9c Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:newest-cni-560575 Clientid:01:52:54:00:35:4b:9c}
	I0918 21:25:51.677171   69636 main.go:141] libmachine: (newest-cni-560575) DBG | domain newest-cni-560575 has defined IP address 192.168.72.106 and MAC address 52:54:00:35:4b:9c in network mk-newest-cni-560575
	I0918 21:25:51.677420   69636 main.go:141] libmachine: (newest-cni-560575) Calling .GetSSHPort
	I0918 21:25:51.677611   69636 main.go:141] libmachine: (newest-cni-560575) Calling .GetSSHKeyPath
	I0918 21:25:51.677783   69636 main.go:141] libmachine: (newest-cni-560575) Calling .GetSSHKeyPath
	I0918 21:25:51.677983   69636 main.go:141] libmachine: (newest-cni-560575) Calling .GetSSHUsername
	I0918 21:25:51.678177   69636 main.go:141] libmachine: Using SSH client type: native
	I0918 21:25:51.678364   69636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.106 22 <nil> <nil>}
	I0918 21:25:51.678390   69636 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-560575 && echo "newest-cni-560575" | sudo tee /etc/hostname
	I0918 21:25:51.804839   69636 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-560575
	
	I0918 21:25:51.804889   69636 main.go:141] libmachine: (newest-cni-560575) Calling .GetSSHHostname
	I0918 21:25:51.807991   69636 main.go:141] libmachine: (newest-cni-560575) DBG | domain newest-cni-560575 has defined MAC address 52:54:00:35:4b:9c in network mk-newest-cni-560575
	I0918 21:25:51.808374   69636 main.go:141] libmachine: (newest-cni-560575) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:4b:9c", ip: ""} in network mk-newest-cni-560575: {Iface:virbr3 ExpiryTime:2024-09-18 22:25:44 +0000 UTC Type:0 Mac:52:54:00:35:4b:9c Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:newest-cni-560575 Clientid:01:52:54:00:35:4b:9c}
	I0918 21:25:51.808412   69636 main.go:141] libmachine: (newest-cni-560575) DBG | domain newest-cni-560575 has defined IP address 192.168.72.106 and MAC address 52:54:00:35:4b:9c in network mk-newest-cni-560575
	I0918 21:25:51.808539   69636 main.go:141] libmachine: (newest-cni-560575) Calling .GetSSHPort
	I0918 21:25:51.808759   69636 main.go:141] libmachine: (newest-cni-560575) Calling .GetSSHKeyPath
	I0918 21:25:51.808980   69636 main.go:141] libmachine: (newest-cni-560575) Calling .GetSSHKeyPath
	I0918 21:25:51.809204   69636 main.go:141] libmachine: (newest-cni-560575) Calling .GetSSHUsername
	I0918 21:25:51.809404   69636 main.go:141] libmachine: Using SSH client type: native
	I0918 21:25:51.809627   69636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.106 22 <nil> <nil>}
	I0918 21:25:51.809651   69636 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-560575' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-560575/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-560575' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0918 21:25:51.930604   69636 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0918 21:25:51.930641   69636 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19667-7671/.minikube CaCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19667-7671/.minikube}
	I0918 21:25:51.930660   69636 buildroot.go:174] setting up certificates
	I0918 21:25:51.930667   69636 provision.go:84] configureAuth start
	I0918 21:25:51.930680   69636 main.go:141] libmachine: (newest-cni-560575) Calling .GetMachineName
	I0918 21:25:51.930941   69636 main.go:141] libmachine: (newest-cni-560575) Calling .GetIP
	I0918 21:25:51.933624   69636 main.go:141] libmachine: (newest-cni-560575) DBG | domain newest-cni-560575 has defined MAC address 52:54:00:35:4b:9c in network mk-newest-cni-560575
	I0918 21:25:51.934066   69636 main.go:141] libmachine: (newest-cni-560575) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:4b:9c", ip: ""} in network mk-newest-cni-560575: {Iface:virbr3 ExpiryTime:2024-09-18 22:25:44 +0000 UTC Type:0 Mac:52:54:00:35:4b:9c Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:newest-cni-560575 Clientid:01:52:54:00:35:4b:9c}
	I0918 21:25:51.934108   69636 main.go:141] libmachine: (newest-cni-560575) DBG | domain newest-cni-560575 has defined IP address 192.168.72.106 and MAC address 52:54:00:35:4b:9c in network mk-newest-cni-560575
	I0918 21:25:51.934264   69636 main.go:141] libmachine: (newest-cni-560575) Calling .GetSSHHostname
	I0918 21:25:51.937255   69636 main.go:141] libmachine: (newest-cni-560575) DBG | domain newest-cni-560575 has defined MAC address 52:54:00:35:4b:9c in network mk-newest-cni-560575
	I0918 21:25:51.937668   69636 main.go:141] libmachine: (newest-cni-560575) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:4b:9c", ip: ""} in network mk-newest-cni-560575: {Iface:virbr3 ExpiryTime:2024-09-18 22:25:44 +0000 UTC Type:0 Mac:52:54:00:35:4b:9c Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:newest-cni-560575 Clientid:01:52:54:00:35:4b:9c}
	I0918 21:25:51.937705   69636 main.go:141] libmachine: (newest-cni-560575) DBG | domain newest-cni-560575 has defined IP address 192.168.72.106 and MAC address 52:54:00:35:4b:9c in network mk-newest-cni-560575
	I0918 21:25:51.937987   69636 provision.go:143] copyHostCerts
	I0918 21:25:51.938056   69636 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem, removing ...
	I0918 21:25:51.938068   69636 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem
	I0918 21:25:51.938119   69636 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem (1082 bytes)
	I0918 21:25:51.938217   69636 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem, removing ...
	I0918 21:25:51.938225   69636 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem
	I0918 21:25:51.938245   69636 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem (1123 bytes)
	I0918 21:25:51.938319   69636 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem, removing ...
	I0918 21:25:51.938326   69636 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem
	I0918 21:25:51.938347   69636 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem (1679 bytes)
	I0918 21:25:51.938407   69636 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem org=jenkins.newest-cni-560575 san=[127.0.0.1 192.168.72.106 localhost minikube newest-cni-560575]
	I0918 21:25:52.481463   69636 provision.go:177] copyRemoteCerts
	I0918 21:25:52.481528   69636 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0918 21:25:52.481554   69636 main.go:141] libmachine: (newest-cni-560575) Calling .GetSSHHostname
	I0918 21:25:52.484866   69636 main.go:141] libmachine: (newest-cni-560575) DBG | domain newest-cni-560575 has defined MAC address 52:54:00:35:4b:9c in network mk-newest-cni-560575
	I0918 21:25:52.485245   69636 main.go:141] libmachine: (newest-cni-560575) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:4b:9c", ip: ""} in network mk-newest-cni-560575: {Iface:virbr3 ExpiryTime:2024-09-18 22:25:44 +0000 UTC Type:0 Mac:52:54:00:35:4b:9c Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:newest-cni-560575 Clientid:01:52:54:00:35:4b:9c}
	I0918 21:25:52.485281   69636 main.go:141] libmachine: (newest-cni-560575) DBG | domain newest-cni-560575 has defined IP address 192.168.72.106 and MAC address 52:54:00:35:4b:9c in network mk-newest-cni-560575
	I0918 21:25:52.485482   69636 main.go:141] libmachine: (newest-cni-560575) Calling .GetSSHPort
	I0918 21:25:52.485763   69636 main.go:141] libmachine: (newest-cni-560575) Calling .GetSSHKeyPath
	I0918 21:25:52.485934   69636 main.go:141] libmachine: (newest-cni-560575) Calling .GetSSHUsername
	I0918 21:25:52.486125   69636 sshutil.go:53] new ssh client: &{IP:192.168.72.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/newest-cni-560575/id_rsa Username:docker}
	I0918 21:25:52.566235   69636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0918 21:25:52.590280   69636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0918 21:25:52.616863   69636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0918 21:25:52.643269   69636 provision.go:87] duration metric: took 712.589527ms to configureAuth
	I0918 21:25:52.643308   69636 buildroot.go:189] setting minikube options for container-runtime
	I0918 21:25:52.643516   69636 config.go:182] Loaded profile config "newest-cni-560575": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0918 21:25:52.643623   69636 main.go:141] libmachine: (newest-cni-560575) Calling .GetSSHHostname
	I0918 21:25:52.647143   69636 main.go:141] libmachine: (newest-cni-560575) DBG | domain newest-cni-560575 has defined MAC address 52:54:00:35:4b:9c in network mk-newest-cni-560575
	I0918 21:25:52.647487   69636 main.go:141] libmachine: (newest-cni-560575) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:4b:9c", ip: ""} in network mk-newest-cni-560575: {Iface:virbr3 ExpiryTime:2024-09-18 22:25:44 +0000 UTC Type:0 Mac:52:54:00:35:4b:9c Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:newest-cni-560575 Clientid:01:52:54:00:35:4b:9c}
	I0918 21:25:52.647517   69636 main.go:141] libmachine: (newest-cni-560575) DBG | domain newest-cni-560575 has defined IP address 192.168.72.106 and MAC address 52:54:00:35:4b:9c in network mk-newest-cni-560575
	I0918 21:25:52.647775   69636 main.go:141] libmachine: (newest-cni-560575) Calling .GetSSHPort
	I0918 21:25:52.648104   69636 main.go:141] libmachine: (newest-cni-560575) Calling .GetSSHKeyPath
	I0918 21:25:52.648472   69636 main.go:141] libmachine: (newest-cni-560575) Calling .GetSSHKeyPath
	I0918 21:25:52.648739   69636 main.go:141] libmachine: (newest-cni-560575) Calling .GetSSHUsername
	I0918 21:25:52.649086   69636 main.go:141] libmachine: Using SSH client type: native
	I0918 21:25:52.649436   69636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.106 22 <nil> <nil>}
	I0918 21:25:52.649458   69636 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0918 21:25:52.872144   69636 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0918 21:25:52.872173   69636 machine.go:96] duration metric: took 1.306693679s to provisionDockerMachine
	I0918 21:25:52.872186   69636 start.go:293] postStartSetup for "newest-cni-560575" (driver="kvm2")
	I0918 21:25:52.872198   69636 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0918 21:25:52.872223   69636 main.go:141] libmachine: (newest-cni-560575) Calling .DriverName
	I0918 21:25:52.872581   69636 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0918 21:25:52.872619   69636 main.go:141] libmachine: (newest-cni-560575) Calling .GetSSHHostname
	I0918 21:25:52.875454   69636 main.go:141] libmachine: (newest-cni-560575) DBG | domain newest-cni-560575 has defined MAC address 52:54:00:35:4b:9c in network mk-newest-cni-560575
	I0918 21:25:52.875883   69636 main.go:141] libmachine: (newest-cni-560575) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:4b:9c", ip: ""} in network mk-newest-cni-560575: {Iface:virbr3 ExpiryTime:2024-09-18 22:25:44 +0000 UTC Type:0 Mac:52:54:00:35:4b:9c Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:newest-cni-560575 Clientid:01:52:54:00:35:4b:9c}
	I0918 21:25:52.875918   69636 main.go:141] libmachine: (newest-cni-560575) DBG | domain newest-cni-560575 has defined IP address 192.168.72.106 and MAC address 52:54:00:35:4b:9c in network mk-newest-cni-560575
	I0918 21:25:52.876082   69636 main.go:141] libmachine: (newest-cni-560575) Calling .GetSSHPort
	I0918 21:25:52.876301   69636 main.go:141] libmachine: (newest-cni-560575) Calling .GetSSHKeyPath
	I0918 21:25:52.876450   69636 main.go:141] libmachine: (newest-cni-560575) Calling .GetSSHUsername
	I0918 21:25:52.876594   69636 sshutil.go:53] new ssh client: &{IP:192.168.72.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/newest-cni-560575/id_rsa Username:docker}
	I0918 21:25:52.960398   69636 ssh_runner.go:195] Run: cat /etc/os-release
	I0918 21:25:52.964840   69636 info.go:137] Remote host: Buildroot 2023.02.9
	I0918 21:25:52.964869   69636 filesync.go:126] Scanning /home/jenkins/minikube-integration/19667-7671/.minikube/addons for local assets ...
	I0918 21:25:52.964947   69636 filesync.go:126] Scanning /home/jenkins/minikube-integration/19667-7671/.minikube/files for local assets ...
	I0918 21:25:52.965042   69636 filesync.go:149] local asset: /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem -> 148782.pem in /etc/ssl/certs
	I0918 21:25:52.965187   69636 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0918 21:25:52.976325   69636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem --> /etc/ssl/certs/148782.pem (1708 bytes)
	I0918 21:25:53.000678   69636 start.go:296] duration metric: took 128.47609ms for postStartSetup
	I0918 21:25:53.000723   69636 fix.go:56] duration metric: took 20.24797596s for fixHost
	I0918 21:25:53.000769   69636 main.go:141] libmachine: (newest-cni-560575) Calling .GetSSHHostname
	I0918 21:25:53.003490   69636 main.go:141] libmachine: (newest-cni-560575) DBG | domain newest-cni-560575 has defined MAC address 52:54:00:35:4b:9c in network mk-newest-cni-560575
	I0918 21:25:53.003809   69636 main.go:141] libmachine: (newest-cni-560575) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:4b:9c", ip: ""} in network mk-newest-cni-560575: {Iface:virbr3 ExpiryTime:2024-09-18 22:25:44 +0000 UTC Type:0 Mac:52:54:00:35:4b:9c Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:newest-cni-560575 Clientid:01:52:54:00:35:4b:9c}
	I0918 21:25:53.003835   69636 main.go:141] libmachine: (newest-cni-560575) DBG | domain newest-cni-560575 has defined IP address 192.168.72.106 and MAC address 52:54:00:35:4b:9c in network mk-newest-cni-560575
	I0918 21:25:53.004078   69636 main.go:141] libmachine: (newest-cni-560575) Calling .GetSSHPort
	I0918 21:25:53.004322   69636 main.go:141] libmachine: (newest-cni-560575) Calling .GetSSHKeyPath
	I0918 21:25:53.004498   69636 main.go:141] libmachine: (newest-cni-560575) Calling .GetSSHKeyPath
	I0918 21:25:53.004691   69636 main.go:141] libmachine: (newest-cni-560575) Calling .GetSSHUsername
	I0918 21:25:53.004875   69636 main.go:141] libmachine: Using SSH client type: native
	I0918 21:25:53.005096   69636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.106 22 <nil> <nil>}
	I0918 21:25:53.005112   69636 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0918 21:25:53.109240   69636 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726694753.084270222
	
	I0918 21:25:53.109264   69636 fix.go:216] guest clock: 1726694753.084270222
	I0918 21:25:53.109273   69636 fix.go:229] Guest: 2024-09-18 21:25:53.084270222 +0000 UTC Remote: 2024-09-18 21:25:53.000728061 +0000 UTC m=+27.837944780 (delta=83.542161ms)
	I0918 21:25:53.109348   69636 fix.go:200] guest clock delta is within tolerance: 83.542161ms
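	(Note: the guest-clock check above runs `date +%s.%N` on the VM and compares the result with the host time recorded just before, yielding the 83.542161ms delta. A small standalone sketch of that comparison follows; it runs `date` locally as a stand-in for the SSH'd command, and the 2-second tolerance is an assumption, since minikube's actual threshold is not shown in the log.)

	package main

	import (
		"fmt"
		"log"
		"math"
		"os/exec"
		"strconv"
		"strings"
		"time"
	)

	// Compare a clock read via `date +%s.%N` with the local clock.
	func main() {
		out, err := exec.Command("date", "+%s.%N").Output() // stand-in for the remote command
		if err != nil {
			log.Fatal(err)
		}
		secs, err := strconv.ParseFloat(strings.TrimSpace(string(out)), 64)
		if err != nil {
			log.Fatal(err)
		}
		guest := time.Unix(0, int64(secs*float64(time.Second)))
		delta := time.Since(guest)
		fmt.Printf("guest clock delta: %v\n", delta)
		if math.Abs(delta.Seconds()) > 2 { // assumed tolerance
			fmt.Println("delta outside tolerance; guest clock may need syncing")
		}
	}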
	I0918 21:25:53.109367   69636 start.go:83] releasing machines lock for "newest-cni-560575", held for 20.356638157s
	I0918 21:25:53.109402   69636 main.go:141] libmachine: (newest-cni-560575) Calling .DriverName
	I0918 21:25:53.109754   69636 main.go:141] libmachine: (newest-cni-560575) Calling .GetIP
	I0918 21:25:53.112761   69636 main.go:141] libmachine: (newest-cni-560575) DBG | domain newest-cni-560575 has defined MAC address 52:54:00:35:4b:9c in network mk-newest-cni-560575
	I0918 21:25:53.113162   69636 main.go:141] libmachine: (newest-cni-560575) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:4b:9c", ip: ""} in network mk-newest-cni-560575: {Iface:virbr3 ExpiryTime:2024-09-18 22:25:44 +0000 UTC Type:0 Mac:52:54:00:35:4b:9c Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:newest-cni-560575 Clientid:01:52:54:00:35:4b:9c}
	I0918 21:25:53.113198   69636 main.go:141] libmachine: (newest-cni-560575) DBG | domain newest-cni-560575 has defined IP address 192.168.72.106 and MAC address 52:54:00:35:4b:9c in network mk-newest-cni-560575
	I0918 21:25:53.113406   69636 main.go:141] libmachine: (newest-cni-560575) Calling .DriverName
	I0918 21:25:53.114027   69636 main.go:141] libmachine: (newest-cni-560575) Calling .DriverName
	I0918 21:25:53.114237   69636 main.go:141] libmachine: (newest-cni-560575) Calling .DriverName
	I0918 21:25:53.114329   69636 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0918 21:25:53.114373   69636 main.go:141] libmachine: (newest-cni-560575) Calling .GetSSHHostname
	I0918 21:25:53.114491   69636 ssh_runner.go:195] Run: cat /version.json
	I0918 21:25:53.114517   69636 main.go:141] libmachine: (newest-cni-560575) Calling .GetSSHHostname
	I0918 21:25:53.117252   69636 main.go:141] libmachine: (newest-cni-560575) DBG | domain newest-cni-560575 has defined MAC address 52:54:00:35:4b:9c in network mk-newest-cni-560575
	I0918 21:25:53.117454   69636 main.go:141] libmachine: (newest-cni-560575) DBG | domain newest-cni-560575 has defined MAC address 52:54:00:35:4b:9c in network mk-newest-cni-560575
	I0918 21:25:53.117630   69636 main.go:141] libmachine: (newest-cni-560575) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:4b:9c", ip: ""} in network mk-newest-cni-560575: {Iface:virbr3 ExpiryTime:2024-09-18 22:25:44 +0000 UTC Type:0 Mac:52:54:00:35:4b:9c Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:newest-cni-560575 Clientid:01:52:54:00:35:4b:9c}
	I0918 21:25:53.117661   69636 main.go:141] libmachine: (newest-cni-560575) DBG | domain newest-cni-560575 has defined IP address 192.168.72.106 and MAC address 52:54:00:35:4b:9c in network mk-newest-cni-560575
	I0918 21:25:53.117823   69636 main.go:141] libmachine: (newest-cni-560575) Calling .GetSSHPort
	I0918 21:25:53.117923   69636 main.go:141] libmachine: (newest-cni-560575) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:4b:9c", ip: ""} in network mk-newest-cni-560575: {Iface:virbr3 ExpiryTime:2024-09-18 22:25:44 +0000 UTC Type:0 Mac:52:54:00:35:4b:9c Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:newest-cni-560575 Clientid:01:52:54:00:35:4b:9c}
	I0918 21:25:53.117950   69636 main.go:141] libmachine: (newest-cni-560575) DBG | domain newest-cni-560575 has defined IP address 192.168.72.106 and MAC address 52:54:00:35:4b:9c in network mk-newest-cni-560575
	I0918 21:25:53.118046   69636 main.go:141] libmachine: (newest-cni-560575) Calling .GetSSHKeyPath
	I0918 21:25:53.118216   69636 main.go:141] libmachine: (newest-cni-560575) Calling .GetSSHPort
	I0918 21:25:53.118229   69636 main.go:141] libmachine: (newest-cni-560575) Calling .GetSSHUsername
	I0918 21:25:53.118381   69636 main.go:141] libmachine: (newest-cni-560575) Calling .GetSSHKeyPath
	I0918 21:25:53.118378   69636 sshutil.go:53] new ssh client: &{IP:192.168.72.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/newest-cni-560575/id_rsa Username:docker}
	I0918 21:25:53.118543   69636 main.go:141] libmachine: (newest-cni-560575) Calling .GetSSHUsername
	I0918 21:25:53.118734   69636 sshutil.go:53] new ssh client: &{IP:192.168.72.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/newest-cni-560575/id_rsa Username:docker}
	I0918 21:25:53.227979   69636 ssh_runner.go:195] Run: systemctl --version
	I0918 21:25:53.234017   69636 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0918 21:25:53.386887   69636 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0918 21:25:53.392801   69636 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0918 21:25:53.392866   69636 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0918 21:25:53.411172   69636 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0918 21:25:53.411199   69636 start.go:495] detecting cgroup driver to use...
	I0918 21:25:53.411273   69636 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0918 21:25:53.432871   69636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0918 21:25:53.448267   69636 docker.go:217] disabling cri-docker service (if available) ...
	I0918 21:25:53.448341   69636 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0918 21:25:53.469490   69636 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0918 21:25:53.486157   69636 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0918 21:25:53.622484   69636 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0918 21:25:53.755703   69636 docker.go:233] disabling docker service ...
	I0918 21:25:53.755782   69636 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0918 21:25:53.771764   69636 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0918 21:25:53.786855   69636 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0918 21:25:53.926259   69636 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0918 21:25:54.040207   69636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0918 21:25:54.054910   69636 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0918 21:25:54.075838   69636 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0918 21:25:54.075913   69636 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:25:54.087017   69636 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0918 21:25:54.087108   69636 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:25:54.098393   69636 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:25:54.109370   69636 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:25:54.120635   69636 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0918 21:25:54.132298   69636 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:25:54.142985   69636 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:25:54.163642   69636 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:25:54.175105   69636 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0918 21:25:54.185420   69636 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0918 21:25:54.185501   69636 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0918 21:25:54.200183   69636 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
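	(Note: the netfilter check above fails with status 255 because /proc/sys/net/bridge/ does not exist until the br_netfilter module is loaded; minikube then runs modprobe and enables IPv4 forwarding. A minimal standalone sketch of that check-then-fallback sequence follows; it is not minikube's own code and must run as root to write under /proc.)

	package main

	import (
		"fmt"
		"log"
		"os"
		"os/exec"
	)

	// Ensure the bridge-netfilter sysctl exists (loading br_netfilter if needed)
	// and enable IPv4 forwarding, mirroring the crio.go steps logged above.
	func main() {
		const sysctlPath = "/proc/sys/net/bridge/bridge-nf-call-iptables"
		if _, err := os.Stat(sysctlPath); err != nil {
			fmt.Println("bridge-nf-call-iptables missing, loading br_netfilter")
			if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
				log.Fatalf("modprobe failed: %v: %s", err, out)
			}
		}
		// Equivalent of: echo 1 > /proc/sys/net/ipv4/ip_forward
		if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644); err != nil {
			log.Fatalf("enabling ip_forward: %v", err)
		}
		fmt.Println("bridge netfilter and ip_forward configured")
	}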
	I0918 21:25:54.210025   69636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 21:25:54.322634   69636 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0918 21:25:54.429955   69636 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0918 21:25:54.430051   69636 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0918 21:25:54.435043   69636 start.go:563] Will wait 60s for crictl version
	I0918 21:25:54.435127   69636 ssh_runner.go:195] Run: which crictl
	I0918 21:25:54.439139   69636 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0918 21:25:54.484360   69636 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0918 21:25:54.484444   69636 ssh_runner.go:195] Run: crio --version
	I0918 21:25:54.514649   69636 ssh_runner.go:195] Run: crio --version
	I0918 21:25:54.543966   69636 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0918 21:25:54.545308   69636 main.go:141] libmachine: (newest-cni-560575) Calling .GetIP
	I0918 21:25:54.548455   69636 main.go:141] libmachine: (newest-cni-560575) DBG | domain newest-cni-560575 has defined MAC address 52:54:00:35:4b:9c in network mk-newest-cni-560575
	I0918 21:25:54.548971   69636 main.go:141] libmachine: (newest-cni-560575) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:4b:9c", ip: ""} in network mk-newest-cni-560575: {Iface:virbr3 ExpiryTime:2024-09-18 22:25:44 +0000 UTC Type:0 Mac:52:54:00:35:4b:9c Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:newest-cni-560575 Clientid:01:52:54:00:35:4b:9c}
	I0918 21:25:54.549009   69636 main.go:141] libmachine: (newest-cni-560575) DBG | domain newest-cni-560575 has defined IP address 192.168.72.106 and MAC address 52:54:00:35:4b:9c in network mk-newest-cni-560575
	I0918 21:25:54.549430   69636 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0918 21:25:54.553844   69636 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0918 21:25:54.568720   69636 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0918 21:25:54.570036   69636 kubeadm.go:883] updating cluster {Name:newest-cni-560575 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-560575 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.106 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0918 21:25:54.570170   69636 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0918 21:25:54.570245   69636 ssh_runner.go:195] Run: sudo crictl images --output json
	I0918 21:25:54.613613   69636 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0918 21:25:54.613691   69636 ssh_runner.go:195] Run: which lz4
	I0918 21:25:54.617727   69636 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0918 21:25:54.622032   69636 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0918 21:25:54.622058   69636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0918 21:25:51.887851   69269 ops.go:34] apiserver oom_adj: -16
	I0918 21:25:51.887992   69269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 21:25:52.388765   69269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 21:25:52.888197   69269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 21:25:53.388166   69269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 21:25:53.888203   69269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 21:25:54.388173   69269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 21:25:54.888990   69269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 21:25:55.388629   69269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 21:25:55.505754   69269 kubeadm.go:1113] duration metric: took 3.797633365s to wait for elevateKubeSystemPrivileges
	I0918 21:25:55.505781   69269 kubeadm.go:394] duration metric: took 15.891314341s to StartCluster
	I0918 21:25:55.505795   69269 settings.go:142] acquiring lock: {Name:mk6ae95a69dcbe00c28846409e9f46945a88de2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 21:25:55.505861   69269 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19667-7671/kubeconfig
	I0918 21:25:55.507348   69269 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/kubeconfig: {Name:mk3a3eecadb0164dfdd5d3c4392b5b473a2a9bb0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 21:25:55.507612   69269 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.61.181 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0918 21:25:55.507739   69269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0918 21:25:55.507800   69269 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0918 21:25:55.507904   69269 addons.go:69] Setting storage-provisioner=true in profile "auto-543581"
	I0918 21:25:55.507921   69269 addons.go:234] Setting addon storage-provisioner=true in "auto-543581"
	I0918 21:25:55.507954   69269 host.go:66] Checking if "auto-543581" exists ...
	I0918 21:25:55.507971   69269 config.go:182] Loaded profile config "auto-543581": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0918 21:25:55.508053   69269 addons.go:69] Setting default-storageclass=true in profile "auto-543581"
	I0918 21:25:55.508070   69269 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "auto-543581"
	I0918 21:25:55.508545   69269 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:25:55.508557   69269 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:25:55.508593   69269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:25:55.508721   69269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:25:55.509375   69269 out.go:177] * Verifying Kubernetes components...
	I0918 21:25:55.510676   69269 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 21:25:55.529880   69269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33203
	I0918 21:25:55.530495   69269 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:25:55.530992   69269 main.go:141] libmachine: Using API Version  1
	I0918 21:25:55.531017   69269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:25:55.531594   69269 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:25:55.531924   69269 main.go:141] libmachine: (auto-543581) Calling .GetState
	I0918 21:25:55.531946   69269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40349
	I0918 21:25:55.532576   69269 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:25:55.533461   69269 main.go:141] libmachine: Using API Version  1
	I0918 21:25:55.533481   69269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:25:55.533938   69269 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:25:55.536194   69269 addons.go:234] Setting addon default-storageclass=true in "auto-543581"
	I0918 21:25:55.536242   69269 host.go:66] Checking if "auto-543581" exists ...
	I0918 21:25:55.536599   69269 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:25:55.536610   69269 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:25:55.536641   69269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:25:55.536650   69269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:25:55.559697   69269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40231
	I0918 21:25:55.559724   69269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40327
	I0918 21:25:55.560248   69269 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:25:55.560357   69269 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:25:55.560987   69269 main.go:141] libmachine: Using API Version  1
	I0918 21:25:55.561002   69269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:25:55.561018   69269 main.go:141] libmachine: Using API Version  1
	I0918 21:25:55.561037   69269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:25:55.561365   69269 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:25:55.561960   69269 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:25:55.562011   69269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:25:55.568559   69269 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:25:55.568880   69269 main.go:141] libmachine: (auto-543581) Calling .GetState
	I0918 21:25:55.570969   69269 main.go:141] libmachine: (auto-543581) Calling .DriverName
	I0918 21:25:55.573537   69269 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 21:25:55.575545   69269 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0918 21:25:55.575571   69269 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0918 21:25:55.575598   69269 main.go:141] libmachine: (auto-543581) Calling .GetSSHHostname
	I0918 21:25:55.579775   69269 main.go:141] libmachine: (auto-543581) DBG | domain auto-543581 has defined MAC address 52:54:00:35:a4:04 in network mk-auto-543581
	I0918 21:25:55.580822   69269 main.go:141] libmachine: (auto-543581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a4:04", ip: ""} in network mk-auto-543581: {Iface:virbr2 ExpiryTime:2024-09-18 22:25:21 +0000 UTC Type:0 Mac:52:54:00:35:a4:04 Iaid: IPaddr:192.168.61.181 Prefix:24 Hostname:auto-543581 Clientid:01:52:54:00:35:a4:04}
	I0918 21:25:55.580848   69269 main.go:141] libmachine: (auto-543581) DBG | domain auto-543581 has defined IP address 192.168.61.181 and MAC address 52:54:00:35:a4:04 in network mk-auto-543581
	I0918 21:25:55.581093   69269 main.go:141] libmachine: (auto-543581) Calling .GetSSHPort
	I0918 21:25:55.581810   69269 main.go:141] libmachine: (auto-543581) Calling .GetSSHKeyPath
	I0918 21:25:55.582053   69269 main.go:141] libmachine: (auto-543581) Calling .GetSSHUsername
	I0918 21:25:55.582245   69269 sshutil.go:53] new ssh client: &{IP:192.168.61.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/auto-543581/id_rsa Username:docker}
	I0918 21:25:55.598985   69269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35881
	I0918 21:25:55.599562   69269 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:25:55.600212   69269 main.go:141] libmachine: Using API Version  1
	I0918 21:25:55.600233   69269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:25:55.600732   69269 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:25:55.600936   69269 main.go:141] libmachine: (auto-543581) Calling .GetState
	I0918 21:25:55.603609   69269 main.go:141] libmachine: (auto-543581) Calling .DriverName
	I0918 21:25:55.603861   69269 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0918 21:25:55.603884   69269 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0918 21:25:55.603909   69269 main.go:141] libmachine: (auto-543581) Calling .GetSSHHostname
	I0918 21:25:55.607781   69269 main.go:141] libmachine: (auto-543581) DBG | domain auto-543581 has defined MAC address 52:54:00:35:a4:04 in network mk-auto-543581
	I0918 21:25:55.608363   69269 main.go:141] libmachine: (auto-543581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a4:04", ip: ""} in network mk-auto-543581: {Iface:virbr2 ExpiryTime:2024-09-18 22:25:21 +0000 UTC Type:0 Mac:52:54:00:35:a4:04 Iaid: IPaddr:192.168.61.181 Prefix:24 Hostname:auto-543581 Clientid:01:52:54:00:35:a4:04}
	I0918 21:25:55.608384   69269 main.go:141] libmachine: (auto-543581) DBG | domain auto-543581 has defined IP address 192.168.61.181 and MAC address 52:54:00:35:a4:04 in network mk-auto-543581
	I0918 21:25:55.608717   69269 main.go:141] libmachine: (auto-543581) Calling .GetSSHPort
	I0918 21:25:55.608954   69269 main.go:141] libmachine: (auto-543581) Calling .GetSSHKeyPath
	I0918 21:25:55.609104   69269 main.go:141] libmachine: (auto-543581) Calling .GetSSHUsername
	I0918 21:25:55.609340   69269 sshutil.go:53] new ssh client: &{IP:192.168.61.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/auto-543581/id_rsa Username:docker}
	I0918 21:25:55.827130   69269 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0918 21:25:55.827519   69269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0918 21:25:56.000081   69269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0918 21:25:56.143827   69269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0918 21:25:56.599531   69269 start.go:971] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I0918 21:25:56.599591   69269 main.go:141] libmachine: Making call to close driver server
	I0918 21:25:56.599611   69269 main.go:141] libmachine: (auto-543581) Calling .Close
	I0918 21:25:56.600136   69269 main.go:141] libmachine: (auto-543581) DBG | Closing plugin on server side
	I0918 21:25:56.600162   69269 main.go:141] libmachine: Successfully made call to close driver server
	I0918 21:25:56.600178   69269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 21:25:56.600187   69269 main.go:141] libmachine: Making call to close driver server
	I0918 21:25:56.600195   69269 main.go:141] libmachine: (auto-543581) Calling .Close
	I0918 21:25:56.600422   69269 main.go:141] libmachine: Successfully made call to close driver server
	I0918 21:25:56.600440   69269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 21:25:56.601188   69269 node_ready.go:35] waiting up to 15m0s for node "auto-543581" to be "Ready" ...
	I0918 21:25:56.623139   69269 node_ready.go:49] node "auto-543581" has status "Ready":"True"
	I0918 21:25:56.623163   69269 node_ready.go:38] duration metric: took 21.954939ms for node "auto-543581" to be "Ready" ...
	I0918 21:25:56.623172   69269 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0918 21:25:56.644556   69269 main.go:141] libmachine: Making call to close driver server
	I0918 21:25:56.644584   69269 main.go:141] libmachine: (auto-543581) Calling .Close
	I0918 21:25:56.644928   69269 main.go:141] libmachine: (auto-543581) DBG | Closing plugin on server side
	I0918 21:25:56.644941   69269 main.go:141] libmachine: Successfully made call to close driver server
	I0918 21:25:56.644952   69269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 21:25:56.657550   69269 pod_ready.go:79] waiting up to 15m0s for pod "coredns-7c65d6cfc9-5cwr8" in "kube-system" namespace to be "Ready" ...
	I0918 21:25:57.106515   69269 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-543581" context rescaled to 1 replicas
	I0918 21:25:57.473210   69269 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.329340853s)
	I0918 21:25:57.473363   69269 main.go:141] libmachine: Making call to close driver server
	I0918 21:25:57.473440   69269 main.go:141] libmachine: (auto-543581) Calling .Close
	I0918 21:25:57.473859   69269 main.go:141] libmachine: Successfully made call to close driver server
	I0918 21:25:57.473922   69269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 21:25:57.473936   69269 main.go:141] libmachine: Making call to close driver server
	I0918 21:25:57.473937   69269 main.go:141] libmachine: (auto-543581) DBG | Closing plugin on server side
	I0918 21:25:57.473945   69269 main.go:141] libmachine: (auto-543581) Calling .Close
	I0918 21:25:57.474315   69269 main.go:141] libmachine: Successfully made call to close driver server
	I0918 21:25:57.474371   69269 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 21:25:57.476793   69269 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0918 21:25:55.973400   69636 crio.go:462] duration metric: took 1.35570314s to copy over tarball
	I0918 21:25:55.973484   69636 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0918 21:25:58.334361   69636 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.360843616s)
	I0918 21:25:58.334398   69636 crio.go:469] duration metric: took 2.360959892s to extract the tarball
	I0918 21:25:58.334407   69636 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0918 21:25:58.371438   69636 ssh_runner.go:195] Run: sudo crictl images --output json
	I0918 21:25:58.416915   69636 crio.go:514] all images are preloaded for cri-o runtime.
	I0918 21:25:58.416961   69636 cache_images.go:84] Images are preloaded, skipping loading
	I0918 21:25:58.416971   69636 kubeadm.go:934] updating node { 192.168.72.106 8443 v1.31.1 crio true true} ...
	I0918 21:25:58.417097   69636 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --feature-gates=ServerSideApply=true --hostname-override=newest-cni-560575 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.106
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:newest-cni-560575 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0918 21:25:58.417267   69636 ssh_runner.go:195] Run: crio config
	I0918 21:25:58.467431   69636 cni.go:84] Creating CNI manager for ""
	I0918 21:25:58.467467   69636 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0918 21:25:58.467480   69636 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0918 21:25:58.467505   69636 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.72.106 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-560575 NodeName:newest-cni-560575 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.106"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:ma
p[] NodeIP:192.168.72.106 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0918 21:25:58.467731   69636 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.106
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-560575"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.106
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.106"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0918 21:25:58.467811   69636 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0918 21:25:58.479203   69636 binaries.go:44] Found k8s binaries, skipping transfer
	I0918 21:25:58.479283   69636 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0918 21:25:58.490217   69636 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
	I0918 21:25:58.507228   69636 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0918 21:25:58.523965   69636 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2285 bytes)
	I0918 21:25:58.542165   69636 ssh_runner.go:195] Run: grep 192.168.72.106	control-plane.minikube.internal$ /etc/hosts
	I0918 21:25:58.546080   69636 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.106	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0918 21:25:58.558842   69636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 21:25:58.691672   69636 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0918 21:25:58.708777   69636 certs.go:68] Setting up /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/newest-cni-560575 for IP: 192.168.72.106
	I0918 21:25:58.708803   69636 certs.go:194] generating shared ca certs ...
	I0918 21:25:58.708825   69636 certs.go:226] acquiring lock for ca certs: {Name:mkf54e0e71ed884c4372f9d3d4cb308ea4600185 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 21:25:58.709040   69636 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.key
	I0918 21:25:58.709099   69636 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.key
	I0918 21:25:58.709113   69636 certs.go:256] generating profile certs ...
	I0918 21:25:58.709228   69636 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/newest-cni-560575/client.key
	I0918 21:25:58.709303   69636 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/newest-cni-560575/apiserver.key.df886787
	I0918 21:25:58.709359   69636 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/newest-cni-560575/proxy-client.key
	I0918 21:25:58.709518   69636 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878.pem (1338 bytes)
	W0918 21:25:58.709559   69636 certs.go:480] ignoring /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878_empty.pem, impossibly tiny 0 bytes
	I0918 21:25:58.709592   69636 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem (1675 bytes)
	I0918 21:25:58.709623   69636 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem (1082 bytes)
	I0918 21:25:58.709658   69636 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem (1123 bytes)
	I0918 21:25:58.709701   69636 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem (1679 bytes)
	I0918 21:25:58.709760   69636 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem (1708 bytes)
	I0918 21:25:58.710549   69636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0918 21:25:58.745402   69636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0918 21:25:58.786152   69636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0918 21:25:58.819499   69636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0918 21:25:58.853081   69636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/newest-cni-560575/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0918 21:25:58.886441   69636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/newest-cni-560575/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0918 21:25:58.920372   69636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/newest-cni-560575/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0918 21:25:58.943987   69636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/newest-cni-560575/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0918 21:25:58.967411   69636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0918 21:25:58.992304   69636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878.pem --> /usr/share/ca-certificates/14878.pem (1338 bytes)
	I0918 21:25:59.015402   69636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem --> /usr/share/ca-certificates/148782.pem (1708 bytes)
	I0918 21:25:59.039129   69636 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0918 21:25:59.056856   69636 ssh_runner.go:195] Run: openssl version
	I0918 21:25:59.062685   69636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0918 21:25:59.073996   69636 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0918 21:25:59.078500   69636 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 18 19:39 /usr/share/ca-certificates/minikubeCA.pem
	I0918 21:25:59.078569   69636 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0918 21:25:59.085471   69636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0918 21:25:59.099758   69636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14878.pem && ln -fs /usr/share/ca-certificates/14878.pem /etc/ssl/certs/14878.pem"
	I0918 21:25:59.112601   69636 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14878.pem
	I0918 21:25:59.117834   69636 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 18 19:57 /usr/share/ca-certificates/14878.pem
	I0918 21:25:59.117901   69636 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14878.pem
	I0918 21:25:59.123714   69636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14878.pem /etc/ssl/certs/51391683.0"
	I0918 21:25:59.137572   69636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148782.pem && ln -fs /usr/share/ca-certificates/148782.pem /etc/ssl/certs/148782.pem"
	I0918 21:25:59.149103   69636 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148782.pem
	I0918 21:25:59.153943   69636 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 18 19:57 /usr/share/ca-certificates/148782.pem
	I0918 21:25:59.154011   69636 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148782.pem
	I0918 21:25:59.161906   69636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148782.pem /etc/ssl/certs/3ec20f2e.0"
	I0918 21:25:59.173509   69636 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0918 21:25:59.178439   69636 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0918 21:25:59.184696   69636 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0918 21:25:59.190675   69636 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0918 21:25:59.197496   69636 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0918 21:25:59.203863   69636 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0918 21:25:59.209777   69636 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0918 21:25:59.216308   69636 kubeadm.go:392] StartCluster: {Name:newest-cni-560575 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.1 ClusterName:newest-cni-560575 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.106 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0
s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 21:25:59.216417   69636 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0918 21:25:59.216460   69636 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0918 21:25:59.255108   69636 cri.go:89] found id: ""
	I0918 21:25:59.255190   69636 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0918 21:25:59.267901   69636 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0918 21:25:59.267929   69636 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0918 21:25:59.267984   69636 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0918 21:25:59.278526   69636 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0918 21:25:59.279886   69636 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-560575" does not appear in /home/jenkins/minikube-integration/19667-7671/kubeconfig
	I0918 21:25:59.280893   69636 kubeconfig.go:62] /home/jenkins/minikube-integration/19667-7671/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-560575" cluster setting kubeconfig missing "newest-cni-560575" context setting]
	I0918 21:25:59.282238   69636 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/kubeconfig: {Name:mk3a3eecadb0164dfdd5d3c4392b5b473a2a9bb0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 21:25:59.284177   69636 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0918 21:25:59.294029   69636 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.106
	I0918 21:25:59.294067   69636 kubeadm.go:1160] stopping kube-system containers ...
	I0918 21:25:59.294083   69636 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0918 21:25:59.294153   69636 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0918 21:25:59.332007   69636 cri.go:89] found id: ""
	I0918 21:25:59.332111   69636 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0918 21:25:59.348199   69636 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0918 21:25:59.358033   69636 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0918 21:25:59.358053   69636 kubeadm.go:157] found existing configuration files:
	
	I0918 21:25:59.358103   69636 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0918 21:25:59.367229   69636 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0918 21:25:59.367289   69636 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0918 21:25:59.376672   69636 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0918 21:25:59.385328   69636 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0918 21:25:59.385390   69636 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0918 21:25:59.395210   69636 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0918 21:25:59.404557   69636 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0918 21:25:59.404636   69636 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0918 21:25:59.415416   69636 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0918 21:25:59.426683   69636 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0918 21:25:59.426762   69636 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0918 21:25:59.436896   69636 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0918 21:25:59.446879   69636 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:25:59.560661   69636 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:25:57.478143   69269 addons.go:510] duration metric: took 1.970351306s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0918 21:25:58.663943   69269 pod_ready.go:103] pod "coredns-7c65d6cfc9-5cwr8" in "kube-system" namespace has status "Ready":"False"
	I0918 21:26:01.009352   69269 pod_ready.go:103] pod "coredns-7c65d6cfc9-5cwr8" in "kube-system" namespace has status "Ready":"False"
	
	
	==> CRI-O <==
	Sep 18 21:26:04 embed-certs-255556 crio[682]: time="2024-09-18 21:26:04.785649078Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=023ada76-b3c9-4e4d-b06a-2d18252e37ef name=/runtime.v1.RuntimeService/Version
	Sep 18 21:26:04 embed-certs-255556 crio[682]: time="2024-09-18 21:26:04.786772510Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=38def204-fc93-4378-992d-8a8da161fca2 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 21:26:04 embed-certs-255556 crio[682]: time="2024-09-18 21:26:04.787153279Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726694764787130896,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=38def204-fc93-4378-992d-8a8da161fca2 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 21:26:04 embed-certs-255556 crio[682]: time="2024-09-18 21:26:04.787809972Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f58ad979-5a51-4fb1-aa19-2b9638965821 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 21:26:04 embed-certs-255556 crio[682]: time="2024-09-18 21:26:04.787881289Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f58ad979-5a51-4fb1-aa19-2b9638965821 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 21:26:04 embed-certs-255556 crio[682]: time="2024-09-18 21:26:04.788139206Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f4a5989dc9c66d57fbb948e86861e58fa98697a7dfb22e5dfd9be743a4f8f076,PodSandboxId:d60dc3b49edf0e2886ba25401f793cd90a13b06324b690a2e2e2aed54e6b5cec,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726693799224422399,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcc9b789-237c-4d92-96c6-2c23d2c401c0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41327fabd1f80a15235c7deb09523f5617af36f4d3f3fc07f8cf501beb3facde,PodSandboxId:58b4788d8b4af932a32bd7be33feaf6c4fbe17d6110be02d3528f03f2a14d2a4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726693798933407165,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ptxbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 798665e6-6f4a-4ba5-b4f9-3192d3f76f03,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cea70894e040287c08794805d7f3b8f7182a6f7a180df6de048b480f4d186f9e,PodSandboxId:6b5065325583ec41ef1067801916683a5bd9ba68c372857f3344cb58db4fe6d0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726693798724374308,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-vgmtd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1
224ebf9-1b24-413a-b779-093acfcfb61e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dd648996f6325d0d5aa564f4218f56772a3e2ce9abf525f3d27ab7a53b0bd87,PodSandboxId:6edde6859292b134b7892c92af3999f270c2df6230afae26048c490f07950493,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1726693798145760092,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m7gxh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47d72a32-7efc-4155-a890-0ddc620af6e0,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5822532d2a32a540040588fade5c42910d6050af9d0a0d6c110881be16e83ef7,PodSandboxId:29d74ececeb949d47a28def66a76fe67d1cb7e0490c5b6d18e58ee0b0009870b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726693787175401666,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-255556,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 441c7024775a4b212d948f8f8dc32239,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:311a22617dfc469fc0e4ac6731dca5c3b663f607c8d60f0a3ef05695fd7a02da,PodSandboxId:67d2b94e52293070e6401e1ffca87a1d51f4cf2051c0956233ac7506a048606a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726693787177233492,Labels:map[string]string{io.ku
bernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-255556,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11063eb2ec944bfb7d80ace46d649f35,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d38c7d6a99952a1fc0f1eacff46e5c50f67b25237192b834534c0fe3c5e5f6d,PodSandboxId:b356a54fa925344999343b4bfd13fc882c4542d02d0fed730e4d5b9ef41e5d31,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726693787108149405,Labels:map[string]string{io.kubernetes.container.name: kube-ap
iserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-255556,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd8fa81c77941b65929f2e8b548a3480,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c563aafe653941fe1518a56e35605bb08416c5d02c816472ced216f51d39ac60,PodSandboxId:c9bbb1b207ff87ed413bf6bb1739fb694e2236ba0bed6dc47adb43cf49cf3f4a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726693787069099051,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-255556,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e2f00147e16e007afce11a22718d8af,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:929b63815a268640f858a8357b598df21c4a0591ea8c6bc078fb967444ebab6e,PodSandboxId:a4721dc78266322e264b86fe3d6e211aae634cb921e4545fc6cdfcf3f36bd494,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726693498045783113,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-255556,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd8fa81c77941b65929f2e8b548a3480,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f58ad979-5a51-4fb1-aa19-2b9638965821 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 21:26:04 embed-certs-255556 crio[682]: time="2024-09-18 21:26:04.823420546Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=79765635-cfd5-4768-a33b-b1319afe4104 name=/runtime.v1.RuntimeService/Version
	Sep 18 21:26:04 embed-certs-255556 crio[682]: time="2024-09-18 21:26:04.823516593Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=79765635-cfd5-4768-a33b-b1319afe4104 name=/runtime.v1.RuntimeService/Version
	Sep 18 21:26:04 embed-certs-255556 crio[682]: time="2024-09-18 21:26:04.824689277Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=31f1d09a-3949-4d1b-886c-2ed84e31b075 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 18 21:26:04 embed-certs-255556 crio[682]: time="2024-09-18 21:26:04.825127271Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:9f272601ac2da1e87158c338572dad60425e070eeafbd8c764b7fa23a3eeb15f,Metadata:&PodSandboxMetadata{Name:metrics-server-6867b74b74-sr6hq,Uid:8867f8fa-687b-4105-8ace-18af50195726,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726693799122390847,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-6867b74b74-sr6hq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8867f8fa-687b-4105-8ace-18af50195726,k8s-app: metrics-server,pod-template-hash: 6867b74b74,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-18T21:09:58.814787819Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d60dc3b49edf0e2886ba25401f793cd90a13b06324b690a2e2e2aed54e6b5cec,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:dcc9b789-237c-4d92-96c6-2c23d2c401c0,N
amespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726693798832153991,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcc9b789-237c-4d92-96c6-2c23d2c401c0,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"vol
umes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-09-18T21:09:58.522168491Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:58b4788d8b4af932a32bd7be33feaf6c4fbe17d6110be02d3528f03f2a14d2a4,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-ptxbt,Uid:798665e6-6f4a-4ba5-b4f9-3192d3f76f03,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726693798037389276,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-ptxbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 798665e6-6f4a-4ba5-b4f9-3192d3f76f03,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-18T21:09:57.724683541Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6b5065325583ec41ef1067801916683a5bd9ba68c372857f3344cb58db4fe6d0,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-vgmtd,Uid:1224ebf9-1b24-413a
-b779-093acfcfb61e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726693798022168183,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-vgmtd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1224ebf9-1b24-413a-b779-093acfcfb61e,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-18T21:09:57.699844249Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6edde6859292b134b7892c92af3999f270c2df6230afae26048c490f07950493,Metadata:&PodSandboxMetadata{Name:kube-proxy-m7gxh,Uid:47d72a32-7efc-4155-a890-0ddc620af6e0,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726693797874997950,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-m7gxh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47d72a32-7efc-4155-a890-0ddc620af6e0,k8s-app: kube-proxy,pod-tem
plate-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-18T21:09:57.555992690Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b356a54fa925344999343b4bfd13fc882c4542d02d0fed730e4d5b9ef41e5d31,Metadata:&PodSandboxMetadata{Name:kube-apiserver-embed-certs-255556,Uid:dd8fa81c77941b65929f2e8b548a3480,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726693786946305084,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-embed-certs-255556,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd8fa81c77941b65929f2e8b548a3480,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.21:8443,kubernetes.io/config.hash: dd8fa81c77941b65929f2e8b548a3480,kubernetes.io/config.seen: 2024-09-18T21:09:46.491402471Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c9bbb1b207ff87ed413bf6bb1739f
b694e2236ba0bed6dc47adb43cf49cf3f4a,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-embed-certs-255556,Uid:6e2f00147e16e007afce11a22718d8af,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726693786940808907,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-embed-certs-255556,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e2f00147e16e007afce11a22718d8af,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 6e2f00147e16e007afce11a22718d8af,kubernetes.io/config.seen: 2024-09-18T21:09:46.491403476Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:29d74ececeb949d47a28def66a76fe67d1cb7e0490c5b6d18e58ee0b0009870b,Metadata:&PodSandboxMetadata{Name:kube-scheduler-embed-certs-255556,Uid:441c7024775a4b212d948f8f8dc32239,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726693786938094081,Labels:map[string]string{component: kub
e-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-embed-certs-255556,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 441c7024775a4b212d948f8f8dc32239,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 441c7024775a4b212d948f8f8dc32239,kubernetes.io/config.seen: 2024-09-18T21:09:46.491396361Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:67d2b94e52293070e6401e1ffca87a1d51f4cf2051c0956233ac7506a048606a,Metadata:&PodSandboxMetadata{Name:etcd-embed-certs-255556,Uid:11063eb2ec944bfb7d80ace46d649f35,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726693786934711156,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-embed-certs-255556,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11063eb2ec944bfb7d80ace46d649f35,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39
.21:2379,kubernetes.io/config.hash: 11063eb2ec944bfb7d80ace46d649f35,kubernetes.io/config.seen: 2024-09-18T21:09:46.491401089Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a4721dc78266322e264b86fe3d6e211aae634cb921e4545fc6cdfcf3f36bd494,Metadata:&PodSandboxMetadata{Name:kube-apiserver-embed-certs-255556,Uid:dd8fa81c77941b65929f2e8b548a3480,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726693496834394650,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-embed-certs-255556,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd8fa81c77941b65929f2e8b548a3480,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.21:8443,kubernetes.io/config.hash: dd8fa81c77941b65929f2e8b548a3480,kubernetes.io/config.seen: 2024-09-18T21:04:56.335175951Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collect
or/interceptors.go:74" id=31f1d09a-3949-4d1b-886c-2ed84e31b075 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 18 21:26:04 embed-certs-255556 crio[682]: time="2024-09-18 21:26:04.825905946Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bb883b1c-5d4d-460a-869a-2983a638e6ac name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 21:26:04 embed-certs-255556 crio[682]: time="2024-09-18 21:26:04.826197225Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=899c8ce8-b780-4fda-bb3e-ddbca17e08cc name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 21:26:04 embed-certs-255556 crio[682]: time="2024-09-18 21:26:04.826250133Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=899c8ce8-b780-4fda-bb3e-ddbca17e08cc name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 21:26:04 embed-certs-255556 crio[682]: time="2024-09-18 21:26:04.826348665Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726694764826323340,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bb883b1c-5d4d-460a-869a-2983a638e6ac name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 21:26:04 embed-certs-255556 crio[682]: time="2024-09-18 21:26:04.826438145Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f4a5989dc9c66d57fbb948e86861e58fa98697a7dfb22e5dfd9be743a4f8f076,PodSandboxId:d60dc3b49edf0e2886ba25401f793cd90a13b06324b690a2e2e2aed54e6b5cec,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726693799224422399,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcc9b789-237c-4d92-96c6-2c23d2c401c0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41327fabd1f80a15235c7deb09523f5617af36f4d3f3fc07f8cf501beb3facde,PodSandboxId:58b4788d8b4af932a32bd7be33feaf6c4fbe17d6110be02d3528f03f2a14d2a4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726693798933407165,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ptxbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 798665e6-6f4a-4ba5-b4f9-3192d3f76f03,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cea70894e040287c08794805d7f3b8f7182a6f7a180df6de048b480f4d186f9e,PodSandboxId:6b5065325583ec41ef1067801916683a5bd9ba68c372857f3344cb58db4fe6d0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726693798724374308,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-vgmtd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1
224ebf9-1b24-413a-b779-093acfcfb61e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dd648996f6325d0d5aa564f4218f56772a3e2ce9abf525f3d27ab7a53b0bd87,PodSandboxId:6edde6859292b134b7892c92af3999f270c2df6230afae26048c490f07950493,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1726693798145760092,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m7gxh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47d72a32-7efc-4155-a890-0ddc620af6e0,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5822532d2a32a540040588fade5c42910d6050af9d0a0d6c110881be16e83ef7,PodSandboxId:29d74ececeb949d47a28def66a76fe67d1cb7e0490c5b6d18e58ee0b0009870b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726693787175401666,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-255556,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 441c7024775a4b212d948f8f8dc32239,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:311a22617dfc469fc0e4ac6731dca5c3b663f607c8d60f0a3ef05695fd7a02da,PodSandboxId:67d2b94e52293070e6401e1ffca87a1d51f4cf2051c0956233ac7506a048606a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726693787177233492,Labels:map[string]string{io.ku
bernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-255556,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11063eb2ec944bfb7d80ace46d649f35,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d38c7d6a99952a1fc0f1eacff46e5c50f67b25237192b834534c0fe3c5e5f6d,PodSandboxId:b356a54fa925344999343b4bfd13fc882c4542d02d0fed730e4d5b9ef41e5d31,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726693787108149405,Labels:map[string]string{io.kubernetes.container.name: kube-ap
iserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-255556,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd8fa81c77941b65929f2e8b548a3480,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c563aafe653941fe1518a56e35605bb08416c5d02c816472ced216f51d39ac60,PodSandboxId:c9bbb1b207ff87ed413bf6bb1739fb694e2236ba0bed6dc47adb43cf49cf3f4a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726693787069099051,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-255556,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e2f00147e16e007afce11a22718d8af,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:929b63815a268640f858a8357b598df21c4a0591ea8c6bc078fb967444ebab6e,PodSandboxId:a4721dc78266322e264b86fe3d6e211aae634cb921e4545fc6cdfcf3f36bd494,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726693498045783113,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-255556,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd8fa81c77941b65929f2e8b548a3480,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=899c8ce8-b780-4fda-bb3e-ddbca17e08cc name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 21:26:04 embed-certs-255556 crio[682]: time="2024-09-18 21:26:04.827143656Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=797c46a4-96ef-4eec-bfe5-7148746f4289 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 21:26:04 embed-certs-255556 crio[682]: time="2024-09-18 21:26:04.827193899Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=797c46a4-96ef-4eec-bfe5-7148746f4289 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 21:26:04 embed-certs-255556 crio[682]: time="2024-09-18 21:26:04.827369536Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f4a5989dc9c66d57fbb948e86861e58fa98697a7dfb22e5dfd9be743a4f8f076,PodSandboxId:d60dc3b49edf0e2886ba25401f793cd90a13b06324b690a2e2e2aed54e6b5cec,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726693799224422399,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcc9b789-237c-4d92-96c6-2c23d2c401c0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41327fabd1f80a15235c7deb09523f5617af36f4d3f3fc07f8cf501beb3facde,PodSandboxId:58b4788d8b4af932a32bd7be33feaf6c4fbe17d6110be02d3528f03f2a14d2a4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726693798933407165,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ptxbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 798665e6-6f4a-4ba5-b4f9-3192d3f76f03,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cea70894e040287c08794805d7f3b8f7182a6f7a180df6de048b480f4d186f9e,PodSandboxId:6b5065325583ec41ef1067801916683a5bd9ba68c372857f3344cb58db4fe6d0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726693798724374308,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-vgmtd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1
224ebf9-1b24-413a-b779-093acfcfb61e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dd648996f6325d0d5aa564f4218f56772a3e2ce9abf525f3d27ab7a53b0bd87,PodSandboxId:6edde6859292b134b7892c92af3999f270c2df6230afae26048c490f07950493,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1726693798145760092,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m7gxh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47d72a32-7efc-4155-a890-0ddc620af6e0,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5822532d2a32a540040588fade5c42910d6050af9d0a0d6c110881be16e83ef7,PodSandboxId:29d74ececeb949d47a28def66a76fe67d1cb7e0490c5b6d18e58ee0b0009870b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726693787175401666,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-255556,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 441c7024775a4b212d948f8f8dc32239,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:311a22617dfc469fc0e4ac6731dca5c3b663f607c8d60f0a3ef05695fd7a02da,PodSandboxId:67d2b94e52293070e6401e1ffca87a1d51f4cf2051c0956233ac7506a048606a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726693787177233492,Labels:map[string]string{io.ku
bernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-255556,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11063eb2ec944bfb7d80ace46d649f35,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d38c7d6a99952a1fc0f1eacff46e5c50f67b25237192b834534c0fe3c5e5f6d,PodSandboxId:b356a54fa925344999343b4bfd13fc882c4542d02d0fed730e4d5b9ef41e5d31,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726693787108149405,Labels:map[string]string{io.kubernetes.container.name: kube-ap
iserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-255556,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd8fa81c77941b65929f2e8b548a3480,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c563aafe653941fe1518a56e35605bb08416c5d02c816472ced216f51d39ac60,PodSandboxId:c9bbb1b207ff87ed413bf6bb1739fb694e2236ba0bed6dc47adb43cf49cf3f4a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726693787069099051,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-255556,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e2f00147e16e007afce11a22718d8af,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:929b63815a268640f858a8357b598df21c4a0591ea8c6bc078fb967444ebab6e,PodSandboxId:a4721dc78266322e264b86fe3d6e211aae634cb921e4545fc6cdfcf3f36bd494,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726693498045783113,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-255556,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd8fa81c77941b65929f2e8b548a3480,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=797c46a4-96ef-4eec-bfe5-7148746f4289 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 21:26:04 embed-certs-255556 crio[682]: time="2024-09-18 21:26:04.858766072Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ca0dcb50-6f57-4bd4-8bb9-6e7e1472f016 name=/runtime.v1.RuntimeService/Version
	Sep 18 21:26:04 embed-certs-255556 crio[682]: time="2024-09-18 21:26:04.858859333Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ca0dcb50-6f57-4bd4-8bb9-6e7e1472f016 name=/runtime.v1.RuntimeService/Version
	Sep 18 21:26:04 embed-certs-255556 crio[682]: time="2024-09-18 21:26:04.860246103Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2d1189c0-4604-4cef-b5bf-83f836ef30ac name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 21:26:04 embed-certs-255556 crio[682]: time="2024-09-18 21:26:04.860903223Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726694764860875824,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2d1189c0-4604-4cef-b5bf-83f836ef30ac name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 21:26:04 embed-certs-255556 crio[682]: time="2024-09-18 21:26:04.861373329Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=75448b13-36fb-41c8-b3a3-8466406535a8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 21:26:04 embed-certs-255556 crio[682]: time="2024-09-18 21:26:04.861430618Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=75448b13-36fb-41c8-b3a3-8466406535a8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 21:26:04 embed-certs-255556 crio[682]: time="2024-09-18 21:26:04.861680718Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f4a5989dc9c66d57fbb948e86861e58fa98697a7dfb22e5dfd9be743a4f8f076,PodSandboxId:d60dc3b49edf0e2886ba25401f793cd90a13b06324b690a2e2e2aed54e6b5cec,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726693799224422399,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcc9b789-237c-4d92-96c6-2c23d2c401c0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41327fabd1f80a15235c7deb09523f5617af36f4d3f3fc07f8cf501beb3facde,PodSandboxId:58b4788d8b4af932a32bd7be33feaf6c4fbe17d6110be02d3528f03f2a14d2a4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726693798933407165,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ptxbt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 798665e6-6f4a-4ba5-b4f9-3192d3f76f03,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cea70894e040287c08794805d7f3b8f7182a6f7a180df6de048b480f4d186f9e,PodSandboxId:6b5065325583ec41ef1067801916683a5bd9ba68c372857f3344cb58db4fe6d0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726693798724374308,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-vgmtd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1
224ebf9-1b24-413a-b779-093acfcfb61e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dd648996f6325d0d5aa564f4218f56772a3e2ce9abf525f3d27ab7a53b0bd87,PodSandboxId:6edde6859292b134b7892c92af3999f270c2df6230afae26048c490f07950493,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1726693798145760092,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m7gxh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47d72a32-7efc-4155-a890-0ddc620af6e0,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5822532d2a32a540040588fade5c42910d6050af9d0a0d6c110881be16e83ef7,PodSandboxId:29d74ececeb949d47a28def66a76fe67d1cb7e0490c5b6d18e58ee0b0009870b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726693787175401666,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-255556,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 441c7024775a4b212d948f8f8dc32239,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:311a22617dfc469fc0e4ac6731dca5c3b663f607c8d60f0a3ef05695fd7a02da,PodSandboxId:67d2b94e52293070e6401e1ffca87a1d51f4cf2051c0956233ac7506a048606a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726693787177233492,Labels:map[string]string{io.ku
bernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-255556,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11063eb2ec944bfb7d80ace46d649f35,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d38c7d6a99952a1fc0f1eacff46e5c50f67b25237192b834534c0fe3c5e5f6d,PodSandboxId:b356a54fa925344999343b4bfd13fc882c4542d02d0fed730e4d5b9ef41e5d31,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726693787108149405,Labels:map[string]string{io.kubernetes.container.name: kube-ap
iserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-255556,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd8fa81c77941b65929f2e8b548a3480,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c563aafe653941fe1518a56e35605bb08416c5d02c816472ced216f51d39ac60,PodSandboxId:c9bbb1b207ff87ed413bf6bb1739fb694e2236ba0bed6dc47adb43cf49cf3f4a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726693787069099051,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-255556,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e2f00147e16e007afce11a22718d8af,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:929b63815a268640f858a8357b598df21c4a0591ea8c6bc078fb967444ebab6e,PodSandboxId:a4721dc78266322e264b86fe3d6e211aae634cb921e4545fc6cdfcf3f36bd494,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726693498045783113,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-255556,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd8fa81c77941b65929f2e8b548a3480,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=75448b13-36fb-41c8-b3a3-8466406535a8 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f4a5989dc9c66       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   16 minutes ago      Running             storage-provisioner       0                   d60dc3b49edf0       storage-provisioner
	41327fabd1f80       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   16 minutes ago      Running             coredns                   0                   58b4788d8b4af       coredns-7c65d6cfc9-ptxbt
	cea70894e0402       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   16 minutes ago      Running             coredns                   0                   6b5065325583e       coredns-7c65d6cfc9-vgmtd
	5dd648996f632       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   16 minutes ago      Running             kube-proxy                0                   6edde6859292b       kube-proxy-m7gxh
	311a22617dfc4       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   16 minutes ago      Running             etcd                      2                   67d2b94e52293       etcd-embed-certs-255556
	5822532d2a32a       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   16 minutes ago      Running             kube-scheduler            2                   29d74ececeb94       kube-scheduler-embed-certs-255556
	7d38c7d6a9995       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   16 minutes ago      Running             kube-apiserver            2                   b356a54fa9253       kube-apiserver-embed-certs-255556
	c563aafe65394       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   16 minutes ago      Running             kube-controller-manager   2                   c9bbb1b207ff8       kube-controller-manager-embed-certs-255556
	929b63815a268       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   21 minutes ago      Exited              kube-apiserver            1                   a4721dc782663       kube-apiserver-embed-certs-255556
	
	
	==> coredns [41327fabd1f80a15235c7deb09523f5617af36f4d3f3fc07f8cf501beb3facde] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [cea70894e040287c08794805d7f3b8f7182a6f7a180df6de048b480f4d186f9e] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               embed-certs-255556
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-255556
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=85073601a832bd4bbda5d11fa91feafff6ec6b91
	                    minikube.k8s.io/name=embed-certs-255556
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_18T21_09_52_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 18 Sep 2024 21:09:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-255556
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 18 Sep 2024 21:26:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 18 Sep 2024 21:25:19 +0000   Wed, 18 Sep 2024 21:09:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 18 Sep 2024 21:25:19 +0000   Wed, 18 Sep 2024 21:09:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 18 Sep 2024 21:25:19 +0000   Wed, 18 Sep 2024 21:09:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 18 Sep 2024 21:25:19 +0000   Wed, 18 Sep 2024 21:09:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.21
	  Hostname:    embed-certs-255556
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0c6567145a664a07ac62659c94c4c9a6
	  System UUID:                0c656714-5a66-4a07-ac62-659c94c4c9a6
	  Boot ID:                    3a64d178-a667-4d3a-89d7-15de20adee8f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-ptxbt                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 coredns-7c65d6cfc9-vgmtd                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-embed-certs-255556                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kube-apiserver-embed-certs-255556             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-embed-certs-255556    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-m7gxh                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-embed-certs-255556             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 metrics-server-6867b74b74-sr6hq               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         16m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 16m                kube-proxy       
	  Normal  Starting                 16m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  16m (x8 over 16m)  kubelet          Node embed-certs-255556 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m (x8 over 16m)  kubelet          Node embed-certs-255556 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m (x7 over 16m)  kubelet          Node embed-certs-255556 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 16m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  16m                kubelet          Node embed-certs-255556 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m                kubelet          Node embed-certs-255556 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m                kubelet          Node embed-certs-255556 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           16m                node-controller  Node embed-certs-255556 event: Registered Node embed-certs-255556 in Controller
	
	
	==> dmesg <==
	[  +0.051417] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040003] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.847762] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.960067] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.341489] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.967624] systemd-fstab-generator[605]: Ignoring "noauto" option for root device
	[  +0.079504] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.060715] systemd-fstab-generator[617]: Ignoring "noauto" option for root device
	[  +0.199631] systemd-fstab-generator[631]: Ignoring "noauto" option for root device
	[  +0.144016] systemd-fstab-generator[643]: Ignoring "noauto" option for root device
	[  +0.312970] systemd-fstab-generator[674]: Ignoring "noauto" option for root device
	[  +4.115484] systemd-fstab-generator[763]: Ignoring "noauto" option for root device
	[  +2.015406] systemd-fstab-generator[883]: Ignoring "noauto" option for root device
	[  +0.074405] kauditd_printk_skb: 158 callbacks suppressed
	[Sep18 21:05] kauditd_printk_skb: 69 callbacks suppressed
	[  +7.498684] kauditd_printk_skb: 85 callbacks suppressed
	[Sep18 21:09] kauditd_printk_skb: 7 callbacks suppressed
	[  +1.368389] systemd-fstab-generator[2538]: Ignoring "noauto" option for root device
	[  +4.632361] kauditd_printk_skb: 54 callbacks suppressed
	[  +1.411286] systemd-fstab-generator[2864]: Ignoring "noauto" option for root device
	[  +5.379266] systemd-fstab-generator[2988]: Ignoring "noauto" option for root device
	[  +0.096244] kauditd_printk_skb: 14 callbacks suppressed
	[Sep18 21:10] kauditd_printk_skb: 84 callbacks suppressed
	
	
	==> etcd [311a22617dfc469fc0e4ac6731dca5c3b663f607c8d60f0a3ef05695fd7a02da] <==
	{"level":"info","ts":"2024-09-18T21:09:48.264930Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-18T21:09:48.265533Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-18T21:09:48.265632Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-18T21:09:48.266738Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-18T21:09:48.267673Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.21:2379"}
	{"level":"info","ts":"2024-09-18T21:09:48.275037Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"f019a0e2d3e7d785","local-member-id":"3c2bdad7569acae7","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-18T21:09:48.275147Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-18T21:09:48.275183Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-18T21:19:48.303153Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":720}
	{"level":"info","ts":"2024-09-18T21:19:48.312651Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":720,"took":"9.133052ms","hash":28356297,"current-db-size-bytes":2273280,"current-db-size":"2.3 MB","current-db-size-in-use-bytes":2273280,"current-db-size-in-use":"2.3 MB"}
	{"level":"info","ts":"2024-09-18T21:19:48.313098Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":28356297,"revision":720,"compact-revision":-1}
	{"level":"info","ts":"2024-09-18T21:24:48.320268Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":963}
	{"level":"info","ts":"2024-09-18T21:24:48.325233Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":963,"took":"3.962059ms","hash":717290511,"current-db-size-bytes":2273280,"current-db-size":"2.3 MB","current-db-size-in-use-bytes":1581056,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-09-18T21:24:48.325361Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":717290511,"revision":963,"compact-revision":720}
	{"level":"warn","ts":"2024-09-18T21:25:40.974801Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"331.850113ms","expected-duration":"100ms","prefix":"","request":"header:<ID:14620815273914489909 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1249 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1030 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-09-18T21:25:40.975256Z","caller":"traceutil/trace.go:171","msg":"trace[1886234243] transaction","detail":"{read_only:false; response_revision:1252; number_of_response:1; }","duration":"232.997004ms","start":"2024-09-18T21:25:40.742229Z","end":"2024-09-18T21:25:40.975226Z","steps":["trace[1886234243] 'process raft request'  (duration: 232.917264ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-18T21:25:40.975614Z","caller":"traceutil/trace.go:171","msg":"trace[1992120491] transaction","detail":"{read_only:false; response_revision:1251; number_of_response:1; }","duration":"538.798586ms","start":"2024-09-18T21:25:40.436801Z","end":"2024-09-18T21:25:40.975600Z","steps":["trace[1992120491] 'process raft request'  (duration: 205.149004ms)","trace[1992120491] 'compare'  (duration: 331.707326ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-18T21:25:40.975860Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-18T21:25:40.436781Z","time spent":"538.976228ms","remote":"127.0.0.1:44852","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1103,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1249 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1030 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2024-09-18T21:26:01.105110Z","caller":"traceutil/trace.go:171","msg":"trace[849123613] transaction","detail":"{read_only:false; response_revision:1266; number_of_response:1; }","duration":"218.409769ms","start":"2024-09-18T21:26:00.886686Z","end":"2024-09-18T21:26:01.105096Z","steps":["trace[849123613] 'process raft request'  (duration: 218.298909ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-18T21:26:01.338850Z","caller":"traceutil/trace.go:171","msg":"trace[1573876994] linearizableReadLoop","detail":"{readStateIndex:1478; appliedIndex:1477; }","duration":"233.303271ms","start":"2024-09-18T21:26:01.105535Z","end":"2024-09-18T21:26:01.338838Z","steps":["trace[1573876994] 'read index received'  (duration: 233.197037ms)","trace[1573876994] 'applied index is now lower than readState.Index'  (duration: 105.911µs)"],"step_count":2}
	{"level":"info","ts":"2024-09-18T21:26:01.339087Z","caller":"traceutil/trace.go:171","msg":"trace[1737649009] transaction","detail":"{read_only:false; response_revision:1267; number_of_response:1; }","duration":"236.656355ms","start":"2024-09-18T21:26:01.102421Z","end":"2024-09-18T21:26:01.339077Z","steps":["trace[1737649009] 'process raft request'  (duration: 236.349352ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-18T21:26:01.339304Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"260.970067ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1119"}
	{"level":"info","ts":"2024-09-18T21:26:01.339360Z","caller":"traceutil/trace.go:171","msg":"trace[203321733] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1267; }","duration":"261.041334ms","start":"2024-09-18T21:26:01.078308Z","end":"2024-09-18T21:26:01.339350Z","steps":["trace[203321733] 'agreement among raft nodes before linearized reading'  (duration: 260.934569ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-18T21:26:01.339514Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"216.286086ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-18T21:26:01.339588Z","caller":"traceutil/trace.go:171","msg":"trace[1074465439] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1267; }","duration":"216.364183ms","start":"2024-09-18T21:26:01.123216Z","end":"2024-09-18T21:26:01.339581Z","steps":["trace[1074465439] 'agreement among raft nodes before linearized reading'  (duration: 216.271531ms)"],"step_count":1}
	
	
	==> kernel <==
	 21:26:05 up 21 min,  0 users,  load average: 0.62, 0.32, 0.22
	Linux embed-certs-255556 5.10.207 #1 SMP Mon Sep 16 15:00:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [7d38c7d6a99952a1fc0f1eacff46e5c50f67b25237192b834534c0fe3c5e5f6d] <==
	I0918 21:22:50.805654       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0918 21:22:50.806740       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0918 21:24:49.804007       1 handler_proxy.go:99] no RequestInfo found in the context
	E0918 21:24:49.804172       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0918 21:24:50.806020       1 handler_proxy.go:99] no RequestInfo found in the context
	W0918 21:24:50.806067       1 handler_proxy.go:99] no RequestInfo found in the context
	E0918 21:24:50.806170       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0918 21:24:50.806287       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0918 21:24:50.807437       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0918 21:24:50.807506       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0918 21:25:50.808208       1 handler_proxy.go:99] no RequestInfo found in the context
	E0918 21:25:50.808284       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0918 21:25:50.808390       1 handler_proxy.go:99] no RequestInfo found in the context
	E0918 21:25:50.808445       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0918 21:25:50.809487       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0918 21:25:50.809682       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [929b63815a268640f858a8357b598df21c4a0591ea8c6bc078fb967444ebab6e] <==
	W0918 21:09:42.556237       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0918 21:09:42.598071       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0918 21:09:42.651823       1 logging.go:55] [core] [Channel #208 SubChannel #209]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0918 21:09:43.003229       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0918 21:09:43.069606       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0918 21:09:43.090989       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0918 21:09:43.196634       1 logging.go:55] [core] [Channel #10 SubChannel #11]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0918 21:09:43.196634       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0918 21:09:43.287062       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0918 21:09:43.381834       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0918 21:09:43.492947       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0918 21:09:43.509427       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0918 21:09:43.603296       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0918 21:09:43.603408       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0918 21:09:43.837782       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0918 21:09:43.881910       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0918 21:09:43.967190       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0918 21:09:44.017876       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0918 21:09:44.059947       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0918 21:09:44.071840       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0918 21:09:44.120081       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0918 21:09:44.185540       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0918 21:09:44.191174       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0918 21:09:44.204160       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0918 21:09:44.349637       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [c563aafe653941fe1518a56e35605bb08416c5d02c816472ced216f51d39ac60] <==
	E0918 21:20:56.856897       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0918 21:20:57.320245       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0918 21:21:06.261661       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="217.294µs"
	I0918 21:21:19.258003       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="57.477µs"
	E0918 21:21:26.863149       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0918 21:21:27.328046       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0918 21:21:56.869805       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0918 21:21:57.335722       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0918 21:22:26.876906       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0918 21:22:27.343682       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0918 21:22:56.884072       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0918 21:22:57.354102       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0918 21:23:26.889759       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0918 21:23:27.363497       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0918 21:23:56.897118       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0918 21:23:57.372033       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0918 21:24:26.903057       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0918 21:24:27.381676       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0918 21:24:56.911429       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0918 21:24:57.391212       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0918 21:25:19.327138       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="embed-certs-255556"
	E0918 21:25:26.918519       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0918 21:25:27.398250       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0918 21:25:56.925036       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0918 21:25:57.411744       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [5dd648996f6325d0d5aa564f4218f56772a3e2ce9abf525f3d27ab7a53b0bd87] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0918 21:09:58.657292       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0918 21:09:58.729941       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.21"]
	E0918 21:09:58.730021       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0918 21:09:58.953284       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0918 21:09:58.953315       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0918 21:09:58.953337       1 server_linux.go:169] "Using iptables Proxier"
	I0918 21:09:59.044362       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0918 21:09:59.044719       1 server.go:483] "Version info" version="v1.31.1"
	I0918 21:09:59.044733       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0918 21:09:59.107804       1 config.go:199] "Starting service config controller"
	I0918 21:09:59.107862       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0918 21:09:59.107908       1 config.go:105] "Starting endpoint slice config controller"
	I0918 21:09:59.107924       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0918 21:09:59.108982       1 config.go:328] "Starting node config controller"
	I0918 21:09:59.109011       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0918 21:09:59.208681       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0918 21:09:59.208821       1 shared_informer.go:320] Caches are synced for service config
	I0918 21:09:59.215216       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [5822532d2a32a540040588fade5c42910d6050af9d0a0d6c110881be16e83ef7] <==
	W0918 21:09:49.805597       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0918 21:09:49.805635       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0918 21:09:49.805701       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0918 21:09:49.805728       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0918 21:09:49.805797       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0918 21:09:49.805822       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0918 21:09:49.805918       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0918 21:09:49.805956       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0918 21:09:49.807958       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0918 21:09:49.807992       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0918 21:09:49.808055       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0918 21:09:49.808101       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0918 21:09:49.808240       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0918 21:09:49.808277       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0918 21:09:50.624720       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0918 21:09:50.624761       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0918 21:09:50.753190       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0918 21:09:50.753366       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0918 21:09:50.785940       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0918 21:09:50.786073       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0918 21:09:51.032872       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0918 21:09:51.032985       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0918 21:09:51.035430       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0918 21:09:51.035473       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0918 21:09:52.395062       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 18 21:25:05 embed-certs-255556 kubelet[2871]: E0918 21:25:05.244530    2871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-sr6hq" podUID="8867f8fa-687b-4105-8ace-18af50195726"
	Sep 18 21:25:12 embed-certs-255556 kubelet[2871]: E0918 21:25:12.518486    2871 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726694712517658474,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 21:25:12 embed-certs-255556 kubelet[2871]: E0918 21:25:12.518945    2871 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726694712517658474,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 21:25:17 embed-certs-255556 kubelet[2871]: E0918 21:25:17.245966    2871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-sr6hq" podUID="8867f8fa-687b-4105-8ace-18af50195726"
	Sep 18 21:25:22 embed-certs-255556 kubelet[2871]: E0918 21:25:22.520649    2871 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726694722520288316,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 21:25:22 embed-certs-255556 kubelet[2871]: E0918 21:25:22.521082    2871 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726694722520288316,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 21:25:32 embed-certs-255556 kubelet[2871]: E0918 21:25:32.247010    2871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-sr6hq" podUID="8867f8fa-687b-4105-8ace-18af50195726"
	Sep 18 21:25:32 embed-certs-255556 kubelet[2871]: E0918 21:25:32.523280    2871 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726694732522852338,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 21:25:32 embed-certs-255556 kubelet[2871]: E0918 21:25:32.523336    2871 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726694732522852338,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 21:25:42 embed-certs-255556 kubelet[2871]: E0918 21:25:42.526084    2871 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726694742525497609,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 21:25:42 embed-certs-255556 kubelet[2871]: E0918 21:25:42.526371    2871 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726694742525497609,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 21:25:43 embed-certs-255556 kubelet[2871]: E0918 21:25:43.244787    2871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-sr6hq" podUID="8867f8fa-687b-4105-8ace-18af50195726"
	Sep 18 21:25:52 embed-certs-255556 kubelet[2871]: E0918 21:25:52.261167    2871 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 18 21:25:52 embed-certs-255556 kubelet[2871]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 18 21:25:52 embed-certs-255556 kubelet[2871]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 18 21:25:52 embed-certs-255556 kubelet[2871]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 18 21:25:52 embed-certs-255556 kubelet[2871]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 18 21:25:52 embed-certs-255556 kubelet[2871]: E0918 21:25:52.528181    2871 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726694752527850495,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 21:25:52 embed-certs-255556 kubelet[2871]: E0918 21:25:52.528232    2871 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726694752527850495,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 21:25:55 embed-certs-255556 kubelet[2871]: E0918 21:25:55.264185    2871 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Sep 18 21:25:55 embed-certs-255556 kubelet[2871]: E0918 21:25:55.264288    2871 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Sep 18 21:25:55 embed-certs-255556 kubelet[2871]: E0918 21:25:55.264632    2871 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vwmm5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation
:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Std
in:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-6867b74b74-sr6hq_kube-system(8867f8fa-687b-4105-8ace-18af50195726): ErrImagePull: pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" logger="UnhandledError"
	Sep 18 21:25:55 embed-certs-255556 kubelet[2871]: E0918 21:25:55.266229    2871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-6867b74b74-sr6hq" podUID="8867f8fa-687b-4105-8ace-18af50195726"
	Sep 18 21:26:02 embed-certs-255556 kubelet[2871]: E0918 21:26:02.530416    2871 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726694762530035991,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 21:26:02 embed-certs-255556 kubelet[2871]: E0918 21:26:02.530791    2871 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726694762530035991,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [f4a5989dc9c66d57fbb948e86861e58fa98697a7dfb22e5dfd9be743a4f8f076] <==
	I0918 21:09:59.391359       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0918 21:09:59.408021       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0918 21:09:59.409228       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0918 21:09:59.426866       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0918 21:09:59.449079       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-255556_372ad411-ea08-4670-bb13-bfc2f465df48!
	I0918 21:09:59.443973       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3d7ac12d-e6c5-470b-9559-125d6ebd6917", APIVersion:"v1", ResourceVersion:"430", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-255556_372ad411-ea08-4670-bb13-bfc2f465df48 became leader
	I0918 21:09:59.550313       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-255556_372ad411-ea08-4670-bb13-bfc2f465df48!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-255556 -n embed-certs-255556
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-255556 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-sr6hq
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-255556 describe pod metrics-server-6867b74b74-sr6hq
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-255556 describe pod metrics-server-6867b74b74-sr6hq: exit status 1 (67.024695ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-sr6hq" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-255556 describe pod metrics-server-6867b74b74-sr6hq: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (416.94s)
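For reference, the post-mortem check captured above can be repeated by hand against the same profile. This is a minimal sketch, assuming the embed-certs-255556 context is still present in the local kubeconfig; the commands mirror the helpers_test.go invocations shown above and are not part of the captured output:

	# list pods that are not Running in any namespace (same field selector the helper uses)
	kubectl --context embed-certs-255556 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running

	# describe the reported non-running pod; the NotFound / exit status 1 seen above
	# indicates the pod named by the earlier listing was already gone when describe ran
	kubectl --context embed-certs-255556 describe pod metrics-server-6867b74b74-sr6hq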

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (346.35s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-331658 -n no-preload-331658
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-09-18 21:25:03.889112369 +0000 UTC m=+6426.659453608
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-331658 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-331658 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.366µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-331658 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
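A minimal manual equivalent of the wait performed at start_stop_delete_test.go:287, assuming the no-preload-331658 context is reachable; the label, namespace, and 9m deadline come from the failure messages above, and `kubectl wait` is an illustrative stand-in for the test's own polling loop rather than the test's actual code:

	# check for the dashboard pod the test expects (label and namespace from the failure message)
	kubectl --context no-preload-331658 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard

	# block until the pod is Ready or the same 9m deadline expires
	kubectl --context no-preload-331658 -n kubernetes-dashboard wait pod -l k8s-app=kubernetes-dashboard --for=condition=Ready --timeout=9m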
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-331658 -n no-preload-331658
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-331658 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-331658 logs -n 25: (1.231383254s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p kubernetes-upgrade-878094                           | kubernetes-upgrade-878094    | jenkins | v1.34.0 | 18 Sep 24 20:55 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-878094                           | kubernetes-upgrade-878094    | jenkins | v1.34.0 | 18 Sep 24 20:55 UTC | 18 Sep 24 20:56 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p pause-543700                                        | pause-543700                 | jenkins | v1.34.0 | 18 Sep 24 20:56 UTC | 18 Sep 24 20:56 UTC |
	| start   | -p embed-certs-255556                                  | embed-certs-255556           | jenkins | v1.34.0 | 18 Sep 24 20:56 UTC | 18 Sep 24 20:57 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-878094                           | kubernetes-upgrade-878094    | jenkins | v1.34.0 | 18 Sep 24 20:56 UTC | 18 Sep 24 20:56 UTC |
	| delete  | -p                                                     | disable-driver-mounts-335923 | jenkins | v1.34.0 | 18 Sep 24 20:56 UTC | 18 Sep 24 20:56 UTC |
	|         | disable-driver-mounts-335923                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-828868 | jenkins | v1.34.0 | 18 Sep 24 20:56 UTC | 18 Sep 24 20:57 UTC |
	|         | default-k8s-diff-port-828868                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-331658             | no-preload-331658            | jenkins | v1.34.0 | 18 Sep 24 20:56 UTC | 18 Sep 24 20:57 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-331658                                   | no-preload-331658            | jenkins | v1.34.0 | 18 Sep 24 20:57 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-828868  | default-k8s-diff-port-828868 | jenkins | v1.34.0 | 18 Sep 24 20:57 UTC | 18 Sep 24 20:57 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-828868 | jenkins | v1.34.0 | 18 Sep 24 20:57 UTC |                     |
	|         | default-k8s-diff-port-828868                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-255556            | embed-certs-255556           | jenkins | v1.34.0 | 18 Sep 24 20:57 UTC | 18 Sep 24 20:57 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-255556                                  | embed-certs-255556           | jenkins | v1.34.0 | 18 Sep 24 20:57 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-740194        | old-k8s-version-740194       | jenkins | v1.34.0 | 18 Sep 24 20:59 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-331658                  | no-preload-331658            | jenkins | v1.34.0 | 18 Sep 24 20:59 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-331658                                   | no-preload-331658            | jenkins | v1.34.0 | 18 Sep 24 20:59 UTC | 18 Sep 24 21:10 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-828868       | default-k8s-diff-port-828868 | jenkins | v1.34.0 | 18 Sep 24 21:00 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-255556                 | embed-certs-255556           | jenkins | v1.34.0 | 18 Sep 24 21:00 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-828868 | jenkins | v1.34.0 | 18 Sep 24 21:00 UTC | 18 Sep 24 21:09 UTC |
	|         | default-k8s-diff-port-828868                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-255556                                  | embed-certs-255556           | jenkins | v1.34.0 | 18 Sep 24 21:00 UTC | 18 Sep 24 21:10 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-740194                              | old-k8s-version-740194       | jenkins | v1.34.0 | 18 Sep 24 21:00 UTC | 18 Sep 24 21:00 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-740194             | old-k8s-version-740194       | jenkins | v1.34.0 | 18 Sep 24 21:00 UTC | 18 Sep 24 21:00 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-740194                              | old-k8s-version-740194       | jenkins | v1.34.0 | 18 Sep 24 21:00 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-740194                              | old-k8s-version-740194       | jenkins | v1.34.0 | 18 Sep 24 21:24 UTC | 18 Sep 24 21:24 UTC |
	| start   | -p newest-cni-560575 --memory=2200 --alsologtostderr   | newest-cni-560575            | jenkins | v1.34.0 | 18 Sep 24 21:24 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
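The fake.domain image-pull failures that dominate the kubelet logs trace back to the metrics-server addon being enabled with its registry deliberately overridden to an unresolvable host, as recorded in the audit rows above. For example (command reproduced from the table, not re-run):

	out/minikube-linux-amd64 addons enable metrics-server -p no-preload-331658 \
	  --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
	  --registries=MetricsServer=fake.domain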
	
	
	==> Last Start <==
	Log file created at: 2024/09/18 21:24:29
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0918 21:24:29.696136   68762 out.go:345] Setting OutFile to fd 1 ...
	I0918 21:24:29.696286   68762 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 21:24:29.696298   68762 out.go:358] Setting ErrFile to fd 2...
	I0918 21:24:29.696305   68762 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 21:24:29.696586   68762 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19667-7671/.minikube/bin
	I0918 21:24:29.697209   68762 out.go:352] Setting JSON to false
	I0918 21:24:29.698206   68762 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7614,"bootTime":1726687056,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0918 21:24:29.698313   68762 start.go:139] virtualization: kvm guest
	I0918 21:24:29.700560   68762 out.go:177] * [newest-cni-560575] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0918 21:24:29.702498   68762 notify.go:220] Checking for updates...
	I0918 21:24:29.702531   68762 out.go:177]   - MINIKUBE_LOCATION=19667
	I0918 21:24:29.703831   68762 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 21:24:29.705061   68762 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19667-7671/kubeconfig
	I0918 21:24:29.706414   68762 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19667-7671/.minikube
	I0918 21:24:29.707567   68762 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0918 21:24:29.708948   68762 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0918 21:24:29.710655   68762 config.go:182] Loaded profile config "default-k8s-diff-port-828868": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0918 21:24:29.710792   68762 config.go:182] Loaded profile config "embed-certs-255556": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0918 21:24:29.710936   68762 config.go:182] Loaded profile config "no-preload-331658": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0918 21:24:29.711058   68762 driver.go:394] Setting default libvirt URI to qemu:///system
	I0918 21:24:29.750071   68762 out.go:177] * Using the kvm2 driver based on user configuration
	I0918 21:24:29.751133   68762 start.go:297] selected driver: kvm2
	I0918 21:24:29.751146   68762 start.go:901] validating driver "kvm2" against <nil>
	I0918 21:24:29.751158   68762 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 21:24:29.751975   68762 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 21:24:29.752073   68762 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19667-7671/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0918 21:24:29.768644   68762 install.go:137] /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I0918 21:24:29.768702   68762 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0918 21:24:29.768757   68762 out.go:270] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0918 21:24:29.769077   68762 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0918 21:24:29.769118   68762 cni.go:84] Creating CNI manager for ""
	I0918 21:24:29.769190   68762 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0918 21:24:29.769201   68762 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0918 21:24:29.769273   68762 start.go:340] cluster config:
	{Name:newest-cni-560575 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-560575 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 21:24:29.769403   68762 iso.go:125] acquiring lock: {Name:mkbcf4d84f3091567a39802cd5e773cb10b8c545 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 21:24:29.770932   68762 out.go:177] * Starting "newest-cni-560575" primary control-plane node in "newest-cni-560575" cluster
	I0918 21:24:29.772156   68762 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0918 21:24:29.772210   68762 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0918 21:24:29.772224   68762 cache.go:56] Caching tarball of preloaded images
	I0918 21:24:29.772344   68762 preload.go:172] Found /home/jenkins/minikube-integration/19667-7671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0918 21:24:29.772356   68762 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0918 21:24:29.772479   68762 profile.go:143] Saving config to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/newest-cni-560575/config.json ...
	I0918 21:24:29.772503   68762 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/newest-cni-560575/config.json: {Name:mkda8a88afc68b9dc4782ca267bcee8cd5058f38 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 21:24:29.772721   68762 start.go:360] acquireMachinesLock for newest-cni-560575: {Name:mk1b0d68a0b7d9d4bc8204d5d81cb9eb77222526 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 21:24:29.772761   68762 start.go:364] duration metric: took 23.528µs to acquireMachinesLock for "newest-cni-560575"
	I0918 21:24:29.772783   68762 start.go:93] Provisioning new machine with config: &{Name:newest-cni-560575 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-560575 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0918 21:24:29.772867   68762 start.go:125] createHost starting for "" (driver="kvm2")
	I0918 21:24:29.774641   68762 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0918 21:24:29.774814   68762 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:24:29.774867   68762 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:24:29.790700   68762 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34697
	I0918 21:24:29.791193   68762 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:24:29.791761   68762 main.go:141] libmachine: Using API Version  1
	I0918 21:24:29.791776   68762 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:24:29.792213   68762 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:24:29.792486   68762 main.go:141] libmachine: (newest-cni-560575) Calling .GetMachineName
	I0918 21:24:29.792688   68762 main.go:141] libmachine: (newest-cni-560575) Calling .DriverName
	I0918 21:24:29.792878   68762 start.go:159] libmachine.API.Create for "newest-cni-560575" (driver="kvm2")
	I0918 21:24:29.792908   68762 client.go:168] LocalClient.Create starting
	I0918 21:24:29.792957   68762 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem
	I0918 21:24:29.793007   68762 main.go:141] libmachine: Decoding PEM data...
	I0918 21:24:29.793111   68762 main.go:141] libmachine: Parsing certificate...
	I0918 21:24:29.793224   68762 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem
	I0918 21:24:29.793265   68762 main.go:141] libmachine: Decoding PEM data...
	I0918 21:24:29.793291   68762 main.go:141] libmachine: Parsing certificate...
	I0918 21:24:29.793317   68762 main.go:141] libmachine: Running pre-create checks...
	I0918 21:24:29.793336   68762 main.go:141] libmachine: (newest-cni-560575) Calling .PreCreateCheck
	I0918 21:24:29.793779   68762 main.go:141] libmachine: (newest-cni-560575) Calling .GetConfigRaw
	I0918 21:24:29.794214   68762 main.go:141] libmachine: Creating machine...
	I0918 21:24:29.794233   68762 main.go:141] libmachine: (newest-cni-560575) Calling .Create
	I0918 21:24:29.794366   68762 main.go:141] libmachine: (newest-cni-560575) Creating KVM machine...
	I0918 21:24:29.795673   68762 main.go:141] libmachine: (newest-cni-560575) DBG | found existing default KVM network
	I0918 21:24:29.797038   68762 main.go:141] libmachine: (newest-cni-560575) DBG | I0918 21:24:29.796858   68785 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:a7:46:df} reservation:<nil>}
	I0918 21:24:29.797856   68762 main.go:141] libmachine: (newest-cni-560575) DBG | I0918 21:24:29.797775   68785 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:ca:e0:35} reservation:<nil>}
	I0918 21:24:29.798626   68762 main.go:141] libmachine: (newest-cni-560575) DBG | I0918 21:24:29.798563   68785 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:89:2b:21} reservation:<nil>}
	I0918 21:24:29.799665   68762 main.go:141] libmachine: (newest-cni-560575) DBG | I0918 21:24:29.799586   68785 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00028b8a0}
	I0918 21:24:29.799701   68762 main.go:141] libmachine: (newest-cni-560575) DBG | created network xml: 
	I0918 21:24:29.799718   68762 main.go:141] libmachine: (newest-cni-560575) DBG | <network>
	I0918 21:24:29.799728   68762 main.go:141] libmachine: (newest-cni-560575) DBG |   <name>mk-newest-cni-560575</name>
	I0918 21:24:29.799742   68762 main.go:141] libmachine: (newest-cni-560575) DBG |   <dns enable='no'/>
	I0918 21:24:29.799759   68762 main.go:141] libmachine: (newest-cni-560575) DBG |   
	I0918 21:24:29.799772   68762 main.go:141] libmachine: (newest-cni-560575) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0918 21:24:29.799808   68762 main.go:141] libmachine: (newest-cni-560575) DBG |     <dhcp>
	I0918 21:24:29.799840   68762 main.go:141] libmachine: (newest-cni-560575) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0918 21:24:29.799863   68762 main.go:141] libmachine: (newest-cni-560575) DBG |     </dhcp>
	I0918 21:24:29.799875   68762 main.go:141] libmachine: (newest-cni-560575) DBG |   </ip>
	I0918 21:24:29.799881   68762 main.go:141] libmachine: (newest-cni-560575) DBG |   
	I0918 21:24:29.799895   68762 main.go:141] libmachine: (newest-cni-560575) DBG | </network>
	I0918 21:24:29.799906   68762 main.go:141] libmachine: (newest-cni-560575) DBG | 
	I0918 21:24:29.805331   68762 main.go:141] libmachine: (newest-cni-560575) DBG | trying to create private KVM network mk-newest-cni-560575 192.168.72.0/24...
	I0918 21:24:29.880989   68762 main.go:141] libmachine: (newest-cni-560575) DBG | private KVM network mk-newest-cni-560575 192.168.72.0/24 created
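Note: the network XML logged above is handed to libvirt as-is; the driver defines a persistent network object and then starts it before the domain itself is created (the domain XML further down follows the same define-then-start pattern). A minimal sketch of that pattern, assuming the libvirt.org/go/libvirt bindings and the qemu:///system URI from the log; this is illustrative, not minikube's actual code:

	// Sketch: define and start a libvirt network from XML (assumed bindings).
	package main

	import (
		"log"

		libvirt "libvirt.org/go/libvirt"
	)

	const networkXML = `<network>
	  <name>mk-newest-cni-560575</name>
	  <dns enable='no'/>
	  <ip address='192.168.72.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.72.2' end='192.168.72.253'/>
	    </dhcp>
	  </ip>
	</network>`

	func main() {
		// Connect to the system libvirt daemon, matching KVMQemuURI in the log.
		conn, err := libvirt.NewConnect("qemu:///system")
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()

		// Define the persistent network object from the XML, then start it.
		net, err := conn.NetworkDefineXML(networkXML)
		if err != nil {
			log.Fatal(err)
		}
		defer net.Free()

		if err := net.Create(); err != nil {
			log.Fatal(err)
		}
		log.Println("private network mk-newest-cni-560575 is active")
	}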
	I0918 21:24:29.881043   68762 main.go:141] libmachine: (newest-cni-560575) Setting up store path in /home/jenkins/minikube-integration/19667-7671/.minikube/machines/newest-cni-560575 ...
	I0918 21:24:29.881060   68762 main.go:141] libmachine: (newest-cni-560575) Building disk image from file:///home/jenkins/minikube-integration/19667-7671/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso
	I0918 21:24:29.881071   68762 main.go:141] libmachine: (newest-cni-560575) DBG | I0918 21:24:29.880964   68785 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19667-7671/.minikube
	I0918 21:24:29.881108   68762 main.go:141] libmachine: (newest-cni-560575) Downloading /home/jenkins/minikube-integration/19667-7671/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19667-7671/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso...
	I0918 21:24:30.140080   68762 main.go:141] libmachine: (newest-cni-560575) DBG | I0918 21:24:30.139912   68785 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/newest-cni-560575/id_rsa...
	I0918 21:24:30.254776   68762 main.go:141] libmachine: (newest-cni-560575) DBG | I0918 21:24:30.254633   68785 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/newest-cni-560575/newest-cni-560575.rawdisk...
	I0918 21:24:30.254807   68762 main.go:141] libmachine: (newest-cni-560575) DBG | Writing magic tar header
	I0918 21:24:30.254825   68762 main.go:141] libmachine: (newest-cni-560575) DBG | Writing SSH key tar header
	I0918 21:24:30.254838   68762 main.go:141] libmachine: (newest-cni-560575) DBG | I0918 21:24:30.254751   68785 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19667-7671/.minikube/machines/newest-cni-560575 ...
	I0918 21:24:30.254860   68762 main.go:141] libmachine: (newest-cni-560575) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/newest-cni-560575
	I0918 21:24:30.254902   68762 main.go:141] libmachine: (newest-cni-560575) Setting executable bit set on /home/jenkins/minikube-integration/19667-7671/.minikube/machines/newest-cni-560575 (perms=drwx------)
	I0918 21:24:30.254917   68762 main.go:141] libmachine: (newest-cni-560575) Setting executable bit set on /home/jenkins/minikube-integration/19667-7671/.minikube/machines (perms=drwxr-xr-x)
	I0918 21:24:30.254937   68762 main.go:141] libmachine: (newest-cni-560575) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19667-7671/.minikube/machines
	I0918 21:24:30.254961   68762 main.go:141] libmachine: (newest-cni-560575) Setting executable bit set on /home/jenkins/minikube-integration/19667-7671/.minikube (perms=drwxr-xr-x)
	I0918 21:24:30.254973   68762 main.go:141] libmachine: (newest-cni-560575) Setting executable bit set on /home/jenkins/minikube-integration/19667-7671 (perms=drwxrwxr-x)
	I0918 21:24:30.254983   68762 main.go:141] libmachine: (newest-cni-560575) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19667-7671/.minikube
	I0918 21:24:30.255021   68762 main.go:141] libmachine: (newest-cni-560575) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0918 21:24:30.255044   68762 main.go:141] libmachine: (newest-cni-560575) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19667-7671
	I0918 21:24:30.255061   68762 main.go:141] libmachine: (newest-cni-560575) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0918 21:24:30.255076   68762 main.go:141] libmachine: (newest-cni-560575) Creating domain...
	I0918 21:24:30.255087   68762 main.go:141] libmachine: (newest-cni-560575) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0918 21:24:30.255095   68762 main.go:141] libmachine: (newest-cni-560575) DBG | Checking permissions on dir: /home/jenkins
	I0918 21:24:30.255104   68762 main.go:141] libmachine: (newest-cni-560575) DBG | Checking permissions on dir: /home
	I0918 21:24:30.255112   68762 main.go:141] libmachine: (newest-cni-560575) DBG | Skipping /home - not owner
	I0918 21:24:30.256418   68762 main.go:141] libmachine: (newest-cni-560575) define libvirt domain using xml: 
	I0918 21:24:30.256445   68762 main.go:141] libmachine: (newest-cni-560575) <domain type='kvm'>
	I0918 21:24:30.256457   68762 main.go:141] libmachine: (newest-cni-560575)   <name>newest-cni-560575</name>
	I0918 21:24:30.256468   68762 main.go:141] libmachine: (newest-cni-560575)   <memory unit='MiB'>2200</memory>
	I0918 21:24:30.256477   68762 main.go:141] libmachine: (newest-cni-560575)   <vcpu>2</vcpu>
	I0918 21:24:30.256484   68762 main.go:141] libmachine: (newest-cni-560575)   <features>
	I0918 21:24:30.256499   68762 main.go:141] libmachine: (newest-cni-560575)     <acpi/>
	I0918 21:24:30.256508   68762 main.go:141] libmachine: (newest-cni-560575)     <apic/>
	I0918 21:24:30.256540   68762 main.go:141] libmachine: (newest-cni-560575)     <pae/>
	I0918 21:24:30.256570   68762 main.go:141] libmachine: (newest-cni-560575)     
	I0918 21:24:30.256584   68762 main.go:141] libmachine: (newest-cni-560575)   </features>
	I0918 21:24:30.256596   68762 main.go:141] libmachine: (newest-cni-560575)   <cpu mode='host-passthrough'>
	I0918 21:24:30.256607   68762 main.go:141] libmachine: (newest-cni-560575)   
	I0918 21:24:30.256618   68762 main.go:141] libmachine: (newest-cni-560575)   </cpu>
	I0918 21:24:30.256629   68762 main.go:141] libmachine: (newest-cni-560575)   <os>
	I0918 21:24:30.256638   68762 main.go:141] libmachine: (newest-cni-560575)     <type>hvm</type>
	I0918 21:24:30.256648   68762 main.go:141] libmachine: (newest-cni-560575)     <boot dev='cdrom'/>
	I0918 21:24:30.256661   68762 main.go:141] libmachine: (newest-cni-560575)     <boot dev='hd'/>
	I0918 21:24:30.256690   68762 main.go:141] libmachine: (newest-cni-560575)     <bootmenu enable='no'/>
	I0918 21:24:30.256710   68762 main.go:141] libmachine: (newest-cni-560575)   </os>
	I0918 21:24:30.256724   68762 main.go:141] libmachine: (newest-cni-560575)   <devices>
	I0918 21:24:30.256734   68762 main.go:141] libmachine: (newest-cni-560575)     <disk type='file' device='cdrom'>
	I0918 21:24:30.256752   68762 main.go:141] libmachine: (newest-cni-560575)       <source file='/home/jenkins/minikube-integration/19667-7671/.minikube/machines/newest-cni-560575/boot2docker.iso'/>
	I0918 21:24:30.256774   68762 main.go:141] libmachine: (newest-cni-560575)       <target dev='hdc' bus='scsi'/>
	I0918 21:24:30.256786   68762 main.go:141] libmachine: (newest-cni-560575)       <readonly/>
	I0918 21:24:30.256795   68762 main.go:141] libmachine: (newest-cni-560575)     </disk>
	I0918 21:24:30.256804   68762 main.go:141] libmachine: (newest-cni-560575)     <disk type='file' device='disk'>
	I0918 21:24:30.256817   68762 main.go:141] libmachine: (newest-cni-560575)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0918 21:24:30.256833   68762 main.go:141] libmachine: (newest-cni-560575)       <source file='/home/jenkins/minikube-integration/19667-7671/.minikube/machines/newest-cni-560575/newest-cni-560575.rawdisk'/>
	I0918 21:24:30.256848   68762 main.go:141] libmachine: (newest-cni-560575)       <target dev='hda' bus='virtio'/>
	I0918 21:24:30.256859   68762 main.go:141] libmachine: (newest-cni-560575)     </disk>
	I0918 21:24:30.256869   68762 main.go:141] libmachine: (newest-cni-560575)     <interface type='network'>
	I0918 21:24:30.256877   68762 main.go:141] libmachine: (newest-cni-560575)       <source network='mk-newest-cni-560575'/>
	I0918 21:24:30.256887   68762 main.go:141] libmachine: (newest-cni-560575)       <model type='virtio'/>
	I0918 21:24:30.256897   68762 main.go:141] libmachine: (newest-cni-560575)     </interface>
	I0918 21:24:30.256907   68762 main.go:141] libmachine: (newest-cni-560575)     <interface type='network'>
	I0918 21:24:30.256928   68762 main.go:141] libmachine: (newest-cni-560575)       <source network='default'/>
	I0918 21:24:30.256945   68762 main.go:141] libmachine: (newest-cni-560575)       <model type='virtio'/>
	I0918 21:24:30.256958   68762 main.go:141] libmachine: (newest-cni-560575)     </interface>
	I0918 21:24:30.256968   68762 main.go:141] libmachine: (newest-cni-560575)     <serial type='pty'>
	I0918 21:24:30.256979   68762 main.go:141] libmachine: (newest-cni-560575)       <target port='0'/>
	I0918 21:24:30.256989   68762 main.go:141] libmachine: (newest-cni-560575)     </serial>
	I0918 21:24:30.257001   68762 main.go:141] libmachine: (newest-cni-560575)     <console type='pty'>
	I0918 21:24:30.257012   68762 main.go:141] libmachine: (newest-cni-560575)       <target type='serial' port='0'/>
	I0918 21:24:30.257023   68762 main.go:141] libmachine: (newest-cni-560575)     </console>
	I0918 21:24:30.257032   68762 main.go:141] libmachine: (newest-cni-560575)     <rng model='virtio'>
	I0918 21:24:30.257046   68762 main.go:141] libmachine: (newest-cni-560575)       <backend model='random'>/dev/random</backend>
	I0918 21:24:30.257055   68762 main.go:141] libmachine: (newest-cni-560575)     </rng>
	I0918 21:24:30.257064   68762 main.go:141] libmachine: (newest-cni-560575)     
	I0918 21:24:30.257073   68762 main.go:141] libmachine: (newest-cni-560575)     
	I0918 21:24:30.257082   68762 main.go:141] libmachine: (newest-cni-560575)   </devices>
	I0918 21:24:30.257093   68762 main.go:141] libmachine: (newest-cni-560575) </domain>
	I0918 21:24:30.257115   68762 main.go:141] libmachine: (newest-cni-560575) 
	I0918 21:24:30.261686   68762 main.go:141] libmachine: (newest-cni-560575) DBG | domain newest-cni-560575 has defined MAC address 52:54:00:2f:e5:7d in network default
	I0918 21:24:30.262295   68762 main.go:141] libmachine: (newest-cni-560575) Ensuring networks are active...
	I0918 21:24:30.262321   68762 main.go:141] libmachine: (newest-cni-560575) DBG | domain newest-cni-560575 has defined MAC address 52:54:00:35:4b:9c in network mk-newest-cni-560575
	I0918 21:24:30.263010   68762 main.go:141] libmachine: (newest-cni-560575) Ensuring network default is active
	I0918 21:24:30.263508   68762 main.go:141] libmachine: (newest-cni-560575) Ensuring network mk-newest-cni-560575 is active
	I0918 21:24:30.263944   68762 main.go:141] libmachine: (newest-cni-560575) Getting domain xml...
	I0918 21:24:30.264745   68762 main.go:141] libmachine: (newest-cni-560575) Creating domain...
	I0918 21:24:31.535770   68762 main.go:141] libmachine: (newest-cni-560575) Waiting to get IP...
	I0918 21:24:31.536635   68762 main.go:141] libmachine: (newest-cni-560575) DBG | domain newest-cni-560575 has defined MAC address 52:54:00:35:4b:9c in network mk-newest-cni-560575
	I0918 21:24:31.537072   68762 main.go:141] libmachine: (newest-cni-560575) DBG | unable to find current IP address of domain newest-cni-560575 in network mk-newest-cni-560575
	I0918 21:24:31.537144   68762 main.go:141] libmachine: (newest-cni-560575) DBG | I0918 21:24:31.537084   68785 retry.go:31] will retry after 212.627498ms: waiting for machine to come up
	I0918 21:24:31.751762   68762 main.go:141] libmachine: (newest-cni-560575) DBG | domain newest-cni-560575 has defined MAC address 52:54:00:35:4b:9c in network mk-newest-cni-560575
	I0918 21:24:31.752316   68762 main.go:141] libmachine: (newest-cni-560575) DBG | unable to find current IP address of domain newest-cni-560575 in network mk-newest-cni-560575
	I0918 21:24:31.752342   68762 main.go:141] libmachine: (newest-cni-560575) DBG | I0918 21:24:31.752255   68785 retry.go:31] will retry after 282.044533ms: waiting for machine to come up
	I0918 21:24:32.035731   68762 main.go:141] libmachine: (newest-cni-560575) DBG | domain newest-cni-560575 has defined MAC address 52:54:00:35:4b:9c in network mk-newest-cni-560575
	I0918 21:24:32.036267   68762 main.go:141] libmachine: (newest-cni-560575) DBG | unable to find current IP address of domain newest-cni-560575 in network mk-newest-cni-560575
	I0918 21:24:32.036297   68762 main.go:141] libmachine: (newest-cni-560575) DBG | I0918 21:24:32.036228   68785 retry.go:31] will retry after 407.966249ms: waiting for machine to come up
	I0918 21:24:32.445988   68762 main.go:141] libmachine: (newest-cni-560575) DBG | domain newest-cni-560575 has defined MAC address 52:54:00:35:4b:9c in network mk-newest-cni-560575
	I0918 21:24:32.446470   68762 main.go:141] libmachine: (newest-cni-560575) DBG | unable to find current IP address of domain newest-cni-560575 in network mk-newest-cni-560575
	I0918 21:24:32.446509   68762 main.go:141] libmachine: (newest-cni-560575) DBG | I0918 21:24:32.446438   68785 retry.go:31] will retry after 546.170373ms: waiting for machine to come up
	I0918 21:24:32.994226   68762 main.go:141] libmachine: (newest-cni-560575) DBG | domain newest-cni-560575 has defined MAC address 52:54:00:35:4b:9c in network mk-newest-cni-560575
	I0918 21:24:32.994772   68762 main.go:141] libmachine: (newest-cni-560575) DBG | unable to find current IP address of domain newest-cni-560575 in network mk-newest-cni-560575
	I0918 21:24:32.994793   68762 main.go:141] libmachine: (newest-cni-560575) DBG | I0918 21:24:32.994733   68785 retry.go:31] will retry after 748.287173ms: waiting for machine to come up
	I0918 21:24:33.744691   68762 main.go:141] libmachine: (newest-cni-560575) DBG | domain newest-cni-560575 has defined MAC address 52:54:00:35:4b:9c in network mk-newest-cni-560575
	I0918 21:24:33.745051   68762 main.go:141] libmachine: (newest-cni-560575) DBG | unable to find current IP address of domain newest-cni-560575 in network mk-newest-cni-560575
	I0918 21:24:33.745091   68762 main.go:141] libmachine: (newest-cni-560575) DBG | I0918 21:24:33.745000   68785 retry.go:31] will retry after 693.693741ms: waiting for machine to come up
	I0918 21:24:34.439773   68762 main.go:141] libmachine: (newest-cni-560575) DBG | domain newest-cni-560575 has defined MAC address 52:54:00:35:4b:9c in network mk-newest-cni-560575
	I0918 21:24:34.440334   68762 main.go:141] libmachine: (newest-cni-560575) DBG | unable to find current IP address of domain newest-cni-560575 in network mk-newest-cni-560575
	I0918 21:24:34.440371   68762 main.go:141] libmachine: (newest-cni-560575) DBG | I0918 21:24:34.440281   68785 retry.go:31] will retry after 1.16579599s: waiting for machine to come up
	I0918 21:24:35.608272   68762 main.go:141] libmachine: (newest-cni-560575) DBG | domain newest-cni-560575 has defined MAC address 52:54:00:35:4b:9c in network mk-newest-cni-560575
	I0918 21:24:35.608693   68762 main.go:141] libmachine: (newest-cni-560575) DBG | unable to find current IP address of domain newest-cni-560575 in network mk-newest-cni-560575
	I0918 21:24:35.608720   68762 main.go:141] libmachine: (newest-cni-560575) DBG | I0918 21:24:35.608640   68785 retry.go:31] will retry after 1.01496908s: waiting for machine to come up
	I0918 21:24:36.624854   68762 main.go:141] libmachine: (newest-cni-560575) DBG | domain newest-cni-560575 has defined MAC address 52:54:00:35:4b:9c in network mk-newest-cni-560575
	I0918 21:24:36.625572   68762 main.go:141] libmachine: (newest-cni-560575) DBG | unable to find current IP address of domain newest-cni-560575 in network mk-newest-cni-560575
	I0918 21:24:36.625615   68762 main.go:141] libmachine: (newest-cni-560575) DBG | I0918 21:24:36.625515   68785 retry.go:31] will retry after 1.304135166s: waiting for machine to come up
	I0918 21:24:37.931159   68762 main.go:141] libmachine: (newest-cni-560575) DBG | domain newest-cni-560575 has defined MAC address 52:54:00:35:4b:9c in network mk-newest-cni-560575
	I0918 21:24:37.931566   68762 main.go:141] libmachine: (newest-cni-560575) DBG | unable to find current IP address of domain newest-cni-560575 in network mk-newest-cni-560575
	I0918 21:24:37.931594   68762 main.go:141] libmachine: (newest-cni-560575) DBG | I0918 21:24:37.931516   68785 retry.go:31] will retry after 2.039451276s: waiting for machine to come up
	I0918 21:24:39.973002   68762 main.go:141] libmachine: (newest-cni-560575) DBG | domain newest-cni-560575 has defined MAC address 52:54:00:35:4b:9c in network mk-newest-cni-560575
	I0918 21:24:39.973610   68762 main.go:141] libmachine: (newest-cni-560575) DBG | unable to find current IP address of domain newest-cni-560575 in network mk-newest-cni-560575
	I0918 21:24:39.973639   68762 main.go:141] libmachine: (newest-cni-560575) DBG | I0918 21:24:39.973552   68785 retry.go:31] will retry after 2.811533988s: waiting for machine to come up
	I0918 21:24:42.787860   68762 main.go:141] libmachine: (newest-cni-560575) DBG | domain newest-cni-560575 has defined MAC address 52:54:00:35:4b:9c in network mk-newest-cni-560575
	I0918 21:24:42.788277   68762 main.go:141] libmachine: (newest-cni-560575) DBG | unable to find current IP address of domain newest-cni-560575 in network mk-newest-cni-560575
	I0918 21:24:42.788298   68762 main.go:141] libmachine: (newest-cni-560575) DBG | I0918 21:24:42.788236   68785 retry.go:31] will retry after 3.314572823s: waiting for machine to come up
	I0918 21:24:46.105150   68762 main.go:141] libmachine: (newest-cni-560575) DBG | domain newest-cni-560575 has defined MAC address 52:54:00:35:4b:9c in network mk-newest-cni-560575
	I0918 21:24:46.105596   68762 main.go:141] libmachine: (newest-cni-560575) DBG | unable to find current IP address of domain newest-cni-560575 in network mk-newest-cni-560575
	I0918 21:24:46.105621   68762 main.go:141] libmachine: (newest-cni-560575) DBG | I0918 21:24:46.105539   68785 retry.go:31] will retry after 3.202217501s: waiting for machine to come up
	I0918 21:24:49.311789   68762 main.go:141] libmachine: (newest-cni-560575) DBG | domain newest-cni-560575 has defined MAC address 52:54:00:35:4b:9c in network mk-newest-cni-560575
	I0918 21:24:49.312283   68762 main.go:141] libmachine: (newest-cni-560575) DBG | domain newest-cni-560575 has current primary IP address 192.168.72.106 and MAC address 52:54:00:35:4b:9c in network mk-newest-cni-560575
	I0918 21:24:49.312308   68762 main.go:141] libmachine: (newest-cni-560575) Found IP for machine: 192.168.72.106
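The repeated "will retry after …: waiting for machine to come up" lines above are a polling loop with a growing, jittered delay that keeps querying the DHCP leases until the domain reports an address. A stdlib-only sketch of that pattern (lookupIP and the timings are hypothetical stand-ins, not minikube's code):

	// Sketch: poll a condition with growing, jittered backoff until a deadline.
	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	var errNoIP = errors.New("no IP yet")

	// lookupIP is a placeholder; the real code reads the libvirt DHCP leases.
	func lookupIP() (string, error) { return "", errNoIP }

	func waitForIP(timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		delay := 200 * time.Millisecond
		for time.Now().Before(deadline) {
			if ip, err := lookupIP(); err == nil {
				return ip, nil
			}
			// Grow the delay and add jitter, capping it so a slow boot
			// is still polled every few seconds.
			jitter := time.Duration(rand.Int63n(int64(delay) / 2))
			fmt.Printf("will retry after %v: waiting for machine to come up\n", delay+jitter)
			time.Sleep(delay + jitter)
			if delay < 4*time.Second {
				delay *= 2
			}
		}
		return "", fmt.Errorf("timed out after %v waiting for IP", timeout)
	}

	func main() {
		if ip, err := waitForIP(3 * time.Second); err != nil {
			fmt.Println(err)
		} else {
			fmt.Println("found IP", ip)
		}
	}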
	I0918 21:24:49.312319   68762 main.go:141] libmachine: (newest-cni-560575) Reserving static IP address...
	I0918 21:24:49.312696   68762 main.go:141] libmachine: (newest-cni-560575) DBG | unable to find host DHCP lease matching {name: "newest-cni-560575", mac: "52:54:00:35:4b:9c", ip: "192.168.72.106"} in network mk-newest-cni-560575
	I0918 21:24:49.400722   68762 main.go:141] libmachine: (newest-cni-560575) Reserved static IP address: 192.168.72.106
	I0918 21:24:49.400757   68762 main.go:141] libmachine: (newest-cni-560575) DBG | Getting to WaitForSSH function...
	I0918 21:24:49.400767   68762 main.go:141] libmachine: (newest-cni-560575) Waiting for SSH to be available...
	I0918 21:24:49.404078   68762 main.go:141] libmachine: (newest-cni-560575) DBG | domain newest-cni-560575 has defined MAC address 52:54:00:35:4b:9c in network mk-newest-cni-560575
	I0918 21:24:49.404669   68762 main.go:141] libmachine: (newest-cni-560575) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:4b:9c", ip: ""} in network mk-newest-cni-560575: {Iface:virbr3 ExpiryTime:2024-09-18 22:24:44 +0000 UTC Type:0 Mac:52:54:00:35:4b:9c Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:minikube Clientid:01:52:54:00:35:4b:9c}
	I0918 21:24:49.404698   68762 main.go:141] libmachine: (newest-cni-560575) DBG | domain newest-cni-560575 has defined IP address 192.168.72.106 and MAC address 52:54:00:35:4b:9c in network mk-newest-cni-560575
	I0918 21:24:49.404879   68762 main.go:141] libmachine: (newest-cni-560575) DBG | Using SSH client type: external
	I0918 21:24:49.404896   68762 main.go:141] libmachine: (newest-cni-560575) DBG | Using SSH private key: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/newest-cni-560575/id_rsa (-rw-------)
	I0918 21:24:49.404978   68762 main.go:141] libmachine: (newest-cni-560575) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.106 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19667-7671/.minikube/machines/newest-cni-560575/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0918 21:24:49.405004   68762 main.go:141] libmachine: (newest-cni-560575) DBG | About to run SSH command:
	I0918 21:24:49.405016   68762 main.go:141] libmachine: (newest-cni-560575) DBG | exit 0
	I0918 21:24:49.532001   68762 main.go:141] libmachine: (newest-cni-560575) DBG | SSH cmd err, output: <nil>: 
	I0918 21:24:49.532325   68762 main.go:141] libmachine: (newest-cni-560575) KVM machine creation complete!
	I0918 21:24:49.532665   68762 main.go:141] libmachine: (newest-cni-560575) Calling .GetConfigRaw
	I0918 21:24:49.533204   68762 main.go:141] libmachine: (newest-cni-560575) Calling .DriverName
	I0918 21:24:49.533416   68762 main.go:141] libmachine: (newest-cni-560575) Calling .DriverName
	I0918 21:24:49.533592   68762 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0918 21:24:49.533606   68762 main.go:141] libmachine: (newest-cni-560575) Calling .GetState
	I0918 21:24:49.534864   68762 main.go:141] libmachine: Detecting operating system of created instance...
	I0918 21:24:49.534886   68762 main.go:141] libmachine: Waiting for SSH to be available...
	I0918 21:24:49.534892   68762 main.go:141] libmachine: Getting to WaitForSSH function...
	I0918 21:24:49.534900   68762 main.go:141] libmachine: (newest-cni-560575) Calling .GetSSHHostname
	I0918 21:24:49.537413   68762 main.go:141] libmachine: (newest-cni-560575) DBG | domain newest-cni-560575 has defined MAC address 52:54:00:35:4b:9c in network mk-newest-cni-560575
	I0918 21:24:49.537828   68762 main.go:141] libmachine: (newest-cni-560575) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:4b:9c", ip: ""} in network mk-newest-cni-560575: {Iface:virbr3 ExpiryTime:2024-09-18 22:24:44 +0000 UTC Type:0 Mac:52:54:00:35:4b:9c Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:newest-cni-560575 Clientid:01:52:54:00:35:4b:9c}
	I0918 21:24:49.537863   68762 main.go:141] libmachine: (newest-cni-560575) DBG | domain newest-cni-560575 has defined IP address 192.168.72.106 and MAC address 52:54:00:35:4b:9c in network mk-newest-cni-560575
	I0918 21:24:49.537989   68762 main.go:141] libmachine: (newest-cni-560575) Calling .GetSSHPort
	I0918 21:24:49.538176   68762 main.go:141] libmachine: (newest-cni-560575) Calling .GetSSHKeyPath
	I0918 21:24:49.538360   68762 main.go:141] libmachine: (newest-cni-560575) Calling .GetSSHKeyPath
	I0918 21:24:49.538502   68762 main.go:141] libmachine: (newest-cni-560575) Calling .GetSSHUsername
	I0918 21:24:49.538710   68762 main.go:141] libmachine: Using SSH client type: native
	I0918 21:24:49.538954   68762 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.106 22 <nil> <nil>}
	I0918 21:24:49.538970   68762 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0918 21:24:49.643382   68762 main.go:141] libmachine: SSH cmd err, output: <nil>: 
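The availability probe above simply runs "exit 0" over SSH with the machine's generated key; a nil command error means the guest is up and accepting commands. A minimal sketch of that check using golang.org/x/crypto/ssh, with the address and key path taken from the log and error handling simplified:

	// Sketch: confirm SSH availability by running "exit 0" on the guest.
	package main

	import (
		"log"
		"os"
		"time"

		"golang.org/x/crypto/ssh"
	)

	func sshAlive(addr, keyPath string) error {
		key, err := os.ReadFile(keyPath)
		if err != nil {
			return err
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			return err
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
			Timeout:         10 * time.Second,
		}
		client, err := ssh.Dial("tcp", addr, cfg)
		if err != nil {
			return err
		}
		defer client.Close()

		sess, err := client.NewSession()
		if err != nil {
			return err
		}
		defer sess.Close()
		return sess.Run("exit 0") // nil means the guest executed the command
	}

	func main() {
		key := "/home/jenkins/minikube-integration/19667-7671/.minikube/machines/newest-cni-560575/id_rsa"
		if err := sshAlive("192.168.72.106:22", key); err != nil {
			log.Fatal(err)
		}
		log.Println("SSH is available")
	}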
	I0918 21:24:49.643412   68762 main.go:141] libmachine: Detecting the provisioner...
	I0918 21:24:49.643423   68762 main.go:141] libmachine: (newest-cni-560575) Calling .GetSSHHostname
	I0918 21:24:49.646273   68762 main.go:141] libmachine: (newest-cni-560575) DBG | domain newest-cni-560575 has defined MAC address 52:54:00:35:4b:9c in network mk-newest-cni-560575
	I0918 21:24:49.646613   68762 main.go:141] libmachine: (newest-cni-560575) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:4b:9c", ip: ""} in network mk-newest-cni-560575: {Iface:virbr3 ExpiryTime:2024-09-18 22:24:44 +0000 UTC Type:0 Mac:52:54:00:35:4b:9c Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:newest-cni-560575 Clientid:01:52:54:00:35:4b:9c}
	I0918 21:24:49.646653   68762 main.go:141] libmachine: (newest-cni-560575) DBG | domain newest-cni-560575 has defined IP address 192.168.72.106 and MAC address 52:54:00:35:4b:9c in network mk-newest-cni-560575
	I0918 21:24:49.646871   68762 main.go:141] libmachine: (newest-cni-560575) Calling .GetSSHPort
	I0918 21:24:49.647073   68762 main.go:141] libmachine: (newest-cni-560575) Calling .GetSSHKeyPath
	I0918 21:24:49.647283   68762 main.go:141] libmachine: (newest-cni-560575) Calling .GetSSHKeyPath
	I0918 21:24:49.647473   68762 main.go:141] libmachine: (newest-cni-560575) Calling .GetSSHUsername
	I0918 21:24:49.647693   68762 main.go:141] libmachine: Using SSH client type: native
	I0918 21:24:49.647921   68762 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.106 22 <nil> <nil>}
	I0918 21:24:49.647936   68762 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0918 21:24:49.756919   68762 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0918 21:24:49.757076   68762 main.go:141] libmachine: found compatible host: buildroot
	I0918 21:24:49.757097   68762 main.go:141] libmachine: Provisioning with buildroot...
	I0918 21:24:49.757107   68762 main.go:141] libmachine: (newest-cni-560575) Calling .GetMachineName
	I0918 21:24:49.757415   68762 buildroot.go:166] provisioning hostname "newest-cni-560575"
	I0918 21:24:49.757434   68762 main.go:141] libmachine: (newest-cni-560575) Calling .GetMachineName
	I0918 21:24:49.757653   68762 main.go:141] libmachine: (newest-cni-560575) Calling .GetSSHHostname
	I0918 21:24:49.760487   68762 main.go:141] libmachine: (newest-cni-560575) DBG | domain newest-cni-560575 has defined MAC address 52:54:00:35:4b:9c in network mk-newest-cni-560575
	I0918 21:24:49.761022   68762 main.go:141] libmachine: (newest-cni-560575) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:4b:9c", ip: ""} in network mk-newest-cni-560575: {Iface:virbr3 ExpiryTime:2024-09-18 22:24:44 +0000 UTC Type:0 Mac:52:54:00:35:4b:9c Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:newest-cni-560575 Clientid:01:52:54:00:35:4b:9c}
	I0918 21:24:49.761066   68762 main.go:141] libmachine: (newest-cni-560575) DBG | domain newest-cni-560575 has defined IP address 192.168.72.106 and MAC address 52:54:00:35:4b:9c in network mk-newest-cni-560575
	I0918 21:24:49.761285   68762 main.go:141] libmachine: (newest-cni-560575) Calling .GetSSHPort
	I0918 21:24:49.761501   68762 main.go:141] libmachine: (newest-cni-560575) Calling .GetSSHKeyPath
	I0918 21:24:49.761705   68762 main.go:141] libmachine: (newest-cni-560575) Calling .GetSSHKeyPath
	I0918 21:24:49.761935   68762 main.go:141] libmachine: (newest-cni-560575) Calling .GetSSHUsername
	I0918 21:24:49.762265   68762 main.go:141] libmachine: Using SSH client type: native
	I0918 21:24:49.762443   68762 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.106 22 <nil> <nil>}
	I0918 21:24:49.762456   68762 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-560575 && echo "newest-cni-560575" | sudo tee /etc/hostname
	I0918 21:24:49.883171   68762 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-560575
	
	I0918 21:24:49.883229   68762 main.go:141] libmachine: (newest-cni-560575) Calling .GetSSHHostname
	I0918 21:24:49.886567   68762 main.go:141] libmachine: (newest-cni-560575) DBG | domain newest-cni-560575 has defined MAC address 52:54:00:35:4b:9c in network mk-newest-cni-560575
	I0918 21:24:49.886916   68762 main.go:141] libmachine: (newest-cni-560575) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:4b:9c", ip: ""} in network mk-newest-cni-560575: {Iface:virbr3 ExpiryTime:2024-09-18 22:24:44 +0000 UTC Type:0 Mac:52:54:00:35:4b:9c Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:newest-cni-560575 Clientid:01:52:54:00:35:4b:9c}
	I0918 21:24:49.886946   68762 main.go:141] libmachine: (newest-cni-560575) DBG | domain newest-cni-560575 has defined IP address 192.168.72.106 and MAC address 52:54:00:35:4b:9c in network mk-newest-cni-560575
	I0918 21:24:49.887221   68762 main.go:141] libmachine: (newest-cni-560575) Calling .GetSSHPort
	I0918 21:24:49.887448   68762 main.go:141] libmachine: (newest-cni-560575) Calling .GetSSHKeyPath
	I0918 21:24:49.887650   68762 main.go:141] libmachine: (newest-cni-560575) Calling .GetSSHKeyPath
	I0918 21:24:49.887874   68762 main.go:141] libmachine: (newest-cni-560575) Calling .GetSSHUsername
	I0918 21:24:49.888080   68762 main.go:141] libmachine: Using SSH client type: native
	I0918 21:24:49.888269   68762 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.106 22 <nil> <nil>}
	I0918 21:24:49.888285   68762 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-560575' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-560575/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-560575' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0918 21:24:50.001509   68762 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0918 21:24:50.001536   68762 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19667-7671/.minikube CaCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19667-7671/.minikube}
	I0918 21:24:50.001569   68762 buildroot.go:174] setting up certificates
	I0918 21:24:50.001582   68762 provision.go:84] configureAuth start
	I0918 21:24:50.001603   68762 main.go:141] libmachine: (newest-cni-560575) Calling .GetMachineName
	I0918 21:24:50.001924   68762 main.go:141] libmachine: (newest-cni-560575) Calling .GetIP
	I0918 21:24:50.004527   68762 main.go:141] libmachine: (newest-cni-560575) DBG | domain newest-cni-560575 has defined MAC address 52:54:00:35:4b:9c in network mk-newest-cni-560575
	I0918 21:24:50.004869   68762 main.go:141] libmachine: (newest-cni-560575) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:4b:9c", ip: ""} in network mk-newest-cni-560575: {Iface:virbr3 ExpiryTime:2024-09-18 22:24:44 +0000 UTC Type:0 Mac:52:54:00:35:4b:9c Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:newest-cni-560575 Clientid:01:52:54:00:35:4b:9c}
	I0918 21:24:50.004906   68762 main.go:141] libmachine: (newest-cni-560575) DBG | domain newest-cni-560575 has defined IP address 192.168.72.106 and MAC address 52:54:00:35:4b:9c in network mk-newest-cni-560575
	I0918 21:24:50.005086   68762 main.go:141] libmachine: (newest-cni-560575) Calling .GetSSHHostname
	I0918 21:24:50.007161   68762 main.go:141] libmachine: (newest-cni-560575) DBG | domain newest-cni-560575 has defined MAC address 52:54:00:35:4b:9c in network mk-newest-cni-560575
	I0918 21:24:50.007490   68762 main.go:141] libmachine: (newest-cni-560575) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:4b:9c", ip: ""} in network mk-newest-cni-560575: {Iface:virbr3 ExpiryTime:2024-09-18 22:24:44 +0000 UTC Type:0 Mac:52:54:00:35:4b:9c Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:newest-cni-560575 Clientid:01:52:54:00:35:4b:9c}
	I0918 21:24:50.007516   68762 main.go:141] libmachine: (newest-cni-560575) DBG | domain newest-cni-560575 has defined IP address 192.168.72.106 and MAC address 52:54:00:35:4b:9c in network mk-newest-cni-560575
	I0918 21:24:50.007663   68762 provision.go:143] copyHostCerts
	I0918 21:24:50.007737   68762 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem, removing ...
	I0918 21:24:50.007751   68762 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem
	I0918 21:24:50.007830   68762 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem (1082 bytes)
	I0918 21:24:50.007956   68762 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem, removing ...
	I0918 21:24:50.007967   68762 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem
	I0918 21:24:50.008007   68762 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem (1123 bytes)
	I0918 21:24:50.008110   68762 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem, removing ...
	I0918 21:24:50.008122   68762 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem
	I0918 21:24:50.008161   68762 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem (1679 bytes)
	I0918 21:24:50.008224   68762 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem org=jenkins.newest-cni-560575 san=[127.0.0.1 192.168.72.106 localhost minikube newest-cni-560575]
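The server certificate above is issued from the profile's CA with SANs covering 127.0.0.1, the machine IP, localhost, minikube, and the profile name. A self-contained sketch of issuing such a certificate with Go's crypto/x509 (a throwaway CA is generated here so the example runs on its own; the real flow loads the existing ca.pem/ca-key.pem instead):

	// Sketch: issue a server certificate with the SAN list from the log.
	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"log"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Throwaway CA so the sketch is self-contained.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(10, 0, 0),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		if err != nil {
			log.Fatal(err)
		}
		caCert, _ := x509.ParseCertificate(caDER)

		// Server certificate with the SAN entries listed in the log line above.
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-560575"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(3, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"localhost", "minikube", "newest-cni-560575"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.106")},
		}
		srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		if err != nil {
			log.Fatal(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
	}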
	I0918 21:24:50.164605   68762 provision.go:177] copyRemoteCerts
	I0918 21:24:50.164663   68762 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0918 21:24:50.164688   68762 main.go:141] libmachine: (newest-cni-560575) Calling .GetSSHHostname
	I0918 21:24:50.167779   68762 main.go:141] libmachine: (newest-cni-560575) DBG | domain newest-cni-560575 has defined MAC address 52:54:00:35:4b:9c in network mk-newest-cni-560575
	I0918 21:24:50.168096   68762 main.go:141] libmachine: (newest-cni-560575) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:4b:9c", ip: ""} in network mk-newest-cni-560575: {Iface:virbr3 ExpiryTime:2024-09-18 22:24:44 +0000 UTC Type:0 Mac:52:54:00:35:4b:9c Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:newest-cni-560575 Clientid:01:52:54:00:35:4b:9c}
	I0918 21:24:50.168134   68762 main.go:141] libmachine: (newest-cni-560575) DBG | domain newest-cni-560575 has defined IP address 192.168.72.106 and MAC address 52:54:00:35:4b:9c in network mk-newest-cni-560575
	I0918 21:24:50.168348   68762 main.go:141] libmachine: (newest-cni-560575) Calling .GetSSHPort
	I0918 21:24:50.168578   68762 main.go:141] libmachine: (newest-cni-560575) Calling .GetSSHKeyPath
	I0918 21:24:50.168782   68762 main.go:141] libmachine: (newest-cni-560575) Calling .GetSSHUsername
	I0918 21:24:50.168991   68762 sshutil.go:53] new ssh client: &{IP:192.168.72.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/newest-cni-560575/id_rsa Username:docker}
	I0918 21:24:50.254679   68762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0918 21:24:50.279379   68762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0918 21:24:50.305345   68762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0918 21:24:50.332671   68762 provision.go:87] duration metric: took 331.076865ms to configureAuth
	I0918 21:24:50.332702   68762 buildroot.go:189] setting minikube options for container-runtime
	I0918 21:24:50.332993   68762 config.go:182] Loaded profile config "newest-cni-560575": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0918 21:24:50.333088   68762 main.go:141] libmachine: (newest-cni-560575) Calling .GetSSHHostname
	I0918 21:24:50.336029   68762 main.go:141] libmachine: (newest-cni-560575) DBG | domain newest-cni-560575 has defined MAC address 52:54:00:35:4b:9c in network mk-newest-cni-560575
	I0918 21:24:50.336458   68762 main.go:141] libmachine: (newest-cni-560575) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:4b:9c", ip: ""} in network mk-newest-cni-560575: {Iface:virbr3 ExpiryTime:2024-09-18 22:24:44 +0000 UTC Type:0 Mac:52:54:00:35:4b:9c Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:newest-cni-560575 Clientid:01:52:54:00:35:4b:9c}
	I0918 21:24:50.336488   68762 main.go:141] libmachine: (newest-cni-560575) DBG | domain newest-cni-560575 has defined IP address 192.168.72.106 and MAC address 52:54:00:35:4b:9c in network mk-newest-cni-560575
	I0918 21:24:50.336655   68762 main.go:141] libmachine: (newest-cni-560575) Calling .GetSSHPort
	I0918 21:24:50.336892   68762 main.go:141] libmachine: (newest-cni-560575) Calling .GetSSHKeyPath
	I0918 21:24:50.337054   68762 main.go:141] libmachine: (newest-cni-560575) Calling .GetSSHKeyPath
	I0918 21:24:50.337243   68762 main.go:141] libmachine: (newest-cni-560575) Calling .GetSSHUsername
	I0918 21:24:50.337478   68762 main.go:141] libmachine: Using SSH client type: native
	I0918 21:24:50.337657   68762 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.106 22 <nil> <nil>}
	I0918 21:24:50.337672   68762 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0918 21:24:50.583100   68762 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0918 21:24:50.583146   68762 main.go:141] libmachine: Checking connection to Docker...
	I0918 21:24:50.583163   68762 main.go:141] libmachine: (newest-cni-560575) Calling .GetURL
	I0918 21:24:50.584556   68762 main.go:141] libmachine: (newest-cni-560575) DBG | Using libvirt version 6000000
	I0918 21:24:50.586814   68762 main.go:141] libmachine: (newest-cni-560575) DBG | domain newest-cni-560575 has defined MAC address 52:54:00:35:4b:9c in network mk-newest-cni-560575
	I0918 21:24:50.587156   68762 main.go:141] libmachine: (newest-cni-560575) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:4b:9c", ip: ""} in network mk-newest-cni-560575: {Iface:virbr3 ExpiryTime:2024-09-18 22:24:44 +0000 UTC Type:0 Mac:52:54:00:35:4b:9c Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:newest-cni-560575 Clientid:01:52:54:00:35:4b:9c}
	I0918 21:24:50.587186   68762 main.go:141] libmachine: (newest-cni-560575) DBG | domain newest-cni-560575 has defined IP address 192.168.72.106 and MAC address 52:54:00:35:4b:9c in network mk-newest-cni-560575
	I0918 21:24:50.587385   68762 main.go:141] libmachine: Docker is up and running!
	I0918 21:24:50.587403   68762 main.go:141] libmachine: Reticulating splines...
	I0918 21:24:50.587411   68762 client.go:171] duration metric: took 20.794494347s to LocalClient.Create
	I0918 21:24:50.587449   68762 start.go:167] duration metric: took 20.794556295s to libmachine.API.Create "newest-cni-560575"
	I0918 21:24:50.587460   68762 start.go:293] postStartSetup for "newest-cni-560575" (driver="kvm2")
	I0918 21:24:50.587472   68762 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0918 21:24:50.587496   68762 main.go:141] libmachine: (newest-cni-560575) Calling .DriverName
	I0918 21:24:50.587857   68762 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0918 21:24:50.587888   68762 main.go:141] libmachine: (newest-cni-560575) Calling .GetSSHHostname
	I0918 21:24:50.590433   68762 main.go:141] libmachine: (newest-cni-560575) DBG | domain newest-cni-560575 has defined MAC address 52:54:00:35:4b:9c in network mk-newest-cni-560575
	I0918 21:24:50.590826   68762 main.go:141] libmachine: (newest-cni-560575) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:4b:9c", ip: ""} in network mk-newest-cni-560575: {Iface:virbr3 ExpiryTime:2024-09-18 22:24:44 +0000 UTC Type:0 Mac:52:54:00:35:4b:9c Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:newest-cni-560575 Clientid:01:52:54:00:35:4b:9c}
	I0918 21:24:50.590848   68762 main.go:141] libmachine: (newest-cni-560575) DBG | domain newest-cni-560575 has defined IP address 192.168.72.106 and MAC address 52:54:00:35:4b:9c in network mk-newest-cni-560575
	I0918 21:24:50.591001   68762 main.go:141] libmachine: (newest-cni-560575) Calling .GetSSHPort
	I0918 21:24:50.591226   68762 main.go:141] libmachine: (newest-cni-560575) Calling .GetSSHKeyPath
	I0918 21:24:50.591408   68762 main.go:141] libmachine: (newest-cni-560575) Calling .GetSSHUsername
	I0918 21:24:50.591589   68762 sshutil.go:53] new ssh client: &{IP:192.168.72.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/newest-cni-560575/id_rsa Username:docker}
	I0918 21:24:50.674563   68762 ssh_runner.go:195] Run: cat /etc/os-release
	I0918 21:24:50.678653   68762 info.go:137] Remote host: Buildroot 2023.02.9
	I0918 21:24:50.678678   68762 filesync.go:126] Scanning /home/jenkins/minikube-integration/19667-7671/.minikube/addons for local assets ...
	I0918 21:24:50.678751   68762 filesync.go:126] Scanning /home/jenkins/minikube-integration/19667-7671/.minikube/files for local assets ...
	I0918 21:24:50.678847   68762 filesync.go:149] local asset: /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem -> 148782.pem in /etc/ssl/certs
	I0918 21:24:50.678964   68762 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0918 21:24:50.689759   68762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem --> /etc/ssl/certs/148782.pem (1708 bytes)
	I0918 21:24:50.714314   68762 start.go:296] duration metric: took 126.84036ms for postStartSetup
	I0918 21:24:50.714407   68762 main.go:141] libmachine: (newest-cni-560575) Calling .GetConfigRaw
	I0918 21:24:50.715040   68762 main.go:141] libmachine: (newest-cni-560575) Calling .GetIP
	I0918 21:24:50.717810   68762 main.go:141] libmachine: (newest-cni-560575) DBG | domain newest-cni-560575 has defined MAC address 52:54:00:35:4b:9c in network mk-newest-cni-560575
	I0918 21:24:50.718103   68762 main.go:141] libmachine: (newest-cni-560575) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:4b:9c", ip: ""} in network mk-newest-cni-560575: {Iface:virbr3 ExpiryTime:2024-09-18 22:24:44 +0000 UTC Type:0 Mac:52:54:00:35:4b:9c Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:newest-cni-560575 Clientid:01:52:54:00:35:4b:9c}
	I0918 21:24:50.718134   68762 main.go:141] libmachine: (newest-cni-560575) DBG | domain newest-cni-560575 has defined IP address 192.168.72.106 and MAC address 52:54:00:35:4b:9c in network mk-newest-cni-560575
	I0918 21:24:50.718321   68762 profile.go:143] Saving config to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/newest-cni-560575/config.json ...
	I0918 21:24:50.718542   68762 start.go:128] duration metric: took 20.945662753s to createHost
	I0918 21:24:50.718564   68762 main.go:141] libmachine: (newest-cni-560575) Calling .GetSSHHostname
	I0918 21:24:50.720716   68762 main.go:141] libmachine: (newest-cni-560575) DBG | domain newest-cni-560575 has defined MAC address 52:54:00:35:4b:9c in network mk-newest-cni-560575
	I0918 21:24:50.721013   68762 main.go:141] libmachine: (newest-cni-560575) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:4b:9c", ip: ""} in network mk-newest-cni-560575: {Iface:virbr3 ExpiryTime:2024-09-18 22:24:44 +0000 UTC Type:0 Mac:52:54:00:35:4b:9c Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:newest-cni-560575 Clientid:01:52:54:00:35:4b:9c}
	I0918 21:24:50.721044   68762 main.go:141] libmachine: (newest-cni-560575) DBG | domain newest-cni-560575 has defined IP address 192.168.72.106 and MAC address 52:54:00:35:4b:9c in network mk-newest-cni-560575
	I0918 21:24:50.721269   68762 main.go:141] libmachine: (newest-cni-560575) Calling .GetSSHPort
	I0918 21:24:50.721497   68762 main.go:141] libmachine: (newest-cni-560575) Calling .GetSSHKeyPath
	I0918 21:24:50.721688   68762 main.go:141] libmachine: (newest-cni-560575) Calling .GetSSHKeyPath
	I0918 21:24:50.721855   68762 main.go:141] libmachine: (newest-cni-560575) Calling .GetSSHUsername
	I0918 21:24:50.722012   68762 main.go:141] libmachine: Using SSH client type: native
	I0918 21:24:50.722212   68762 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.106 22 <nil> <nil>}
	I0918 21:24:50.722227   68762 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0918 21:24:50.824951   68762 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726694690.796253583
	
	I0918 21:24:50.824984   68762 fix.go:216] guest clock: 1726694690.796253583
	I0918 21:24:50.824993   68762 fix.go:229] Guest: 2024-09-18 21:24:50.796253583 +0000 UTC Remote: 2024-09-18 21:24:50.718554594 +0000 UTC m=+21.059245500 (delta=77.698989ms)
	I0918 21:24:50.825021   68762 fix.go:200] guest clock delta is within tolerance: 77.698989ms
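For orientation only, a sketch that is not part of the minikube output: the fix.go lines above derive the guest/host clock skew from the guest's `date +%s.%N` output and accept the start when the delta is small. The Go below redoes that arithmetic with the values from the log; parseGuestClock and the 2-second tolerance are illustrative assumptions, not minikube's code or constants.

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock converts the guest's `date +%s.%N` output into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		frac := (parts[1] + "000000000")[:9] // pad/trim the fraction to nanoseconds
		if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	// Guest `date +%s.%N` output and the host-side timestamp, both from the log above.
	guest, err := parseGuestClock("1726694690.796253583")
	if err != nil {
		panic(err)
	}
	host := time.Date(2024, 9, 18, 21, 24, 50, 718554594, time.UTC)
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	// 2s is an assumed tolerance for this sketch, not minikube's actual constant.
	fmt.Printf("guest clock delta: %v (within 2s tolerance: %v)\n", delta, delta <= 2*time.Second)
}

Running this prints a delta of 77.698989ms, matching the delta recorded in the log line above.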
	I0918 21:24:50.825028   68762 start.go:83] releasing machines lock for "newest-cni-560575", held for 21.052256187s
	I0918 21:24:50.825054   68762 main.go:141] libmachine: (newest-cni-560575) Calling .DriverName
	I0918 21:24:50.825424   68762 main.go:141] libmachine: (newest-cni-560575) Calling .GetIP
	I0918 21:24:50.828371   68762 main.go:141] libmachine: (newest-cni-560575) DBG | domain newest-cni-560575 has defined MAC address 52:54:00:35:4b:9c in network mk-newest-cni-560575
	I0918 21:24:50.828743   68762 main.go:141] libmachine: (newest-cni-560575) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:4b:9c", ip: ""} in network mk-newest-cni-560575: {Iface:virbr3 ExpiryTime:2024-09-18 22:24:44 +0000 UTC Type:0 Mac:52:54:00:35:4b:9c Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:newest-cni-560575 Clientid:01:52:54:00:35:4b:9c}
	I0918 21:24:50.828768   68762 main.go:141] libmachine: (newest-cni-560575) DBG | domain newest-cni-560575 has defined IP address 192.168.72.106 and MAC address 52:54:00:35:4b:9c in network mk-newest-cni-560575
	I0918 21:24:50.828979   68762 main.go:141] libmachine: (newest-cni-560575) Calling .DriverName
	I0918 21:24:50.829560   68762 main.go:141] libmachine: (newest-cni-560575) Calling .DriverName
	I0918 21:24:50.829792   68762 main.go:141] libmachine: (newest-cni-560575) Calling .DriverName
	I0918 21:24:50.829920   68762 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0918 21:24:50.829959   68762 main.go:141] libmachine: (newest-cni-560575) Calling .GetSSHHostname
	I0918 21:24:50.830059   68762 ssh_runner.go:195] Run: cat /version.json
	I0918 21:24:50.830087   68762 main.go:141] libmachine: (newest-cni-560575) Calling .GetSSHHostname
	I0918 21:24:50.832924   68762 main.go:141] libmachine: (newest-cni-560575) DBG | domain newest-cni-560575 has defined MAC address 52:54:00:35:4b:9c in network mk-newest-cni-560575
	I0918 21:24:50.833087   68762 main.go:141] libmachine: (newest-cni-560575) DBG | domain newest-cni-560575 has defined MAC address 52:54:00:35:4b:9c in network mk-newest-cni-560575
	I0918 21:24:50.833219   68762 main.go:141] libmachine: (newest-cni-560575) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:4b:9c", ip: ""} in network mk-newest-cni-560575: {Iface:virbr3 ExpiryTime:2024-09-18 22:24:44 +0000 UTC Type:0 Mac:52:54:00:35:4b:9c Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:newest-cni-560575 Clientid:01:52:54:00:35:4b:9c}
	I0918 21:24:50.833239   68762 main.go:141] libmachine: (newest-cni-560575) DBG | domain newest-cni-560575 has defined IP address 192.168.72.106 and MAC address 52:54:00:35:4b:9c in network mk-newest-cni-560575
	I0918 21:24:50.833487   68762 main.go:141] libmachine: (newest-cni-560575) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:4b:9c", ip: ""} in network mk-newest-cni-560575: {Iface:virbr3 ExpiryTime:2024-09-18 22:24:44 +0000 UTC Type:0 Mac:52:54:00:35:4b:9c Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:newest-cni-560575 Clientid:01:52:54:00:35:4b:9c}
	I0918 21:24:50.833513   68762 main.go:141] libmachine: (newest-cni-560575) Calling .GetSSHPort
	I0918 21:24:50.833524   68762 main.go:141] libmachine: (newest-cni-560575) DBG | domain newest-cni-560575 has defined IP address 192.168.72.106 and MAC address 52:54:00:35:4b:9c in network mk-newest-cni-560575
	I0918 21:24:50.833712   68762 main.go:141] libmachine: (newest-cni-560575) Calling .GetSSHPort
	I0918 21:24:50.833718   68762 main.go:141] libmachine: (newest-cni-560575) Calling .GetSSHKeyPath
	I0918 21:24:50.833841   68762 main.go:141] libmachine: (newest-cni-560575) Calling .GetSSHUsername
	I0918 21:24:50.833894   68762 main.go:141] libmachine: (newest-cni-560575) Calling .GetSSHKeyPath
	I0918 21:24:50.834020   68762 sshutil.go:53] new ssh client: &{IP:192.168.72.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/newest-cni-560575/id_rsa Username:docker}
	I0918 21:24:50.834059   68762 main.go:141] libmachine: (newest-cni-560575) Calling .GetSSHUsername
	I0918 21:24:50.834243   68762 sshutil.go:53] new ssh client: &{IP:192.168.72.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/newest-cni-560575/id_rsa Username:docker}
	I0918 21:24:50.957070   68762 ssh_runner.go:195] Run: systemctl --version
	I0918 21:24:50.963603   68762 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0918 21:24:51.125469   68762 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0918 21:24:51.131255   68762 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0918 21:24:51.131338   68762 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0918 21:24:51.148139   68762 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0918 21:24:51.148164   68762 start.go:495] detecting cgroup driver to use...
	I0918 21:24:51.148236   68762 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0918 21:24:51.167363   68762 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0918 21:24:51.183795   68762 docker.go:217] disabling cri-docker service (if available) ...
	I0918 21:24:51.183850   68762 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0918 21:24:51.199348   68762 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0918 21:24:51.214037   68762 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0918 21:24:51.335687   68762 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0918 21:24:51.494974   68762 docker.go:233] disabling docker service ...
	I0918 21:24:51.495040   68762 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0918 21:24:51.518152   68762 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0918 21:24:51.533649   68762 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0918 21:24:51.683465   68762 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0918 21:24:51.822346   68762 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0918 21:24:51.837056   68762 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0918 21:24:51.858251   68762 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0918 21:24:51.858320   68762 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:24:51.869826   68762 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0918 21:24:51.869897   68762 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:24:51.880928   68762 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:24:51.892808   68762 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:24:51.903333   68762 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0918 21:24:51.915603   68762 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:24:51.926343   68762 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:24:51.944943   68762 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
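The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf so CRI-O uses the registry.k8s.io/pause:3.10 pause image and the cgroupfs cgroup manager. A minimal Go sketch of the same edit, assuming the file layout shown in the log; configureCrio is an illustrative helper, not minikube code.

package main

import (
	"fmt"
	"os"
	"regexp"
)

// configureCrio mirrors the sed edits above: point CRI-O's drop-in config at the
// chosen pause image and cgroup manager. Needs root to write under /etc/crio.
func configureCrio(path, pauseImage, cgroupManager string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(fmt.Sprintf("pause_image = %q", pauseImage)))
	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(data, []byte(fmt.Sprintf("cgroup_manager = %q", cgroupManager)))
	return os.WriteFile(path, data, 0o644)
}

func main() {
	err := configureCrio("/etc/crio/crio.conf.d/02-crio.conf",
		"registry.k8s.io/pause:3.10", "cgroupfs")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}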
	I0918 21:24:51.957872   68762 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0918 21:24:51.968940   68762 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0918 21:24:51.969007   68762 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0918 21:24:51.982919   68762 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
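The commands above prepare the guest kernel for bridged pod traffic: the sysctl probe fails until br_netfilter is loaded, so the module is loaded and IPv4 forwarding is switched on. A rough Go equivalent of that sequence; ensureBridgeNetfilter is an illustrative name, and like the logged commands it needs root on a Linux guest.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// ensureBridgeNetfilter follows the logged sequence: probe the bridge netfilter
// sysctl, load br_netfilter if it is not there yet, then enable IPv4 forwarding.
func ensureBridgeNetfilter() error {
	if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		// The sysctl only appears once the br_netfilter module is loaded.
		if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
			return fmt.Errorf("loading br_netfilter: %w", err)
		}
	}
	return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644)
}

func main() {
	if err := ensureBridgeNetfilter(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}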
	I0918 21:24:51.993056   68762 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 21:24:52.127963   68762 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0918 21:24:52.225668   68762 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0918 21:24:52.225771   68762 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0918 21:24:52.230714   68762 start.go:563] Will wait 60s for crictl version
	I0918 21:24:52.230784   68762 ssh_runner.go:195] Run: which crictl
	I0918 21:24:52.234564   68762 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0918 21:24:52.277431   68762 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0918 21:24:52.277522   68762 ssh_runner.go:195] Run: crio --version
	I0918 21:24:52.306708   68762 ssh_runner.go:195] Run: crio --version
	I0918 21:24:52.337602   68762 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0918 21:24:52.338903   68762 main.go:141] libmachine: (newest-cni-560575) Calling .GetIP
	I0918 21:24:52.341776   68762 main.go:141] libmachine: (newest-cni-560575) DBG | domain newest-cni-560575 has defined MAC address 52:54:00:35:4b:9c in network mk-newest-cni-560575
	I0918 21:24:52.342119   68762 main.go:141] libmachine: (newest-cni-560575) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:4b:9c", ip: ""} in network mk-newest-cni-560575: {Iface:virbr3 ExpiryTime:2024-09-18 22:24:44 +0000 UTC Type:0 Mac:52:54:00:35:4b:9c Iaid: IPaddr:192.168.72.106 Prefix:24 Hostname:newest-cni-560575 Clientid:01:52:54:00:35:4b:9c}
	I0918 21:24:52.342144   68762 main.go:141] libmachine: (newest-cni-560575) DBG | domain newest-cni-560575 has defined IP address 192.168.72.106 and MAC address 52:54:00:35:4b:9c in network mk-newest-cni-560575
	I0918 21:24:52.342395   68762 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0918 21:24:52.346358   68762 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0918 21:24:52.361010   68762 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0918 21:24:52.362128   68762 kubeadm.go:883] updating cluster {Name:newest-cni-560575 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.1 ClusterName:newest-cni-560575 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.106 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVer
sion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0918 21:24:52.362252   68762 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0918 21:24:52.362308   68762 ssh_runner.go:195] Run: sudo crictl images --output json
	I0918 21:24:52.396758   68762 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0918 21:24:52.396842   68762 ssh_runner.go:195] Run: which lz4
	I0918 21:24:52.400707   68762 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0918 21:24:52.404989   68762 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0918 21:24:52.405022   68762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0918 21:24:53.647993   68762 crio.go:462] duration metric: took 1.247323644s to copy over tarball
	I0918 21:24:53.648101   68762 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0918 21:24:55.828647   68762 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.180512988s)
	I0918 21:24:55.828681   68762 crio.go:469] duration metric: took 2.180636067s to extract the tarball
	I0918 21:24:55.828690   68762 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0918 21:24:55.866897   68762 ssh_runner.go:195] Run: sudo crictl images --output json
	I0918 21:24:55.911941   68762 crio.go:514] all images are preloaded for cri-o runtime.
	I0918 21:24:55.911970   68762 cache_images.go:84] Images are preloaded, skipping loading
	I0918 21:24:55.911978   68762 kubeadm.go:934] updating node { 192.168.72.106 8443 v1.31.1 crio true true} ...
	I0918 21:24:55.912107   68762 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --feature-gates=ServerSideApply=true --hostname-override=newest-cni-560575 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.106
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:newest-cni-560575 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0918 21:24:55.912194   68762 ssh_runner.go:195] Run: crio config
	I0918 21:24:55.967044   68762 cni.go:84] Creating CNI manager for ""
	I0918 21:24:55.967074   68762 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0918 21:24:55.967086   68762 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0918 21:24:55.967114   68762 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.72.106 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-560575 NodeName:newest-cni-560575 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.106"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:ma
p[] NodeIP:192.168.72.106 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0918 21:24:55.967260   68762 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.106
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-560575"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.106
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.106"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
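As an aside, not generated by minikube: the kubeadm config above wires the requested pod-network-cidr (10.42.0.0/16) into both networking.podSubnet and the kube-proxy clusterCIDR, alongside the default 10.96.0.0/12 service subnet. A small Go check of those CIDR relationships, using the values shown above; overlaps is an illustrative helper.

package main

import (
	"fmt"
	"net/netip"
)

// overlaps reports whether two CIDR prefixes share any addresses.
func overlaps(a, b netip.Prefix) bool {
	return a.Contains(b.Addr()) || b.Contains(a.Addr())
}

func main() {
	// Values copied from the generated config above.
	podSubnet := netip.MustParsePrefix("10.42.0.0/16")     // networking.podSubnet
	serviceSubnet := netip.MustParsePrefix("10.96.0.0/12") // networking.serviceSubnet
	clusterCIDR := netip.MustParsePrefix("10.42.0.0/16")   // KubeProxyConfiguration.clusterCIDR

	fmt.Println("kube-proxy clusterCIDR matches podSubnet:", clusterCIDR == podSubnet)
	fmt.Println("pod and service subnets overlap:", overlaps(podSubnet, serviceSubnet))
}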
	
	I0918 21:24:55.967315   68762 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0918 21:24:55.978670   68762 binaries.go:44] Found k8s binaries, skipping transfer
	I0918 21:24:55.978762   68762 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0918 21:24:55.988901   68762 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
	I0918 21:24:56.006258   68762 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0918 21:24:56.024308   68762 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2285 bytes)
	I0918 21:24:56.041582   68762 ssh_runner.go:195] Run: grep 192.168.72.106	control-plane.minikube.internal$ /etc/hosts
	I0918 21:24:56.045185   68762 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.106	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0918 21:24:56.057949   68762 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 21:24:56.175394   68762 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0918 21:24:56.192417   68762 certs.go:68] Setting up /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/newest-cni-560575 for IP: 192.168.72.106
	I0918 21:24:56.192440   68762 certs.go:194] generating shared ca certs ...
	I0918 21:24:56.192460   68762 certs.go:226] acquiring lock for ca certs: {Name:mkf54e0e71ed884c4372f9d3d4cb308ea4600185 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 21:24:56.192675   68762 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.key
	I0918 21:24:56.192737   68762 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.key
	I0918 21:24:56.192751   68762 certs.go:256] generating profile certs ...
	I0918 21:24:56.192821   68762 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/newest-cni-560575/client.key
	I0918 21:24:56.192852   68762 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/newest-cni-560575/client.crt with IP's: []
	I0918 21:24:56.620949   68762 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/newest-cni-560575/client.crt ...
	I0918 21:24:56.620979   68762 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/newest-cni-560575/client.crt: {Name:mk8634811559319244c578fbfbb865779bb502ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 21:24:56.621146   68762 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/newest-cni-560575/client.key ...
	I0918 21:24:56.621156   68762 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/newest-cni-560575/client.key: {Name:mk90234b75367f747040e6088f316ab21f5fd3f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 21:24:56.621231   68762 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/newest-cni-560575/apiserver.key.df886787
	I0918 21:24:56.621245   68762 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/newest-cni-560575/apiserver.crt.df886787 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.106]
	I0918 21:24:56.789374   68762 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/newest-cni-560575/apiserver.crt.df886787 ...
	I0918 21:24:56.789405   68762 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/newest-cni-560575/apiserver.crt.df886787: {Name:mke5d28e763d2cbf8d6d0f8f05125abde0ff4e88 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 21:24:56.789565   68762 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/newest-cni-560575/apiserver.key.df886787 ...
	I0918 21:24:56.789578   68762 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/newest-cni-560575/apiserver.key.df886787: {Name:mk496420da01bec64c9b32e7fc9db8a65454653b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 21:24:56.789644   68762 certs.go:381] copying /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/newest-cni-560575/apiserver.crt.df886787 -> /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/newest-cni-560575/apiserver.crt
	I0918 21:24:56.789758   68762 certs.go:385] copying /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/newest-cni-560575/apiserver.key.df886787 -> /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/newest-cni-560575/apiserver.key
	I0918 21:24:56.789819   68762 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/newest-cni-560575/proxy-client.key
	I0918 21:24:56.789839   68762 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/newest-cni-560575/proxy-client.crt with IP's: []
	I0918 21:24:56.959199   68762 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/newest-cni-560575/proxy-client.crt ...
	I0918 21:24:56.959234   68762 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/newest-cni-560575/proxy-client.crt: {Name:mk25b72eea98665d2d3e1c18203adef18e2789d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 21:24:56.959402   68762 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/newest-cni-560575/proxy-client.key ...
	I0918 21:24:56.959414   68762 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/newest-cni-560575/proxy-client.key: {Name:mk5f4b4e98d4c2c85cbfdf80e8012ac0a7cfaded Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 21:24:56.959594   68762 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878.pem (1338 bytes)
	W0918 21:24:56.959631   68762 certs.go:480] ignoring /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878_empty.pem, impossibly tiny 0 bytes
	I0918 21:24:56.959641   68762 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem (1675 bytes)
	I0918 21:24:56.959662   68762 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem (1082 bytes)
	I0918 21:24:56.959684   68762 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem (1123 bytes)
	I0918 21:24:56.959705   68762 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem (1679 bytes)
	I0918 21:24:56.959742   68762 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem (1708 bytes)
	I0918 21:24:56.960355   68762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0918 21:24:56.991813   68762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0918 21:24:57.021806   68762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0918 21:24:57.052463   68762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0918 21:24:57.078061   68762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/newest-cni-560575/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0918 21:24:57.105453   68762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/newest-cni-560575/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0918 21:24:57.130836   68762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/newest-cni-560575/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0918 21:24:57.155682   68762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/newest-cni-560575/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0918 21:24:57.180824   68762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0918 21:24:57.207380   68762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878.pem --> /usr/share/ca-certificates/14878.pem (1338 bytes)
	I0918 21:24:57.231910   68762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem --> /usr/share/ca-certificates/148782.pem (1708 bytes)
	I0918 21:24:57.257168   68762 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0918 21:24:57.278195   68762 ssh_runner.go:195] Run: openssl version
	I0918 21:24:57.284684   68762 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0918 21:24:57.298214   68762 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0918 21:24:57.303306   68762 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 18 19:39 /usr/share/ca-certificates/minikubeCA.pem
	I0918 21:24:57.303386   68762 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0918 21:24:57.309626   68762 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0918 21:24:57.320768   68762 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14878.pem && ln -fs /usr/share/ca-certificates/14878.pem /etc/ssl/certs/14878.pem"
	I0918 21:24:57.331578   68762 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14878.pem
	I0918 21:24:57.337075   68762 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 18 19:57 /usr/share/ca-certificates/14878.pem
	I0918 21:24:57.337145   68762 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14878.pem
	I0918 21:24:57.343347   68762 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14878.pem /etc/ssl/certs/51391683.0"
	I0918 21:24:57.354762   68762 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148782.pem && ln -fs /usr/share/ca-certificates/148782.pem /etc/ssl/certs/148782.pem"
	I0918 21:24:57.366959   68762 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148782.pem
	I0918 21:24:57.372111   68762 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 18 19:57 /usr/share/ca-certificates/148782.pem
	I0918 21:24:57.372177   68762 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148782.pem
	I0918 21:24:57.378507   68762 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148782.pem /etc/ssl/certs/3ec20f2e.0"
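The openssl and ln commands above install each CA under /etc/ssl/certs by its OpenSSL subject hash (the log shows b5213941.0 for minikubeCA.pem). A hedged Go sketch of the same pattern, shelling out to the `openssl x509 -hash` call seen in the log; linkBySubjectHash is an illustrative name, and it needs openssl plus write access to /etc/ssl/certs.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash reproduces the pattern above: ask openssl for the cert's
// subject hash and symlink <hash>.0 in the system cert directory to it, which
// is how OpenSSL-based clients look up trusted CAs.
func linkBySubjectHash(certPath, certsDir string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", fmt.Errorf("hashing %s: %w", certPath, err)
	}
	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // replace any stale link, as `ln -fs` does
	return link, os.Symlink(certPath, link)
}

func main() {
	link, err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("created", link) // the log above shows b5213941.0 for this CA
}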
	I0918 21:24:57.391155   68762 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0918 21:24:57.395608   68762 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0918 21:24:57.395660   68762 kubeadm.go:392] StartCluster: {Name:newest-cni-560575 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.1 ClusterName:newest-cni-560575 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.106 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersio
n:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 21:24:57.395759   68762 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0918 21:24:57.395810   68762 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0918 21:24:57.446046   68762 cri.go:89] found id: ""
	I0918 21:24:57.446151   68762 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0918 21:24:57.456828   68762 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0918 21:24:57.468092   68762 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0918 21:24:57.478936   68762 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0918 21:24:57.478956   68762 kubeadm.go:157] found existing configuration files:
	
	I0918 21:24:57.479012   68762 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0918 21:24:57.489610   68762 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0918 21:24:57.489688   68762 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0918 21:24:57.500646   68762 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0918 21:24:57.511152   68762 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0918 21:24:57.511227   68762 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0918 21:24:57.521464   68762 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0918 21:24:57.532919   68762 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0918 21:24:57.532975   68762 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0918 21:24:57.543290   68762 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0918 21:24:57.553105   68762 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0918 21:24:57.553174   68762 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0918 21:24:57.563131   68762 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0918 21:24:57.672154   68762 kubeadm.go:310] W0918 21:24:57.652376     827 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0918 21:24:57.673561   68762 kubeadm.go:310] W0918 21:24:57.653874     827 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0918 21:24:57.796451   68762 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	
	
	==> CRI-O <==
	Sep 18 21:25:04 no-preload-331658 crio[706]: time="2024-09-18 21:25:04.558326188Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726694704558301515,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f6bb3a5f-0a53-4777-9d55-7e22cd9d7f6c name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 21:25:04 no-preload-331658 crio[706]: time="2024-09-18 21:25:04.558824108Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9763828d-559c-460f-9618-c8202808e5ee name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 21:25:04 no-preload-331658 crio[706]: time="2024-09-18 21:25:04.558884394Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9763828d-559c-460f-9618-c8202808e5ee name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 21:25:04 no-preload-331658 crio[706]: time="2024-09-18 21:25:04.559073301Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b44d6f4b44928aa498c4375f783c165714b4a5959a1fa709712ad39a412d6d9f,PodSandboxId:dea5ae06387e7e7335fef18b98fbd9ecf26aa8d0250ef3d89bce58fa3cc9f783,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726693584268643455,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e110aeb3-e9bc-4bb9-9b49-5579558bdda2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b73a0ee39e7552f3324ca36518989909546cebd01fce34a1c7841ec6b3c2f893,PodSandboxId:f90a1da129fd7a1e4b6da8d312648f49d551727ef4e76ef728e6db438ccb383c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1726693564769582699,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fd5604f9-f8ae-4012-884d-ff45e1238741,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76b9e08a213468abdd23d7b3998b55d6544e2fbae9205ec6076ef0cd7a80e7be,PodSandboxId:e600aef7eba4f5133343b5df0f6a05822565aae3915bfdf4ba671ae1aaa1579b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726693561167110046,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-dgnw2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 085d5a98-0a61-4678-830f-384780a0d7ef,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38c14df0554157969a97390590d6e9c46765fd6fc3e286f7d2e3094838e0aff5,PodSandboxId:dea5ae06387e7e7335fef18b98fbd9ecf26aa8d0250ef3d89bce58fa3cc9f783,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726693553582994768,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e
110aeb3-e9bc-4bb9-9b49-5579558bdda2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0257280a0d21d52e2a996c2f717880bdb9d9f6fe90a4b82cc62222c941ba7d84,PodSandboxId:ef545994f99627f1b7c7d30e652e55e8b707a237ba033526c962620e35c17cb2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726693553447029115,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hx25w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a26512ff-f695-4452-8974-5774792571
60,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c372970fdf265433e1668377d0214de6608cb39ebfd706cfad9624cba2f9b481,PodSandboxId:3db793cbb33d1fd20a92be6ea777701bea805eed68a056d674fc2f7a4e1f2e5a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726693548794633396,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-331658,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78603579e4c5ca6d26ff1e0ada5894ef,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a913074a00723bc43f97ba625ebbdfded7dca1ce664ace9a13173adb96d2068f,PodSandboxId:a2d6e267a498db1f1beb5218c1d4798baad0571bf780c72bac56f8b49195a86a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726693548758445691,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-331658,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 576bb5b538781d67529bddb3d38e8264,},Annotations:map[string]string{io.kubernetes.contain
er.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:785dc83056153e5eeb22216ac7520f529b71f1b3ae42e9540a5c8930f1fe62c2,PodSandboxId:8bbf25a4d4a9502d446d45a80fb1275ad0ceb9e7985d4141e026cbb4df60f821,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726693548745783870,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-331658,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d87e37c96485dfa050ac6b78af8e1ed7,},Annotations:map[string]string{io.kube
rnetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a70652dce4d80c6fac020d63cff7054a835d183ac4b910157a5bf4eb8cabcaa1,PodSandboxId:1d0624802e3af5e37d5d5a733589ffa76ab0d6e414e88f5bd522eef002ece657,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726693548705226555,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-331658,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5913cf0e3e072753e49007da9d062a7,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9763828d-559c-460f-9618-c8202808e5ee name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 21:25:04 no-preload-331658 crio[706]: time="2024-09-18 21:25:04.594872306Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=22157094-76e9-4426-ac83-99b3890f8307 name=/runtime.v1.RuntimeService/Version
	Sep 18 21:25:04 no-preload-331658 crio[706]: time="2024-09-18 21:25:04.594948572Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=22157094-76e9-4426-ac83-99b3890f8307 name=/runtime.v1.RuntimeService/Version
	Sep 18 21:25:04 no-preload-331658 crio[706]: time="2024-09-18 21:25:04.596109736Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=feddc1b5-31a4-4fb7-8e4d-b1c3ae918008 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 21:25:04 no-preload-331658 crio[706]: time="2024-09-18 21:25:04.596546371Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726694704596522341,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=feddc1b5-31a4-4fb7-8e4d-b1c3ae918008 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 21:25:04 no-preload-331658 crio[706]: time="2024-09-18 21:25:04.597238683Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a48cffc7-96b8-4f65-afb8-4adf82631d79 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 21:25:04 no-preload-331658 crio[706]: time="2024-09-18 21:25:04.597305315Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a48cffc7-96b8-4f65-afb8-4adf82631d79 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 21:25:04 no-preload-331658 crio[706]: time="2024-09-18 21:25:04.597513489Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b44d6f4b44928aa498c4375f783c165714b4a5959a1fa709712ad39a412d6d9f,PodSandboxId:dea5ae06387e7e7335fef18b98fbd9ecf26aa8d0250ef3d89bce58fa3cc9f783,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726693584268643455,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e110aeb3-e9bc-4bb9-9b49-5579558bdda2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b73a0ee39e7552f3324ca36518989909546cebd01fce34a1c7841ec6b3c2f893,PodSandboxId:f90a1da129fd7a1e4b6da8d312648f49d551727ef4e76ef728e6db438ccb383c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1726693564769582699,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fd5604f9-f8ae-4012-884d-ff45e1238741,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76b9e08a213468abdd23d7b3998b55d6544e2fbae9205ec6076ef0cd7a80e7be,PodSandboxId:e600aef7eba4f5133343b5df0f6a05822565aae3915bfdf4ba671ae1aaa1579b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726693561167110046,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-dgnw2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 085d5a98-0a61-4678-830f-384780a0d7ef,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38c14df0554157969a97390590d6e9c46765fd6fc3e286f7d2e3094838e0aff5,PodSandboxId:dea5ae06387e7e7335fef18b98fbd9ecf26aa8d0250ef3d89bce58fa3cc9f783,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726693553582994768,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e
110aeb3-e9bc-4bb9-9b49-5579558bdda2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0257280a0d21d52e2a996c2f717880bdb9d9f6fe90a4b82cc62222c941ba7d84,PodSandboxId:ef545994f99627f1b7c7d30e652e55e8b707a237ba033526c962620e35c17cb2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726693553447029115,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hx25w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a26512ff-f695-4452-8974-5774792571
60,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c372970fdf265433e1668377d0214de6608cb39ebfd706cfad9624cba2f9b481,PodSandboxId:3db793cbb33d1fd20a92be6ea777701bea805eed68a056d674fc2f7a4e1f2e5a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726693548794633396,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-331658,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78603579e4c5ca6d26ff1e0ada5894ef,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a913074a00723bc43f97ba625ebbdfded7dca1ce664ace9a13173adb96d2068f,PodSandboxId:a2d6e267a498db1f1beb5218c1d4798baad0571bf780c72bac56f8b49195a86a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726693548758445691,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-331658,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 576bb5b538781d67529bddb3d38e8264,},Annotations:map[string]string{io.kubernetes.contain
er.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:785dc83056153e5eeb22216ac7520f529b71f1b3ae42e9540a5c8930f1fe62c2,PodSandboxId:8bbf25a4d4a9502d446d45a80fb1275ad0ceb9e7985d4141e026cbb4df60f821,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726693548745783870,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-331658,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d87e37c96485dfa050ac6b78af8e1ed7,},Annotations:map[string]string{io.kube
rnetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a70652dce4d80c6fac020d63cff7054a835d183ac4b910157a5bf4eb8cabcaa1,PodSandboxId:1d0624802e3af5e37d5d5a733589ffa76ab0d6e414e88f5bd522eef002ece657,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726693548705226555,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-331658,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5913cf0e3e072753e49007da9d062a7,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a48cffc7-96b8-4f65-afb8-4adf82631d79 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 21:25:04 no-preload-331658 crio[706]: time="2024-09-18 21:25:04.633780605Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f7cae538-91ea-453b-ac6f-0925c09b8e56 name=/runtime.v1.RuntimeService/Version
	Sep 18 21:25:04 no-preload-331658 crio[706]: time="2024-09-18 21:25:04.633870574Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f7cae538-91ea-453b-ac6f-0925c09b8e56 name=/runtime.v1.RuntimeService/Version
	Sep 18 21:25:04 no-preload-331658 crio[706]: time="2024-09-18 21:25:04.634975793Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=813cc172-c44c-44b3-8974-44b22dd1fba3 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 21:25:04 no-preload-331658 crio[706]: time="2024-09-18 21:25:04.635362099Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726694704635338899,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=813cc172-c44c-44b3-8974-44b22dd1fba3 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 21:25:04 no-preload-331658 crio[706]: time="2024-09-18 21:25:04.635967121Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=13bc2fc3-6da5-4300-83e9-c7d9c45cab11 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 21:25:04 no-preload-331658 crio[706]: time="2024-09-18 21:25:04.636021683Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=13bc2fc3-6da5-4300-83e9-c7d9c45cab11 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 21:25:04 no-preload-331658 crio[706]: time="2024-09-18 21:25:04.636284218Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b44d6f4b44928aa498c4375f783c165714b4a5959a1fa709712ad39a412d6d9f,PodSandboxId:dea5ae06387e7e7335fef18b98fbd9ecf26aa8d0250ef3d89bce58fa3cc9f783,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726693584268643455,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e110aeb3-e9bc-4bb9-9b49-5579558bdda2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b73a0ee39e7552f3324ca36518989909546cebd01fce34a1c7841ec6b3c2f893,PodSandboxId:f90a1da129fd7a1e4b6da8d312648f49d551727ef4e76ef728e6db438ccb383c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1726693564769582699,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fd5604f9-f8ae-4012-884d-ff45e1238741,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76b9e08a213468abdd23d7b3998b55d6544e2fbae9205ec6076ef0cd7a80e7be,PodSandboxId:e600aef7eba4f5133343b5df0f6a05822565aae3915bfdf4ba671ae1aaa1579b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726693561167110046,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-dgnw2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 085d5a98-0a61-4678-830f-384780a0d7ef,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38c14df0554157969a97390590d6e9c46765fd6fc3e286f7d2e3094838e0aff5,PodSandboxId:dea5ae06387e7e7335fef18b98fbd9ecf26aa8d0250ef3d89bce58fa3cc9f783,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726693553582994768,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e
110aeb3-e9bc-4bb9-9b49-5579558bdda2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0257280a0d21d52e2a996c2f717880bdb9d9f6fe90a4b82cc62222c941ba7d84,PodSandboxId:ef545994f99627f1b7c7d30e652e55e8b707a237ba033526c962620e35c17cb2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726693553447029115,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hx25w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a26512ff-f695-4452-8974-5774792571
60,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c372970fdf265433e1668377d0214de6608cb39ebfd706cfad9624cba2f9b481,PodSandboxId:3db793cbb33d1fd20a92be6ea777701bea805eed68a056d674fc2f7a4e1f2e5a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726693548794633396,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-331658,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78603579e4c5ca6d26ff1e0ada5894ef,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a913074a00723bc43f97ba625ebbdfded7dca1ce664ace9a13173adb96d2068f,PodSandboxId:a2d6e267a498db1f1beb5218c1d4798baad0571bf780c72bac56f8b49195a86a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726693548758445691,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-331658,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 576bb5b538781d67529bddb3d38e8264,},Annotations:map[string]string{io.kubernetes.contain
er.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:785dc83056153e5eeb22216ac7520f529b71f1b3ae42e9540a5c8930f1fe62c2,PodSandboxId:8bbf25a4d4a9502d446d45a80fb1275ad0ceb9e7985d4141e026cbb4df60f821,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726693548745783870,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-331658,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d87e37c96485dfa050ac6b78af8e1ed7,},Annotations:map[string]string{io.kube
rnetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a70652dce4d80c6fac020d63cff7054a835d183ac4b910157a5bf4eb8cabcaa1,PodSandboxId:1d0624802e3af5e37d5d5a733589ffa76ab0d6e414e88f5bd522eef002ece657,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726693548705226555,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-331658,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5913cf0e3e072753e49007da9d062a7,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=13bc2fc3-6da5-4300-83e9-c7d9c45cab11 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 21:25:04 no-preload-331658 crio[706]: time="2024-09-18 21:25:04.672920269Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e0419a69-2dc9-4b0c-bf46-94437ce9101a name=/runtime.v1.RuntimeService/Version
	Sep 18 21:25:04 no-preload-331658 crio[706]: time="2024-09-18 21:25:04.673040727Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e0419a69-2dc9-4b0c-bf46-94437ce9101a name=/runtime.v1.RuntimeService/Version
	Sep 18 21:25:04 no-preload-331658 crio[706]: time="2024-09-18 21:25:04.674375972Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f495ddb0-0349-46e6-9631-03b2934f657e name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 21:25:04 no-preload-331658 crio[706]: time="2024-09-18 21:25:04.674896589Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726694704674865314,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f495ddb0-0349-46e6-9631-03b2934f657e name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 21:25:04 no-preload-331658 crio[706]: time="2024-09-18 21:25:04.675782828Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=caa71f45-7611-4ffe-abf7-109d2026bf4d name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 21:25:04 no-preload-331658 crio[706]: time="2024-09-18 21:25:04.675869450Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=caa71f45-7611-4ffe-abf7-109d2026bf4d name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 21:25:04 no-preload-331658 crio[706]: time="2024-09-18 21:25:04.676206099Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b44d6f4b44928aa498c4375f783c165714b4a5959a1fa709712ad39a412d6d9f,PodSandboxId:dea5ae06387e7e7335fef18b98fbd9ecf26aa8d0250ef3d89bce58fa3cc9f783,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726693584268643455,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e110aeb3-e9bc-4bb9-9b49-5579558bdda2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b73a0ee39e7552f3324ca36518989909546cebd01fce34a1c7841ec6b3c2f893,PodSandboxId:f90a1da129fd7a1e4b6da8d312648f49d551727ef4e76ef728e6db438ccb383c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1726693564769582699,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fd5604f9-f8ae-4012-884d-ff45e1238741,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76b9e08a213468abdd23d7b3998b55d6544e2fbae9205ec6076ef0cd7a80e7be,PodSandboxId:e600aef7eba4f5133343b5df0f6a05822565aae3915bfdf4ba671ae1aaa1579b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726693561167110046,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-dgnw2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 085d5a98-0a61-4678-830f-384780a0d7ef,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38c14df0554157969a97390590d6e9c46765fd6fc3e286f7d2e3094838e0aff5,PodSandboxId:dea5ae06387e7e7335fef18b98fbd9ecf26aa8d0250ef3d89bce58fa3cc9f783,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726693553582994768,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e
110aeb3-e9bc-4bb9-9b49-5579558bdda2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0257280a0d21d52e2a996c2f717880bdb9d9f6fe90a4b82cc62222c941ba7d84,PodSandboxId:ef545994f99627f1b7c7d30e652e55e8b707a237ba033526c962620e35c17cb2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726693553447029115,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hx25w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a26512ff-f695-4452-8974-5774792571
60,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c372970fdf265433e1668377d0214de6608cb39ebfd706cfad9624cba2f9b481,PodSandboxId:3db793cbb33d1fd20a92be6ea777701bea805eed68a056d674fc2f7a4e1f2e5a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726693548794633396,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-331658,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78603579e4c5ca6d26ff1e0ada5894ef,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a913074a00723bc43f97ba625ebbdfded7dca1ce664ace9a13173adb96d2068f,PodSandboxId:a2d6e267a498db1f1beb5218c1d4798baad0571bf780c72bac56f8b49195a86a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726693548758445691,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-331658,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 576bb5b538781d67529bddb3d38e8264,},Annotations:map[string]string{io.kubernetes.contain
er.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:785dc83056153e5eeb22216ac7520f529b71f1b3ae42e9540a5c8930f1fe62c2,PodSandboxId:8bbf25a4d4a9502d446d45a80fb1275ad0ceb9e7985d4141e026cbb4df60f821,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726693548745783870,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-331658,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d87e37c96485dfa050ac6b78af8e1ed7,},Annotations:map[string]string{io.kube
rnetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a70652dce4d80c6fac020d63cff7054a835d183ac4b910157a5bf4eb8cabcaa1,PodSandboxId:1d0624802e3af5e37d5d5a733589ffa76ab0d6e414e88f5bd522eef002ece657,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726693548705226555,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-331658,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5913cf0e3e072753e49007da9d062a7,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=caa71f45-7611-4ffe-abf7-109d2026bf4d name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b44d6f4b44928       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      18 minutes ago      Running             storage-provisioner       2                   dea5ae06387e7       storage-provisioner
	b73a0ee39e755       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   18 minutes ago      Running             busybox                   1                   f90a1da129fd7       busybox
	76b9e08a21346       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      19 minutes ago      Running             coredns                   1                   e600aef7eba4f       coredns-7c65d6cfc9-dgnw2
	38c14df055415       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      19 minutes ago      Exited              storage-provisioner       1                   dea5ae06387e7       storage-provisioner
	0257280a0d21d       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      19 minutes ago      Running             kube-proxy                1                   ef545994f9962       kube-proxy-hx25w
	c372970fdf265       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      19 minutes ago      Running             kube-scheduler            1                   3db793cbb33d1       kube-scheduler-no-preload-331658
	a913074a00723       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      19 minutes ago      Running             etcd                      1                   a2d6e267a498d       etcd-no-preload-331658
	785dc83056153       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      19 minutes ago      Running             kube-controller-manager   1                   8bbf25a4d4a95       kube-controller-manager-no-preload-331658
	a70652dce4d80       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      19 minutes ago      Running             kube-apiserver            1                   1d0624802e3af       kube-apiserver-no-preload-331658
	
	
	==> coredns [76b9e08a213468abdd23d7b3998b55d6544e2fbae9205ec6076ef0cd7a80e7be] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:41940 - 377 "HINFO IN 8387474681266792745.2216001485904418167. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.018101231s
	
	
	==> describe nodes <==
	Name:               no-preload-331658
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-331658
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=85073601a832bd4bbda5d11fa91feafff6ec6b91
	                    minikube.k8s.io/name=no-preload-331658
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_18T20_56_32_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 18 Sep 2024 20:56:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-331658
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 18 Sep 2024 21:24:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 18 Sep 2024 21:21:40 +0000   Wed, 18 Sep 2024 20:56:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 18 Sep 2024 21:21:40 +0000   Wed, 18 Sep 2024 20:56:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 18 Sep 2024 21:21:40 +0000   Wed, 18 Sep 2024 20:56:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 18 Sep 2024 21:21:40 +0000   Wed, 18 Sep 2024 21:06:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.31
	  Hostname:    no-preload-331658
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a80780b722fd4c839ca3d1a0c9a7d0dd
	  System UUID:                a80780b7-22fd-4c83-9ca3-d1a0c9a7d0dd
	  Boot ID:                    58db0881-f0c7-4360-bff4-2e0e33a19d28
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 coredns-7c65d6cfc9-dgnw2                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     28m
	  kube-system                 etcd-no-preload-331658                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         28m
	  kube-system                 kube-apiserver-no-preload-331658             250m (12%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-controller-manager-no-preload-331658    200m (10%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-proxy-hx25w                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-scheduler-no-preload-331658             100m (5%)     0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 metrics-server-6867b74b74-n27vc              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         28m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 28m                kube-proxy       
	  Normal  Starting                 19m                kube-proxy       
	  Normal  Starting                 28m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  28m (x8 over 28m)  kubelet          Node no-preload-331658 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28m (x8 over 28m)  kubelet          Node no-preload-331658 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     28m (x7 over 28m)  kubelet          Node no-preload-331658 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  28m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    28m                kubelet          Node no-preload-331658 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  28m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  28m                kubelet          Node no-preload-331658 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     28m                kubelet          Node no-preload-331658 status is now: NodeHasSufficientPID
	  Normal  Starting                 28m                kubelet          Starting kubelet.
	  Normal  NodeReady                28m                kubelet          Node no-preload-331658 status is now: NodeReady
	  Normal  RegisteredNode           28m                node-controller  Node no-preload-331658 event: Registered Node no-preload-331658 in Controller
	  Normal  Starting                 19m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  19m (x8 over 19m)  kubelet          Node no-preload-331658 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m (x8 over 19m)  kubelet          Node no-preload-331658 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m (x7 over 19m)  kubelet          Node no-preload-331658 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           19m                node-controller  Node no-preload-331658 event: Registered Node no-preload-331658 in Controller
	
	
	==> dmesg <==
	[Sep18 21:05] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.055782] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042065] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.046605] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.030734] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.587582] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.714844] systemd-fstab-generator[629]: Ignoring "noauto" option for root device
	[  +0.064441] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.070450] systemd-fstab-generator[641]: Ignoring "noauto" option for root device
	[  +0.179272] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[  +0.142864] systemd-fstab-generator[667]: Ignoring "noauto" option for root device
	[  +0.304031] systemd-fstab-generator[697]: Ignoring "noauto" option for root device
	[ +15.194098] systemd-fstab-generator[1234]: Ignoring "noauto" option for root device
	[  +0.061533] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.129220] systemd-fstab-generator[1355]: Ignoring "noauto" option for root device
	[  +3.402337] kauditd_printk_skb: 97 callbacks suppressed
	[  +4.199898] systemd-fstab-generator[1981]: Ignoring "noauto" option for root device
	[  +2.741517] kauditd_printk_skb: 61 callbacks suppressed
	[Sep18 21:06] kauditd_printk_skb: 41 callbacks suppressed
	
	
	==> etcd [a913074a00723bc43f97ba625ebbdfded7dca1ce664ace9a13173adb96d2068f] <==
	{"level":"info","ts":"2024-09-18T21:05:51.004193Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-18T21:05:51.004353Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"b122709e0f96166a","local-member-attributes":"{Name:no-preload-331658 ClientURLs:[https://192.168.61.31:2379]}","request-path":"/0/members/b122709e0f96166a/attributes","cluster-id":"29796e4c48d338ea","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-18T21:05:51.004767Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-18T21:05:51.004964Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-18T21:05:51.004986Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-18T21:05:51.005679Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-18T21:05:51.006553Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.31:2379"}
	{"level":"info","ts":"2024-09-18T21:05:51.007739Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-18T21:05:51.009053Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-18T21:15:51.037345Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":865}
	{"level":"info","ts":"2024-09-18T21:15:51.048647Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":865,"took":"10.379884ms","hash":2940446556,"current-db-size-bytes":2871296,"current-db-size":"2.9 MB","current-db-size-in-use-bytes":2871296,"current-db-size-in-use":"2.9 MB"}
	{"level":"info","ts":"2024-09-18T21:15:51.048748Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2940446556,"revision":865,"compact-revision":-1}
	{"level":"info","ts":"2024-09-18T21:20:51.044144Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1107}
	{"level":"info","ts":"2024-09-18T21:20:51.048722Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1107,"took":"3.805112ms","hash":2440813698,"current-db-size-bytes":2871296,"current-db-size":"2.9 MB","current-db-size-in-use-bytes":1671168,"current-db-size-in-use":"1.7 MB"}
	{"level":"info","ts":"2024-09-18T21:20:51.048808Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2440813698,"revision":1107,"compact-revision":865}
	{"level":"info","ts":"2024-09-18T21:24:45.212261Z","caller":"traceutil/trace.go:171","msg":"trace[1446483390] transaction","detail":"{read_only:false; response_revision:1541; number_of_response:1; }","duration":"102.939266ms","start":"2024-09-18T21:24:45.109256Z","end":"2024-09-18T21:24:45.212195Z","steps":["trace[1446483390] 'process raft request'  (duration: 102.774128ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-18T21:24:45.528670Z","caller":"traceutil/trace.go:171","msg":"trace[1952526045] linearizableReadLoop","detail":"{readStateIndex:1808; appliedIndex:1807; }","duration":"145.827794ms","start":"2024-09-18T21:24:45.382828Z","end":"2024-09-18T21:24:45.528656Z","steps":["trace[1952526045] 'read index received'  (duration: 145.671774ms)","trace[1952526045] 'applied index is now lower than readState.Index'  (duration: 155.618µs)"],"step_count":2}
	{"level":"info","ts":"2024-09-18T21:24:45.528831Z","caller":"traceutil/trace.go:171","msg":"trace[594265735] transaction","detail":"{read_only:false; response_revision:1542; number_of_response:1; }","duration":"180.721595ms","start":"2024-09-18T21:24:45.348103Z","end":"2024-09-18T21:24:45.528824Z","steps":["trace[594265735] 'process raft request'  (duration: 180.442906ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-18T21:24:45.529232Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"146.272131ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-18T21:24:45.529331Z","caller":"traceutil/trace.go:171","msg":"trace[950067855] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1542; }","duration":"146.4969ms","start":"2024-09-18T21:24:45.382824Z","end":"2024-09-18T21:24:45.529321Z","steps":["trace[950067855] 'agreement among raft nodes before linearized reading'  (duration: 146.251373ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-18T21:24:57.702354Z","caller":"traceutil/trace.go:171","msg":"trace[754016137] transaction","detail":"{read_only:false; response_revision:1551; number_of_response:1; }","duration":"111.834434ms","start":"2024-09-18T21:24:57.590481Z","end":"2024-09-18T21:24:57.702315Z","steps":["trace[754016137] 'process raft request'  (duration: 111.335966ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-18T21:24:59.224553Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"109.478909ms","expected-duration":"100ms","prefix":"","request":"header:<ID:1615263974937626649 > lease_revoke:<id:166a9206f4a487b7>","response":"size:28"}
	{"level":"info","ts":"2024-09-18T21:24:59.224701Z","caller":"traceutil/trace.go:171","msg":"trace[1715376158] linearizableReadLoop","detail":"{readStateIndex:1820; appliedIndex:1819; }","duration":"129.964306ms","start":"2024-09-18T21:24:59.094728Z","end":"2024-09-18T21:24:59.224692Z","steps":["trace[1715376158] 'read index received'  (duration: 20.336026ms)","trace[1715376158] 'applied index is now lower than readState.Index'  (duration: 109.627305ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-18T21:24:59.224795Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"130.059643ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-18T21:24:59.224841Z","caller":"traceutil/trace.go:171","msg":"trace[1889770078] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1551; }","duration":"130.112515ms","start":"2024-09-18T21:24:59.094722Z","end":"2024-09-18T21:24:59.224834Z","steps":["trace[1889770078] 'agreement among raft nodes before linearized reading'  (duration: 130.04345ms)"],"step_count":1}
	
	
	==> kernel <==
	 21:25:05 up 19 min,  0 users,  load average: 0.14, 0.34, 0.20
	Linux no-preload-331658 5.10.207 #1 SMP Mon Sep 16 15:00:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [a70652dce4d80c6fac020d63cff7054a835d183ac4b910157a5bf4eb8cabcaa1] <==
	W0918 21:20:53.316025       1 handler_proxy.go:99] no RequestInfo found in the context
	E0918 21:20:53.316409       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0918 21:20:53.317279       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0918 21:20:53.318495       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0918 21:21:53.317544       1 handler_proxy.go:99] no RequestInfo found in the context
	E0918 21:21:53.317621       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0918 21:21:53.318759       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0918 21:21:53.318838       1 handler_proxy.go:99] no RequestInfo found in the context
	E0918 21:21:53.318904       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0918 21:21:53.320191       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0918 21:23:53.319492       1 handler_proxy.go:99] no RequestInfo found in the context
	E0918 21:23:53.319878       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0918 21:23:53.320893       1 handler_proxy.go:99] no RequestInfo found in the context
	I0918 21:23:53.320949       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0918 21:23:53.321051       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0918 21:23:53.322212       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [785dc83056153e5eeb22216ac7520f529b71f1b3ae42e9540a5c8930f1fe62c2] <==
	E0918 21:19:55.922620       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0918 21:19:56.512765       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0918 21:20:25.929200       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0918 21:20:26.522494       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0918 21:20:55.936332       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0918 21:20:56.529948       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0918 21:21:25.944619       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0918 21:21:26.537443       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0918 21:21:40.431595       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-331658"
	E0918 21:21:55.951429       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0918 21:21:56.546886       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0918 21:22:16.108316       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="264.812µs"
	E0918 21:22:25.958170       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0918 21:22:26.553846       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0918 21:22:30.109040       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="137.891µs"
	E0918 21:22:55.966280       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0918 21:22:56.561618       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0918 21:23:25.972585       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0918 21:23:26.568900       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0918 21:23:55.980276       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0918 21:23:56.577971       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0918 21:24:25.986976       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0918 21:24:26.591992       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0918 21:24:55.994484       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0918 21:24:56.599673       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
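The kube-controller-manager keeps logging "stale GroupVersion discovery: metrics.k8s.io/v1beta1" because the aggregated metrics API is registered but nothing is serving it, which matches the metrics-server pod shown further down in ImagePullBackOff. A minimal manual check against this cluster (a sketch, not part of the test run) is to ask whether the APIService reports Available:

	# the AVAILABLE column should read True; here it will not, since no metrics-server endpoint is serving
	kubectl --context no-preload-331658 get apiservice v1beta1.metrics.k8s.io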
	
	
	==> kube-proxy [0257280a0d21d52e2a996c2f717880bdb9d9f6fe90a4b82cc62222c941ba7d84] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0918 21:05:53.927489       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0918 21:05:53.957651       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.61.31"]
	E0918 21:05:53.957782       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0918 21:05:54.060813       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0918 21:05:54.060858       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0918 21:05:54.060884       1 server_linux.go:169] "Using iptables Proxier"
	I0918 21:05:54.069098       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0918 21:05:54.070239       1 server.go:483] "Version info" version="v1.31.1"
	I0918 21:05:54.070269       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0918 21:05:54.073599       1 config.go:199] "Starting service config controller"
	I0918 21:05:54.074021       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0918 21:05:54.074224       1 config.go:105] "Starting endpoint slice config controller"
	I0918 21:05:54.074257       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0918 21:05:54.075336       1 config.go:328] "Starting node config controller"
	I0918 21:05:54.075378       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0918 21:05:54.175400       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0918 21:05:54.175459       1 shared_informer.go:320] Caches are synced for node config
	I0918 21:05:54.175470       1 shared_informer.go:320] Caches are synced for service config
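The nftables errors at the top of this block come from kube-proxy's best-effort cleanup of leftover nftables state; on this guest kernel the nft tables cannot even be created ("Operation not supported"), so the cleanup fails harmlessly and kube-proxy proceeds with the iptables backend, as the "Using iptables Proxier" line confirms. One way to verify the active backend from the node itself (a sketch, assuming shell access to the profile via minikube ssh):

	# the KUBE-SERVICES chain in the nat table only exists when the iptables proxier is programming rules
	minikube -p no-preload-331658 ssh -- sudo iptables -t nat -L KUBE-SERVICES | head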
	
	
	==> kube-scheduler [c372970fdf265433e1668377d0214de6608cb39ebfd706cfad9624cba2f9b481] <==
	I0918 21:05:49.944448       1 serving.go:386] Generated self-signed cert in-memory
	W0918 21:05:52.253901       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0918 21:05:52.253988       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0918 21:05:52.253999       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0918 21:05:52.254005       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0918 21:05:52.319956       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0918 21:05:52.320041       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0918 21:05:52.324445       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0918 21:05:52.324480       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0918 21:05:52.324867       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0918 21:05:52.324954       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0918 21:05:52.425474       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
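The "Unable to get configmap/extension-apiserver-authentication" warnings are typically a startup-ordering artifact: the scheduler starts before it is permitted to read that configmap, continues without the extension authentication config, and its client-ca controller syncs moments later (the final "Caches are synced" line). If the warning persisted, a first check (a sketch, outside the test harness) would simply be whether the configmap exists and is readable:

	kubectl --context no-preload-331658 -n kube-system get configmap extension-apiserver-authentication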
	
	
	==> kubelet <==
	Sep 18 21:23:58 no-preload-331658 kubelet[1362]: E0918 21:23:58.328858    1362 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726694638327905575,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 21:23:58 no-preload-331658 kubelet[1362]: E0918 21:23:58.329769    1362 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726694638327905575,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 21:24:01 no-preload-331658 kubelet[1362]: E0918 21:24:01.089828    1362 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-n27vc" podUID="b1de76ec-8987-49ce-ae66-eedda2705cde"
	Sep 18 21:24:08 no-preload-331658 kubelet[1362]: E0918 21:24:08.331575    1362 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726694648331337115,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 21:24:08 no-preload-331658 kubelet[1362]: E0918 21:24:08.331630    1362 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726694648331337115,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 21:24:16 no-preload-331658 kubelet[1362]: E0918 21:24:16.090508    1362 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-n27vc" podUID="b1de76ec-8987-49ce-ae66-eedda2705cde"
	Sep 18 21:24:18 no-preload-331658 kubelet[1362]: E0918 21:24:18.332904    1362 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726694658332549594,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 21:24:18 no-preload-331658 kubelet[1362]: E0918 21:24:18.332943    1362 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726694658332549594,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 21:24:28 no-preload-331658 kubelet[1362]: E0918 21:24:28.336769    1362 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726694668335875554,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 21:24:28 no-preload-331658 kubelet[1362]: E0918 21:24:28.337330    1362 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726694668335875554,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 21:24:29 no-preload-331658 kubelet[1362]: E0918 21:24:29.089577    1362 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-n27vc" podUID="b1de76ec-8987-49ce-ae66-eedda2705cde"
	Sep 18 21:24:38 no-preload-331658 kubelet[1362]: E0918 21:24:38.339787    1362 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726694678339395947,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 21:24:38 no-preload-331658 kubelet[1362]: E0918 21:24:38.339883    1362 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726694678339395947,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 21:24:41 no-preload-331658 kubelet[1362]: E0918 21:24:41.090359    1362 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-n27vc" podUID="b1de76ec-8987-49ce-ae66-eedda2705cde"
	Sep 18 21:24:48 no-preload-331658 kubelet[1362]: E0918 21:24:48.105956    1362 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 18 21:24:48 no-preload-331658 kubelet[1362]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 18 21:24:48 no-preload-331658 kubelet[1362]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 18 21:24:48 no-preload-331658 kubelet[1362]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 18 21:24:48 no-preload-331658 kubelet[1362]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 18 21:24:48 no-preload-331658 kubelet[1362]: E0918 21:24:48.342298    1362 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726694688341762165,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 21:24:48 no-preload-331658 kubelet[1362]: E0918 21:24:48.342540    1362 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726694688341762165,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 21:24:52 no-preload-331658 kubelet[1362]: E0918 21:24:52.089754    1362 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-n27vc" podUID="b1de76ec-8987-49ce-ae66-eedda2705cde"
	Sep 18 21:24:58 no-preload-331658 kubelet[1362]: E0918 21:24:58.343866    1362 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726694698343539205,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 21:24:58 no-preload-331658 kubelet[1362]: E0918 21:24:58.343920    1362 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726694698343539205,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 18 21:25:04 no-preload-331658 kubelet[1362]: E0918 21:25:04.090091    1362 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-n27vc" podUID="b1de76ec-8987-49ce-ae66-eedda2705cde"
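Every metrics-server sync above fails with ImagePullBackOff for "fake.domain/registry.k8s.io/echoserver:1.4", an unresolvable registry, so the pod can never become Ready; this is the same metrics-server-6867b74b74-n27vc pod the post-mortem below lists as non-running. To confirm the image actually configured (a sketch, assuming the stock addon's Deployment name metrics-server; checking the Deployment avoids racing the test's pod cleanup):

	kubectl --context no-preload-331658 -n kube-system get deployment metrics-server -o jsonpath='{.spec.template.spec.containers[*].image}'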
	
	
	==> storage-provisioner [38c14df0554157969a97390590d6e9c46765fd6fc3e286f7d2e3094838e0aff5] <==
	I0918 21:05:53.769456       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0918 21:06:23.777330       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [b44d6f4b44928aa498c4375f783c165714b4a5959a1fa709712ad39a412d6d9f] <==
	I0918 21:06:24.350939       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0918 21:06:24.362842       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0918 21:06:24.363212       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0918 21:06:41.764415       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0918 21:06:41.764833       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-331658_73254c79-8930-42ef-942a-b7efbf5cffb6!
	I0918 21:06:41.766195       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"46fb55f8-dea6-41d8-baf3-32c81977d123", APIVersion:"v1", ResourceVersion:"648", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-331658_73254c79-8930-42ef-942a-b7efbf5cffb6 became leader
	I0918 21:06:41.866480       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-331658_73254c79-8930-42ef-942a-b7efbf5cffb6!
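The first storage-provisioner container (38c14df0…) exited fatally because it could not reach the in-cluster API at 10.96.0.1:443 within its 32s timeout; its replacement (b44d6f4b…) came up once the API was reachable, acquired the k8s.io-minikube-hostpath lease, and started the hostpath provisioner controller normally. The lock it competes for is the kube-system/k8s.io-minikube-hostpath Endpoints object named in the log, so the current holder can be inspected directly (a sketch, not part of the test):

	kubectl --context no-preload-331658 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml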
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-331658 -n no-preload-331658
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-331658 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-n27vc
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-331658 describe pod metrics-server-6867b74b74-n27vc
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-331658 describe pod metrics-server-6867b74b74-n27vc: exit status 1 (95.578451ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-n27vc" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-331658 describe pod metrics-server-6867b74b74-n27vc: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (346.35s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (126.4s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
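Every poll below fails before it can list pods: each request to https://192.168.72.53:8443 is refused, meaning the old-k8s-version API server is not accepting connections at all, so the dashboard addon can never be observed regardless of its state. A minimal manual check (the profile/context name is not shown in this excerpt, so OLD_K8S_CONTEXT below is a placeholder):

	# does the endpoint answer at all?
	curl -k https://192.168.72.53:8443/healthz
	# and if it does, are the dashboard pods present?
	kubectl --context OLD_K8S_CONTEXT -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard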
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
[... the identical pod-list warning above repeated for the next 112 poll attempts, every request to https://192.168.72.53:8443 failing with "connect: connection refused" ...]
E0918 21:24:15.250762   14878 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/addons-815929/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.53:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.53:8443: connect: connection refused
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-740194 -n old-k8s-version-740194
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-740194 -n old-k8s-version-740194: exit status 2 (250.473611ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-740194" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-740194 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-740194 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.575µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-740194 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
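For anyone re-running this check by hand, the probes above reduce to the commands below (profile, context and namespace names are taken verbatim from this log; the pod listing mirrors the k8s-app=kubernetes-dashboard label selector seen in the warnings, and none of these can succeed while the profile's apiserver is stopped):

	kubectl --context old-k8s-version-740194 get pods -n kubernetes-dashboard -l k8s-app=kubernetes-dashboard
	kubectl --context old-k8s-version-740194 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
	out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-740194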
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-740194 -n old-k8s-version-740194
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-740194 -n old-k8s-version-740194: exit status 2 (229.9713ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-740194 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-740194 logs -n 25: (1.624656437s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p cert-options-347585                                 | cert-options-347585          | jenkins | v1.34.0 | 18 Sep 24 20:55 UTC | 18 Sep 24 20:55 UTC |
	| start   | -p no-preload-331658                                   | no-preload-331658            | jenkins | v1.34.0 | 18 Sep 24 20:55 UTC | 18 Sep 24 20:56 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-878094                           | kubernetes-upgrade-878094    | jenkins | v1.34.0 | 18 Sep 24 20:55 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-878094                           | kubernetes-upgrade-878094    | jenkins | v1.34.0 | 18 Sep 24 20:55 UTC | 18 Sep 24 20:56 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p pause-543700                                        | pause-543700                 | jenkins | v1.34.0 | 18 Sep 24 20:56 UTC | 18 Sep 24 20:56 UTC |
	| start   | -p embed-certs-255556                                  | embed-certs-255556           | jenkins | v1.34.0 | 18 Sep 24 20:56 UTC | 18 Sep 24 20:57 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-878094                           | kubernetes-upgrade-878094    | jenkins | v1.34.0 | 18 Sep 24 20:56 UTC | 18 Sep 24 20:56 UTC |
	| delete  | -p                                                     | disable-driver-mounts-335923 | jenkins | v1.34.0 | 18 Sep 24 20:56 UTC | 18 Sep 24 20:56 UTC |
	|         | disable-driver-mounts-335923                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-828868 | jenkins | v1.34.0 | 18 Sep 24 20:56 UTC | 18 Sep 24 20:57 UTC |
	|         | default-k8s-diff-port-828868                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-331658             | no-preload-331658            | jenkins | v1.34.0 | 18 Sep 24 20:56 UTC | 18 Sep 24 20:57 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-331658                                   | no-preload-331658            | jenkins | v1.34.0 | 18 Sep 24 20:57 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-828868  | default-k8s-diff-port-828868 | jenkins | v1.34.0 | 18 Sep 24 20:57 UTC | 18 Sep 24 20:57 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-828868 | jenkins | v1.34.0 | 18 Sep 24 20:57 UTC |                     |
	|         | default-k8s-diff-port-828868                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-255556            | embed-certs-255556           | jenkins | v1.34.0 | 18 Sep 24 20:57 UTC | 18 Sep 24 20:57 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-255556                                  | embed-certs-255556           | jenkins | v1.34.0 | 18 Sep 24 20:57 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-740194        | old-k8s-version-740194       | jenkins | v1.34.0 | 18 Sep 24 20:59 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-331658                  | no-preload-331658            | jenkins | v1.34.0 | 18 Sep 24 20:59 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-331658                                   | no-preload-331658            | jenkins | v1.34.0 | 18 Sep 24 20:59 UTC | 18 Sep 24 21:10 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-828868       | default-k8s-diff-port-828868 | jenkins | v1.34.0 | 18 Sep 24 21:00 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-255556                 | embed-certs-255556           | jenkins | v1.34.0 | 18 Sep 24 21:00 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-828868 | jenkins | v1.34.0 | 18 Sep 24 21:00 UTC | 18 Sep 24 21:09 UTC |
	|         | default-k8s-diff-port-828868                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-255556                                  | embed-certs-255556           | jenkins | v1.34.0 | 18 Sep 24 21:00 UTC | 18 Sep 24 21:10 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-740194                              | old-k8s-version-740194       | jenkins | v1.34.0 | 18 Sep 24 21:00 UTC | 18 Sep 24 21:00 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-740194             | old-k8s-version-740194       | jenkins | v1.34.0 | 18 Sep 24 21:00 UTC | 18 Sep 24 21:00 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-740194                              | old-k8s-version-740194       | jenkins | v1.34.0 | 18 Sep 24 21:00 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/18 21:00:59
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0918 21:00:59.486726   62061 out.go:345] Setting OutFile to fd 1 ...
	I0918 21:00:59.486839   62061 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 21:00:59.486848   62061 out.go:358] Setting ErrFile to fd 2...
	I0918 21:00:59.486854   62061 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 21:00:59.487063   62061 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19667-7671/.minikube/bin
	I0918 21:00:59.487631   62061 out.go:352] Setting JSON to false
	I0918 21:00:59.488570   62061 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6203,"bootTime":1726687056,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0918 21:00:59.488665   62061 start.go:139] virtualization: kvm guest
	I0918 21:00:59.490927   62061 out.go:177] * [old-k8s-version-740194] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0918 21:00:59.492124   62061 out.go:177]   - MINIKUBE_LOCATION=19667
	I0918 21:00:59.492192   62061 notify.go:220] Checking for updates...
	I0918 21:00:59.494436   62061 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 21:00:59.495887   62061 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19667-7671/kubeconfig
	I0918 21:00:59.496929   62061 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19667-7671/.minikube
	I0918 21:00:59.498172   62061 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0918 21:00:59.499384   62061 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0918 21:00:59.500900   62061 config.go:182] Loaded profile config "old-k8s-version-740194": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0918 21:00:59.501295   62061 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:00:59.501375   62061 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:00:59.516448   62061 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45397
	I0918 21:00:59.516855   62061 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:00:59.517347   62061 main.go:141] libmachine: Using API Version  1
	I0918 21:00:59.517362   62061 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:00:59.517673   62061 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:00:59.517848   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .DriverName
	I0918 21:00:59.519750   62061 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0918 21:00:59.521126   62061 driver.go:394] Setting default libvirt URI to qemu:///system
	I0918 21:00:59.521433   62061 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:00:59.521468   62061 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:00:59.537580   62061 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39433
	I0918 21:00:59.537973   62061 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:00:59.538452   62061 main.go:141] libmachine: Using API Version  1
	I0918 21:00:59.538471   62061 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:00:59.538852   62061 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:00:59.539083   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .DriverName
	I0918 21:00:59.575494   62061 out.go:177] * Using the kvm2 driver based on existing profile
	I0918 21:00:59.576555   62061 start.go:297] selected driver: kvm2
	I0918 21:00:59.576572   62061 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-740194 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-740194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.53 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 21:00:59.576682   62061 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 21:00:59.577339   62061 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 21:00:59.577400   62061 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19667-7671/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0918 21:00:59.593287   62061 install.go:137] /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I0918 21:00:59.593810   62061 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0918 21:00:59.593855   62061 cni.go:84] Creating CNI manager for ""
	I0918 21:00:59.593911   62061 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0918 21:00:59.593972   62061 start.go:340] cluster config:
	{Name:old-k8s-version-740194 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-740194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.53 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 21:00:59.594104   62061 iso.go:125] acquiring lock: {Name:mkbcf4d84f3091567a39802cd5e773cb10b8c545 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 21:00:59.595936   62061 out.go:177] * Starting "old-k8s-version-740194" primary control-plane node in "old-k8s-version-740194" cluster
	I0918 21:00:59.932315   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:00:59.597210   62061 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0918 21:00:59.597243   62061 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0918 21:00:59.597250   62061 cache.go:56] Caching tarball of preloaded images
	I0918 21:00:59.597325   62061 preload.go:172] Found /home/jenkins/minikube-integration/19667-7671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0918 21:00:59.597338   62061 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0918 21:00:59.597439   62061 profile.go:143] Saving config to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/old-k8s-version-740194/config.json ...
	I0918 21:00:59.597658   62061 start.go:360] acquireMachinesLock for old-k8s-version-740194: {Name:mk1b0d68a0b7d9d4bc8204d5d81cb9eb77222526 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 21:01:03.004316   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:01:09.084327   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:01:12.156358   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:01:18.236353   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:01:21.308245   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:01:27.388302   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:01:30.460341   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:01:36.540285   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:01:39.612345   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:01:45.692338   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:01:48.764308   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:01:54.844344   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:01:57.916346   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:02:03.996351   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:02:07.068377   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:02:13.148269   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:02:16.220321   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:02:22.300282   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:02:25.372352   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:02:31.452275   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:02:34.524362   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:02:40.604332   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:02:43.676372   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:02:49.756305   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:02:52.828321   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:02:58.908358   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:03:01.980309   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:03:08.060301   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:03:11.132322   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:03:17.212232   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:03:20.284342   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:03:26.364312   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:03:29.436328   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:03:35.516323   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:03:38.588372   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:03:44.668300   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:03:47.740379   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:03:53.820363   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:03:56.892355   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:04:02.972312   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:04:06.044373   61273 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.31:22: connect: no route to host
	I0918 21:04:09.048392   61659 start.go:364] duration metric: took 3m56.738592157s to acquireMachinesLock for "default-k8s-diff-port-828868"
	I0918 21:04:09.048461   61659 start.go:96] Skipping create...Using existing machine configuration
	I0918 21:04:09.048469   61659 fix.go:54] fixHost starting: 
	I0918 21:04:09.048788   61659 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:04:09.048827   61659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:04:09.064428   61659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41181
	I0918 21:04:09.064856   61659 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:04:09.065395   61659 main.go:141] libmachine: Using API Version  1
	I0918 21:04:09.065421   61659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:04:09.065751   61659 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:04:09.065961   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .DriverName
	I0918 21:04:09.066108   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetState
	I0918 21:04:09.067874   61659 fix.go:112] recreateIfNeeded on default-k8s-diff-port-828868: state=Stopped err=<nil>
	I0918 21:04:09.067915   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .DriverName
	W0918 21:04:09.068096   61659 fix.go:138] unexpected machine state, will restart: <nil>
	I0918 21:04:09.069985   61659 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-828868" ...
	I0918 21:04:09.045944   61273 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0918 21:04:09.045978   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetMachineName
	I0918 21:04:09.046314   61273 buildroot.go:166] provisioning hostname "no-preload-331658"
	I0918 21:04:09.046350   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetMachineName
	I0918 21:04:09.046602   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHHostname
	I0918 21:04:09.048253   61273 machine.go:96] duration metric: took 4m37.423609251s to provisionDockerMachine
	I0918 21:04:09.048293   61273 fix.go:56] duration metric: took 4m37.446130108s for fixHost
	I0918 21:04:09.048301   61273 start.go:83] releasing machines lock for "no-preload-331658", held for 4m37.44629145s
	W0918 21:04:09.048329   61273 start.go:714] error starting host: provision: host is not running
	W0918 21:04:09.048451   61273 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0918 21:04:09.048465   61273 start.go:729] Will try again in 5 seconds ...
	I0918 21:04:09.071488   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .Start
	I0918 21:04:09.071699   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Ensuring networks are active...
	I0918 21:04:09.072473   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Ensuring network default is active
	I0918 21:04:09.072816   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Ensuring network mk-default-k8s-diff-port-828868 is active
	I0918 21:04:09.073204   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Getting domain xml...
	I0918 21:04:09.073977   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Creating domain...
	I0918 21:04:10.321507   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Waiting to get IP...
	I0918 21:04:10.322390   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:10.322863   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | unable to find current IP address of domain default-k8s-diff-port-828868 in network mk-default-k8s-diff-port-828868
	I0918 21:04:10.322907   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | I0918 21:04:10.322821   62722 retry.go:31] will retry after 272.805092ms: waiting for machine to come up
	I0918 21:04:10.597434   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:10.597861   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | unable to find current IP address of domain default-k8s-diff-port-828868 in network mk-default-k8s-diff-port-828868
	I0918 21:04:10.597888   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | I0918 21:04:10.597825   62722 retry.go:31] will retry after 302.631333ms: waiting for machine to come up
	I0918 21:04:10.902544   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:10.903002   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | unable to find current IP address of domain default-k8s-diff-port-828868 in network mk-default-k8s-diff-port-828868
	I0918 21:04:10.903030   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | I0918 21:04:10.902943   62722 retry.go:31] will retry after 325.769954ms: waiting for machine to come up
	I0918 21:04:11.230182   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:11.230602   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | unable to find current IP address of domain default-k8s-diff-port-828868 in network mk-default-k8s-diff-port-828868
	I0918 21:04:11.230652   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | I0918 21:04:11.230557   62722 retry.go:31] will retry after 396.395153ms: waiting for machine to come up
	I0918 21:04:11.628135   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:11.628520   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | unable to find current IP address of domain default-k8s-diff-port-828868 in network mk-default-k8s-diff-port-828868
	I0918 21:04:11.628544   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | I0918 21:04:11.628495   62722 retry.go:31] will retry after 578.74167ms: waiting for machine to come up
	I0918 21:04:14.050009   61273 start.go:360] acquireMachinesLock for no-preload-331658: {Name:mk1b0d68a0b7d9d4bc8204d5d81cb9eb77222526 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 21:04:12.209844   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:12.209911   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | unable to find current IP address of domain default-k8s-diff-port-828868 in network mk-default-k8s-diff-port-828868
	I0918 21:04:12.209937   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | I0918 21:04:12.209841   62722 retry.go:31] will retry after 779.0434ms: waiting for machine to come up
	I0918 21:04:12.990688   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:12.991106   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | unable to find current IP address of domain default-k8s-diff-port-828868 in network mk-default-k8s-diff-port-828868
	I0918 21:04:12.991141   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | I0918 21:04:12.991045   62722 retry.go:31] will retry after 772.165771ms: waiting for machine to come up
	I0918 21:04:13.764946   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:13.765460   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | unable to find current IP address of domain default-k8s-diff-port-828868 in network mk-default-k8s-diff-port-828868
	I0918 21:04:13.765493   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | I0918 21:04:13.765404   62722 retry.go:31] will retry after 1.017078101s: waiting for machine to come up
	I0918 21:04:14.783920   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:14.784320   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | unable to find current IP address of domain default-k8s-diff-port-828868 in network mk-default-k8s-diff-port-828868
	I0918 21:04:14.784348   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | I0918 21:04:14.784276   62722 retry.go:31] will retry after 1.775982574s: waiting for machine to come up
	I0918 21:04:16.562037   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:16.562413   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | unable to find current IP address of domain default-k8s-diff-port-828868 in network mk-default-k8s-diff-port-828868
	I0918 21:04:16.562451   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | I0918 21:04:16.562369   62722 retry.go:31] will retry after 1.609664062s: waiting for machine to come up
	I0918 21:04:18.174149   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:18.174759   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | unable to find current IP address of domain default-k8s-diff-port-828868 in network mk-default-k8s-diff-port-828868
	I0918 21:04:18.174788   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | I0918 21:04:18.174710   62722 retry.go:31] will retry after 2.26359536s: waiting for machine to come up
	I0918 21:04:20.440599   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:20.441000   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | unable to find current IP address of domain default-k8s-diff-port-828868 in network mk-default-k8s-diff-port-828868
	I0918 21:04:20.441027   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | I0918 21:04:20.440955   62722 retry.go:31] will retry after 3.387446315s: waiting for machine to come up
	I0918 21:04:23.832623   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:23.833134   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | unable to find current IP address of domain default-k8s-diff-port-828868 in network mk-default-k8s-diff-port-828868
	I0918 21:04:23.833162   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | I0918 21:04:23.833097   62722 retry.go:31] will retry after 3.312983418s: waiting for machine to come up
	I0918 21:04:27.150091   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:27.150658   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Found IP for machine: 192.168.50.109
	I0918 21:04:27.150682   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has current primary IP address 192.168.50.109 and MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:27.150703   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Reserving static IP address...
	I0918 21:04:27.151248   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-828868", mac: "52:54:00:c0:39:06", ip: "192.168.50.109"} in network mk-default-k8s-diff-port-828868: {Iface:virbr4 ExpiryTime:2024-09-18 22:04:19 +0000 UTC Type:0 Mac:52:54:00:c0:39:06 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:default-k8s-diff-port-828868 Clientid:01:52:54:00:c0:39:06}
	I0918 21:04:27.151276   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Reserved static IP address: 192.168.50.109
	I0918 21:04:27.151297   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | skip adding static IP to network mk-default-k8s-diff-port-828868 - found existing host DHCP lease matching {name: "default-k8s-diff-port-828868", mac: "52:54:00:c0:39:06", ip: "192.168.50.109"}
	I0918 21:04:27.151317   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | Getting to WaitForSSH function...
	I0918 21:04:27.151330   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Waiting for SSH to be available...
	I0918 21:04:27.153633   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:27.154006   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:39:06", ip: ""} in network mk-default-k8s-diff-port-828868: {Iface:virbr4 ExpiryTime:2024-09-18 22:04:19 +0000 UTC Type:0 Mac:52:54:00:c0:39:06 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:default-k8s-diff-port-828868 Clientid:01:52:54:00:c0:39:06}
	I0918 21:04:27.154036   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined IP address 192.168.50.109 and MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:27.154127   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | Using SSH client type: external
	I0918 21:04:27.154153   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | Using SSH private key: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/default-k8s-diff-port-828868/id_rsa (-rw-------)
	I0918 21:04:27.154196   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.109 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19667-7671/.minikube/machines/default-k8s-diff-port-828868/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0918 21:04:27.154211   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | About to run SSH command:
	I0918 21:04:27.154225   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | exit 0
	I0918 21:04:28.308967   61740 start.go:364] duration metric: took 4m9.856658805s to acquireMachinesLock for "embed-certs-255556"
	I0918 21:04:28.309052   61740 start.go:96] Skipping create...Using existing machine configuration
	I0918 21:04:28.309066   61740 fix.go:54] fixHost starting: 
	I0918 21:04:28.309548   61740 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:04:28.309609   61740 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:04:28.326972   61740 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41115
	I0918 21:04:28.327375   61740 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:04:28.327941   61740 main.go:141] libmachine: Using API Version  1
	I0918 21:04:28.327974   61740 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:04:28.328300   61740 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:04:28.328538   61740 main.go:141] libmachine: (embed-certs-255556) Calling .DriverName
	I0918 21:04:28.328676   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetState
	I0918 21:04:28.330265   61740 fix.go:112] recreateIfNeeded on embed-certs-255556: state=Stopped err=<nil>
	I0918 21:04:28.330312   61740 main.go:141] libmachine: (embed-certs-255556) Calling .DriverName
	W0918 21:04:28.330482   61740 fix.go:138] unexpected machine state, will restart: <nil>
	I0918 21:04:28.332680   61740 out.go:177] * Restarting existing kvm2 VM for "embed-certs-255556" ...
	I0918 21:04:28.333692   61740 main.go:141] libmachine: (embed-certs-255556) Calling .Start
	I0918 21:04:28.333865   61740 main.go:141] libmachine: (embed-certs-255556) Ensuring networks are active...
	I0918 21:04:28.334536   61740 main.go:141] libmachine: (embed-certs-255556) Ensuring network default is active
	I0918 21:04:28.334987   61740 main.go:141] libmachine: (embed-certs-255556) Ensuring network mk-embed-certs-255556 is active
	I0918 21:04:28.335491   61740 main.go:141] libmachine: (embed-certs-255556) Getting domain xml...
	I0918 21:04:28.336206   61740 main.go:141] libmachine: (embed-certs-255556) Creating domain...
	I0918 21:04:27.280056   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | SSH cmd err, output: <nil>: 
	I0918 21:04:27.280448   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetConfigRaw
	I0918 21:04:27.281097   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetIP
	I0918 21:04:27.283491   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:27.283933   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:39:06", ip: ""} in network mk-default-k8s-diff-port-828868: {Iface:virbr4 ExpiryTime:2024-09-18 22:04:19 +0000 UTC Type:0 Mac:52:54:00:c0:39:06 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:default-k8s-diff-port-828868 Clientid:01:52:54:00:c0:39:06}
	I0918 21:04:27.283968   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined IP address 192.168.50.109 and MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:27.284242   61659 profile.go:143] Saving config to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/default-k8s-diff-port-828868/config.json ...
	I0918 21:04:27.284483   61659 machine.go:93] provisionDockerMachine start ...
	I0918 21:04:27.284527   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .DriverName
	I0918 21:04:27.284740   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHHostname
	I0918 21:04:27.287263   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:27.287640   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:39:06", ip: ""} in network mk-default-k8s-diff-port-828868: {Iface:virbr4 ExpiryTime:2024-09-18 22:04:19 +0000 UTC Type:0 Mac:52:54:00:c0:39:06 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:default-k8s-diff-port-828868 Clientid:01:52:54:00:c0:39:06}
	I0918 21:04:27.287671   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined IP address 192.168.50.109 and MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:27.287831   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHPort
	I0918 21:04:27.288053   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHKeyPath
	I0918 21:04:27.288230   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHKeyPath
	I0918 21:04:27.288348   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHUsername
	I0918 21:04:27.288497   61659 main.go:141] libmachine: Using SSH client type: native
	I0918 21:04:27.288759   61659 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.109 22 <nil> <nil>}
	I0918 21:04:27.288774   61659 main.go:141] libmachine: About to run SSH command:
	hostname
	I0918 21:04:27.396110   61659 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0918 21:04:27.396140   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetMachineName
	I0918 21:04:27.396439   61659 buildroot.go:166] provisioning hostname "default-k8s-diff-port-828868"
	I0918 21:04:27.396472   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetMachineName
	I0918 21:04:27.396655   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHHostname
	I0918 21:04:27.399285   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:27.399629   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:39:06", ip: ""} in network mk-default-k8s-diff-port-828868: {Iface:virbr4 ExpiryTime:2024-09-18 22:04:19 +0000 UTC Type:0 Mac:52:54:00:c0:39:06 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:default-k8s-diff-port-828868 Clientid:01:52:54:00:c0:39:06}
	I0918 21:04:27.399670   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined IP address 192.168.50.109 and MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:27.399746   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHPort
	I0918 21:04:27.399947   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHKeyPath
	I0918 21:04:27.400158   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHKeyPath
	I0918 21:04:27.400295   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHUsername
	I0918 21:04:27.400476   61659 main.go:141] libmachine: Using SSH client type: native
	I0918 21:04:27.400701   61659 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.109 22 <nil> <nil>}
	I0918 21:04:27.400714   61659 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-828868 && echo "default-k8s-diff-port-828868" | sudo tee /etc/hostname
	I0918 21:04:27.518553   61659 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-828868
	
	I0918 21:04:27.518579   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHHostname
	I0918 21:04:27.521274   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:27.521714   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:39:06", ip: ""} in network mk-default-k8s-diff-port-828868: {Iface:virbr4 ExpiryTime:2024-09-18 22:04:19 +0000 UTC Type:0 Mac:52:54:00:c0:39:06 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:default-k8s-diff-port-828868 Clientid:01:52:54:00:c0:39:06}
	I0918 21:04:27.521746   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined IP address 192.168.50.109 and MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:27.521918   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHPort
	I0918 21:04:27.522085   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHKeyPath
	I0918 21:04:27.522298   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHKeyPath
	I0918 21:04:27.522469   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHUsername
	I0918 21:04:27.522689   61659 main.go:141] libmachine: Using SSH client type: native
	I0918 21:04:27.522867   61659 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.109 22 <nil> <nil>}
	I0918 21:04:27.522885   61659 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-828868' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-828868/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-828868' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0918 21:04:27.636264   61659 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0918 21:04:27.636296   61659 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19667-7671/.minikube CaCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19667-7671/.minikube}
	I0918 21:04:27.636325   61659 buildroot.go:174] setting up certificates
	I0918 21:04:27.636335   61659 provision.go:84] configureAuth start
	I0918 21:04:27.636343   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetMachineName
	I0918 21:04:27.636629   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetIP
	I0918 21:04:27.639186   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:27.639622   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:39:06", ip: ""} in network mk-default-k8s-diff-port-828868: {Iface:virbr4 ExpiryTime:2024-09-18 22:04:19 +0000 UTC Type:0 Mac:52:54:00:c0:39:06 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:default-k8s-diff-port-828868 Clientid:01:52:54:00:c0:39:06}
	I0918 21:04:27.639646   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined IP address 192.168.50.109 and MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:27.639858   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHHostname
	I0918 21:04:27.642158   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:27.642421   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:39:06", ip: ""} in network mk-default-k8s-diff-port-828868: {Iface:virbr4 ExpiryTime:2024-09-18 22:04:19 +0000 UTC Type:0 Mac:52:54:00:c0:39:06 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:default-k8s-diff-port-828868 Clientid:01:52:54:00:c0:39:06}
	I0918 21:04:27.642448   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined IP address 192.168.50.109 and MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:27.642626   61659 provision.go:143] copyHostCerts
	I0918 21:04:27.642706   61659 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem, removing ...
	I0918 21:04:27.642869   61659 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem
	I0918 21:04:27.642966   61659 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem (1082 bytes)
	I0918 21:04:27.643099   61659 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem, removing ...
	I0918 21:04:27.643111   61659 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem
	I0918 21:04:27.643150   61659 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem (1123 bytes)
	I0918 21:04:27.643270   61659 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem, removing ...
	I0918 21:04:27.643280   61659 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem
	I0918 21:04:27.643320   61659 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem (1679 bytes)
	I0918 21:04:27.643387   61659 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-828868 san=[127.0.0.1 192.168.50.109 default-k8s-diff-port-828868 localhost minikube]
	I0918 21:04:27.693367   61659 provision.go:177] copyRemoteCerts
	I0918 21:04:27.693426   61659 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0918 21:04:27.693463   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHHostname
	I0918 21:04:27.696331   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:27.696661   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:39:06", ip: ""} in network mk-default-k8s-diff-port-828868: {Iface:virbr4 ExpiryTime:2024-09-18 22:04:19 +0000 UTC Type:0 Mac:52:54:00:c0:39:06 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:default-k8s-diff-port-828868 Clientid:01:52:54:00:c0:39:06}
	I0918 21:04:27.696693   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined IP address 192.168.50.109 and MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:27.696835   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHPort
	I0918 21:04:27.697028   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHKeyPath
	I0918 21:04:27.697212   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHUsername
	I0918 21:04:27.697317   61659 sshutil.go:53] new ssh client: &{IP:192.168.50.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/default-k8s-diff-port-828868/id_rsa Username:docker}
	I0918 21:04:27.777944   61659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0918 21:04:27.801476   61659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0918 21:04:27.825025   61659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0918 21:04:27.848244   61659 provision.go:87] duration metric: took 211.897185ms to configureAuth
	I0918 21:04:27.848274   61659 buildroot.go:189] setting minikube options for container-runtime
	I0918 21:04:27.848434   61659 config.go:182] Loaded profile config "default-k8s-diff-port-828868": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0918 21:04:27.848513   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHHostname
	I0918 21:04:27.851119   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:27.851472   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:39:06", ip: ""} in network mk-default-k8s-diff-port-828868: {Iface:virbr4 ExpiryTime:2024-09-18 22:04:19 +0000 UTC Type:0 Mac:52:54:00:c0:39:06 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:default-k8s-diff-port-828868 Clientid:01:52:54:00:c0:39:06}
	I0918 21:04:27.851509   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined IP address 192.168.50.109 and MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:27.851788   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHPort
	I0918 21:04:27.852007   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHKeyPath
	I0918 21:04:27.852216   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHKeyPath
	I0918 21:04:27.852420   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHUsername
	I0918 21:04:27.852670   61659 main.go:141] libmachine: Using SSH client type: native
	I0918 21:04:27.852852   61659 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.109 22 <nil> <nil>}
	I0918 21:04:27.852870   61659 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0918 21:04:28.072808   61659 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0918 21:04:28.072843   61659 machine.go:96] duration metric: took 788.346091ms to provisionDockerMachine
	I0918 21:04:28.072858   61659 start.go:293] postStartSetup for "default-k8s-diff-port-828868" (driver="kvm2")
	I0918 21:04:28.072874   61659 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0918 21:04:28.072898   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .DriverName
	I0918 21:04:28.073246   61659 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0918 21:04:28.073287   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHHostname
	I0918 21:04:28.075998   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:28.076389   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:39:06", ip: ""} in network mk-default-k8s-diff-port-828868: {Iface:virbr4 ExpiryTime:2024-09-18 22:04:19 +0000 UTC Type:0 Mac:52:54:00:c0:39:06 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:default-k8s-diff-port-828868 Clientid:01:52:54:00:c0:39:06}
	I0918 21:04:28.076416   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined IP address 192.168.50.109 and MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:28.076561   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHPort
	I0918 21:04:28.076780   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHKeyPath
	I0918 21:04:28.076939   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHUsername
	I0918 21:04:28.077063   61659 sshutil.go:53] new ssh client: &{IP:192.168.50.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/default-k8s-diff-port-828868/id_rsa Username:docker}
	I0918 21:04:28.158946   61659 ssh_runner.go:195] Run: cat /etc/os-release
	I0918 21:04:28.163200   61659 info.go:137] Remote host: Buildroot 2023.02.9
	I0918 21:04:28.163231   61659 filesync.go:126] Scanning /home/jenkins/minikube-integration/19667-7671/.minikube/addons for local assets ...
	I0918 21:04:28.163290   61659 filesync.go:126] Scanning /home/jenkins/minikube-integration/19667-7671/.minikube/files for local assets ...
	I0918 21:04:28.163368   61659 filesync.go:149] local asset: /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem -> 148782.pem in /etc/ssl/certs
	I0918 21:04:28.163464   61659 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0918 21:04:28.172987   61659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem --> /etc/ssl/certs/148782.pem (1708 bytes)
	I0918 21:04:28.198647   61659 start.go:296] duration metric: took 125.77566ms for postStartSetup
	I0918 21:04:28.198685   61659 fix.go:56] duration metric: took 19.150217303s for fixHost
	I0918 21:04:28.198704   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHHostname
	I0918 21:04:28.201549   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:28.201904   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:39:06", ip: ""} in network mk-default-k8s-diff-port-828868: {Iface:virbr4 ExpiryTime:2024-09-18 22:04:19 +0000 UTC Type:0 Mac:52:54:00:c0:39:06 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:default-k8s-diff-port-828868 Clientid:01:52:54:00:c0:39:06}
	I0918 21:04:28.201934   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined IP address 192.168.50.109 and MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:28.202093   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHPort
	I0918 21:04:28.202278   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHKeyPath
	I0918 21:04:28.202435   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHKeyPath
	I0918 21:04:28.202588   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHUsername
	I0918 21:04:28.202714   61659 main.go:141] libmachine: Using SSH client type: native
	I0918 21:04:28.202871   61659 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.109 22 <nil> <nil>}
	I0918 21:04:28.202879   61659 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0918 21:04:28.308752   61659 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726693468.285343658
	
	I0918 21:04:28.308778   61659 fix.go:216] guest clock: 1726693468.285343658
	I0918 21:04:28.308786   61659 fix.go:229] Guest: 2024-09-18 21:04:28.285343658 +0000 UTC Remote: 2024-09-18 21:04:28.198688962 +0000 UTC m=+256.035220061 (delta=86.654696ms)
	I0918 21:04:28.308821   61659 fix.go:200] guest clock delta is within tolerance: 86.654696ms
	I0918 21:04:28.308829   61659 start.go:83] releasing machines lock for "default-k8s-diff-port-828868", held for 19.260404228s
	I0918 21:04:28.308857   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .DriverName
	I0918 21:04:28.309175   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetIP
	I0918 21:04:28.312346   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:28.312725   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:39:06", ip: ""} in network mk-default-k8s-diff-port-828868: {Iface:virbr4 ExpiryTime:2024-09-18 22:04:19 +0000 UTC Type:0 Mac:52:54:00:c0:39:06 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:default-k8s-diff-port-828868 Clientid:01:52:54:00:c0:39:06}
	I0918 21:04:28.312753   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined IP address 192.168.50.109 and MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:28.312951   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .DriverName
	I0918 21:04:28.313506   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .DriverName
	I0918 21:04:28.313702   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .DriverName
	I0918 21:04:28.313792   61659 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0918 21:04:28.313849   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHHostname
	I0918 21:04:28.313966   61659 ssh_runner.go:195] Run: cat /version.json
	I0918 21:04:28.314001   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHHostname
	I0918 21:04:28.316698   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:28.316882   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:28.317016   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:39:06", ip: ""} in network mk-default-k8s-diff-port-828868: {Iface:virbr4 ExpiryTime:2024-09-18 22:04:19 +0000 UTC Type:0 Mac:52:54:00:c0:39:06 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:default-k8s-diff-port-828868 Clientid:01:52:54:00:c0:39:06}
	I0918 21:04:28.317038   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined IP address 192.168.50.109 and MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:28.317239   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHPort
	I0918 21:04:28.317357   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:39:06", ip: ""} in network mk-default-k8s-diff-port-828868: {Iface:virbr4 ExpiryTime:2024-09-18 22:04:19 +0000 UTC Type:0 Mac:52:54:00:c0:39:06 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:default-k8s-diff-port-828868 Clientid:01:52:54:00:c0:39:06}
	I0918 21:04:28.317408   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined IP address 192.168.50.109 and MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:28.317410   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHKeyPath
	I0918 21:04:28.317596   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHPort
	I0918 21:04:28.317598   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHUsername
	I0918 21:04:28.317743   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHKeyPath
	I0918 21:04:28.317783   61659 sshutil.go:53] new ssh client: &{IP:192.168.50.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/default-k8s-diff-port-828868/id_rsa Username:docker}
	I0918 21:04:28.317905   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHUsername
	I0918 21:04:28.318060   61659 sshutil.go:53] new ssh client: &{IP:192.168.50.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/default-k8s-diff-port-828868/id_rsa Username:docker}
	I0918 21:04:28.439960   61659 ssh_runner.go:195] Run: systemctl --version
	I0918 21:04:28.446111   61659 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0918 21:04:28.593574   61659 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0918 21:04:28.599542   61659 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0918 21:04:28.599623   61659 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0918 21:04:28.615775   61659 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0918 21:04:28.615802   61659 start.go:495] detecting cgroup driver to use...
	I0918 21:04:28.615965   61659 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0918 21:04:28.636924   61659 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0918 21:04:28.655681   61659 docker.go:217] disabling cri-docker service (if available) ...
	I0918 21:04:28.655775   61659 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0918 21:04:28.670090   61659 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0918 21:04:28.684780   61659 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0918 21:04:28.807355   61659 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0918 21:04:28.941753   61659 docker.go:233] disabling docker service ...
	I0918 21:04:28.941836   61659 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0918 21:04:28.956786   61659 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0918 21:04:28.970301   61659 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0918 21:04:29.119605   61659 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0918 21:04:29.245330   61659 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0918 21:04:29.259626   61659 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0918 21:04:29.278104   61659 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0918 21:04:29.278162   61659 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:04:29.288761   61659 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0918 21:04:29.288837   61659 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:04:29.299631   61659 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:04:29.310244   61659 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:04:29.321220   61659 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0918 21:04:29.332722   61659 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:04:29.343590   61659 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:04:29.366099   61659 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:04:29.381180   61659 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0918 21:04:29.394427   61659 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0918 21:04:29.394494   61659 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0918 21:04:29.410069   61659 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0918 21:04:29.421207   61659 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 21:04:29.543870   61659 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0918 21:04:29.642149   61659 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0918 21:04:29.642205   61659 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0918 21:04:29.647336   61659 start.go:563] Will wait 60s for crictl version
	I0918 21:04:29.647400   61659 ssh_runner.go:195] Run: which crictl
	I0918 21:04:29.651148   61659 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0918 21:04:29.690903   61659 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0918 21:04:29.690992   61659 ssh_runner.go:195] Run: crio --version
	I0918 21:04:29.717176   61659 ssh_runner.go:195] Run: crio --version
	I0918 21:04:29.747416   61659 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0918 21:04:29.748825   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetIP
	I0918 21:04:29.751828   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:29.752238   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:39:06", ip: ""} in network mk-default-k8s-diff-port-828868: {Iface:virbr4 ExpiryTime:2024-09-18 22:04:19 +0000 UTC Type:0 Mac:52:54:00:c0:39:06 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:default-k8s-diff-port-828868 Clientid:01:52:54:00:c0:39:06}
	I0918 21:04:29.752288   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined IP address 192.168.50.109 and MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:04:29.752533   61659 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0918 21:04:29.756672   61659 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0918 21:04:29.768691   61659 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-828868 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-828868 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.109 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0918 21:04:29.768822   61659 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0918 21:04:29.768867   61659 ssh_runner.go:195] Run: sudo crictl images --output json
	I0918 21:04:29.803885   61659 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0918 21:04:29.803964   61659 ssh_runner.go:195] Run: which lz4
	I0918 21:04:29.808051   61659 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0918 21:04:29.812324   61659 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0918 21:04:29.812363   61659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0918 21:04:31.172721   61659 crio.go:462] duration metric: took 1.364736071s to copy over tarball
	I0918 21:04:31.172837   61659 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0918 21:04:29.637411   61740 main.go:141] libmachine: (embed-certs-255556) Waiting to get IP...
	I0918 21:04:29.638427   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:29.638877   61740 main.go:141] libmachine: (embed-certs-255556) DBG | unable to find current IP address of domain embed-certs-255556 in network mk-embed-certs-255556
	I0918 21:04:29.638973   61740 main.go:141] libmachine: (embed-certs-255556) DBG | I0918 21:04:29.638868   62857 retry.go:31] will retry after 298.087525ms: waiting for machine to come up
	I0918 21:04:29.938543   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:29.938923   61740 main.go:141] libmachine: (embed-certs-255556) DBG | unable to find current IP address of domain embed-certs-255556 in network mk-embed-certs-255556
	I0918 21:04:29.938946   61740 main.go:141] libmachine: (embed-certs-255556) DBG | I0918 21:04:29.938889   62857 retry.go:31] will retry after 362.887862ms: waiting for machine to come up
	I0918 21:04:30.303379   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:30.303867   61740 main.go:141] libmachine: (embed-certs-255556) DBG | unable to find current IP address of domain embed-certs-255556 in network mk-embed-certs-255556
	I0918 21:04:30.303898   61740 main.go:141] libmachine: (embed-certs-255556) DBG | I0918 21:04:30.303820   62857 retry.go:31] will retry after 452.771021ms: waiting for machine to come up
	I0918 21:04:30.758353   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:30.758897   61740 main.go:141] libmachine: (embed-certs-255556) DBG | unable to find current IP address of domain embed-certs-255556 in network mk-embed-certs-255556
	I0918 21:04:30.758928   61740 main.go:141] libmachine: (embed-certs-255556) DBG | I0918 21:04:30.758856   62857 retry.go:31] will retry after 506.010985ms: waiting for machine to come up
	I0918 21:04:31.266443   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:31.266934   61740 main.go:141] libmachine: (embed-certs-255556) DBG | unable to find current IP address of domain embed-certs-255556 in network mk-embed-certs-255556
	I0918 21:04:31.266964   61740 main.go:141] libmachine: (embed-certs-255556) DBG | I0918 21:04:31.266893   62857 retry.go:31] will retry after 584.679329ms: waiting for machine to come up
	I0918 21:04:31.853811   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:31.854371   61740 main.go:141] libmachine: (embed-certs-255556) DBG | unable to find current IP address of domain embed-certs-255556 in network mk-embed-certs-255556
	I0918 21:04:31.854402   61740 main.go:141] libmachine: (embed-certs-255556) DBG | I0918 21:04:31.854309   62857 retry.go:31] will retry after 786.010743ms: waiting for machine to come up
	I0918 21:04:32.642494   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:32.643068   61740 main.go:141] libmachine: (embed-certs-255556) DBG | unable to find current IP address of domain embed-certs-255556 in network mk-embed-certs-255556
	I0918 21:04:32.643100   61740 main.go:141] libmachine: (embed-certs-255556) DBG | I0918 21:04:32.643013   62857 retry.go:31] will retry after 1.010762944s: waiting for machine to come up
	I0918 21:04:33.299563   61659 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.126697598s)
	I0918 21:04:33.299596   61659 crio.go:469] duration metric: took 2.126840983s to extract the tarball
	I0918 21:04:33.299602   61659 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0918 21:04:33.336428   61659 ssh_runner.go:195] Run: sudo crictl images --output json
	I0918 21:04:33.377303   61659 crio.go:514] all images are preloaded for cri-o runtime.
	I0918 21:04:33.377342   61659 cache_images.go:84] Images are preloaded, skipping loading
	I0918 21:04:33.377352   61659 kubeadm.go:934] updating node { 192.168.50.109 8444 v1.31.1 crio true true} ...
	I0918 21:04:33.377490   61659 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-828868 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.109
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-828868 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0918 21:04:33.377574   61659 ssh_runner.go:195] Run: crio config
	I0918 21:04:33.423773   61659 cni.go:84] Creating CNI manager for ""
	I0918 21:04:33.423800   61659 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0918 21:04:33.423816   61659 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0918 21:04:33.423835   61659 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.109 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-828868 NodeName:default-k8s-diff-port-828868 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.109"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.109 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0918 21:04:33.423976   61659 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.109
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-828868"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.109
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.109"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0918 21:04:33.424058   61659 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0918 21:04:33.434047   61659 binaries.go:44] Found k8s binaries, skipping transfer
	I0918 21:04:33.434119   61659 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0918 21:04:33.443535   61659 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0918 21:04:33.460116   61659 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0918 21:04:33.475883   61659 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0918 21:04:33.492311   61659 ssh_runner.go:195] Run: grep 192.168.50.109	control-plane.minikube.internal$ /etc/hosts
	I0918 21:04:33.495940   61659 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.109	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0918 21:04:33.507411   61659 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 21:04:33.625104   61659 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0918 21:04:33.641530   61659 certs.go:68] Setting up /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/default-k8s-diff-port-828868 for IP: 192.168.50.109
	I0918 21:04:33.641556   61659 certs.go:194] generating shared ca certs ...
	I0918 21:04:33.641572   61659 certs.go:226] acquiring lock for ca certs: {Name:mkf54e0e71ed884c4372f9d3d4cb308ea4600185 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 21:04:33.641757   61659 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.key
	I0918 21:04:33.641804   61659 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.key
	I0918 21:04:33.641822   61659 certs.go:256] generating profile certs ...
	I0918 21:04:33.641944   61659 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/default-k8s-diff-port-828868/client.key
	I0918 21:04:33.642036   61659 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/default-k8s-diff-port-828868/apiserver.key.df92be3a
	I0918 21:04:33.642087   61659 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/default-k8s-diff-port-828868/proxy-client.key
	I0918 21:04:33.642255   61659 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878.pem (1338 bytes)
	W0918 21:04:33.642297   61659 certs.go:480] ignoring /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878_empty.pem, impossibly tiny 0 bytes
	I0918 21:04:33.642306   61659 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem (1675 bytes)
	I0918 21:04:33.642337   61659 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem (1082 bytes)
	I0918 21:04:33.642370   61659 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem (1123 bytes)
	I0918 21:04:33.642404   61659 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem (1679 bytes)
	I0918 21:04:33.642454   61659 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem (1708 bytes)
	I0918 21:04:33.643116   61659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0918 21:04:33.682428   61659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0918 21:04:33.710444   61659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0918 21:04:33.759078   61659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0918 21:04:33.797727   61659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/default-k8s-diff-port-828868/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0918 21:04:33.821989   61659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/default-k8s-diff-port-828868/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0918 21:04:33.844210   61659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/default-k8s-diff-port-828868/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0918 21:04:33.866843   61659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/default-k8s-diff-port-828868/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0918 21:04:33.896125   61659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem --> /usr/share/ca-certificates/148782.pem (1708 bytes)
	I0918 21:04:33.918667   61659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0918 21:04:33.940790   61659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878.pem --> /usr/share/ca-certificates/14878.pem (1338 bytes)
	I0918 21:04:33.963660   61659 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0918 21:04:33.980348   61659 ssh_runner.go:195] Run: openssl version
	I0918 21:04:33.985856   61659 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148782.pem && ln -fs /usr/share/ca-certificates/148782.pem /etc/ssl/certs/148782.pem"
	I0918 21:04:33.996472   61659 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148782.pem
	I0918 21:04:34.000732   61659 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 18 19:57 /usr/share/ca-certificates/148782.pem
	I0918 21:04:34.000788   61659 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148782.pem
	I0918 21:04:34.006282   61659 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148782.pem /etc/ssl/certs/3ec20f2e.0"
	I0918 21:04:34.016612   61659 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0918 21:04:34.026689   61659 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0918 21:04:34.030650   61659 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 18 19:39 /usr/share/ca-certificates/minikubeCA.pem
	I0918 21:04:34.030705   61659 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0918 21:04:34.035940   61659 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0918 21:04:34.046516   61659 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14878.pem && ln -fs /usr/share/ca-certificates/14878.pem /etc/ssl/certs/14878.pem"
	I0918 21:04:34.056755   61659 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14878.pem
	I0918 21:04:34.061189   61659 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 18 19:57 /usr/share/ca-certificates/14878.pem
	I0918 21:04:34.061264   61659 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14878.pem
	I0918 21:04:34.066973   61659 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14878.pem /etc/ssl/certs/51391683.0"
	I0918 21:04:34.078781   61659 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0918 21:04:34.083129   61659 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0918 21:04:34.089249   61659 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0918 21:04:34.095211   61659 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0918 21:04:34.101350   61659 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0918 21:04:34.107269   61659 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0918 21:04:34.113177   61659 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
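The `openssl x509 ... -checkend 86400` probes above verify that each reused control-plane certificate will still be valid for at least another 24 hours before the cluster restart proceeds. A minimal Go sketch of the same check, using only the standard library (the file path and helper name are illustrative, not minikube's actual code):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// validFor reports whether the first certificate in the PEM file at path
// is still valid after the given duration (mirrors `openssl x509 -checkend`).
func validFor(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("valid for another 24h:", ok)
}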
	I0918 21:04:34.119005   61659 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-828868 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-828868 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.109 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 21:04:34.119093   61659 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0918 21:04:34.119147   61659 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0918 21:04:34.162792   61659 cri.go:89] found id: ""
	I0918 21:04:34.162895   61659 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0918 21:04:34.174325   61659 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0918 21:04:34.174358   61659 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0918 21:04:34.174420   61659 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0918 21:04:34.183708   61659 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0918 21:04:34.184680   61659 kubeconfig.go:125] found "default-k8s-diff-port-828868" server: "https://192.168.50.109:8444"
	I0918 21:04:34.186781   61659 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0918 21:04:34.195823   61659 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.109
	I0918 21:04:34.195856   61659 kubeadm.go:1160] stopping kube-system containers ...
	I0918 21:04:34.195866   61659 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0918 21:04:34.195907   61659 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0918 21:04:34.235799   61659 cri.go:89] found id: ""
	I0918 21:04:34.235882   61659 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0918 21:04:34.251412   61659 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0918 21:04:34.261361   61659 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0918 21:04:34.261390   61659 kubeadm.go:157] found existing configuration files:
	
	I0918 21:04:34.261435   61659 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0918 21:04:34.272201   61659 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0918 21:04:34.272272   61659 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0918 21:04:34.283030   61659 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0918 21:04:34.293227   61659 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0918 21:04:34.293321   61659 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0918 21:04:34.303749   61659 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0918 21:04:34.314027   61659 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0918 21:04:34.314116   61659 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0918 21:04:34.324585   61659 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0918 21:04:34.334524   61659 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0918 21:04:34.334594   61659 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0918 21:04:34.344923   61659 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0918 21:04:34.355422   61659 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:04:34.480395   61659 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:04:35.320827   61659 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:04:35.542013   61659 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:04:35.610886   61659 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:04:35.694501   61659 api_server.go:52] waiting for apiserver process to appear ...
	I0918 21:04:35.694610   61659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:04:36.195441   61659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:04:36.694978   61659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:04:37.195220   61659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:04:33.655864   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:33.656375   61740 main.go:141] libmachine: (embed-certs-255556) DBG | unable to find current IP address of domain embed-certs-255556 in network mk-embed-certs-255556
	I0918 21:04:33.656407   61740 main.go:141] libmachine: (embed-certs-255556) DBG | I0918 21:04:33.656347   62857 retry.go:31] will retry after 1.375317123s: waiting for machine to come up
	I0918 21:04:35.033882   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:35.034266   61740 main.go:141] libmachine: (embed-certs-255556) DBG | unable to find current IP address of domain embed-certs-255556 in network mk-embed-certs-255556
	I0918 21:04:35.034293   61740 main.go:141] libmachine: (embed-certs-255556) DBG | I0918 21:04:35.034232   62857 retry.go:31] will retry after 1.142237895s: waiting for machine to come up
	I0918 21:04:36.178371   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:36.178837   61740 main.go:141] libmachine: (embed-certs-255556) DBG | unable to find current IP address of domain embed-certs-255556 in network mk-embed-certs-255556
	I0918 21:04:36.178865   61740 main.go:141] libmachine: (embed-certs-255556) DBG | I0918 21:04:36.178804   62857 retry.go:31] will retry after 1.983853904s: waiting for machine to come up
	I0918 21:04:38.165113   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:38.165662   61740 main.go:141] libmachine: (embed-certs-255556) DBG | unable to find current IP address of domain embed-certs-255556 in network mk-embed-certs-255556
	I0918 21:04:38.165697   61740 main.go:141] libmachine: (embed-certs-255556) DBG | I0918 21:04:38.165601   62857 retry.go:31] will retry after 2.407286782s: waiting for machine to come up
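The retry.go lines above show libmachine polling for the VM's DHCP-assigned IP address, waiting a little longer after each failed attempt. A rough Go sketch of that retry-until-available pattern (the lookup callback is a placeholder, not the libmachine API):

package main

import (
	"errors"
	"fmt"
	"time"
)

// waitFor keeps calling lookup with a growing delay between attempts until
// it succeeds or the attempt budget is exhausted.
func waitFor(attempts int, initial time.Duration, lookup func() (string, error)) (string, error) {
	delay := initial
	for i := 0; i < attempts; i++ {
		if ip, err := lookup(); err == nil {
			return ip, nil
		}
		time.Sleep(delay)
		delay += delay / 2 // back off a bit more each round
	}
	return "", errors.New("machine never reported an IP address")
}

func main() {
	ip, err := waitFor(10, time.Second, func() (string, error) {
		// placeholder: minikube would query the libvirt DHCP leases here
		return "", errors.New("no lease yet")
	})
	fmt.Println(ip, err)
}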
	I0918 21:04:37.694916   61659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:04:37.713724   61659 api_server.go:72] duration metric: took 2.019221095s to wait for apiserver process to appear ...
	I0918 21:04:37.713756   61659 api_server.go:88] waiting for apiserver healthz status ...
	I0918 21:04:37.713782   61659 api_server.go:253] Checking apiserver healthz at https://192.168.50.109:8444/healthz ...
	I0918 21:04:37.714297   61659 api_server.go:269] stopped: https://192.168.50.109:8444/healthz: Get "https://192.168.50.109:8444/healthz": dial tcp 192.168.50.109:8444: connect: connection refused
	I0918 21:04:38.213883   61659 api_server.go:253] Checking apiserver healthz at https://192.168.50.109:8444/healthz ...
	I0918 21:04:40.396513   61659 api_server.go:279] https://192.168.50.109:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0918 21:04:40.396564   61659 api_server.go:103] status: https://192.168.50.109:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0918 21:04:40.396584   61659 api_server.go:253] Checking apiserver healthz at https://192.168.50.109:8444/healthz ...
	I0918 21:04:40.409718   61659 api_server.go:279] https://192.168.50.109:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0918 21:04:40.409750   61659 api_server.go:103] status: https://192.168.50.109:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0918 21:04:40.714176   61659 api_server.go:253] Checking apiserver healthz at https://192.168.50.109:8444/healthz ...
	I0918 21:04:40.719353   61659 api_server.go:279] https://192.168.50.109:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0918 21:04:40.719391   61659 api_server.go:103] status: https://192.168.50.109:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0918 21:04:41.214596   61659 api_server.go:253] Checking apiserver healthz at https://192.168.50.109:8444/healthz ...
	I0918 21:04:41.219579   61659 api_server.go:279] https://192.168.50.109:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0918 21:04:41.219608   61659 api_server.go:103] status: https://192.168.50.109:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0918 21:04:41.713951   61659 api_server.go:253] Checking apiserver healthz at https://192.168.50.109:8444/healthz ...
	I0918 21:04:41.719212   61659 api_server.go:279] https://192.168.50.109:8444/healthz returned 200:
	ok
	I0918 21:04:41.726647   61659 api_server.go:141] control plane version: v1.31.1
	I0918 21:04:41.726679   61659 api_server.go:131] duration metric: took 4.012914861s to wait for apiserver health ...
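The api_server.go lines above poll the apiserver's /healthz endpoint, treating the 403 and 500 responses as "not ready yet" and stopping once a 200 comes back. A simplified Go sketch of that polling loop (the URL and rough timeout come from the log; the helper itself is illustrative, not minikube's implementation):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the deadline passes.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// the apiserver's serving cert is not trusted by the host, so skip verification
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s did not become healthy within %s", url, timeout)
}

func main() {
	fmt.Println(waitForHealthz("https://192.168.50.109:8444/healthz", 2*time.Minute))
}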
	I0918 21:04:41.726689   61659 cni.go:84] Creating CNI manager for ""
	I0918 21:04:41.726707   61659 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0918 21:04:41.728312   61659 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0918 21:04:41.729613   61659 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0918 21:04:41.741932   61659 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0918 21:04:41.763195   61659 system_pods.go:43] waiting for kube-system pods to appear ...
	I0918 21:04:41.775167   61659 system_pods.go:59] 8 kube-system pods found
	I0918 21:04:41.775210   61659 system_pods.go:61] "coredns-7c65d6cfc9-xzjd7" [bd8252df-707c-41e6-84b7-cc74480177a8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0918 21:04:41.775219   61659 system_pods.go:61] "etcd-default-k8s-diff-port-828868" [aa8e221d-abba-48a5-8814-246df0776408] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0918 21:04:41.775227   61659 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-828868" [b44966ac-3478-40c4-b67f-1824bff2bec7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0918 21:04:41.775233   61659 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-828868" [7af8fbad-3aa2-497e-90df-33facaee6b17] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0918 21:04:41.775239   61659 system_pods.go:61] "kube-proxy-jz7ls" [f931ae9a-0b9c-4754-8b7b-d52c267b018c] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0918 21:04:41.775247   61659 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-828868" [ee89c713-c689-4de3-b1a5-4e08470ff6fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0918 21:04:41.775252   61659 system_pods.go:61] "metrics-server-6867b74b74-cqp47" [1ccf8c85-183a-4bea-abbc-eb7bcedca7f0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0918 21:04:41.775257   61659 system_pods.go:61] "storage-provisioner" [9744cbfa-6b9a-42f0-aa80-0821b87a33d4] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0918 21:04:41.775270   61659 system_pods.go:74] duration metric: took 12.058758ms to wait for pod list to return data ...
	I0918 21:04:41.775280   61659 node_conditions.go:102] verifying NodePressure condition ...
	I0918 21:04:41.779525   61659 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0918 21:04:41.779559   61659 node_conditions.go:123] node cpu capacity is 2
	I0918 21:04:41.779580   61659 node_conditions.go:105] duration metric: took 4.292138ms to run NodePressure ...
	I0918 21:04:41.779615   61659 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:04:42.079279   61659 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0918 21:04:42.084311   61659 kubeadm.go:739] kubelet initialised
	I0918 21:04:42.084338   61659 kubeadm.go:740] duration metric: took 5.024999ms waiting for restarted kubelet to initialise ...
	I0918 21:04:42.084351   61659 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0918 21:04:42.089113   61659 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-xzjd7" in "kube-system" namespace to be "Ready" ...
	I0918 21:04:42.095539   61659 pod_ready.go:98] node "default-k8s-diff-port-828868" hosting pod "coredns-7c65d6cfc9-xzjd7" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-828868" has status "Ready":"False"
	I0918 21:04:42.095565   61659 pod_ready.go:82] duration metric: took 6.405251ms for pod "coredns-7c65d6cfc9-xzjd7" in "kube-system" namespace to be "Ready" ...
	E0918 21:04:42.095575   61659 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-828868" hosting pod "coredns-7c65d6cfc9-xzjd7" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-828868" has status "Ready":"False"
	I0918 21:04:42.095581   61659 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-828868" in "kube-system" namespace to be "Ready" ...
	I0918 21:04:42.100447   61659 pod_ready.go:98] node "default-k8s-diff-port-828868" hosting pod "etcd-default-k8s-diff-port-828868" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-828868" has status "Ready":"False"
	I0918 21:04:42.100469   61659 pod_ready.go:82] duration metric: took 4.879955ms for pod "etcd-default-k8s-diff-port-828868" in "kube-system" namespace to be "Ready" ...
	E0918 21:04:42.100480   61659 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-828868" hosting pod "etcd-default-k8s-diff-port-828868" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-828868" has status "Ready":"False"
	I0918 21:04:42.100485   61659 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-828868" in "kube-system" namespace to be "Ready" ...
	I0918 21:04:42.104889   61659 pod_ready.go:98] node "default-k8s-diff-port-828868" hosting pod "kube-apiserver-default-k8s-diff-port-828868" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-828868" has status "Ready":"False"
	I0918 21:04:42.104914   61659 pod_ready.go:82] duration metric: took 4.421708ms for pod "kube-apiserver-default-k8s-diff-port-828868" in "kube-system" namespace to be "Ready" ...
	E0918 21:04:42.104926   61659 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-828868" hosting pod "kube-apiserver-default-k8s-diff-port-828868" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-828868" has status "Ready":"False"
	I0918 21:04:42.104934   61659 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-828868" in "kube-system" namespace to be "Ready" ...
	I0918 21:04:40.574813   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:40.575265   61740 main.go:141] libmachine: (embed-certs-255556) DBG | unable to find current IP address of domain embed-certs-255556 in network mk-embed-certs-255556
	I0918 21:04:40.575295   61740 main.go:141] libmachine: (embed-certs-255556) DBG | I0918 21:04:40.575215   62857 retry.go:31] will retry after 2.249084169s: waiting for machine to come up
	I0918 21:04:42.827547   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:42.827966   61740 main.go:141] libmachine: (embed-certs-255556) DBG | unable to find current IP address of domain embed-certs-255556 in network mk-embed-certs-255556
	I0918 21:04:42.828028   61740 main.go:141] libmachine: (embed-certs-255556) DBG | I0918 21:04:42.827923   62857 retry.go:31] will retry after 4.512161859s: waiting for machine to come up
	I0918 21:04:44.113739   61659 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-828868" in "kube-system" namespace has status "Ready":"False"
	I0918 21:04:46.611013   61659 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-828868" in "kube-system" namespace has status "Ready":"False"
	I0918 21:04:47.345046   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:47.345426   61740 main.go:141] libmachine: (embed-certs-255556) Found IP for machine: 192.168.39.21
	I0918 21:04:47.345444   61740 main.go:141] libmachine: (embed-certs-255556) Reserving static IP address...
	I0918 21:04:47.345457   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has current primary IP address 192.168.39.21 and MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:47.345824   61740 main.go:141] libmachine: (embed-certs-255556) DBG | found host DHCP lease matching {name: "embed-certs-255556", mac: "52:54:00:e8:c2:b7", ip: "192.168.39.21"} in network mk-embed-certs-255556: {Iface:virbr1 ExpiryTime:2024-09-18 22:04:39 +0000 UTC Type:0 Mac:52:54:00:e8:c2:b7 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:embed-certs-255556 Clientid:01:52:54:00:e8:c2:b7}
	I0918 21:04:47.345846   61740 main.go:141] libmachine: (embed-certs-255556) DBG | skip adding static IP to network mk-embed-certs-255556 - found existing host DHCP lease matching {name: "embed-certs-255556", mac: "52:54:00:e8:c2:b7", ip: "192.168.39.21"}
	I0918 21:04:47.345856   61740 main.go:141] libmachine: (embed-certs-255556) Reserved static IP address: 192.168.39.21
	I0918 21:04:47.345866   61740 main.go:141] libmachine: (embed-certs-255556) Waiting for SSH to be available...
	I0918 21:04:47.345874   61740 main.go:141] libmachine: (embed-certs-255556) DBG | Getting to WaitForSSH function...
	I0918 21:04:47.347972   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:47.348327   61740 main.go:141] libmachine: (embed-certs-255556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:c2:b7", ip: ""} in network mk-embed-certs-255556: {Iface:virbr1 ExpiryTime:2024-09-18 22:04:39 +0000 UTC Type:0 Mac:52:54:00:e8:c2:b7 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:embed-certs-255556 Clientid:01:52:54:00:e8:c2:b7}
	I0918 21:04:47.348367   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined IP address 192.168.39.21 and MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:47.348437   61740 main.go:141] libmachine: (embed-certs-255556) DBG | Using SSH client type: external
	I0918 21:04:47.348469   61740 main.go:141] libmachine: (embed-certs-255556) DBG | Using SSH private key: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/embed-certs-255556/id_rsa (-rw-------)
	I0918 21:04:47.348511   61740 main.go:141] libmachine: (embed-certs-255556) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.21 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19667-7671/.minikube/machines/embed-certs-255556/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0918 21:04:47.348526   61740 main.go:141] libmachine: (embed-certs-255556) DBG | About to run SSH command:
	I0918 21:04:47.348554   61740 main.go:141] libmachine: (embed-certs-255556) DBG | exit 0
	I0918 21:04:47.476457   61740 main.go:141] libmachine: (embed-certs-255556) DBG | SSH cmd err, output: <nil>: 
	I0918 21:04:47.476858   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetConfigRaw
	I0918 21:04:47.477533   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetIP
	I0918 21:04:47.480221   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:47.480601   61740 main.go:141] libmachine: (embed-certs-255556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:c2:b7", ip: ""} in network mk-embed-certs-255556: {Iface:virbr1 ExpiryTime:2024-09-18 22:04:39 +0000 UTC Type:0 Mac:52:54:00:e8:c2:b7 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:embed-certs-255556 Clientid:01:52:54:00:e8:c2:b7}
	I0918 21:04:47.480644   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined IP address 192.168.39.21 and MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:47.480966   61740 profile.go:143] Saving config to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/embed-certs-255556/config.json ...
	I0918 21:04:47.481172   61740 machine.go:93] provisionDockerMachine start ...
	I0918 21:04:47.481189   61740 main.go:141] libmachine: (embed-certs-255556) Calling .DriverName
	I0918 21:04:47.481440   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHHostname
	I0918 21:04:47.483916   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:47.484299   61740 main.go:141] libmachine: (embed-certs-255556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:c2:b7", ip: ""} in network mk-embed-certs-255556: {Iface:virbr1 ExpiryTime:2024-09-18 22:04:39 +0000 UTC Type:0 Mac:52:54:00:e8:c2:b7 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:embed-certs-255556 Clientid:01:52:54:00:e8:c2:b7}
	I0918 21:04:47.484328   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined IP address 192.168.39.21 and MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:47.484467   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHPort
	I0918 21:04:47.484703   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHKeyPath
	I0918 21:04:47.484898   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHKeyPath
	I0918 21:04:47.485043   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHUsername
	I0918 21:04:47.485185   61740 main.go:141] libmachine: Using SSH client type: native
	I0918 21:04:47.485386   61740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.21 22 <nil> <nil>}
	I0918 21:04:47.485399   61740 main.go:141] libmachine: About to run SSH command:
	hostname
	I0918 21:04:47.596243   61740 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0918 21:04:47.596272   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetMachineName
	I0918 21:04:47.596531   61740 buildroot.go:166] provisioning hostname "embed-certs-255556"
	I0918 21:04:47.596560   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetMachineName
	I0918 21:04:47.596775   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHHostname
	I0918 21:04:47.599159   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:47.599508   61740 main.go:141] libmachine: (embed-certs-255556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:c2:b7", ip: ""} in network mk-embed-certs-255556: {Iface:virbr1 ExpiryTime:2024-09-18 22:04:39 +0000 UTC Type:0 Mac:52:54:00:e8:c2:b7 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:embed-certs-255556 Clientid:01:52:54:00:e8:c2:b7}
	I0918 21:04:47.599532   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined IP address 192.168.39.21 and MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:47.599706   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHPort
	I0918 21:04:47.599888   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHKeyPath
	I0918 21:04:47.600090   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHKeyPath
	I0918 21:04:47.600229   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHUsername
	I0918 21:04:47.600406   61740 main.go:141] libmachine: Using SSH client type: native
	I0918 21:04:47.600589   61740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.21 22 <nil> <nil>}
	I0918 21:04:47.600602   61740 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-255556 && echo "embed-certs-255556" | sudo tee /etc/hostname
	I0918 21:04:47.726173   61740 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-255556
	
	I0918 21:04:47.726213   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHHostname
	I0918 21:04:47.729209   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:47.729575   61740 main.go:141] libmachine: (embed-certs-255556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:c2:b7", ip: ""} in network mk-embed-certs-255556: {Iface:virbr1 ExpiryTime:2024-09-18 22:04:39 +0000 UTC Type:0 Mac:52:54:00:e8:c2:b7 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:embed-certs-255556 Clientid:01:52:54:00:e8:c2:b7}
	I0918 21:04:47.729609   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined IP address 192.168.39.21 and MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:47.729775   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHPort
	I0918 21:04:47.729952   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHKeyPath
	I0918 21:04:47.730212   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHKeyPath
	I0918 21:04:47.730386   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHUsername
	I0918 21:04:47.730583   61740 main.go:141] libmachine: Using SSH client type: native
	I0918 21:04:47.730755   61740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.21 22 <nil> <nil>}
	I0918 21:04:47.730771   61740 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-255556' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-255556/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-255556' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0918 21:04:47.849894   61740 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0918 21:04:47.849928   61740 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19667-7671/.minikube CaCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19667-7671/.minikube}
	I0918 21:04:47.849954   61740 buildroot.go:174] setting up certificates
	I0918 21:04:47.849961   61740 provision.go:84] configureAuth start
	I0918 21:04:47.849971   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetMachineName
	I0918 21:04:47.850307   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetIP
	I0918 21:04:47.852989   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:47.853389   61740 main.go:141] libmachine: (embed-certs-255556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:c2:b7", ip: ""} in network mk-embed-certs-255556: {Iface:virbr1 ExpiryTime:2024-09-18 22:04:39 +0000 UTC Type:0 Mac:52:54:00:e8:c2:b7 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:embed-certs-255556 Clientid:01:52:54:00:e8:c2:b7}
	I0918 21:04:47.853423   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined IP address 192.168.39.21 and MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:47.853555   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHHostname
	I0918 21:04:47.856032   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:47.856389   61740 main.go:141] libmachine: (embed-certs-255556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:c2:b7", ip: ""} in network mk-embed-certs-255556: {Iface:virbr1 ExpiryTime:2024-09-18 22:04:39 +0000 UTC Type:0 Mac:52:54:00:e8:c2:b7 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:embed-certs-255556 Clientid:01:52:54:00:e8:c2:b7}
	I0918 21:04:47.856410   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined IP address 192.168.39.21 and MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:47.856556   61740 provision.go:143] copyHostCerts
	I0918 21:04:47.856617   61740 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem, removing ...
	I0918 21:04:47.856627   61740 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem
	I0918 21:04:47.856686   61740 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem (1082 bytes)
	I0918 21:04:47.856778   61740 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem, removing ...
	I0918 21:04:47.856786   61740 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem
	I0918 21:04:47.856805   61740 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem (1123 bytes)
	I0918 21:04:47.856855   61740 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem, removing ...
	I0918 21:04:47.856862   61740 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem
	I0918 21:04:47.856881   61740 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem (1679 bytes)
	I0918 21:04:47.856929   61740 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem org=jenkins.embed-certs-255556 san=[127.0.0.1 192.168.39.21 embed-certs-255556 localhost minikube]
	I0918 21:04:48.145689   61740 provision.go:177] copyRemoteCerts
	I0918 21:04:48.145750   61740 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0918 21:04:48.145779   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHHostname
	I0918 21:04:48.148420   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:48.148785   61740 main.go:141] libmachine: (embed-certs-255556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:c2:b7", ip: ""} in network mk-embed-certs-255556: {Iface:virbr1 ExpiryTime:2024-09-18 22:04:39 +0000 UTC Type:0 Mac:52:54:00:e8:c2:b7 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:embed-certs-255556 Clientid:01:52:54:00:e8:c2:b7}
	I0918 21:04:48.148812   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined IP address 192.168.39.21 and MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:48.148983   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHPort
	I0918 21:04:48.149194   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHKeyPath
	I0918 21:04:48.149371   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHUsername
	I0918 21:04:48.149486   61740 sshutil.go:53] new ssh client: &{IP:192.168.39.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/embed-certs-255556/id_rsa Username:docker}
	I0918 21:04:48.234451   61740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0918 21:04:48.260660   61740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0918 21:04:48.283305   61740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0918 21:04:48.305919   61740 provision.go:87] duration metric: took 455.946794ms to configureAuth
	I0918 21:04:48.305954   61740 buildroot.go:189] setting minikube options for container-runtime
	I0918 21:04:48.306183   61740 config.go:182] Loaded profile config "embed-certs-255556": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0918 21:04:48.306284   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHHostname
	I0918 21:04:48.308853   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:48.309319   61740 main.go:141] libmachine: (embed-certs-255556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:c2:b7", ip: ""} in network mk-embed-certs-255556: {Iface:virbr1 ExpiryTime:2024-09-18 22:04:39 +0000 UTC Type:0 Mac:52:54:00:e8:c2:b7 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:embed-certs-255556 Clientid:01:52:54:00:e8:c2:b7}
	I0918 21:04:48.309359   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined IP address 192.168.39.21 and MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:48.309488   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHPort
	I0918 21:04:48.309706   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHKeyPath
	I0918 21:04:48.309860   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHKeyPath
	I0918 21:04:48.309976   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHUsername
	I0918 21:04:48.310134   61740 main.go:141] libmachine: Using SSH client type: native
	I0918 21:04:48.310349   61740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.21 22 <nil> <nil>}
	I0918 21:04:48.310372   61740 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0918 21:04:48.782438   62061 start.go:364] duration metric: took 3m49.184727821s to acquireMachinesLock for "old-k8s-version-740194"
	I0918 21:04:48.782503   62061 start.go:96] Skipping create...Using existing machine configuration
	I0918 21:04:48.782514   62061 fix.go:54] fixHost starting: 
	I0918 21:04:48.782993   62061 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:04:48.783053   62061 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:04:48.802299   62061 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44463
	I0918 21:04:48.802787   62061 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:04:48.803286   62061 main.go:141] libmachine: Using API Version  1
	I0918 21:04:48.803313   62061 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:04:48.803681   62061 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:04:48.803873   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .DriverName
	I0918 21:04:48.804007   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetState
	I0918 21:04:48.805714   62061 fix.go:112] recreateIfNeeded on old-k8s-version-740194: state=Stopped err=<nil>
	I0918 21:04:48.805744   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .DriverName
	W0918 21:04:48.805910   62061 fix.go:138] unexpected machine state, will restart: <nil>
	I0918 21:04:48.835402   62061 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-740194" ...
	I0918 21:04:48.836753   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .Start
	I0918 21:04:48.837090   62061 main.go:141] libmachine: (old-k8s-version-740194) Ensuring networks are active...
	I0918 21:04:48.838014   62061 main.go:141] libmachine: (old-k8s-version-740194) Ensuring network default is active
	I0918 21:04:48.838375   62061 main.go:141] libmachine: (old-k8s-version-740194) Ensuring network mk-old-k8s-version-740194 is active
	I0918 21:04:48.839035   62061 main.go:141] libmachine: (old-k8s-version-740194) Getting domain xml...
	I0918 21:04:48.839832   62061 main.go:141] libmachine: (old-k8s-version-740194) Creating domain...
	I0918 21:04:48.532928   61740 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0918 21:04:48.532952   61740 machine.go:96] duration metric: took 1.051769616s to provisionDockerMachine
	I0918 21:04:48.532962   61740 start.go:293] postStartSetup for "embed-certs-255556" (driver="kvm2")
	I0918 21:04:48.532973   61740 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0918 21:04:48.532991   61740 main.go:141] libmachine: (embed-certs-255556) Calling .DriverName
	I0918 21:04:48.533310   61740 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0918 21:04:48.533342   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHHostname
	I0918 21:04:48.536039   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:48.536529   61740 main.go:141] libmachine: (embed-certs-255556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:c2:b7", ip: ""} in network mk-embed-certs-255556: {Iface:virbr1 ExpiryTime:2024-09-18 22:04:39 +0000 UTC Type:0 Mac:52:54:00:e8:c2:b7 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:embed-certs-255556 Clientid:01:52:54:00:e8:c2:b7}
	I0918 21:04:48.536558   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined IP address 192.168.39.21 and MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:48.536631   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHPort
	I0918 21:04:48.536806   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHKeyPath
	I0918 21:04:48.536971   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHUsername
	I0918 21:04:48.537148   61740 sshutil.go:53] new ssh client: &{IP:192.168.39.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/embed-certs-255556/id_rsa Username:docker}
	I0918 21:04:48.623154   61740 ssh_runner.go:195] Run: cat /etc/os-release
	I0918 21:04:48.627520   61740 info.go:137] Remote host: Buildroot 2023.02.9
	I0918 21:04:48.627544   61740 filesync.go:126] Scanning /home/jenkins/minikube-integration/19667-7671/.minikube/addons for local assets ...
	I0918 21:04:48.627617   61740 filesync.go:126] Scanning /home/jenkins/minikube-integration/19667-7671/.minikube/files for local assets ...
	I0918 21:04:48.627711   61740 filesync.go:149] local asset: /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem -> 148782.pem in /etc/ssl/certs
	I0918 21:04:48.627827   61740 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0918 21:04:48.637145   61740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem --> /etc/ssl/certs/148782.pem (1708 bytes)
	I0918 21:04:48.661971   61740 start.go:296] duration metric: took 128.997987ms for postStartSetup
	I0918 21:04:48.662012   61740 fix.go:56] duration metric: took 20.352947161s for fixHost
	I0918 21:04:48.662034   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHHostname
	I0918 21:04:48.665153   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:48.665637   61740 main.go:141] libmachine: (embed-certs-255556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:c2:b7", ip: ""} in network mk-embed-certs-255556: {Iface:virbr1 ExpiryTime:2024-09-18 22:04:39 +0000 UTC Type:0 Mac:52:54:00:e8:c2:b7 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:embed-certs-255556 Clientid:01:52:54:00:e8:c2:b7}
	I0918 21:04:48.665668   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined IP address 192.168.39.21 and MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:48.665853   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHPort
	I0918 21:04:48.666090   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHKeyPath
	I0918 21:04:48.666289   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHKeyPath
	I0918 21:04:48.666607   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHUsername
	I0918 21:04:48.666784   61740 main.go:141] libmachine: Using SSH client type: native
	I0918 21:04:48.667024   61740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.21 22 <nil> <nil>}
	I0918 21:04:48.667040   61740 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0918 21:04:48.782245   61740 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726693488.758182538
	
	I0918 21:04:48.782286   61740 fix.go:216] guest clock: 1726693488.758182538
	I0918 21:04:48.782297   61740 fix.go:229] Guest: 2024-09-18 21:04:48.758182538 +0000 UTC Remote: 2024-09-18 21:04:48.662016609 +0000 UTC m=+270.354724953 (delta=96.165929ms)
	I0918 21:04:48.782322   61740 fix.go:200] guest clock delta is within tolerance: 96.165929ms
	I0918 21:04:48.782329   61740 start.go:83] releasing machines lock for "embed-certs-255556", held for 20.47331123s
	I0918 21:04:48.782358   61740 main.go:141] libmachine: (embed-certs-255556) Calling .DriverName
	I0918 21:04:48.782655   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetIP
	I0918 21:04:48.785572   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:48.785959   61740 main.go:141] libmachine: (embed-certs-255556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:c2:b7", ip: ""} in network mk-embed-certs-255556: {Iface:virbr1 ExpiryTime:2024-09-18 22:04:39 +0000 UTC Type:0 Mac:52:54:00:e8:c2:b7 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:embed-certs-255556 Clientid:01:52:54:00:e8:c2:b7}
	I0918 21:04:48.785988   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined IP address 192.168.39.21 and MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:48.786181   61740 main.go:141] libmachine: (embed-certs-255556) Calling .DriverName
	I0918 21:04:48.786653   61740 main.go:141] libmachine: (embed-certs-255556) Calling .DriverName
	I0918 21:04:48.786859   61740 main.go:141] libmachine: (embed-certs-255556) Calling .DriverName
	I0918 21:04:48.787019   61740 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0918 21:04:48.787083   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHHostname
	I0918 21:04:48.787118   61740 ssh_runner.go:195] Run: cat /version.json
	I0918 21:04:48.787142   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHHostname
	I0918 21:04:48.789834   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:48.790239   61740 main.go:141] libmachine: (embed-certs-255556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:c2:b7", ip: ""} in network mk-embed-certs-255556: {Iface:virbr1 ExpiryTime:2024-09-18 22:04:39 +0000 UTC Type:0 Mac:52:54:00:e8:c2:b7 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:embed-certs-255556 Clientid:01:52:54:00:e8:c2:b7}
	I0918 21:04:48.790290   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined IP address 192.168.39.21 and MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:48.790326   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:48.790625   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHPort
	I0918 21:04:48.790805   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHKeyPath
	I0918 21:04:48.790828   61740 main.go:141] libmachine: (embed-certs-255556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:c2:b7", ip: ""} in network mk-embed-certs-255556: {Iface:virbr1 ExpiryTime:2024-09-18 22:04:39 +0000 UTC Type:0 Mac:52:54:00:e8:c2:b7 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:embed-certs-255556 Clientid:01:52:54:00:e8:c2:b7}
	I0918 21:04:48.790860   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined IP address 192.168.39.21 and MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:48.791012   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHUsername
	I0918 21:04:48.791035   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHPort
	I0918 21:04:48.791172   61740 sshutil.go:53] new ssh client: &{IP:192.168.39.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/embed-certs-255556/id_rsa Username:docker}
	I0918 21:04:48.791251   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHKeyPath
	I0918 21:04:48.791406   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHUsername
	I0918 21:04:48.791537   61740 sshutil.go:53] new ssh client: &{IP:192.168.39.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/embed-certs-255556/id_rsa Username:docker}
	I0918 21:04:48.911282   61740 ssh_runner.go:195] Run: systemctl --version
	I0918 21:04:48.917459   61740 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0918 21:04:49.062272   61740 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0918 21:04:49.068629   61740 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0918 21:04:49.068709   61740 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0918 21:04:49.085575   61740 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0918 21:04:49.085607   61740 start.go:495] detecting cgroup driver to use...
	I0918 21:04:49.085677   61740 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0918 21:04:49.102455   61740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0918 21:04:49.117869   61740 docker.go:217] disabling cri-docker service (if available) ...
	I0918 21:04:49.117958   61740 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0918 21:04:49.135361   61740 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0918 21:04:49.150861   61740 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0918 21:04:49.285901   61740 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0918 21:04:49.438312   61740 docker.go:233] disabling docker service ...
	I0918 21:04:49.438390   61740 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0918 21:04:49.454560   61740 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0918 21:04:49.471109   61740 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0918 21:04:49.631711   61740 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0918 21:04:49.760860   61740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0918 21:04:49.778574   61740 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0918 21:04:49.797293   61740 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0918 21:04:49.797365   61740 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:04:49.808796   61740 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0918 21:04:49.808872   61740 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:04:49.821451   61740 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:04:49.834678   61740 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:04:49.847521   61740 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0918 21:04:49.860918   61740 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:04:49.873942   61740 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:04:49.892983   61740 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:04:49.904925   61740 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0918 21:04:49.916195   61740 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0918 21:04:49.916310   61740 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0918 21:04:49.931084   61740 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0918 21:04:49.942692   61740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 21:04:50.065013   61740 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0918 21:04:50.168347   61740 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0918 21:04:50.168440   61740 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0918 21:04:50.174948   61740 start.go:563] Will wait 60s for crictl version
	I0918 21:04:50.175017   61740 ssh_runner.go:195] Run: which crictl
	I0918 21:04:50.180139   61740 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0918 21:04:50.221578   61740 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0918 21:04:50.221687   61740 ssh_runner.go:195] Run: crio --version
	I0918 21:04:50.251587   61740 ssh_runner.go:195] Run: crio --version
	I0918 21:04:50.282931   61740 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0918 21:04:48.112865   61659 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-828868" in "kube-system" namespace has status "Ready":"True"
	I0918 21:04:48.112895   61659 pod_ready.go:82] duration metric: took 6.007950768s for pod "kube-controller-manager-default-k8s-diff-port-828868" in "kube-system" namespace to be "Ready" ...
	I0918 21:04:48.112909   61659 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-jz7ls" in "kube-system" namespace to be "Ready" ...
	I0918 21:04:48.118606   61659 pod_ready.go:93] pod "kube-proxy-jz7ls" in "kube-system" namespace has status "Ready":"True"
	I0918 21:04:48.118628   61659 pod_ready.go:82] duration metric: took 5.710918ms for pod "kube-proxy-jz7ls" in "kube-system" namespace to be "Ready" ...
	I0918 21:04:48.118647   61659 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-828868" in "kube-system" namespace to be "Ready" ...
	I0918 21:04:49.626081   61659 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-828868" in "kube-system" namespace has status "Ready":"True"
	I0918 21:04:49.626116   61659 pod_ready.go:82] duration metric: took 1.507459822s for pod "kube-scheduler-default-k8s-diff-port-828868" in "kube-system" namespace to be "Ready" ...
	I0918 21:04:49.626130   61659 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace to be "Ready" ...
	I0918 21:04:51.635306   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:04:50.284258   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetIP
	I0918 21:04:50.287321   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:50.287754   61740 main.go:141] libmachine: (embed-certs-255556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:c2:b7", ip: ""} in network mk-embed-certs-255556: {Iface:virbr1 ExpiryTime:2024-09-18 22:04:39 +0000 UTC Type:0 Mac:52:54:00:e8:c2:b7 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:embed-certs-255556 Clientid:01:52:54:00:e8:c2:b7}
	I0918 21:04:50.287782   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined IP address 192.168.39.21 and MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:04:50.288116   61740 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0918 21:04:50.292221   61740 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0918 21:04:50.304472   61740 kubeadm.go:883] updating cluster {Name:embed-certs-255556 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-255556 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.21 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0918 21:04:50.304604   61740 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0918 21:04:50.304675   61740 ssh_runner.go:195] Run: sudo crictl images --output json
	I0918 21:04:50.343445   61740 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0918 21:04:50.343527   61740 ssh_runner.go:195] Run: which lz4
	I0918 21:04:50.347600   61740 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0918 21:04:50.351647   61740 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0918 21:04:50.351679   61740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0918 21:04:51.665892   61740 crio.go:462] duration metric: took 1.318339658s to copy over tarball
	I0918 21:04:51.665970   61740 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0918 21:04:50.157515   62061 main.go:141] libmachine: (old-k8s-version-740194) Waiting to get IP...
	I0918 21:04:50.158369   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:04:50.158796   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | unable to find current IP address of domain old-k8s-version-740194 in network mk-old-k8s-version-740194
	I0918 21:04:50.158931   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | I0918 21:04:50.158765   63026 retry.go:31] will retry after 214.617426ms: waiting for machine to come up
	I0918 21:04:50.375418   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:04:50.375882   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | unable to find current IP address of domain old-k8s-version-740194 in network mk-old-k8s-version-740194
	I0918 21:04:50.375910   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | I0918 21:04:50.375834   63026 retry.go:31] will retry after 311.080996ms: waiting for machine to come up
	I0918 21:04:50.688569   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:04:50.689151   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | unable to find current IP address of domain old-k8s-version-740194 in network mk-old-k8s-version-740194
	I0918 21:04:50.689182   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | I0918 21:04:50.689112   63026 retry.go:31] will retry after 386.384815ms: waiting for machine to come up
	I0918 21:04:51.076846   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:04:51.077336   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | unable to find current IP address of domain old-k8s-version-740194 in network mk-old-k8s-version-740194
	I0918 21:04:51.077359   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | I0918 21:04:51.077280   63026 retry.go:31] will retry after 556.210887ms: waiting for machine to come up
	I0918 21:04:51.634919   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:04:51.635423   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | unable to find current IP address of domain old-k8s-version-740194 in network mk-old-k8s-version-740194
	I0918 21:04:51.635462   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | I0918 21:04:51.635325   63026 retry.go:31] will retry after 607.960565ms: waiting for machine to come up
	I0918 21:04:52.245179   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:04:52.245741   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | unable to find current IP address of domain old-k8s-version-740194 in network mk-old-k8s-version-740194
	I0918 21:04:52.245774   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | I0918 21:04:52.245692   63026 retry.go:31] will retry after 620.243825ms: waiting for machine to come up
	I0918 21:04:52.867067   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:04:52.867658   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | unable to find current IP address of domain old-k8s-version-740194 in network mk-old-k8s-version-740194
	I0918 21:04:52.867680   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | I0918 21:04:52.867608   63026 retry.go:31] will retry after 1.091814923s: waiting for machine to come up
	I0918 21:04:53.961395   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:04:53.961819   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | unable to find current IP address of domain old-k8s-version-740194 in network mk-old-k8s-version-740194
	I0918 21:04:53.961842   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | I0918 21:04:53.961784   63026 retry.go:31] will retry after 1.130552716s: waiting for machine to come up
	I0918 21:04:54.133598   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:04:56.134938   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:04:53.837558   61740 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.171557505s)
	I0918 21:04:53.837589   61740 crio.go:469] duration metric: took 2.171667234s to extract the tarball
	I0918 21:04:53.837610   61740 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0918 21:04:53.876381   61740 ssh_runner.go:195] Run: sudo crictl images --output json
	I0918 21:04:53.924938   61740 crio.go:514] all images are preloaded for cri-o runtime.
	I0918 21:04:53.924968   61740 cache_images.go:84] Images are preloaded, skipping loading
	I0918 21:04:53.924979   61740 kubeadm.go:934] updating node { 192.168.39.21 8443 v1.31.1 crio true true} ...
	I0918 21:04:53.925115   61740 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-255556 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.21
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-255556 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0918 21:04:53.925203   61740 ssh_runner.go:195] Run: crio config
	I0918 21:04:53.969048   61740 cni.go:84] Creating CNI manager for ""
	I0918 21:04:53.969076   61740 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0918 21:04:53.969086   61740 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0918 21:04:53.969105   61740 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.21 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-255556 NodeName:embed-certs-255556 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.21"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.21 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0918 21:04:53.969240   61740 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.21
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-255556"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.21
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.21"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0918 21:04:53.969298   61740 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0918 21:04:53.978636   61740 binaries.go:44] Found k8s binaries, skipping transfer
	I0918 21:04:53.978702   61740 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0918 21:04:53.988580   61740 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0918 21:04:54.005819   61740 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0918 21:04:54.021564   61740 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0918 21:04:54.038702   61740 ssh_runner.go:195] Run: grep 192.168.39.21	control-plane.minikube.internal$ /etc/hosts
	I0918 21:04:54.042536   61740 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.21	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0918 21:04:54.053896   61740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 21:04:54.180842   61740 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0918 21:04:54.197701   61740 certs.go:68] Setting up /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/embed-certs-255556 for IP: 192.168.39.21
	I0918 21:04:54.197731   61740 certs.go:194] generating shared ca certs ...
	I0918 21:04:54.197754   61740 certs.go:226] acquiring lock for ca certs: {Name:mkf54e0e71ed884c4372f9d3d4cb308ea4600185 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 21:04:54.197953   61740 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.key
	I0918 21:04:54.198020   61740 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.key
	I0918 21:04:54.198034   61740 certs.go:256] generating profile certs ...
	I0918 21:04:54.198129   61740 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/embed-certs-255556/client.key
	I0918 21:04:54.198191   61740 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/embed-certs-255556/apiserver.key.4704fd19
	I0918 21:04:54.198225   61740 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/embed-certs-255556/proxy-client.key
	I0918 21:04:54.198326   61740 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878.pem (1338 bytes)
	W0918 21:04:54.198358   61740 certs.go:480] ignoring /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878_empty.pem, impossibly tiny 0 bytes
	I0918 21:04:54.198370   61740 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem (1675 bytes)
	I0918 21:04:54.198420   61740 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem (1082 bytes)
	I0918 21:04:54.198463   61740 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem (1123 bytes)
	I0918 21:04:54.198498   61740 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem (1679 bytes)
	I0918 21:04:54.198566   61740 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem (1708 bytes)
	I0918 21:04:54.199258   61740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0918 21:04:54.231688   61740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0918 21:04:54.276366   61740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0918 21:04:54.320929   61740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0918 21:04:54.348698   61740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/embed-certs-255556/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0918 21:04:54.375168   61740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/embed-certs-255556/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0918 21:04:54.399159   61740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/embed-certs-255556/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0918 21:04:54.427975   61740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/embed-certs-255556/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0918 21:04:54.454648   61740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878.pem --> /usr/share/ca-certificates/14878.pem (1338 bytes)
	I0918 21:04:54.477518   61740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem --> /usr/share/ca-certificates/148782.pem (1708 bytes)
	I0918 21:04:54.500703   61740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0918 21:04:54.523380   61740 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0918 21:04:54.540053   61740 ssh_runner.go:195] Run: openssl version
	I0918 21:04:54.545818   61740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14878.pem && ln -fs /usr/share/ca-certificates/14878.pem /etc/ssl/certs/14878.pem"
	I0918 21:04:54.557138   61740 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14878.pem
	I0918 21:04:54.561973   61740 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 18 19:57 /usr/share/ca-certificates/14878.pem
	I0918 21:04:54.562030   61740 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14878.pem
	I0918 21:04:54.568133   61740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14878.pem /etc/ssl/certs/51391683.0"
	I0918 21:04:54.578964   61740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148782.pem && ln -fs /usr/share/ca-certificates/148782.pem /etc/ssl/certs/148782.pem"
	I0918 21:04:54.590254   61740 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148782.pem
	I0918 21:04:54.594944   61740 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 18 19:57 /usr/share/ca-certificates/148782.pem
	I0918 21:04:54.595022   61740 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148782.pem
	I0918 21:04:54.600797   61740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148782.pem /etc/ssl/certs/3ec20f2e.0"
	I0918 21:04:54.612078   61740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0918 21:04:54.623280   61740 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0918 21:04:54.628636   61740 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 18 19:39 /usr/share/ca-certificates/minikubeCA.pem
	I0918 21:04:54.628711   61740 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0918 21:04:54.634847   61740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0918 21:04:54.645647   61740 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0918 21:04:54.650004   61740 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0918 21:04:54.656906   61740 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0918 21:04:54.662778   61740 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0918 21:04:54.668744   61740 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0918 21:04:54.674676   61740 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0918 21:04:54.680431   61740 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0918 21:04:54.686242   61740 kubeadm.go:392] StartCluster: {Name:embed-certs-255556 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-255556 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.21 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 21:04:54.686364   61740 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0918 21:04:54.686439   61740 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0918 21:04:54.724228   61740 cri.go:89] found id: ""
	I0918 21:04:54.724319   61740 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0918 21:04:54.734427   61740 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0918 21:04:54.734458   61740 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0918 21:04:54.734511   61740 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0918 21:04:54.747453   61740 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0918 21:04:54.748449   61740 kubeconfig.go:125] found "embed-certs-255556" server: "https://192.168.39.21:8443"
	I0918 21:04:54.750481   61740 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0918 21:04:54.760549   61740 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.21
	I0918 21:04:54.760585   61740 kubeadm.go:1160] stopping kube-system containers ...
	I0918 21:04:54.760599   61740 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0918 21:04:54.760659   61740 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0918 21:04:54.796334   61740 cri.go:89] found id: ""
	I0918 21:04:54.796426   61740 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0918 21:04:54.820854   61740 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0918 21:04:54.831959   61740 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0918 21:04:54.831982   61740 kubeadm.go:157] found existing configuration files:
	
	I0918 21:04:54.832075   61740 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0918 21:04:54.841872   61740 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0918 21:04:54.841952   61740 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0918 21:04:54.852032   61740 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0918 21:04:54.862101   61740 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0918 21:04:54.862176   61740 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0918 21:04:54.872575   61740 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0918 21:04:54.882283   61740 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0918 21:04:54.882386   61740 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0918 21:04:54.895907   61740 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0918 21:04:54.905410   61740 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0918 21:04:54.905484   61740 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0918 21:04:54.914938   61740 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0918 21:04:54.924536   61740 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:04:55.035830   61740 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:04:55.975305   61740 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:04:56.227988   61740 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:04:56.304760   61740 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:04:56.375088   61740 api_server.go:52] waiting for apiserver process to appear ...
	I0918 21:04:56.375185   61740 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:04:56.875319   61740 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:04:57.375240   61740 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:04:57.875532   61740 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:04:55.093491   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:04:55.093956   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | unable to find current IP address of domain old-k8s-version-740194 in network mk-old-k8s-version-740194
	I0918 21:04:55.093982   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | I0918 21:04:55.093923   63026 retry.go:31] will retry after 1.824664154s: waiting for machine to come up
	I0918 21:04:56.920959   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:04:56.921371   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | unable to find current IP address of domain old-k8s-version-740194 in network mk-old-k8s-version-740194
	I0918 21:04:56.921422   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | I0918 21:04:56.921354   63026 retry.go:31] will retry after 1.591260677s: waiting for machine to come up
	I0918 21:04:58.514832   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:04:58.515294   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | unable to find current IP address of domain old-k8s-version-740194 in network mk-old-k8s-version-740194
	I0918 21:04:58.515322   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | I0918 21:04:58.515262   63026 retry.go:31] will retry after 1.868763497s: waiting for machine to come up
	I0918 21:04:58.135056   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:00.633540   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:04:58.375400   61740 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:04:58.392935   61740 api_server.go:72] duration metric: took 2.017847705s to wait for apiserver process to appear ...
	I0918 21:04:58.393110   61740 api_server.go:88] waiting for apiserver healthz status ...
	I0918 21:04:58.393152   61740 api_server.go:253] Checking apiserver healthz at https://192.168.39.21:8443/healthz ...
	I0918 21:04:58.393699   61740 api_server.go:269] stopped: https://192.168.39.21:8443/healthz: Get "https://192.168.39.21:8443/healthz": dial tcp 192.168.39.21:8443: connect: connection refused
	I0918 21:04:58.893291   61740 api_server.go:253] Checking apiserver healthz at https://192.168.39.21:8443/healthz ...
	I0918 21:05:01.124915   61740 api_server.go:279] https://192.168.39.21:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0918 21:05:01.124954   61740 api_server.go:103] status: https://192.168.39.21:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0918 21:05:01.124991   61740 api_server.go:253] Checking apiserver healthz at https://192.168.39.21:8443/healthz ...
	I0918 21:05:01.179199   61740 api_server.go:279] https://192.168.39.21:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0918 21:05:01.179225   61740 api_server.go:103] status: https://192.168.39.21:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0918 21:05:01.393537   61740 api_server.go:253] Checking apiserver healthz at https://192.168.39.21:8443/healthz ...
	I0918 21:05:01.399577   61740 api_server.go:279] https://192.168.39.21:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0918 21:05:01.399610   61740 api_server.go:103] status: https://192.168.39.21:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0918 21:05:01.894174   61740 api_server.go:253] Checking apiserver healthz at https://192.168.39.21:8443/healthz ...
	I0918 21:05:01.899086   61740 api_server.go:279] https://192.168.39.21:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0918 21:05:01.899110   61740 api_server.go:103] status: https://192.168.39.21:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0918 21:05:02.393672   61740 api_server.go:253] Checking apiserver healthz at https://192.168.39.21:8443/healthz ...
	I0918 21:05:02.401942   61740 api_server.go:279] https://192.168.39.21:8443/healthz returned 200:
	ok
	I0918 21:05:02.408523   61740 api_server.go:141] control plane version: v1.31.1
	I0918 21:05:02.408553   61740 api_server.go:131] duration metric: took 4.015427901s to wait for apiserver health ...
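
The lines above show minikube polling the apiserver's /healthz endpoint until it stops returning 500 and answers 200 "ok". A minimal Go sketch of that polling loop follows; the endpoint, timeout, and the InsecureSkipVerify shortcut are illustrative assumptions, not minikube's exact client configuration.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the timeout expires.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver's serving cert is not in the local trust store here;
		// skipping verification is an assumption for this sketch only.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // body is "ok"
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.21:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
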
	I0918 21:05:02.408562   61740 cni.go:84] Creating CNI manager for ""
	I0918 21:05:02.408568   61740 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0918 21:05:02.410199   61740 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0918 21:05:02.411470   61740 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0918 21:05:02.424617   61740 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
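
Here minikube renders a 496-byte bridge CNI config and copies it to /etc/cni/net.d/1-k8s.conflist on the guest. The exact template is not shown in the log; the sketch below writes a generic bridge+portmap conflist with encoding/json, and the plugin parameters and pod subnet are assumptions, not the bytes minikube actually generates.

package main

import (
	"encoding/json"
	"os"
)

func main() {
	// A generic bridge+portmap conflist; values here are illustrative only.
	conflist := map[string]interface{}{
		"cniVersion": "0.3.1",
		"name":       "bridge",
		"plugins": []interface{}{
			map[string]interface{}{
				"type":      "bridge",
				"bridge":    "bridge",
				"isGateway": true,
				"ipMasq":    true,
				"ipam": map[string]interface{}{
					"type":   "host-local",
					"subnet": "10.244.0.0/16",
				},
			},
			map[string]interface{}{
				"type":         "portmap",
				"capabilities": map[string]bool{"portMappings": true},
			},
		},
	}
	data, err := json.MarshalIndent(conflist, "", "  ")
	if err != nil {
		panic(err)
	}
	// Written locally for illustration; minikube scps the rendered bytes to the guest.
	if err := os.WriteFile("1-k8s.conflist", data, 0o644); err != nil {
		panic(err)
	}
}
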
	I0918 21:05:02.443819   61740 system_pods.go:43] waiting for kube-system pods to appear ...
	I0918 21:05:02.458892   61740 system_pods.go:59] 8 kube-system pods found
	I0918 21:05:02.458939   61740 system_pods.go:61] "coredns-7c65d6cfc9-xwn8w" [773b9a83-bb43-40d3-b3a3-40603c3b22b2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0918 21:05:02.458949   61740 system_pods.go:61] "etcd-embed-certs-255556" [ee3e7dc9-fb5a-4faa-a0b5-b84b7cd506b6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0918 21:05:02.458961   61740 system_pods.go:61] "kube-apiserver-embed-certs-255556" [c60ce069-c7a0-42d7-a7de-ce3cf91a3d43] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0918 21:05:02.458970   61740 system_pods.go:61] "kube-controller-manager-embed-certs-255556" [ac8f6b42-caa3-4815-9a90-3f7bb1f0060b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0918 21:05:02.458980   61740 system_pods.go:61] "kube-proxy-v8szm" [367f743a-399b-4d04-8604-dcd441999581] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0918 21:05:02.458993   61740 system_pods.go:61] "kube-scheduler-embed-certs-255556" [b5dd211b-7963-41ac-8b43-0a5451e3e848] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0918 21:05:02.459001   61740 system_pods.go:61] "metrics-server-6867b74b74-z8rm7" [d1b6823e-4ac5-4ac6-88ae-7f8eac622fdf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0918 21:05:02.459009   61740 system_pods.go:61] "storage-provisioner" [1575f899-35a7-4eb2-ad5f-660183f75aa6] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0918 21:05:02.459015   61740 system_pods.go:74] duration metric: took 15.172393ms to wait for pod list to return data ...
	I0918 21:05:02.459025   61740 node_conditions.go:102] verifying NodePressure condition ...
	I0918 21:05:02.463140   61740 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0918 21:05:02.463177   61740 node_conditions.go:123] node cpu capacity is 2
	I0918 21:05:02.463192   61740 node_conditions.go:105] duration metric: took 4.162401ms to run NodePressure ...
	I0918 21:05:02.463214   61740 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:05:02.757153   61740 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0918 21:05:02.761949   61740 kubeadm.go:739] kubelet initialised
	I0918 21:05:02.761977   61740 kubeadm.go:740] duration metric: took 4.79396ms waiting for restarted kubelet to initialise ...
	I0918 21:05:02.761985   61740 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0918 21:05:02.767197   61740 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-xwn8w" in "kube-system" namespace to be "Ready" ...
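
system_pods.go and pod_ready.go above list the kube-system pods and then wait for each system-critical pod to report Ready. A rough client-go equivalent of that readiness check is sketched below; the kubeconfig path is an assumption, and this is not minikube's own implementation.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isReady reports whether the pod's Ready condition is True.
func isReady(pod corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // path is an assumption
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s ready=%v\n", p.Name, isReady(p))
	}
}
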
	I0918 21:05:00.385891   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:00.386451   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | unable to find current IP address of domain old-k8s-version-740194 in network mk-old-k8s-version-740194
	I0918 21:05:00.386482   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | I0918 21:05:00.386384   63026 retry.go:31] will retry after 3.274467583s: waiting for machine to come up
	I0918 21:05:03.664788   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:03.665255   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | unable to find current IP address of domain old-k8s-version-740194 in network mk-old-k8s-version-740194
	I0918 21:05:03.665286   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | I0918 21:05:03.665210   63026 retry.go:31] will retry after 4.112908346s: waiting for machine to come up
	I0918 21:05:02.634177   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:05.133431   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:07.133941   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:04.774196   61740 pod_ready.go:103] pod "coredns-7c65d6cfc9-xwn8w" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:07.273045   61740 pod_ready.go:103] pod "coredns-7c65d6cfc9-xwn8w" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:09.245246   61273 start.go:364] duration metric: took 55.195169549s to acquireMachinesLock for "no-preload-331658"
	I0918 21:05:09.245300   61273 start.go:96] Skipping create...Using existing machine configuration
	I0918 21:05:09.245311   61273 fix.go:54] fixHost starting: 
	I0918 21:05:09.245741   61273 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:05:09.245778   61273 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:05:09.263998   61273 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37335
	I0918 21:05:09.264565   61273 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:05:09.265118   61273 main.go:141] libmachine: Using API Version  1
	I0918 21:05:09.265142   61273 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:05:09.265505   61273 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:05:09.265732   61273 main.go:141] libmachine: (no-preload-331658) Calling .DriverName
	I0918 21:05:09.265901   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetState
	I0918 21:05:09.269500   61273 fix.go:112] recreateIfNeeded on no-preload-331658: state=Stopped err=<nil>
	I0918 21:05:09.269525   61273 main.go:141] libmachine: (no-preload-331658) Calling .DriverName
	W0918 21:05:09.269730   61273 fix.go:138] unexpected machine state, will restart: <nil>
	I0918 21:05:09.271448   61273 out.go:177] * Restarting existing kvm2 VM for "no-preload-331658" ...
	I0918 21:05:07.782616   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:07.783108   62061 main.go:141] libmachine: (old-k8s-version-740194) Found IP for machine: 192.168.72.53
	I0918 21:05:07.783125   62061 main.go:141] libmachine: (old-k8s-version-740194) Reserving static IP address...
	I0918 21:05:07.783135   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has current primary IP address 192.168.72.53 and MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:07.783509   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | found host DHCP lease matching {name: "old-k8s-version-740194", mac: "52:54:00:3f:a7:11", ip: "192.168.72.53"} in network mk-old-k8s-version-740194: {Iface:virbr3 ExpiryTime:2024-09-18 22:05:00 +0000 UTC Type:0 Mac:52:54:00:3f:a7:11 Iaid: IPaddr:192.168.72.53 Prefix:24 Hostname:old-k8s-version-740194 Clientid:01:52:54:00:3f:a7:11}
	I0918 21:05:07.783542   62061 main.go:141] libmachine: (old-k8s-version-740194) Reserved static IP address: 192.168.72.53
	I0918 21:05:07.783565   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | skip adding static IP to network mk-old-k8s-version-740194 - found existing host DHCP lease matching {name: "old-k8s-version-740194", mac: "52:54:00:3f:a7:11", ip: "192.168.72.53"}
	I0918 21:05:07.783583   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | Getting to WaitForSSH function...
	I0918 21:05:07.783614   62061 main.go:141] libmachine: (old-k8s-version-740194) Waiting for SSH to be available...
	I0918 21:05:07.785492   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:07.785856   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a7:11", ip: ""} in network mk-old-k8s-version-740194: {Iface:virbr3 ExpiryTime:2024-09-18 22:05:00 +0000 UTC Type:0 Mac:52:54:00:3f:a7:11 Iaid: IPaddr:192.168.72.53 Prefix:24 Hostname:old-k8s-version-740194 Clientid:01:52:54:00:3f:a7:11}
	I0918 21:05:07.785885   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined IP address 192.168.72.53 and MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:07.786000   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | Using SSH client type: external
	I0918 21:05:07.786021   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | Using SSH private key: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/old-k8s-version-740194/id_rsa (-rw-------)
	I0918 21:05:07.786044   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.53 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19667-7671/.minikube/machines/old-k8s-version-740194/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0918 21:05:07.786055   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | About to run SSH command:
	I0918 21:05:07.786065   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | exit 0
	I0918 21:05:07.915953   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | SSH cmd err, output: <nil>: 
	I0918 21:05:07.916454   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetConfigRaw
	I0918 21:05:07.917059   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetIP
	I0918 21:05:07.919749   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:07.920244   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a7:11", ip: ""} in network mk-old-k8s-version-740194: {Iface:virbr3 ExpiryTime:2024-09-18 22:05:00 +0000 UTC Type:0 Mac:52:54:00:3f:a7:11 Iaid: IPaddr:192.168.72.53 Prefix:24 Hostname:old-k8s-version-740194 Clientid:01:52:54:00:3f:a7:11}
	I0918 21:05:07.920280   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined IP address 192.168.72.53 and MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:07.920639   62061 profile.go:143] Saving config to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/old-k8s-version-740194/config.json ...
	I0918 21:05:07.920871   62061 machine.go:93] provisionDockerMachine start ...
	I0918 21:05:07.920893   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .DriverName
	I0918 21:05:07.921100   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHHostname
	I0918 21:05:07.923129   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:07.923434   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a7:11", ip: ""} in network mk-old-k8s-version-740194: {Iface:virbr3 ExpiryTime:2024-09-18 22:05:00 +0000 UTC Type:0 Mac:52:54:00:3f:a7:11 Iaid: IPaddr:192.168.72.53 Prefix:24 Hostname:old-k8s-version-740194 Clientid:01:52:54:00:3f:a7:11}
	I0918 21:05:07.923458   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined IP address 192.168.72.53 and MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:07.923573   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHPort
	I0918 21:05:07.923727   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHKeyPath
	I0918 21:05:07.923889   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHKeyPath
	I0918 21:05:07.924036   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHUsername
	I0918 21:05:07.924185   62061 main.go:141] libmachine: Using SSH client type: native
	I0918 21:05:07.924368   62061 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.53 22 <nil> <nil>}
	I0918 21:05:07.924378   62061 main.go:141] libmachine: About to run SSH command:
	hostname
	I0918 21:05:08.035941   62061 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0918 21:05:08.035972   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetMachineName
	I0918 21:05:08.036242   62061 buildroot.go:166] provisioning hostname "old-k8s-version-740194"
	I0918 21:05:08.036273   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetMachineName
	I0918 21:05:08.036512   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHHostname
	I0918 21:05:08.038902   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:08.039239   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a7:11", ip: ""} in network mk-old-k8s-version-740194: {Iface:virbr3 ExpiryTime:2024-09-18 22:05:00 +0000 UTC Type:0 Mac:52:54:00:3f:a7:11 Iaid: IPaddr:192.168.72.53 Prefix:24 Hostname:old-k8s-version-740194 Clientid:01:52:54:00:3f:a7:11}
	I0918 21:05:08.039266   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined IP address 192.168.72.53 and MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:08.039438   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHPort
	I0918 21:05:08.039632   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHKeyPath
	I0918 21:05:08.039899   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHKeyPath
	I0918 21:05:08.040072   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHUsername
	I0918 21:05:08.040248   62061 main.go:141] libmachine: Using SSH client type: native
	I0918 21:05:08.040415   62061 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.53 22 <nil> <nil>}
	I0918 21:05:08.040428   62061 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-740194 && echo "old-k8s-version-740194" | sudo tee /etc/hostname
	I0918 21:05:08.165391   62061 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-740194
	
	I0918 21:05:08.165424   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHHostname
	I0918 21:05:08.168059   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:08.168396   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a7:11", ip: ""} in network mk-old-k8s-version-740194: {Iface:virbr3 ExpiryTime:2024-09-18 22:05:00 +0000 UTC Type:0 Mac:52:54:00:3f:a7:11 Iaid: IPaddr:192.168.72.53 Prefix:24 Hostname:old-k8s-version-740194 Clientid:01:52:54:00:3f:a7:11}
	I0918 21:05:08.168428   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined IP address 192.168.72.53 and MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:08.168621   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHPort
	I0918 21:05:08.168837   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHKeyPath
	I0918 21:05:08.168988   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHKeyPath
	I0918 21:05:08.169231   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHUsername
	I0918 21:05:08.169413   62061 main.go:141] libmachine: Using SSH client type: native
	I0918 21:05:08.169579   62061 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.53 22 <nil> <nil>}
	I0918 21:05:08.169594   62061 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-740194' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-740194/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-740194' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0918 21:05:08.288591   62061 main.go:141] libmachine: SSH cmd err, output: <nil>: 
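
provisionDockerMachine runs each of the shell snippets above (hostname, /etc/hostname, /etc/hosts) over SSH using the machine's private key. A minimal golang.org/x/crypto/ssh sketch of running one such command is shown below; the address, user, and key path are taken from the log, but the helper itself is an illustration, not libmachine's implementation.

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// runRemote runs one shell command over SSH and returns its combined output.
func runRemote(addr, user, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer session.Close()
	out, err := session.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	out, err := runRemote("192.168.72.53:22", "docker",
		"/home/jenkins/minikube-integration/19667-7671/.minikube/machines/old-k8s-version-740194/id_rsa",
		`sudo hostname old-k8s-version-740194 && echo "old-k8s-version-740194" | sudo tee /etc/hostname`)
	fmt.Println(out, err)
}
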
	I0918 21:05:08.288644   62061 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19667-7671/.minikube CaCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19667-7671/.minikube}
	I0918 21:05:08.288671   62061 buildroot.go:174] setting up certificates
	I0918 21:05:08.288682   62061 provision.go:84] configureAuth start
	I0918 21:05:08.288694   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetMachineName
	I0918 21:05:08.289017   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetIP
	I0918 21:05:08.291949   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:08.292358   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a7:11", ip: ""} in network mk-old-k8s-version-740194: {Iface:virbr3 ExpiryTime:2024-09-18 22:05:00 +0000 UTC Type:0 Mac:52:54:00:3f:a7:11 Iaid: IPaddr:192.168.72.53 Prefix:24 Hostname:old-k8s-version-740194 Clientid:01:52:54:00:3f:a7:11}
	I0918 21:05:08.292405   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined IP address 192.168.72.53 and MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:08.292526   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHHostname
	I0918 21:05:08.295013   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:08.295378   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a7:11", ip: ""} in network mk-old-k8s-version-740194: {Iface:virbr3 ExpiryTime:2024-09-18 22:05:00 +0000 UTC Type:0 Mac:52:54:00:3f:a7:11 Iaid: IPaddr:192.168.72.53 Prefix:24 Hostname:old-k8s-version-740194 Clientid:01:52:54:00:3f:a7:11}
	I0918 21:05:08.295399   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined IP address 192.168.72.53 and MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:08.295567   62061 provision.go:143] copyHostCerts
	I0918 21:05:08.295630   62061 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem, removing ...
	I0918 21:05:08.295640   62061 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem
	I0918 21:05:08.295692   62061 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem (1082 bytes)
	I0918 21:05:08.295783   62061 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem, removing ...
	I0918 21:05:08.295790   62061 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem
	I0918 21:05:08.295810   62061 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem (1123 bytes)
	I0918 21:05:08.295917   62061 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem, removing ...
	I0918 21:05:08.295926   62061 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem
	I0918 21:05:08.295949   62061 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem (1679 bytes)
	I0918 21:05:08.296009   62061 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-740194 san=[127.0.0.1 192.168.72.53 localhost minikube old-k8s-version-740194]
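
provision.go generates a server certificate whose SANs include the IPs and hostnames listed above. The condensed crypto/x509 sketch below issues such a certificate; the real flow signs with the existing ca.pem/ca-key.pem rather than generating a CA, and the key type and validity period here are assumptions.

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// CA key pair and self-signed CA certificate (the real flow loads ca.pem / ca-key.pem instead).
	caKey, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(3, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	caCert, err := x509.ParseCertificate(caDER)
	if err != nil {
		panic(err)
	}

	// Server certificate with the SANs seen in the log line above.
	srvKey, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-740194"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.53")},
		DNSNames:     []string{"localhost", "minikube", "old-k8s-version-740194"},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	f, err := os.Create("server.pem")
	if err != nil {
		panic(err)
	}
	defer f.Close()
	pem.Encode(f, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
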
	I0918 21:05:08.560726   62061 provision.go:177] copyRemoteCerts
	I0918 21:05:08.560786   62061 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0918 21:05:08.560816   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHHostname
	I0918 21:05:08.563798   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:08.564231   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a7:11", ip: ""} in network mk-old-k8s-version-740194: {Iface:virbr3 ExpiryTime:2024-09-18 22:05:00 +0000 UTC Type:0 Mac:52:54:00:3f:a7:11 Iaid: IPaddr:192.168.72.53 Prefix:24 Hostname:old-k8s-version-740194 Clientid:01:52:54:00:3f:a7:11}
	I0918 21:05:08.564266   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined IP address 192.168.72.53 and MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:08.564473   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHPort
	I0918 21:05:08.564704   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHKeyPath
	I0918 21:05:08.564876   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHUsername
	I0918 21:05:08.565016   62061 sshutil.go:53] new ssh client: &{IP:192.168.72.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/old-k8s-version-740194/id_rsa Username:docker}
	I0918 21:05:08.654357   62061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0918 21:05:08.680551   62061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0918 21:05:08.704324   62061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0918 21:05:08.727966   62061 provision.go:87] duration metric: took 439.269312ms to configureAuth
	I0918 21:05:08.728003   62061 buildroot.go:189] setting minikube options for container-runtime
	I0918 21:05:08.728235   62061 config.go:182] Loaded profile config "old-k8s-version-740194": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0918 21:05:08.728305   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHHostname
	I0918 21:05:08.730895   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:08.731318   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a7:11", ip: ""} in network mk-old-k8s-version-740194: {Iface:virbr3 ExpiryTime:2024-09-18 22:05:00 +0000 UTC Type:0 Mac:52:54:00:3f:a7:11 Iaid: IPaddr:192.168.72.53 Prefix:24 Hostname:old-k8s-version-740194 Clientid:01:52:54:00:3f:a7:11}
	I0918 21:05:08.731348   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined IP address 192.168.72.53 and MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:08.731448   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHPort
	I0918 21:05:08.731668   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHKeyPath
	I0918 21:05:08.731884   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHKeyPath
	I0918 21:05:08.732057   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHUsername
	I0918 21:05:08.732223   62061 main.go:141] libmachine: Using SSH client type: native
	I0918 21:05:08.732396   62061 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.53 22 <nil> <nil>}
	I0918 21:05:08.732411   62061 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0918 21:05:08.978962   62061 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0918 21:05:08.978995   62061 machine.go:96] duration metric: took 1.05811075s to provisionDockerMachine
	I0918 21:05:08.979008   62061 start.go:293] postStartSetup for "old-k8s-version-740194" (driver="kvm2")
	I0918 21:05:08.979020   62061 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0918 21:05:08.979050   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .DriverName
	I0918 21:05:08.979409   62061 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0918 21:05:08.979436   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHHostname
	I0918 21:05:08.982472   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:08.982791   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a7:11", ip: ""} in network mk-old-k8s-version-740194: {Iface:virbr3 ExpiryTime:2024-09-18 22:05:00 +0000 UTC Type:0 Mac:52:54:00:3f:a7:11 Iaid: IPaddr:192.168.72.53 Prefix:24 Hostname:old-k8s-version-740194 Clientid:01:52:54:00:3f:a7:11}
	I0918 21:05:08.982830   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined IP address 192.168.72.53 and MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:08.982996   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHPort
	I0918 21:05:08.983192   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHKeyPath
	I0918 21:05:08.983341   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHUsername
	I0918 21:05:08.983510   62061 sshutil.go:53] new ssh client: &{IP:192.168.72.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/old-k8s-version-740194/id_rsa Username:docker}
	I0918 21:05:09.074995   62061 ssh_runner.go:195] Run: cat /etc/os-release
	I0918 21:05:09.080207   62061 info.go:137] Remote host: Buildroot 2023.02.9
	I0918 21:05:09.080240   62061 filesync.go:126] Scanning /home/jenkins/minikube-integration/19667-7671/.minikube/addons for local assets ...
	I0918 21:05:09.080327   62061 filesync.go:126] Scanning /home/jenkins/minikube-integration/19667-7671/.minikube/files for local assets ...
	I0918 21:05:09.080451   62061 filesync.go:149] local asset: /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem -> 148782.pem in /etc/ssl/certs
	I0918 21:05:09.080583   62061 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0918 21:05:09.091374   62061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem --> /etc/ssl/certs/148782.pem (1708 bytes)
	I0918 21:05:09.119546   62061 start.go:296] duration metric: took 140.521158ms for postStartSetup
	I0918 21:05:09.119593   62061 fix.go:56] duration metric: took 20.337078765s for fixHost
	I0918 21:05:09.119620   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHHostname
	I0918 21:05:09.122534   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:09.123157   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a7:11", ip: ""} in network mk-old-k8s-version-740194: {Iface:virbr3 ExpiryTime:2024-09-18 22:05:00 +0000 UTC Type:0 Mac:52:54:00:3f:a7:11 Iaid: IPaddr:192.168.72.53 Prefix:24 Hostname:old-k8s-version-740194 Clientid:01:52:54:00:3f:a7:11}
	I0918 21:05:09.123187   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined IP address 192.168.72.53 and MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:09.123447   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHPort
	I0918 21:05:09.123695   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHKeyPath
	I0918 21:05:09.123891   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHKeyPath
	I0918 21:05:09.124095   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHUsername
	I0918 21:05:09.124373   62061 main.go:141] libmachine: Using SSH client type: native
	I0918 21:05:09.124582   62061 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.53 22 <nil> <nil>}
	I0918 21:05:09.124595   62061 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0918 21:05:09.245082   62061 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726693509.219779478
	
	I0918 21:05:09.245109   62061 fix.go:216] guest clock: 1726693509.219779478
	I0918 21:05:09.245119   62061 fix.go:229] Guest: 2024-09-18 21:05:09.219779478 +0000 UTC Remote: 2024-09-18 21:05:09.11959842 +0000 UTC m=+249.666759777 (delta=100.181058ms)
	I0918 21:05:09.245139   62061 fix.go:200] guest clock delta is within tolerance: 100.181058ms
	I0918 21:05:09.245146   62061 start.go:83] releasing machines lock for "old-k8s-version-740194", held for 20.462669229s
	I0918 21:05:09.245176   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .DriverName
	I0918 21:05:09.245602   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetIP
	I0918 21:05:09.248653   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:09.249110   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a7:11", ip: ""} in network mk-old-k8s-version-740194: {Iface:virbr3 ExpiryTime:2024-09-18 22:05:00 +0000 UTC Type:0 Mac:52:54:00:3f:a7:11 Iaid: IPaddr:192.168.72.53 Prefix:24 Hostname:old-k8s-version-740194 Clientid:01:52:54:00:3f:a7:11}
	I0918 21:05:09.249156   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined IP address 192.168.72.53 and MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:09.249345   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .DriverName
	I0918 21:05:09.249838   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .DriverName
	I0918 21:05:09.250047   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .DriverName
	I0918 21:05:09.250167   62061 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0918 21:05:09.250230   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHHostname
	I0918 21:05:09.250286   62061 ssh_runner.go:195] Run: cat /version.json
	I0918 21:05:09.250312   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHHostname
	I0918 21:05:09.253242   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:09.253408   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:09.253687   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a7:11", ip: ""} in network mk-old-k8s-version-740194: {Iface:virbr3 ExpiryTime:2024-09-18 22:05:00 +0000 UTC Type:0 Mac:52:54:00:3f:a7:11 Iaid: IPaddr:192.168.72.53 Prefix:24 Hostname:old-k8s-version-740194 Clientid:01:52:54:00:3f:a7:11}
	I0918 21:05:09.253749   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined IP address 192.168.72.53 and MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:09.253882   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a7:11", ip: ""} in network mk-old-k8s-version-740194: {Iface:virbr3 ExpiryTime:2024-09-18 22:05:00 +0000 UTC Type:0 Mac:52:54:00:3f:a7:11 Iaid: IPaddr:192.168.72.53 Prefix:24 Hostname:old-k8s-version-740194 Clientid:01:52:54:00:3f:a7:11}
	I0918 21:05:09.253901   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined IP address 192.168.72.53 and MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:09.253944   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHPort
	I0918 21:05:09.254026   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHPort
	I0918 21:05:09.254221   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHKeyPath
	I0918 21:05:09.254243   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHKeyPath
	I0918 21:05:09.254372   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHUsername
	I0918 21:05:09.254426   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetSSHUsername
	I0918 21:05:09.254533   62061 sshutil.go:53] new ssh client: &{IP:192.168.72.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/old-k8s-version-740194/id_rsa Username:docker}
	I0918 21:05:09.254695   62061 sshutil.go:53] new ssh client: &{IP:192.168.72.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/old-k8s-version-740194/id_rsa Username:docker}
	I0918 21:05:09.374728   62061 ssh_runner.go:195] Run: systemctl --version
	I0918 21:05:09.381117   62061 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0918 21:05:09.532059   62061 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0918 21:05:09.538041   62061 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0918 21:05:09.538124   62061 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0918 21:05:09.553871   62061 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0918 21:05:09.553907   62061 start.go:495] detecting cgroup driver to use...
	I0918 21:05:09.553982   62061 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0918 21:05:09.573554   62061 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0918 21:05:09.591963   62061 docker.go:217] disabling cri-docker service (if available) ...
	I0918 21:05:09.592057   62061 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0918 21:05:09.610813   62061 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0918 21:05:09.627153   62061 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0918 21:05:09.763978   62061 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0918 21:05:09.945185   62061 docker.go:233] disabling docker service ...
	I0918 21:05:09.945257   62061 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0918 21:05:09.961076   62061 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0918 21:05:09.974660   62061 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0918 21:05:10.111191   62061 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0918 21:05:10.235058   62061 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0918 21:05:10.251389   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0918 21:05:10.270949   62061 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0918 21:05:10.271047   62061 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:05:10.284743   62061 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0918 21:05:10.284811   62061 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:05:10.295221   62061 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:05:10.305897   62061 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
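
The sed commands above force the pause image and cgroup driver in /etc/crio/crio.conf.d/02-crio.conf. Below is a Go sketch of the same rewrite using regexp, operating on a local copy of the file rather than over SSH; the file path comes from the log, everything else is illustrative.

package main

import (
	"os"
	"regexp"
)

func main() {
	const path = "02-crio.conf" // /etc/crio/crio.conf.d/02-crio.conf on the guest
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	// Mirror the sed expressions from the log: pin the pause image and cgroup manager.
	pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	data = pause.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.2"`))
	data = cgroup.ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(path, data, 0o644); err != nil {
		panic(err)
	}
}
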
	I0918 21:05:10.317726   62061 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0918 21:05:10.330136   62061 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0918 21:05:10.339480   62061 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0918 21:05:10.339543   62061 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0918 21:05:10.352954   62061 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0918 21:05:10.370764   62061 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 21:05:10.524315   62061 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0918 21:05:10.617374   62061 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0918 21:05:10.617466   62061 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0918 21:05:10.624164   62061 start.go:563] Will wait 60s for crictl version
	I0918 21:05:10.624222   62061 ssh_runner.go:195] Run: which crictl
	I0918 21:05:10.629583   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0918 21:05:10.673613   62061 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0918 21:05:10.673702   62061 ssh_runner.go:195] Run: crio --version
	I0918 21:05:10.703948   62061 ssh_runner.go:195] Run: crio --version
	I0918 21:05:10.733924   62061 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0918 21:05:09.272840   61273 main.go:141] libmachine: (no-preload-331658) Calling .Start
	I0918 21:05:09.273067   61273 main.go:141] libmachine: (no-preload-331658) Ensuring networks are active...
	I0918 21:05:09.274115   61273 main.go:141] libmachine: (no-preload-331658) Ensuring network default is active
	I0918 21:05:09.274576   61273 main.go:141] libmachine: (no-preload-331658) Ensuring network mk-no-preload-331658 is active
	I0918 21:05:09.275108   61273 main.go:141] libmachine: (no-preload-331658) Getting domain xml...
	I0918 21:05:09.276003   61273 main.go:141] libmachine: (no-preload-331658) Creating domain...
	I0918 21:05:10.665647   61273 main.go:141] libmachine: (no-preload-331658) Waiting to get IP...
	I0918 21:05:10.666710   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:10.667187   61273 main.go:141] libmachine: (no-preload-331658) DBG | unable to find current IP address of domain no-preload-331658 in network mk-no-preload-331658
	I0918 21:05:10.667261   61273 main.go:141] libmachine: (no-preload-331658) DBG | I0918 21:05:10.667162   63200 retry.go:31] will retry after 215.232953ms: waiting for machine to come up
	I0918 21:05:10.883691   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:10.884249   61273 main.go:141] libmachine: (no-preload-331658) DBG | unable to find current IP address of domain no-preload-331658 in network mk-no-preload-331658
	I0918 21:05:10.884283   61273 main.go:141] libmachine: (no-preload-331658) DBG | I0918 21:05:10.884185   63200 retry.go:31] will retry after 289.698979ms: waiting for machine to come up
	I0918 21:05:11.175936   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:11.176656   61273 main.go:141] libmachine: (no-preload-331658) DBG | unable to find current IP address of domain no-preload-331658 in network mk-no-preload-331658
	I0918 21:05:11.176680   61273 main.go:141] libmachine: (no-preload-331658) DBG | I0918 21:05:11.176553   63200 retry.go:31] will retry after 424.473311ms: waiting for machine to come up
	I0918 21:05:09.633671   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:11.634755   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:09.274214   61740 pod_ready.go:103] pod "coredns-7c65d6cfc9-xwn8w" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:11.275099   61740 pod_ready.go:103] pod "coredns-7c65d6cfc9-xwn8w" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:10.735500   62061 main.go:141] libmachine: (old-k8s-version-740194) Calling .GetIP
	I0918 21:05:10.738130   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:10.738488   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a7:11", ip: ""} in network mk-old-k8s-version-740194: {Iface:virbr3 ExpiryTime:2024-09-18 22:05:00 +0000 UTC Type:0 Mac:52:54:00:3f:a7:11 Iaid: IPaddr:192.168.72.53 Prefix:24 Hostname:old-k8s-version-740194 Clientid:01:52:54:00:3f:a7:11}
	I0918 21:05:10.738516   62061 main.go:141] libmachine: (old-k8s-version-740194) DBG | domain old-k8s-version-740194 has defined IP address 192.168.72.53 and MAC address 52:54:00:3f:a7:11 in network mk-old-k8s-version-740194
	I0918 21:05:10.738831   62061 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0918 21:05:10.742780   62061 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0918 21:05:10.754785   62061 kubeadm.go:883] updating cluster {Name:old-k8s-version-740194 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-740194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.53 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0918 21:05:10.754939   62061 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0918 21:05:10.755002   62061 ssh_runner.go:195] Run: sudo crictl images --output json
	I0918 21:05:10.800452   62061 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
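
preload.go decides whether to copy the preload tarball by running `sudo crictl images --output json` and checking for the expected kube images. A sketch of parsing that output is below; the struct fields follow the CRI Image message as I understand it, so treat the exact JSON shape as an assumption.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

// imageList mirrors the relevant part of `crictl images --output json`;
// field names are an assumption based on the CRI Image message.
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		panic(err)
	}
	want := "registry.k8s.io/kube-apiserver:v1.20.0"
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			if strings.Contains(tag, want) {
				fmt.Println("preloaded image found:", tag)
				return
			}
		}
	}
	fmt.Println("couldn't find preloaded image for", want)
}
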
	I0918 21:05:10.800526   62061 ssh_runner.go:195] Run: which lz4
	I0918 21:05:10.804522   62061 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0918 21:05:10.809179   62061 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0918 21:05:10.809214   62061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0918 21:05:12.306958   62061 crio.go:462] duration metric: took 1.502481615s to copy over tarball
	I0918 21:05:12.307049   62061 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0918 21:05:11.603153   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:11.603791   61273 main.go:141] libmachine: (no-preload-331658) DBG | unable to find current IP address of domain no-preload-331658 in network mk-no-preload-331658
	I0918 21:05:11.603817   61273 main.go:141] libmachine: (no-preload-331658) DBG | I0918 21:05:11.603742   63200 retry.go:31] will retry after 425.818515ms: waiting for machine to come up
	I0918 21:05:12.031622   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:12.032425   61273 main.go:141] libmachine: (no-preload-331658) DBG | unable to find current IP address of domain no-preload-331658 in network mk-no-preload-331658
	I0918 21:05:12.032458   61273 main.go:141] libmachine: (no-preload-331658) DBG | I0918 21:05:12.032357   63200 retry.go:31] will retry after 701.564015ms: waiting for machine to come up
	I0918 21:05:12.735295   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:12.735852   61273 main.go:141] libmachine: (no-preload-331658) DBG | unable to find current IP address of domain no-preload-331658 in network mk-no-preload-331658
	I0918 21:05:12.735882   61273 main.go:141] libmachine: (no-preload-331658) DBG | I0918 21:05:12.735814   63200 retry.go:31] will retry after 904.737419ms: waiting for machine to come up
	I0918 21:05:13.642383   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:13.642913   61273 main.go:141] libmachine: (no-preload-331658) DBG | unable to find current IP address of domain no-preload-331658 in network mk-no-preload-331658
	I0918 21:05:13.642935   61273 main.go:141] libmachine: (no-preload-331658) DBG | I0918 21:05:13.642872   63200 retry.go:31] will retry after 891.091353ms: waiting for machine to come up
	I0918 21:05:14.536200   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:14.536797   61273 main.go:141] libmachine: (no-preload-331658) DBG | unable to find current IP address of domain no-preload-331658 in network mk-no-preload-331658
	I0918 21:05:14.536849   61273 main.go:141] libmachine: (no-preload-331658) DBG | I0918 21:05:14.536761   63200 retry.go:31] will retry after 1.01795417s: waiting for machine to come up
	I0918 21:05:15.555787   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:15.556287   61273 main.go:141] libmachine: (no-preload-331658) DBG | unable to find current IP address of domain no-preload-331658 in network mk-no-preload-331658
	I0918 21:05:15.556315   61273 main.go:141] libmachine: (no-preload-331658) DBG | I0918 21:05:15.556243   63200 retry.go:31] will retry after 1.598926126s: waiting for machine to come up
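
The retry.go lines above re-check for the VM's IP with a growing delay (215ms, 289ms, 424ms, ...). A generic retry-with-jittered-backoff sketch of that pattern follows; the growth factor, jitter, and attempt count are assumptions, not minikube's exact parameters.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retry runs fn until it succeeds or the attempts are exhausted, sleeping a
// jittered, growing interval between attempts.
func retry(attempts int, base time.Duration, fn func() error) error {
	delay := base
	for i := 0; i < attempts; i++ {
		if err := fn(); err == nil {
			return nil
		}
		// Jitter the delay and grow it, roughly like the intervals in the log.
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %s: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		delay = delay * 3 / 2
	}
	return errors.New("machine did not come up in time")
}

func main() {
	tries := 0
	_ = retry(10, 200*time.Millisecond, func() error {
		tries++
		if tries < 4 {
			return errors.New("unable to find current IP address")
		}
		return nil
	})
}
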
	I0918 21:05:14.132957   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:16.133323   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:13.778274   61740 pod_ready.go:93] pod "coredns-7c65d6cfc9-xwn8w" in "kube-system" namespace has status "Ready":"True"
	I0918 21:05:13.778310   61740 pod_ready.go:82] duration metric: took 11.011085965s for pod "coredns-7c65d6cfc9-xwn8w" in "kube-system" namespace to be "Ready" ...
	I0918 21:05:13.778325   61740 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-255556" in "kube-system" namespace to be "Ready" ...
	I0918 21:05:13.785089   61740 pod_ready.go:93] pod "etcd-embed-certs-255556" in "kube-system" namespace has status "Ready":"True"
	I0918 21:05:13.785121   61740 pod_ready.go:82] duration metric: took 6.787649ms for pod "etcd-embed-certs-255556" in "kube-system" namespace to be "Ready" ...
	I0918 21:05:13.785135   61740 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-255556" in "kube-system" namespace to be "Ready" ...
	I0918 21:05:15.793479   61740 pod_ready.go:103] pod "kube-apiserver-embed-certs-255556" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:15.357114   62061 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.050037005s)
	I0918 21:05:15.357141   62061 crio.go:469] duration metric: took 3.050151648s to extract the tarball
	I0918 21:05:15.357148   62061 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0918 21:05:15.399373   62061 ssh_runner.go:195] Run: sudo crictl images --output json
	I0918 21:05:15.434204   62061 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0918 21:05:15.434238   62061 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0918 21:05:15.434332   62061 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 21:05:15.434369   62061 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0918 21:05:15.434385   62061 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0918 21:05:15.434398   62061 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0918 21:05:15.434339   62061 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0918 21:05:15.434438   62061 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0918 21:05:15.434443   62061 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0918 21:05:15.434491   62061 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0918 21:05:15.436820   62061 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0918 21:05:15.436824   62061 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0918 21:05:15.436829   62061 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 21:05:15.436856   62061 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0918 21:05:15.436867   62061 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0918 21:05:15.436904   62061 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0918 21:05:15.436952   62061 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0918 21:05:15.437278   62061 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0918 21:05:15.747423   62061 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0918 21:05:15.747705   62061 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0918 21:05:15.748375   62061 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0918 21:05:15.750816   62061 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0918 21:05:15.752244   62061 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0918 21:05:15.754038   62061 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0918 21:05:15.770881   62061 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0918 21:05:15.911654   62061 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0918 21:05:15.911725   62061 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0918 21:05:15.911795   62061 ssh_runner.go:195] Run: which crictl
	I0918 21:05:15.938412   62061 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0918 21:05:15.938464   62061 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0918 21:05:15.938534   62061 ssh_runner.go:195] Run: which crictl
	I0918 21:05:15.944615   62061 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0918 21:05:15.944706   62061 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0918 21:05:15.944736   62061 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0918 21:05:15.944749   62061 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0918 21:05:15.944786   62061 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0918 21:05:15.944809   62061 ssh_runner.go:195] Run: which crictl
	I0918 21:05:15.944820   62061 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0918 21:05:15.944844   62061 ssh_runner.go:195] Run: which crictl
	I0918 21:05:15.944857   62061 ssh_runner.go:195] Run: which crictl
	I0918 21:05:15.944661   62061 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0918 21:05:15.944914   62061 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0918 21:05:15.944950   62061 ssh_runner.go:195] Run: which crictl
	I0918 21:05:15.965316   62061 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0918 21:05:15.965366   62061 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0918 21:05:15.965418   62061 ssh_runner.go:195] Run: which crictl
	I0918 21:05:15.965461   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0918 21:05:15.965419   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0918 21:05:15.965544   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0918 21:05:15.965558   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0918 21:05:15.965613   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0918 21:05:15.965621   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0918 21:05:16.101533   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0918 21:05:16.101536   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0918 21:05:16.105174   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0918 21:05:16.105198   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0918 21:05:16.109044   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0918 21:05:16.109048   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0918 21:05:16.109152   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0918 21:05:16.220653   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0918 21:05:16.255418   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0918 21:05:16.259931   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0918 21:05:16.259986   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0918 21:05:16.260039   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0918 21:05:16.260121   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0918 21:05:16.260337   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0918 21:05:16.352969   62061 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0918 21:05:16.399180   62061 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0918 21:05:16.445003   62061 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0918 21:05:16.445078   62061 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0918 21:05:16.445163   62061 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0918 21:05:16.445266   62061 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0918 21:05:16.445387   62061 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0918 21:05:16.445398   62061 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0918 21:05:16.633221   62061 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 21:05:16.782331   62061 cache_images.go:92] duration metric: took 1.348045359s to LoadCachedImages
	W0918 21:05:16.782445   62061 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	I0918 21:05:16.782464   62061 kubeadm.go:934] updating node { 192.168.72.53 8443 v1.20.0 crio true true} ...
	I0918 21:05:16.782608   62061 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-740194 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.53
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-740194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0918 21:05:16.782679   62061 ssh_runner.go:195] Run: crio config
	I0918 21:05:16.838946   62061 cni.go:84] Creating CNI manager for ""
	I0918 21:05:16.838978   62061 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0918 21:05:16.838990   62061 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0918 21:05:16.839008   62061 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.53 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-740194 NodeName:old-k8s-version-740194 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.53"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.53 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0918 21:05:16.839163   62061 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.53
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-740194"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.53
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.53"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0918 21:05:16.839232   62061 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0918 21:05:16.849868   62061 binaries.go:44] Found k8s binaries, skipping transfer
	I0918 21:05:16.849955   62061 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0918 21:05:16.859716   62061 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0918 21:05:16.877857   62061 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0918 21:05:16.895573   62061 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0918 21:05:16.913545   62061 ssh_runner.go:195] Run: grep 192.168.72.53	control-plane.minikube.internal$ /etc/hosts
	I0918 21:05:16.917476   62061 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.53	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0918 21:05:16.931318   62061 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 21:05:17.057636   62061 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0918 21:05:17.076534   62061 certs.go:68] Setting up /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/old-k8s-version-740194 for IP: 192.168.72.53
	I0918 21:05:17.076571   62061 certs.go:194] generating shared ca certs ...
	I0918 21:05:17.076594   62061 certs.go:226] acquiring lock for ca certs: {Name:mkf54e0e71ed884c4372f9d3d4cb308ea4600185 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 21:05:17.076796   62061 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.key
	I0918 21:05:17.076855   62061 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.key
	I0918 21:05:17.076871   62061 certs.go:256] generating profile certs ...
	I0918 21:05:17.076999   62061 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/old-k8s-version-740194/client.key
	I0918 21:05:17.077083   62061 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/old-k8s-version-740194/apiserver.key.424b07d9
	I0918 21:05:17.077149   62061 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/old-k8s-version-740194/proxy-client.key
	I0918 21:05:17.077321   62061 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878.pem (1338 bytes)
	W0918 21:05:17.077371   62061 certs.go:480] ignoring /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878_empty.pem, impossibly tiny 0 bytes
	I0918 21:05:17.077386   62061 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem (1675 bytes)
	I0918 21:05:17.077421   62061 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem (1082 bytes)
	I0918 21:05:17.077465   62061 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem (1123 bytes)
	I0918 21:05:17.077499   62061 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem (1679 bytes)
	I0918 21:05:17.077585   62061 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem (1708 bytes)
	I0918 21:05:17.078553   62061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0918 21:05:17.116000   62061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0918 21:05:17.150848   62061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0918 21:05:17.190616   62061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0918 21:05:17.222411   62061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/old-k8s-version-740194/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0918 21:05:17.266923   62061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/old-k8s-version-740194/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0918 21:05:17.303973   62061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/old-k8s-version-740194/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0918 21:05:17.334390   62061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/old-k8s-version-740194/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0918 21:05:17.358732   62061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0918 21:05:17.385693   62061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878.pem --> /usr/share/ca-certificates/14878.pem (1338 bytes)
	I0918 21:05:17.411396   62061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem --> /usr/share/ca-certificates/148782.pem (1708 bytes)
	I0918 21:05:17.440205   62061 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0918 21:05:17.459639   62061 ssh_runner.go:195] Run: openssl version
	I0918 21:05:17.465846   62061 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0918 21:05:17.477482   62061 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0918 21:05:17.482579   62061 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 18 19:39 /usr/share/ca-certificates/minikubeCA.pem
	I0918 21:05:17.482650   62061 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0918 21:05:17.488822   62061 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0918 21:05:17.499729   62061 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14878.pem && ln -fs /usr/share/ca-certificates/14878.pem /etc/ssl/certs/14878.pem"
	I0918 21:05:17.511718   62061 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14878.pem
	I0918 21:05:17.516361   62061 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 18 19:57 /usr/share/ca-certificates/14878.pem
	I0918 21:05:17.516438   62061 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14878.pem
	I0918 21:05:17.522542   62061 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14878.pem /etc/ssl/certs/51391683.0"
	I0918 21:05:17.534450   62061 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148782.pem && ln -fs /usr/share/ca-certificates/148782.pem /etc/ssl/certs/148782.pem"
	I0918 21:05:17.545916   62061 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148782.pem
	I0918 21:05:17.550672   62061 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 18 19:57 /usr/share/ca-certificates/148782.pem
	I0918 21:05:17.550737   62061 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148782.pem
	I0918 21:05:17.556802   62061 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148782.pem /etc/ssl/certs/3ec20f2e.0"
	I0918 21:05:17.568991   62061 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0918 21:05:17.574098   62061 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0918 21:05:17.581458   62061 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0918 21:05:17.588242   62061 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0918 21:05:17.594889   62061 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0918 21:05:17.601499   62061 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0918 21:05:17.608193   62061 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0918 21:05:17.614196   62061 kubeadm.go:392] StartCluster: {Name:old-k8s-version-740194 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-740194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.53 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 21:05:17.614298   62061 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0918 21:05:17.614363   62061 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0918 21:05:17.661551   62061 cri.go:89] found id: ""
	I0918 21:05:17.661627   62061 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0918 21:05:17.673087   62061 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0918 21:05:17.673110   62061 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0918 21:05:17.673153   62061 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0918 21:05:17.683213   62061 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0918 21:05:17.684537   62061 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-740194" does not appear in /home/jenkins/minikube-integration/19667-7671/kubeconfig
	I0918 21:05:17.685136   62061 kubeconfig.go:62] /home/jenkins/minikube-integration/19667-7671/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-740194" cluster setting kubeconfig missing "old-k8s-version-740194" context setting]
	I0918 21:05:17.686023   62061 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/kubeconfig: {Name:mk3a3eecadb0164dfdd5d3c4392b5b473a2a9bb0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 21:05:17.711337   62061 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0918 21:05:17.723894   62061 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.53
	I0918 21:05:17.723936   62061 kubeadm.go:1160] stopping kube-system containers ...
	I0918 21:05:17.723949   62061 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0918 21:05:17.724005   62061 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0918 21:05:17.761087   62061 cri.go:89] found id: ""
	I0918 21:05:17.761166   62061 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0918 21:05:17.779689   62061 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0918 21:05:17.791699   62061 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0918 21:05:17.791723   62061 kubeadm.go:157] found existing configuration files:
	
	I0918 21:05:17.791773   62061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0918 21:05:17.801804   62061 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0918 21:05:17.801879   62061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0918 21:05:17.812806   62061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0918 21:05:17.822813   62061 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0918 21:05:17.822902   62061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0918 21:05:17.833267   62061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0918 21:05:17.845148   62061 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0918 21:05:17.845230   62061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0918 21:05:17.855974   62061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0918 21:05:17.866490   62061 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0918 21:05:17.866593   62061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0918 21:05:17.877339   62061 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0918 21:05:17.887825   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:05:18.021691   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:05:18.726750   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:05:18.970635   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:05:19.081869   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:05:19.203071   62061 api_server.go:52] waiting for apiserver process to appear ...
	I0918 21:05:19.203166   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:17.156934   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:17.157481   61273 main.go:141] libmachine: (no-preload-331658) DBG | unable to find current IP address of domain no-preload-331658 in network mk-no-preload-331658
	I0918 21:05:17.157509   61273 main.go:141] libmachine: (no-preload-331658) DBG | I0918 21:05:17.157429   63200 retry.go:31] will retry after 1.586399944s: waiting for machine to come up
	I0918 21:05:18.746155   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:18.746620   61273 main.go:141] libmachine: (no-preload-331658) DBG | unable to find current IP address of domain no-preload-331658 in network mk-no-preload-331658
	I0918 21:05:18.746650   61273 main.go:141] libmachine: (no-preload-331658) DBG | I0918 21:05:18.746571   63200 retry.go:31] will retry after 2.204220189s: waiting for machine to come up
	I0918 21:05:20.953669   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:20.954223   61273 main.go:141] libmachine: (no-preload-331658) DBG | unable to find current IP address of domain no-preload-331658 in network mk-no-preload-331658
	I0918 21:05:20.954287   61273 main.go:141] libmachine: (no-preload-331658) DBG | I0918 21:05:20.954209   63200 retry.go:31] will retry after 2.418479665s: waiting for machine to come up
	I0918 21:05:18.634113   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:21.133516   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:18.365915   61740 pod_ready.go:93] pod "kube-apiserver-embed-certs-255556" in "kube-system" namespace has status "Ready":"True"
	I0918 21:05:18.365943   61740 pod_ready.go:82] duration metric: took 4.580799395s for pod "kube-apiserver-embed-certs-255556" in "kube-system" namespace to be "Ready" ...
	I0918 21:05:18.365956   61740 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-255556" in "kube-system" namespace to be "Ready" ...
	I0918 21:05:18.371010   61740 pod_ready.go:93] pod "kube-controller-manager-embed-certs-255556" in "kube-system" namespace has status "Ready":"True"
	I0918 21:05:18.371035   61740 pod_ready.go:82] duration metric: took 5.070331ms for pod "kube-controller-manager-embed-certs-255556" in "kube-system" namespace to be "Ready" ...
	I0918 21:05:18.371046   61740 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-v8szm" in "kube-system" namespace to be "Ready" ...
	I0918 21:05:18.375632   61740 pod_ready.go:93] pod "kube-proxy-v8szm" in "kube-system" namespace has status "Ready":"True"
	I0918 21:05:18.375658   61740 pod_ready.go:82] duration metric: took 4.603787ms for pod "kube-proxy-v8szm" in "kube-system" namespace to be "Ready" ...
	I0918 21:05:18.375671   61740 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-255556" in "kube-system" namespace to be "Ready" ...
	I0918 21:05:18.380527   61740 pod_ready.go:93] pod "kube-scheduler-embed-certs-255556" in "kube-system" namespace has status "Ready":"True"
	I0918 21:05:18.380551   61740 pod_ready.go:82] duration metric: took 4.872699ms for pod "kube-scheduler-embed-certs-255556" in "kube-system" namespace to be "Ready" ...
	I0918 21:05:18.380563   61740 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace to be "Ready" ...
	I0918 21:05:20.388600   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:22.887122   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:19.704128   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:20.203595   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:20.703244   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:21.204288   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:21.703469   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:22.203224   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:22.703968   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:23.204097   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:23.703933   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:24.204272   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:23.375904   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:23.376450   61273 main.go:141] libmachine: (no-preload-331658) DBG | unable to find current IP address of domain no-preload-331658 in network mk-no-preload-331658
	I0918 21:05:23.376476   61273 main.go:141] libmachine: (no-preload-331658) DBG | I0918 21:05:23.376397   63200 retry.go:31] will retry after 4.431211335s: waiting for machine to come up
	I0918 21:05:23.633093   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:25.633913   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:24.887771   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:27.386891   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:24.704240   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:25.203880   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:25.703983   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:26.204273   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:26.703861   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:27.204064   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:27.703276   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:28.204289   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:28.703701   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:29.203604   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:27.811234   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:27.811698   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has current primary IP address 192.168.61.31 and MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:27.811719   61273 main.go:141] libmachine: (no-preload-331658) Found IP for machine: 192.168.61.31
	I0918 21:05:27.811729   61273 main.go:141] libmachine: (no-preload-331658) Reserving static IP address...
	I0918 21:05:27.812131   61273 main.go:141] libmachine: (no-preload-331658) DBG | found host DHCP lease matching {name: "no-preload-331658", mac: "52:54:00:2a:47:d0", ip: "192.168.61.31"} in network mk-no-preload-331658: {Iface:virbr2 ExpiryTime:2024-09-18 21:55:52 +0000 UTC Type:0 Mac:52:54:00:2a:47:d0 Iaid: IPaddr:192.168.61.31 Prefix:24 Hostname:no-preload-331658 Clientid:01:52:54:00:2a:47:d0}
	I0918 21:05:27.812150   61273 main.go:141] libmachine: (no-preload-331658) Reserved static IP address: 192.168.61.31
	I0918 21:05:27.812163   61273 main.go:141] libmachine: (no-preload-331658) DBG | skip adding static IP to network mk-no-preload-331658 - found existing host DHCP lease matching {name: "no-preload-331658", mac: "52:54:00:2a:47:d0", ip: "192.168.61.31"}
	I0918 21:05:27.812170   61273 main.go:141] libmachine: (no-preload-331658) Waiting for SSH to be available...
	I0918 21:05:27.812178   61273 main.go:141] libmachine: (no-preload-331658) DBG | Getting to WaitForSSH function...
	I0918 21:05:27.814300   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:27.814735   61273 main.go:141] libmachine: (no-preload-331658) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:47:d0", ip: ""} in network mk-no-preload-331658: {Iface:virbr2 ExpiryTime:2024-09-18 21:55:52 +0000 UTC Type:0 Mac:52:54:00:2a:47:d0 Iaid: IPaddr:192.168.61.31 Prefix:24 Hostname:no-preload-331658 Clientid:01:52:54:00:2a:47:d0}
	I0918 21:05:27.814767   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined IP address 192.168.61.31 and MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:27.814891   61273 main.go:141] libmachine: (no-preload-331658) DBG | Using SSH client type: external
	I0918 21:05:27.814922   61273 main.go:141] libmachine: (no-preload-331658) DBG | Using SSH private key: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/no-preload-331658/id_rsa (-rw-------)
	I0918 21:05:27.814945   61273 main.go:141] libmachine: (no-preload-331658) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.31 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19667-7671/.minikube/machines/no-preload-331658/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0918 21:05:27.814972   61273 main.go:141] libmachine: (no-preload-331658) DBG | About to run SSH command:
	I0918 21:05:27.814985   61273 main.go:141] libmachine: (no-preload-331658) DBG | exit 0
	I0918 21:05:27.939949   61273 main.go:141] libmachine: (no-preload-331658) DBG | SSH cmd err, output: <nil>: 
	I0918 21:05:27.940365   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetConfigRaw
	I0918 21:05:27.941187   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetIP
	I0918 21:05:27.943976   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:27.944375   61273 main.go:141] libmachine: (no-preload-331658) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:47:d0", ip: ""} in network mk-no-preload-331658: {Iface:virbr2 ExpiryTime:2024-09-18 21:55:52 +0000 UTC Type:0 Mac:52:54:00:2a:47:d0 Iaid: IPaddr:192.168.61.31 Prefix:24 Hostname:no-preload-331658 Clientid:01:52:54:00:2a:47:d0}
	I0918 21:05:27.944399   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined IP address 192.168.61.31 and MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:27.944670   61273 profile.go:143] Saving config to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/no-preload-331658/config.json ...
	I0918 21:05:27.944942   61273 machine.go:93] provisionDockerMachine start ...
	I0918 21:05:27.944963   61273 main.go:141] libmachine: (no-preload-331658) Calling .DriverName
	I0918 21:05:27.945228   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHHostname
	I0918 21:05:27.947444   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:27.947810   61273 main.go:141] libmachine: (no-preload-331658) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:47:d0", ip: ""} in network mk-no-preload-331658: {Iface:virbr2 ExpiryTime:2024-09-18 21:55:52 +0000 UTC Type:0 Mac:52:54:00:2a:47:d0 Iaid: IPaddr:192.168.61.31 Prefix:24 Hostname:no-preload-331658 Clientid:01:52:54:00:2a:47:d0}
	I0918 21:05:27.947843   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined IP address 192.168.61.31 and MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:27.947974   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHPort
	I0918 21:05:27.948196   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHKeyPath
	I0918 21:05:27.948404   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHKeyPath
	I0918 21:05:27.948664   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHUsername
	I0918 21:05:27.948845   61273 main.go:141] libmachine: Using SSH client type: native
	I0918 21:05:27.949078   61273 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.31 22 <nil> <nil>}
	I0918 21:05:27.949099   61273 main.go:141] libmachine: About to run SSH command:
	hostname
	I0918 21:05:28.052352   61273 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0918 21:05:28.052378   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetMachineName
	I0918 21:05:28.052638   61273 buildroot.go:166] provisioning hostname "no-preload-331658"
	I0918 21:05:28.052668   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetMachineName
	I0918 21:05:28.052923   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHHostname
	I0918 21:05:28.056168   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:28.056599   61273 main.go:141] libmachine: (no-preload-331658) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:47:d0", ip: ""} in network mk-no-preload-331658: {Iface:virbr2 ExpiryTime:2024-09-18 21:55:52 +0000 UTC Type:0 Mac:52:54:00:2a:47:d0 Iaid: IPaddr:192.168.61.31 Prefix:24 Hostname:no-preload-331658 Clientid:01:52:54:00:2a:47:d0}
	I0918 21:05:28.056631   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined IP address 192.168.61.31 and MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:28.056805   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHPort
	I0918 21:05:28.057009   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHKeyPath
	I0918 21:05:28.057168   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHKeyPath
	I0918 21:05:28.057305   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHUsername
	I0918 21:05:28.057478   61273 main.go:141] libmachine: Using SSH client type: native
	I0918 21:05:28.057652   61273 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.31 22 <nil> <nil>}
	I0918 21:05:28.057665   61273 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-331658 && echo "no-preload-331658" | sudo tee /etc/hostname
	I0918 21:05:28.174245   61273 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-331658
	
	I0918 21:05:28.174282   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHHostname
	I0918 21:05:28.177373   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:28.177753   61273 main.go:141] libmachine: (no-preload-331658) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:47:d0", ip: ""} in network mk-no-preload-331658: {Iface:virbr2 ExpiryTime:2024-09-18 21:55:52 +0000 UTC Type:0 Mac:52:54:00:2a:47:d0 Iaid: IPaddr:192.168.61.31 Prefix:24 Hostname:no-preload-331658 Clientid:01:52:54:00:2a:47:d0}
	I0918 21:05:28.177781   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined IP address 192.168.61.31 and MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:28.177981   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHPort
	I0918 21:05:28.178202   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHKeyPath
	I0918 21:05:28.178380   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHKeyPath
	I0918 21:05:28.178523   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHUsername
	I0918 21:05:28.178752   61273 main.go:141] libmachine: Using SSH client type: native
	I0918 21:05:28.178948   61273 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.31 22 <nil> <nil>}
	I0918 21:05:28.178965   61273 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-331658' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-331658/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-331658' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0918 21:05:28.292659   61273 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0918 21:05:28.292691   61273 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19667-7671/.minikube CaCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19667-7671/.minikube}
	I0918 21:05:28.292714   61273 buildroot.go:174] setting up certificates
	I0918 21:05:28.292725   61273 provision.go:84] configureAuth start
	I0918 21:05:28.292734   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetMachineName
	I0918 21:05:28.293091   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetIP
	I0918 21:05:28.295792   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:28.296192   61273 main.go:141] libmachine: (no-preload-331658) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:47:d0", ip: ""} in network mk-no-preload-331658: {Iface:virbr2 ExpiryTime:2024-09-18 21:55:52 +0000 UTC Type:0 Mac:52:54:00:2a:47:d0 Iaid: IPaddr:192.168.61.31 Prefix:24 Hostname:no-preload-331658 Clientid:01:52:54:00:2a:47:d0}
	I0918 21:05:28.296219   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined IP address 192.168.61.31 and MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:28.296405   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHHostname
	I0918 21:05:28.298446   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:28.298788   61273 main.go:141] libmachine: (no-preload-331658) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:47:d0", ip: ""} in network mk-no-preload-331658: {Iface:virbr2 ExpiryTime:2024-09-18 21:55:52 +0000 UTC Type:0 Mac:52:54:00:2a:47:d0 Iaid: IPaddr:192.168.61.31 Prefix:24 Hostname:no-preload-331658 Clientid:01:52:54:00:2a:47:d0}
	I0918 21:05:28.298815   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined IP address 192.168.61.31 and MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:28.298938   61273 provision.go:143] copyHostCerts
	I0918 21:05:28.299013   61273 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem, removing ...
	I0918 21:05:28.299026   61273 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem
	I0918 21:05:28.299078   61273 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/ca.pem (1082 bytes)
	I0918 21:05:28.299170   61273 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem, removing ...
	I0918 21:05:28.299178   61273 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem
	I0918 21:05:28.299199   61273 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/cert.pem (1123 bytes)
	I0918 21:05:28.299252   61273 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem, removing ...
	I0918 21:05:28.299258   61273 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem
	I0918 21:05:28.299278   61273 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19667-7671/.minikube/key.pem (1679 bytes)
	I0918 21:05:28.299325   61273 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem org=jenkins.no-preload-331658 san=[127.0.0.1 192.168.61.31 localhost minikube no-preload-331658]
	I0918 21:05:28.606565   61273 provision.go:177] copyRemoteCerts
	I0918 21:05:28.606629   61273 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0918 21:05:28.606653   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHHostname
	I0918 21:05:28.609156   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:28.609533   61273 main.go:141] libmachine: (no-preload-331658) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:47:d0", ip: ""} in network mk-no-preload-331658: {Iface:virbr2 ExpiryTime:2024-09-18 21:55:52 +0000 UTC Type:0 Mac:52:54:00:2a:47:d0 Iaid: IPaddr:192.168.61.31 Prefix:24 Hostname:no-preload-331658 Clientid:01:52:54:00:2a:47:d0}
	I0918 21:05:28.609564   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined IP address 192.168.61.31 and MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:28.609690   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHPort
	I0918 21:05:28.609891   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHKeyPath
	I0918 21:05:28.610102   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHUsername
	I0918 21:05:28.610332   61273 sshutil.go:53] new ssh client: &{IP:192.168.61.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/no-preload-331658/id_rsa Username:docker}
	I0918 21:05:28.690571   61273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0918 21:05:28.719257   61273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0918 21:05:28.744119   61273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0918 21:05:28.768692   61273 provision.go:87] duration metric: took 475.955066ms to configureAuth
	I0918 21:05:28.768720   61273 buildroot.go:189] setting minikube options for container-runtime
	I0918 21:05:28.768941   61273 config.go:182] Loaded profile config "no-preload-331658": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0918 21:05:28.769031   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHHostname
	I0918 21:05:28.771437   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:28.771747   61273 main.go:141] libmachine: (no-preload-331658) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:47:d0", ip: ""} in network mk-no-preload-331658: {Iface:virbr2 ExpiryTime:2024-09-18 21:55:52 +0000 UTC Type:0 Mac:52:54:00:2a:47:d0 Iaid: IPaddr:192.168.61.31 Prefix:24 Hostname:no-preload-331658 Clientid:01:52:54:00:2a:47:d0}
	I0918 21:05:28.771786   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined IP address 192.168.61.31 and MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:28.771906   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHPort
	I0918 21:05:28.772127   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHKeyPath
	I0918 21:05:28.772330   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHKeyPath
	I0918 21:05:28.772496   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHUsername
	I0918 21:05:28.772717   61273 main.go:141] libmachine: Using SSH client type: native
	I0918 21:05:28.772886   61273 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.31 22 <nil> <nil>}
	I0918 21:05:28.772902   61273 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0918 21:05:29.001137   61273 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0918 21:05:29.001160   61273 machine.go:96] duration metric: took 1.056205004s to provisionDockerMachine
	I0918 21:05:29.001171   61273 start.go:293] postStartSetup for "no-preload-331658" (driver="kvm2")
	I0918 21:05:29.001181   61273 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0918 21:05:29.001194   61273 main.go:141] libmachine: (no-preload-331658) Calling .DriverName
	I0918 21:05:29.001531   61273 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0918 21:05:29.001556   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHHostname
	I0918 21:05:29.004307   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:29.004656   61273 main.go:141] libmachine: (no-preload-331658) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:47:d0", ip: ""} in network mk-no-preload-331658: {Iface:virbr2 ExpiryTime:2024-09-18 21:55:52 +0000 UTC Type:0 Mac:52:54:00:2a:47:d0 Iaid: IPaddr:192.168.61.31 Prefix:24 Hostname:no-preload-331658 Clientid:01:52:54:00:2a:47:d0}
	I0918 21:05:29.004686   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined IP address 192.168.61.31 and MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:29.004877   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHPort
	I0918 21:05:29.005128   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHKeyPath
	I0918 21:05:29.005379   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHUsername
	I0918 21:05:29.005556   61273 sshutil.go:53] new ssh client: &{IP:192.168.61.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/no-preload-331658/id_rsa Username:docker}
	I0918 21:05:29.087453   61273 ssh_runner.go:195] Run: cat /etc/os-release
	I0918 21:05:29.091329   61273 info.go:137] Remote host: Buildroot 2023.02.9
	I0918 21:05:29.091356   61273 filesync.go:126] Scanning /home/jenkins/minikube-integration/19667-7671/.minikube/addons for local assets ...
	I0918 21:05:29.091422   61273 filesync.go:126] Scanning /home/jenkins/minikube-integration/19667-7671/.minikube/files for local assets ...
	I0918 21:05:29.091493   61273 filesync.go:149] local asset: /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem -> 148782.pem in /etc/ssl/certs
	I0918 21:05:29.091578   61273 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0918 21:05:29.101039   61273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem --> /etc/ssl/certs/148782.pem (1708 bytes)
	I0918 21:05:29.125451   61273 start.go:296] duration metric: took 124.264463ms for postStartSetup
	I0918 21:05:29.125492   61273 fix.go:56] duration metric: took 19.880181743s for fixHost
	I0918 21:05:29.125514   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHHostname
	I0918 21:05:29.128543   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:29.128968   61273 main.go:141] libmachine: (no-preload-331658) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:47:d0", ip: ""} in network mk-no-preload-331658: {Iface:virbr2 ExpiryTime:2024-09-18 21:55:52 +0000 UTC Type:0 Mac:52:54:00:2a:47:d0 Iaid: IPaddr:192.168.61.31 Prefix:24 Hostname:no-preload-331658 Clientid:01:52:54:00:2a:47:d0}
	I0918 21:05:29.129022   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined IP address 192.168.61.31 and MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:29.129185   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHPort
	I0918 21:05:29.129385   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHKeyPath
	I0918 21:05:29.129580   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHKeyPath
	I0918 21:05:29.129739   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHUsername
	I0918 21:05:29.129919   61273 main.go:141] libmachine: Using SSH client type: native
	I0918 21:05:29.130155   61273 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.61.31 22 <nil> <nil>}
	I0918 21:05:29.130172   61273 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0918 21:05:29.240857   61273 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726693529.214864261
	
	I0918 21:05:29.240886   61273 fix.go:216] guest clock: 1726693529.214864261
	I0918 21:05:29.240897   61273 fix.go:229] Guest: 2024-09-18 21:05:29.214864261 +0000 UTC Remote: 2024-09-18 21:05:29.125495769 +0000 UTC m=+357.666326175 (delta=89.368492ms)
	I0918 21:05:29.240943   61273 fix.go:200] guest clock delta is within tolerance: 89.368492ms
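The clock check above simply subtracts the host timestamp recorded when the command returned from the "date +%s.%N" value reported by the guest. A minimal Go sketch of that comparison, with the two timestamps hard-coded from the log lines above; the tolerance constant is an illustrative assumption, not minikube's actual setting:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// Timestamps copied from the log: guest clock vs. host clock at command return.
    	guest := time.Date(2024, 9, 18, 21, 5, 29, 214864261, time.UTC)
    	remote := time.Date(2024, 9, 18, 21, 5, 29, 125495769, time.UTC)

    	delta := guest.Sub(remote)
    	fmt.Println("guest clock delta:", delta) // 89.368492ms, matching the log

    	// Hypothetical tolerance for illustration; the guest clock is only
    	// resynced when the skew exceeds the configured threshold.
    	const tolerance = time.Second
    	fmt.Println("within tolerance:", delta > -tolerance && delta < tolerance)
    }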
	I0918 21:05:29.240949   61273 start.go:83] releasing machines lock for "no-preload-331658", held for 19.99567651s
	I0918 21:05:29.240969   61273 main.go:141] libmachine: (no-preload-331658) Calling .DriverName
	I0918 21:05:29.241256   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetIP
	I0918 21:05:29.243922   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:29.244347   61273 main.go:141] libmachine: (no-preload-331658) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:47:d0", ip: ""} in network mk-no-preload-331658: {Iface:virbr2 ExpiryTime:2024-09-18 21:55:52 +0000 UTC Type:0 Mac:52:54:00:2a:47:d0 Iaid: IPaddr:192.168.61.31 Prefix:24 Hostname:no-preload-331658 Clientid:01:52:54:00:2a:47:d0}
	I0918 21:05:29.244376   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined IP address 192.168.61.31 and MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:29.244575   61273 main.go:141] libmachine: (no-preload-331658) Calling .DriverName
	I0918 21:05:29.245157   61273 main.go:141] libmachine: (no-preload-331658) Calling .DriverName
	I0918 21:05:29.245380   61273 main.go:141] libmachine: (no-preload-331658) Calling .DriverName
	I0918 21:05:29.245492   61273 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0918 21:05:29.245548   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHHostname
	I0918 21:05:29.245640   61273 ssh_runner.go:195] Run: cat /version.json
	I0918 21:05:29.245665   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHHostname
	I0918 21:05:29.248511   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:29.248927   61273 main.go:141] libmachine: (no-preload-331658) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:47:d0", ip: ""} in network mk-no-preload-331658: {Iface:virbr2 ExpiryTime:2024-09-18 21:55:52 +0000 UTC Type:0 Mac:52:54:00:2a:47:d0 Iaid: IPaddr:192.168.61.31 Prefix:24 Hostname:no-preload-331658 Clientid:01:52:54:00:2a:47:d0}
	I0918 21:05:29.248954   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:29.248984   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined IP address 192.168.61.31 and MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:29.249198   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHPort
	I0918 21:05:29.249423   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHKeyPath
	I0918 21:05:29.249506   61273 main.go:141] libmachine: (no-preload-331658) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:47:d0", ip: ""} in network mk-no-preload-331658: {Iface:virbr2 ExpiryTime:2024-09-18 21:55:52 +0000 UTC Type:0 Mac:52:54:00:2a:47:d0 Iaid: IPaddr:192.168.61.31 Prefix:24 Hostname:no-preload-331658 Clientid:01:52:54:00:2a:47:d0}
	I0918 21:05:29.249538   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined IP address 192.168.61.31 and MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:29.249608   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHUsername
	I0918 21:05:29.249692   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHPort
	I0918 21:05:29.249791   61273 sshutil.go:53] new ssh client: &{IP:192.168.61.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/no-preload-331658/id_rsa Username:docker}
	I0918 21:05:29.249899   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHKeyPath
	I0918 21:05:29.250076   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHUsername
	I0918 21:05:29.250228   61273 sshutil.go:53] new ssh client: &{IP:192.168.61.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/no-preload-331658/id_rsa Username:docker}
	I0918 21:05:29.365104   61273 ssh_runner.go:195] Run: systemctl --version
	I0918 21:05:29.371202   61273 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0918 21:05:29.518067   61273 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0918 21:05:29.524126   61273 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0918 21:05:29.524207   61273 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0918 21:05:29.540977   61273 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0918 21:05:29.541007   61273 start.go:495] detecting cgroup driver to use...
	I0918 21:05:29.541072   61273 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0918 21:05:29.558893   61273 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0918 21:05:29.576084   61273 docker.go:217] disabling cri-docker service (if available) ...
	I0918 21:05:29.576161   61273 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0918 21:05:29.591212   61273 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0918 21:05:29.605765   61273 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0918 21:05:29.734291   61273 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0918 21:05:29.892707   61273 docker.go:233] disabling docker service ...
	I0918 21:05:29.892771   61273 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0918 21:05:29.907575   61273 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0918 21:05:29.920545   61273 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0918 21:05:30.058604   61273 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0918 21:05:30.196896   61273 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0918 21:05:30.211398   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0918 21:05:30.231791   61273 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0918 21:05:30.231917   61273 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:05:30.243369   61273 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0918 21:05:30.243465   61273 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:05:30.254911   61273 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:05:30.266839   61273 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:05:30.278532   61273 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0918 21:05:30.290173   61273 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:05:30.301068   61273 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:05:30.318589   61273 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 21:05:30.329022   61273 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0918 21:05:30.338645   61273 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0918 21:05:30.338720   61273 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0918 21:05:30.351797   61273 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
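The sysctl failure above is the expected path when the br_netfilter module is not yet loaded: the /proc/sys/net/bridge tree does not exist, so the module is loaded as a fallback and IPv4 forwarding is then enabled. A rough Go sketch of that sequence, assuming local command execution rather than minikube's ssh_runner (the run helper is illustrative):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // run executes a command locally, echoes its output, and returns its error,
    // so the fallback sequence is visible when the sketch runs on a Linux host.
    func run(name string, args ...string) error {
    	out, err := exec.Command(name, args...).CombinedOutput()
    	fmt.Printf("$ %s %v\n%s", name, args, out)
    	return err
    }

    func main() {
    	// Fails with "cannot stat /proc/sys/net/bridge/..." while br_netfilter is unloaded.
    	if err := run("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
    		_ = run("sudo", "modprobe", "br_netfilter")
    	}
    	// IPv4 forwarding is enabled either way.
    	_ = run("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward")
    }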
	I0918 21:05:30.363412   61273 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 21:05:30.504035   61273 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0918 21:05:30.606470   61273 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0918 21:05:30.606547   61273 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0918 21:05:30.611499   61273 start.go:563] Will wait 60s for crictl version
	I0918 21:05:30.611559   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:05:30.615485   61273 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0918 21:05:30.659735   61273 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0918 21:05:30.659835   61273 ssh_runner.go:195] Run: crio --version
	I0918 21:05:30.690573   61273 ssh_runner.go:195] Run: crio --version
	I0918 21:05:30.723342   61273 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0918 21:05:30.724604   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetIP
	I0918 21:05:30.727445   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:30.727885   61273 main.go:141] libmachine: (no-preload-331658) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:47:d0", ip: ""} in network mk-no-preload-331658: {Iface:virbr2 ExpiryTime:2024-09-18 21:55:52 +0000 UTC Type:0 Mac:52:54:00:2a:47:d0 Iaid: IPaddr:192.168.61.31 Prefix:24 Hostname:no-preload-331658 Clientid:01:52:54:00:2a:47:d0}
	I0918 21:05:30.727919   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined IP address 192.168.61.31 and MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:30.728132   61273 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0918 21:05:30.732134   61273 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0918 21:05:30.745695   61273 kubeadm.go:883] updating cluster {Name:no-preload-331658 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-331658 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.31 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0918 21:05:30.745813   61273 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0918 21:05:30.745849   61273 ssh_runner.go:195] Run: sudo crictl images --output json
	I0918 21:05:30.788504   61273 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0918 21:05:30.788537   61273 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.1 registry.k8s.io/kube-controller-manager:v1.31.1 registry.k8s.io/kube-scheduler:v1.31.1 registry.k8s.io/kube-proxy:v1.31.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0918 21:05:30.788634   61273 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 21:05:30.788655   61273 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0918 21:05:30.788673   61273 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I0918 21:05:30.788685   61273 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0918 21:05:30.788655   61273 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I0918 21:05:30.788796   61273 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I0918 21:05:30.788804   61273 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0918 21:05:30.788655   61273 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0918 21:05:30.790173   61273 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 21:05:30.790181   61273 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I0918 21:05:30.790199   61273 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0918 21:05:30.790170   61273 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I0918 21:05:30.790222   61273 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0918 21:05:30.790237   61273 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0918 21:05:30.790268   61273 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0918 21:05:30.790542   61273 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I0918 21:05:31.049150   61273 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0918 21:05:31.052046   61273 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.1
	I0918 21:05:31.099660   61273 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I0918 21:05:31.099861   61273 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.1
	I0918 21:05:31.111308   61273 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.1
	I0918 21:05:31.111439   61273 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0918 21:05:31.112293   61273 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.1
	I0918 21:05:31.203873   61273 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.1" needs transfer: "registry.k8s.io/kube-proxy:v1.31.1" does not exist at hash "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561" in container runtime
	I0918 21:05:31.203934   61273 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.1
	I0918 21:05:31.204042   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:05:31.208912   61273 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I0918 21:05:31.208937   61273 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.1" does not exist at hash "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b" in container runtime
	I0918 21:05:31.208968   61273 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.1
	I0918 21:05:31.208960   61273 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I0918 21:05:31.209020   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:05:31.209029   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:05:31.249355   61273 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0918 21:05:31.249408   61273 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0918 21:05:31.249459   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:05:31.253214   61273 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.1" does not exist at hash "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1" in container runtime
	I0918 21:05:31.253244   61273 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.1" does not exist at hash "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee" in container runtime
	I0918 21:05:31.253286   61273 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.1
	I0918 21:05:31.253274   61273 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0918 21:05:31.253335   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:05:31.253339   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:05:31.253351   61273 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0918 21:05:31.253405   61273 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0918 21:05:31.253419   61273 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0918 21:05:31.255163   61273 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0918 21:05:31.330929   61273 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0918 21:05:31.330999   61273 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0918 21:05:31.349540   61273 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0918 21:05:31.349558   61273 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0918 21:05:31.350088   61273 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0918 21:05:31.353763   61273 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0918 21:05:31.447057   61273 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0918 21:05:31.457171   61273 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0918 21:05:31.457239   61273 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0918 21:05:31.483087   61273 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0918 21:05:31.483097   61273 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0918 21:05:31.483210   61273 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0918 21:05:28.131874   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:30.133067   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:32.134557   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:29.389052   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:31.887032   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:29.704274   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:30.203372   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:30.703751   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:31.203670   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:31.704097   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:32.203611   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:32.703968   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:33.203356   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:33.704260   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:34.204082   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:31.573784   61273 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I0918 21:05:31.573906   61273 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1
	I0918 21:05:31.573927   61273 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0918 21:05:31.573951   61273 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I0918 21:05:31.574038   61273 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0918 21:05:31.605972   61273 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I0918 21:05:31.606077   61273 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0918 21:05:31.606086   61273 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I0918 21:05:31.613640   61273 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0918 21:05:31.613769   61273 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0918 21:05:31.641105   61273 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I0918 21:05:31.641109   61273 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.1 (exists)
	I0918 21:05:31.641199   61273 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.1
	I0918 21:05:31.641223   61273 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0918 21:05:31.641244   61273 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1
	I0918 21:05:31.641175   61273 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.1 (exists)
	I0918 21:05:31.666586   61273 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I0918 21:05:31.666661   61273 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I0918 21:05:31.666792   61273 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0918 21:05:31.666821   61273 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.1 (exists)
	I0918 21:05:31.666795   61273 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0918 21:05:32.009797   61273 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 21:05:33.610028   61273 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1: (1.968756977s)
	I0918 21:05:33.610065   61273 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 from cache
	I0918 21:05:33.610080   61273 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1: (1.943261692s)
	I0918 21:05:33.610111   61273 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.1 (exists)
	I0918 21:05:33.610090   61273 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0918 21:05:33.610122   61273 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.600294362s)
	I0918 21:05:33.610161   61273 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0918 21:05:33.610174   61273 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0918 21:05:33.610193   61273 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 21:05:33.610242   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:05:35.571685   61273 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1: (1.96147024s)
	I0918 21:05:35.571722   61273 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 from cache
	I0918 21:05:35.571748   61273 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I0918 21:05:35.571802   61273 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I0918 21:05:35.571802   61273 ssh_runner.go:235] Completed: which crictl: (1.961540517s)
	I0918 21:05:35.571882   61273 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 21:05:34.632853   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:36.633341   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:33.887591   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:36.387534   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:34.703445   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:35.203660   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:35.703374   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:36.203304   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:36.704191   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:37.204129   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:37.703912   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:38.203310   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:38.703616   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:39.203258   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:37.536622   61273 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.96470192s)
	I0918 21:05:37.536666   61273 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (1.96484474s)
	I0918 21:05:37.536690   61273 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I0918 21:05:37.536713   61273 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 21:05:37.536721   61273 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0918 21:05:37.536766   61273 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0918 21:05:39.615751   61273 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1: (2.078954836s)
	I0918 21:05:39.615791   61273 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 from cache
	I0918 21:05:39.615823   61273 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.079084749s)
	I0918 21:05:39.615902   61273 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 21:05:39.615829   61273 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0918 21:05:39.615972   61273 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0918 21:05:39.676258   61273 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0918 21:05:39.676355   61273 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0918 21:05:38.634158   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:40.634292   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:38.888255   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:41.387766   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:39.704105   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:40.204102   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:40.704073   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:41.203654   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:41.703947   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:42.203722   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:42.703303   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:43.203847   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:43.704163   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:44.203216   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:42.909577   61273 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (3.233201912s)
	I0918 21:05:42.909617   61273 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0918 21:05:42.909722   61273 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.293701319s)
	I0918 21:05:42.909748   61273 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0918 21:05:42.909781   61273 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0918 21:05:42.909859   61273 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0918 21:05:44.767646   61273 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1: (1.857764218s)
	I0918 21:05:44.767673   61273 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 from cache
	I0918 21:05:44.767705   61273 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0918 21:05:44.767787   61273 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0918 21:05:45.419210   61273 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19667-7671/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0918 21:05:45.419257   61273 cache_images.go:123] Successfully loaded all cached images
	I0918 21:05:45.419265   61273 cache_images.go:92] duration metric: took 14.630712818s to LoadCachedImages
	I0918 21:05:45.419278   61273 kubeadm.go:934] updating node { 192.168.61.31 8443 v1.31.1 crio true true} ...
	I0918 21:05:45.419399   61273 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-331658 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.31
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-331658 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0918 21:05:45.419479   61273 ssh_runner.go:195] Run: crio config
	I0918 21:05:45.468525   61273 cni.go:84] Creating CNI manager for ""
	I0918 21:05:45.468549   61273 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0918 21:05:45.468558   61273 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0918 21:05:45.468579   61273 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.31 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-331658 NodeName:no-preload-331658 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.31"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.31 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0918 21:05:45.468706   61273 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.31
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-331658"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.31
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.31"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0918 21:05:45.468781   61273 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0918 21:05:45.479592   61273 binaries.go:44] Found k8s binaries, skipping transfer
	I0918 21:05:45.479662   61273 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0918 21:05:45.488586   61273 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0918 21:05:45.507027   61273 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0918 21:05:45.525430   61273 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0918 21:05:45.543854   61273 ssh_runner.go:195] Run: grep 192.168.61.31	control-plane.minikube.internal$ /etc/hosts
	I0918 21:05:45.547792   61273 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.31	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0918 21:05:45.559968   61273 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 21:05:45.686602   61273 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0918 21:05:45.702793   61273 certs.go:68] Setting up /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/no-preload-331658 for IP: 192.168.61.31
	I0918 21:05:45.702814   61273 certs.go:194] generating shared ca certs ...
	I0918 21:05:45.702829   61273 certs.go:226] acquiring lock for ca certs: {Name:mkf54e0e71ed884c4372f9d3d4cb308ea4600185 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 21:05:45.703005   61273 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.key
	I0918 21:05:45.703071   61273 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.key
	I0918 21:05:45.703085   61273 certs.go:256] generating profile certs ...
	I0918 21:05:45.703159   61273 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/no-preload-331658/client.key
	I0918 21:05:45.703228   61273 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/no-preload-331658/apiserver.key.1a336b78
	I0918 21:05:45.703263   61273 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/no-preload-331658/proxy-client.key
	I0918 21:05:45.703384   61273 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878.pem (1338 bytes)
	W0918 21:05:45.703417   61273 certs.go:480] ignoring /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878_empty.pem, impossibly tiny 0 bytes
	I0918 21:05:45.703430   61273 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca-key.pem (1675 bytes)
	I0918 21:05:45.703463   61273 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/ca.pem (1082 bytes)
	I0918 21:05:45.703493   61273 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/cert.pem (1123 bytes)
	I0918 21:05:45.703521   61273 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/certs/key.pem (1679 bytes)
	I0918 21:05:45.703582   61273 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem (1708 bytes)
	I0918 21:05:45.704338   61273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0918 21:05:45.757217   61273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0918 21:05:45.791588   61273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0918 21:05:45.825543   61273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0918 21:05:45.859322   61273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/no-preload-331658/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0918 21:05:45.892890   61273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/no-preload-331658/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0918 21:05:45.922841   61273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/no-preload-331658/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0918 21:05:45.947670   61273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/no-preload-331658/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0918 21:05:45.973315   61273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/ssl/certs/148782.pem --> /usr/share/ca-certificates/148782.pem (1708 bytes)
	I0918 21:05:45.997699   61273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0918 21:05:46.022802   61273 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-7671/.minikube/certs/14878.pem --> /usr/share/ca-certificates/14878.pem (1338 bytes)
	I0918 21:05:46.046646   61273 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0918 21:05:46.063329   61273 ssh_runner.go:195] Run: openssl version
	I0918 21:05:46.069432   61273 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148782.pem && ln -fs /usr/share/ca-certificates/148782.pem /etc/ssl/certs/148782.pem"
	I0918 21:05:46.081104   61273 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148782.pem
	I0918 21:05:46.086180   61273 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 18 19:57 /usr/share/ca-certificates/148782.pem
	I0918 21:05:46.086241   61273 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148782.pem
	I0918 21:05:46.092527   61273 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148782.pem /etc/ssl/certs/3ec20f2e.0"
	I0918 21:05:46.103601   61273 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0918 21:05:46.114656   61273 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0918 21:05:46.118788   61273 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 18 19:39 /usr/share/ca-certificates/minikubeCA.pem
	I0918 21:05:46.118855   61273 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0918 21:05:46.124094   61273 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0918 21:05:46.135442   61273 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14878.pem && ln -fs /usr/share/ca-certificates/14878.pem /etc/ssl/certs/14878.pem"
	I0918 21:05:46.146105   61273 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14878.pem
	I0918 21:05:46.150661   61273 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 18 19:57 /usr/share/ca-certificates/14878.pem
	I0918 21:05:46.150714   61273 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14878.pem
	I0918 21:05:46.156247   61273 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14878.pem /etc/ssl/certs/51391683.0"
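The symlink steps above install each CA bundle into /etc/ssl/certs under its OpenSSL subject hash (3ec20f2e.0, b5213941.0, 51391683.0), which is how OpenSSL's cert-directory lookup finds trusted CAs. A minimal sketch of that step, assuming a local openssl binary and illustrative paths (not minikube's actual implementation):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkBySubjectHash mirrors the "openssl x509 -hash" + symlink step in the log:
// compute the certificate's subject hash and link it into the OpenSSL cert
// directory as <hash>.0.
func linkBySubjectHash(certPath, certDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("%s/%s.0", certDir, hash)
	// Replace any stale link, then point <hash>.0 at the certificate.
	_ = os.Remove(link)
	return os.Symlink(certPath, link)
}

func main() {
	// Illustrative paths only.
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```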
	I0918 21:05:46.167475   61273 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0918 21:05:46.172172   61273 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0918 21:05:46.178638   61273 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0918 21:05:46.184644   61273 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0918 21:05:46.190704   61273 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0918 21:05:46.196414   61273 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0918 21:05:46.202467   61273 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
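Each -checkend 86400 run above asks whether the certificate expires within the next 24 hours; a non-zero exit status would force regeneration. The same check can be expressed directly with Go's crypto/x509, as in this hedged sketch (the path is illustrative):

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in a PEM file expires
// within the given window, mirroring `openssl x509 -checkend`.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 86400*time.Second)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}
```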
	I0918 21:05:46.208306   61273 kubeadm.go:392] StartCluster: {Name:no-preload-331658 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-331658 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.31 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 21:05:46.208405   61273 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0918 21:05:46.208472   61273 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0918 21:05:46.247189   61273 cri.go:89] found id: ""
	I0918 21:05:46.247267   61273 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0918 21:05:46.258228   61273 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0918 21:05:46.258253   61273 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0918 21:05:46.258309   61273 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0918 21:05:46.268703   61273 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0918 21:05:46.269728   61273 kubeconfig.go:125] found "no-preload-331658" server: "https://192.168.61.31:8443"
	I0918 21:05:46.271749   61273 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0918 21:05:46.282051   61273 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.31
	I0918 21:05:46.282105   61273 kubeadm.go:1160] stopping kube-system containers ...
	I0918 21:05:46.282122   61273 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0918 21:05:46.282191   61273 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0918 21:05:46.319805   61273 cri.go:89] found id: ""
	I0918 21:05:46.319880   61273 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0918 21:05:46.336130   61273 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0918 21:05:46.345940   61273 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0918 21:05:46.345962   61273 kubeadm.go:157] found existing configuration files:
	
	I0918 21:05:46.346008   61273 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0918 21:05:46.355577   61273 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0918 21:05:46.355658   61273 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0918 21:05:46.367154   61273 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0918 21:05:46.377062   61273 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0918 21:05:46.377126   61273 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0918 21:05:46.387180   61273 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0918 21:05:46.396578   61273 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0918 21:05:46.396642   61273 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0918 21:05:46.406687   61273 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0918 21:05:46.416545   61273 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0918 21:05:46.416617   61273 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
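The config-check sequence above greps each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and removes any file that lacks it (or is missing entirely), so the following kubeadm init phases can regenerate them. A rough Go equivalent, assuming a plain substring check is enough for illustration:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// cleanStaleKubeconfigs removes kubeconfig files that do not reference the
// expected control-plane endpoint, so a later `kubeadm init phase kubeconfig`
// can rewrite them.
func cleanStaleKubeconfigs(endpoint string, files []string) {
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err == nil && strings.Contains(string(data), endpoint) {
			continue // endpoint present, keep the file
		}
		// Missing file or missing endpoint: remove (ignore "not exist" errors).
		if err := os.Remove(f); err != nil && !os.IsNotExist(err) {
			fmt.Fprintf(os.Stderr, "removing %s: %v\n", f, err)
		}
	}
}

func main() {
	cleanStaleKubeconfigs("https://control-plane.minikube.internal:8443", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}
```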
	I0918 21:05:46.426405   61273 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0918 21:05:46.436343   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:05:43.132484   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:45.132905   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:47.132942   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:43.890245   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:46.386955   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:44.703267   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:45.203924   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:45.703945   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:46.203386   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:46.703674   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:47.203387   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:47.704034   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:48.203348   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:48.703715   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:49.203984   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:46.563094   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:05:47.663823   61273 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.100694645s)
	I0918 21:05:47.663857   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:05:47.895962   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:05:47.978862   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:05:48.095438   61273 api_server.go:52] waiting for apiserver process to appear ...
	I0918 21:05:48.095530   61273 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:48.595581   61273 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:49.095761   61273 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:49.122304   61273 api_server.go:72] duration metric: took 1.026867171s to wait for apiserver process to appear ...
	I0918 21:05:49.122343   61273 api_server.go:88] waiting for apiserver healthz status ...
	I0918 21:05:49.122361   61273 api_server.go:253] Checking apiserver healthz at https://192.168.61.31:8443/healthz ...
	I0918 21:05:49.133503   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:51.133761   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:48.386996   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:50.387697   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:52.886989   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:52.253818   61273 api_server.go:279] https://192.168.61.31:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0918 21:05:52.253850   61273 api_server.go:103] status: https://192.168.61.31:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0918 21:05:52.253864   61273 api_server.go:253] Checking apiserver healthz at https://192.168.61.31:8443/healthz ...
	I0918 21:05:52.290586   61273 api_server.go:279] https://192.168.61.31:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0918 21:05:52.290617   61273 api_server.go:103] status: https://192.168.61.31:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0918 21:05:52.623078   61273 api_server.go:253] Checking apiserver healthz at https://192.168.61.31:8443/healthz ...
	I0918 21:05:52.631774   61273 api_server.go:279] https://192.168.61.31:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0918 21:05:52.631811   61273 api_server.go:103] status: https://192.168.61.31:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0918 21:05:53.123498   61273 api_server.go:253] Checking apiserver healthz at https://192.168.61.31:8443/healthz ...
	I0918 21:05:53.132091   61273 api_server.go:279] https://192.168.61.31:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0918 21:05:53.132120   61273 api_server.go:103] status: https://192.168.61.31:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0918 21:05:53.622597   61273 api_server.go:253] Checking apiserver healthz at https://192.168.61.31:8443/healthz ...
	I0918 21:05:53.628896   61273 api_server.go:279] https://192.168.61.31:8443/healthz returned 200:
	ok
	I0918 21:05:53.638315   61273 api_server.go:141] control plane version: v1.31.1
	I0918 21:05:53.638354   61273 api_server.go:131] duration metric: took 4.516002991s to wait for apiserver health ...
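The 403 responses above are expected while the rbac/bootstrap-roles post-start hook (shown as failed in the 500 output) has not yet granted anonymous access to /healthz; the wait loop simply keeps polling until the endpoint returns 200. A minimal poller along those lines, with TLS verification disabled only because the apiserver's serving certificate is not in the host trust store (an illustration, not minikube's implementation):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200
// or the deadline passes. 403/500 responses are treated as "not ready yet".
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Illustration only: skip verification of the self-signed serving cert.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy after %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.61.31:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```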
	I0918 21:05:53.638367   61273 cni.go:84] Creating CNI manager for ""
	I0918 21:05:53.638376   61273 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0918 21:05:53.639948   61273 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0918 21:05:49.703565   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:50.204192   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:50.704248   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:51.203335   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:51.703761   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:52.203474   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:52.703901   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:53.203856   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:53.704192   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:54.204243   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:53.641376   61273 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0918 21:05:53.667828   61273 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
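The two lines above write the bridge CNI configuration (a 496-byte conflist) into /etc/cni/net.d. The exact contents are not printed in the log; a typical bridge-plus-portmap conflist has roughly the shape below, with illustrative values rather than the bytes minikube actually writes:

```go
package main

import "os"

// A generic bridge CNI conflist (illustrative values, not minikube's file).
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}
    },
    {"type": "portmap", "capabilities": {"portMappings": true}}
  ]
}`

func main() {
	// Write the conflist where the container runtime's CNI plugin will pick it up.
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		panic(err)
	}
}
```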
	I0918 21:05:53.701667   61273 system_pods.go:43] waiting for kube-system pods to appear ...
	I0918 21:05:53.714053   61273 system_pods.go:59] 8 kube-system pods found
	I0918 21:05:53.714101   61273 system_pods.go:61] "coredns-7c65d6cfc9-dgnw2" [085d5a98-0a61-4678-830f-384780a0d7ef] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0918 21:05:53.714113   61273 system_pods.go:61] "etcd-no-preload-331658" [d6a53ff4-fcf0-46c0-9e77-9af6d0363e08] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0918 21:05:53.714126   61273 system_pods.go:61] "kube-apiserver-no-preload-331658" [1953598f-4c3f-4a98-ab75-b8ca1b8093ad] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0918 21:05:53.714135   61273 system_pods.go:61] "kube-controller-manager-no-preload-331658" [8fc655be-a192-4250-a31d-2e533a2b5e41] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0918 21:05:53.714145   61273 system_pods.go:61] "kube-proxy-hx25w" [a26512ff-f695-4452-8974-577479257160] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0918 21:05:53.714157   61273 system_pods.go:61] "kube-scheduler-no-preload-331658" [64e5347e-e7fd-4a0d-ae41-539c3989c29e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0918 21:05:53.714169   61273 system_pods.go:61] "metrics-server-6867b74b74-n27vc" [b1de76ec-8987-49ce-ae66-eedda2705cde] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0918 21:05:53.714181   61273 system_pods.go:61] "storage-provisioner" [e110aeb3-e9bc-4bb9-9b49-5579558bdda2] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0918 21:05:53.714191   61273 system_pods.go:74] duration metric: took 12.499195ms to wait for pod list to return data ...
	I0918 21:05:53.714206   61273 node_conditions.go:102] verifying NodePressure condition ...
	I0918 21:05:53.720251   61273 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0918 21:05:53.720283   61273 node_conditions.go:123] node cpu capacity is 2
	I0918 21:05:53.720296   61273 node_conditions.go:105] duration metric: took 6.082637ms to run NodePressure ...
	I0918 21:05:53.720317   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0918 21:05:54.056981   61273 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0918 21:05:54.062413   61273 kubeadm.go:739] kubelet initialised
	I0918 21:05:54.062436   61273 kubeadm.go:740] duration metric: took 5.424693ms waiting for restarted kubelet to initialise ...
	I0918 21:05:54.062443   61273 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0918 21:05:54.069721   61273 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-dgnw2" in "kube-system" namespace to be "Ready" ...
	I0918 21:05:54.089970   61273 pod_ready.go:98] node "no-preload-331658" hosting pod "coredns-7c65d6cfc9-dgnw2" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-331658" has status "Ready":"False"
	I0918 21:05:54.090005   61273 pod_ready.go:82] duration metric: took 20.250586ms for pod "coredns-7c65d6cfc9-dgnw2" in "kube-system" namespace to be "Ready" ...
	E0918 21:05:54.090017   61273 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-331658" hosting pod "coredns-7c65d6cfc9-dgnw2" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-331658" has status "Ready":"False"
	I0918 21:05:54.090046   61273 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-331658" in "kube-system" namespace to be "Ready" ...
	I0918 21:05:54.105121   61273 pod_ready.go:98] node "no-preload-331658" hosting pod "etcd-no-preload-331658" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-331658" has status "Ready":"False"
	I0918 21:05:54.105156   61273 pod_ready.go:82] duration metric: took 15.097714ms for pod "etcd-no-preload-331658" in "kube-system" namespace to be "Ready" ...
	E0918 21:05:54.105170   61273 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-331658" hosting pod "etcd-no-preload-331658" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-331658" has status "Ready":"False"
	I0918 21:05:54.105180   61273 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-331658" in "kube-system" namespace to be "Ready" ...
	I0918 21:05:54.112687   61273 pod_ready.go:98] node "no-preload-331658" hosting pod "kube-apiserver-no-preload-331658" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-331658" has status "Ready":"False"
	I0918 21:05:54.112711   61273 pod_ready.go:82] duration metric: took 7.523191ms for pod "kube-apiserver-no-preload-331658" in "kube-system" namespace to be "Ready" ...
	E0918 21:05:54.112722   61273 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-331658" hosting pod "kube-apiserver-no-preload-331658" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-331658" has status "Ready":"False"
	I0918 21:05:54.112730   61273 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-331658" in "kube-system" namespace to be "Ready" ...
	I0918 21:05:54.119681   61273 pod_ready.go:98] node "no-preload-331658" hosting pod "kube-controller-manager-no-preload-331658" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-331658" has status "Ready":"False"
	I0918 21:05:54.119707   61273 pod_ready.go:82] duration metric: took 6.967275ms for pod "kube-controller-manager-no-preload-331658" in "kube-system" namespace to be "Ready" ...
	E0918 21:05:54.119716   61273 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-331658" hosting pod "kube-controller-manager-no-preload-331658" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-331658" has status "Ready":"False"
	I0918 21:05:54.119723   61273 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-hx25w" in "kube-system" namespace to be "Ready" ...
	I0918 21:05:54.505099   61273 pod_ready.go:98] node "no-preload-331658" hosting pod "kube-proxy-hx25w" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-331658" has status "Ready":"False"
	I0918 21:05:54.505127   61273 pod_ready.go:82] duration metric: took 385.395528ms for pod "kube-proxy-hx25w" in "kube-system" namespace to be "Ready" ...
	E0918 21:05:54.505140   61273 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-331658" hosting pod "kube-proxy-hx25w" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-331658" has status "Ready":"False"
	I0918 21:05:54.505147   61273 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-331658" in "kube-system" namespace to be "Ready" ...
	I0918 21:05:54.905748   61273 pod_ready.go:98] node "no-preload-331658" hosting pod "kube-scheduler-no-preload-331658" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-331658" has status "Ready":"False"
	I0918 21:05:54.905774   61273 pod_ready.go:82] duration metric: took 400.618175ms for pod "kube-scheduler-no-preload-331658" in "kube-system" namespace to be "Ready" ...
	E0918 21:05:54.905785   61273 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-331658" hosting pod "kube-scheduler-no-preload-331658" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-331658" has status "Ready":"False"
	I0918 21:05:54.905794   61273 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace to be "Ready" ...
	I0918 21:05:55.305077   61273 pod_ready.go:98] node "no-preload-331658" hosting pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-331658" has status "Ready":"False"
	I0918 21:05:55.305106   61273 pod_ready.go:82] duration metric: took 399.301293ms for pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace to be "Ready" ...
	E0918 21:05:55.305118   61273 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-331658" hosting pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-331658" has status "Ready":"False"
	I0918 21:05:55.305126   61273 pod_ready.go:39] duration metric: took 1.242662699s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
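The pod_ready waits above poll each system-critical pod until its Ready condition is True; while the node itself still reports Ready=False the checks are skipped, as the "(skipping!)" messages show. A condensed client-go version of that readiness probe, using a hypothetical kubeconfig path, might look like:

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // hypothetical path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll one system pod until Ready or until the deadline, mirroring pod_ready.go.
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-no-preload-331658", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to be Ready")
}
```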
	I0918 21:05:55.305150   61273 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0918 21:05:55.317568   61273 ops.go:34] apiserver oom_adj: -16
	I0918 21:05:55.317597   61273 kubeadm.go:597] duration metric: took 9.0593375s to restartPrimaryControlPlane
	I0918 21:05:55.317616   61273 kubeadm.go:394] duration metric: took 9.109322119s to StartCluster
	I0918 21:05:55.317643   61273 settings.go:142] acquiring lock: {Name:mk6ae95a69dcbe00c28846409e9f46945a88de2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 21:05:55.317720   61273 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19667-7671/kubeconfig
	I0918 21:05:55.320228   61273 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/kubeconfig: {Name:mk3a3eecadb0164dfdd5d3c4392b5b473a2a9bb0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 21:05:55.320552   61273 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.31 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0918 21:05:55.320609   61273 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0918 21:05:55.320716   61273 addons.go:69] Setting storage-provisioner=true in profile "no-preload-331658"
	I0918 21:05:55.320725   61273 addons.go:69] Setting default-storageclass=true in profile "no-preload-331658"
	I0918 21:05:55.320739   61273 addons.go:234] Setting addon storage-provisioner=true in "no-preload-331658"
	W0918 21:05:55.320747   61273 addons.go:243] addon storage-provisioner should already be in state true
	I0918 21:05:55.320765   61273 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-331658"
	I0918 21:05:55.320785   61273 host.go:66] Checking if "no-preload-331658" exists ...
	I0918 21:05:55.320769   61273 addons.go:69] Setting metrics-server=true in profile "no-preload-331658"
	I0918 21:05:55.320799   61273 config.go:182] Loaded profile config "no-preload-331658": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0918 21:05:55.320808   61273 addons.go:234] Setting addon metrics-server=true in "no-preload-331658"
	W0918 21:05:55.320863   61273 addons.go:243] addon metrics-server should already be in state true
	I0918 21:05:55.320889   61273 host.go:66] Checking if "no-preload-331658" exists ...
	I0918 21:05:55.321208   61273 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:05:55.321228   61273 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:05:55.321208   61273 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:05:55.321262   61273 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:05:55.321282   61273 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:05:55.321357   61273 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:05:55.323762   61273 out.go:177] * Verifying Kubernetes components...
	I0918 21:05:55.325718   61273 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 21:05:55.348485   61273 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43725
	I0918 21:05:55.349072   61273 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:05:55.349611   61273 main.go:141] libmachine: Using API Version  1
	I0918 21:05:55.349641   61273 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:05:55.349978   61273 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:05:55.350556   61273 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:05:55.350606   61273 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:05:55.368807   61273 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34285
	I0918 21:05:55.369340   61273 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:05:55.369826   61273 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35389
	I0918 21:05:55.369908   61273 main.go:141] libmachine: Using API Version  1
	I0918 21:05:55.369928   61273 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:05:55.369949   61273 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42891
	I0918 21:05:55.370195   61273 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:05:55.370303   61273 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:05:55.370408   61273 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:05:55.370494   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetState
	I0918 21:05:55.370772   61273 main.go:141] libmachine: Using API Version  1
	I0918 21:05:55.370797   61273 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:05:55.370908   61273 main.go:141] libmachine: Using API Version  1
	I0918 21:05:55.370929   61273 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:05:55.371790   61273 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:05:55.371833   61273 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:05:55.371996   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetState
	I0918 21:05:55.372415   61273 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:05:55.372470   61273 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:05:55.372532   61273 main.go:141] libmachine: (no-preload-331658) Calling .DriverName
	I0918 21:05:55.375524   61273 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 21:05:55.375574   61273 addons.go:234] Setting addon default-storageclass=true in "no-preload-331658"
	W0918 21:05:55.375593   61273 addons.go:243] addon default-storageclass should already be in state true
	I0918 21:05:55.375626   61273 host.go:66] Checking if "no-preload-331658" exists ...
	I0918 21:05:55.376008   61273 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:05:55.376097   61273 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:05:55.377828   61273 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0918 21:05:55.377848   61273 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0918 21:05:55.377864   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHHostname
	I0918 21:05:55.381877   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:55.382379   61273 main.go:141] libmachine: (no-preload-331658) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:47:d0", ip: ""} in network mk-no-preload-331658: {Iface:virbr2 ExpiryTime:2024-09-18 21:55:52 +0000 UTC Type:0 Mac:52:54:00:2a:47:d0 Iaid: IPaddr:192.168.61.31 Prefix:24 Hostname:no-preload-331658 Clientid:01:52:54:00:2a:47:d0}
	I0918 21:05:55.382438   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined IP address 192.168.61.31 and MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:55.382767   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHPort
	I0918 21:05:55.384470   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHKeyPath
	I0918 21:05:55.384700   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHUsername
	I0918 21:05:55.384863   61273 sshutil.go:53] new ssh client: &{IP:192.168.61.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/no-preload-331658/id_rsa Username:docker}
	I0918 21:05:55.399531   61273 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38229
	I0918 21:05:55.400009   61273 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:05:55.400532   61273 main.go:141] libmachine: Using API Version  1
	I0918 21:05:55.400552   61273 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:05:55.400918   61273 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:05:55.401097   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetState
	I0918 21:05:55.403124   61273 main.go:141] libmachine: (no-preload-331658) Calling .DriverName
	I0918 21:05:55.404237   61273 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45209
	I0918 21:05:55.404637   61273 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:05:55.405088   61273 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0918 21:05:55.405422   61273 main.go:141] libmachine: Using API Version  1
	I0918 21:05:55.405443   61273 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:05:55.405906   61273 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:05:55.406570   61273 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:05:55.406620   61273 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:05:55.406959   61273 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0918 21:05:55.406973   61273 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0918 21:05:55.407380   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHHostname
	I0918 21:05:55.411410   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:55.411430   61273 main.go:141] libmachine: (no-preload-331658) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:47:d0", ip: ""} in network mk-no-preload-331658: {Iface:virbr2 ExpiryTime:2024-09-18 21:55:52 +0000 UTC Type:0 Mac:52:54:00:2a:47:d0 Iaid: IPaddr:192.168.61.31 Prefix:24 Hostname:no-preload-331658 Clientid:01:52:54:00:2a:47:d0}
	I0918 21:05:55.411440   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined IP address 192.168.61.31 and MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:55.411727   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHPort
	I0918 21:05:55.411965   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHKeyPath
	I0918 21:05:55.412171   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHUsername
	I0918 21:05:55.412377   61273 sshutil.go:53] new ssh client: &{IP:192.168.61.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/no-preload-331658/id_rsa Username:docker}
	I0918 21:05:55.426166   61273 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38277
	I0918 21:05:55.426704   61273 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:05:55.427211   61273 main.go:141] libmachine: Using API Version  1
	I0918 21:05:55.427232   61273 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:05:55.427610   61273 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:05:55.427805   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetState
	I0918 21:05:55.429864   61273 main.go:141] libmachine: (no-preload-331658) Calling .DriverName
	I0918 21:05:55.430238   61273 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0918 21:05:55.430256   61273 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0918 21:05:55.430278   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHHostname
	I0918 21:05:55.433576   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:55.433894   61273 main.go:141] libmachine: (no-preload-331658) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:47:d0", ip: ""} in network mk-no-preload-331658: {Iface:virbr2 ExpiryTime:2024-09-18 21:55:52 +0000 UTC Type:0 Mac:52:54:00:2a:47:d0 Iaid: IPaddr:192.168.61.31 Prefix:24 Hostname:no-preload-331658 Clientid:01:52:54:00:2a:47:d0}
	I0918 21:05:55.433918   61273 main.go:141] libmachine: (no-preload-331658) DBG | domain no-preload-331658 has defined IP address 192.168.61.31 and MAC address 52:54:00:2a:47:d0 in network mk-no-preload-331658
	I0918 21:05:55.434411   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHPort
	I0918 21:05:55.434650   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHKeyPath
	I0918 21:05:55.434798   61273 main.go:141] libmachine: (no-preload-331658) Calling .GetSSHUsername
	I0918 21:05:55.434942   61273 sshutil.go:53] new ssh client: &{IP:192.168.61.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/no-preload-331658/id_rsa Username:docker}
	I0918 21:05:55.528033   61273 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0918 21:05:55.545524   61273 node_ready.go:35] waiting up to 6m0s for node "no-preload-331658" to be "Ready" ...
	I0918 21:05:55.606477   61273 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0918 21:05:55.606498   61273 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0918 21:05:55.628256   61273 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0918 21:05:55.636122   61273 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0918 21:05:55.636154   61273 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0918 21:05:55.663081   61273 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0918 21:05:55.663108   61273 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0918 21:05:55.715011   61273 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0918 21:05:55.738192   61273 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
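The addon manifests are applied with the version-pinned kubectl and KUBECONFIG pointed at the in-VM kubeconfig, exactly as in the two Run lines above. Driving that from Go is a plain exec call; this sketch reuses the paths from the log and ignores the sudo/SSH transport for brevity:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// applyManifests runs the version-pinned kubectl against the in-VM kubeconfig,
// as done for the storage-provisioner and metrics-server addons above.
func applyManifests(kubectl, kubeconfig string, manifests []string) error {
	args := []string{"apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	cmd := exec.Command(kubectl, args...)
	cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	err := applyManifests(
		"/var/lib/minikube/binaries/v1.31.1/kubectl",
		"/var/lib/minikube/kubeconfig",
		[]string{
			"/etc/kubernetes/addons/metrics-apiservice.yaml",
			"/etc/kubernetes/addons/metrics-server-deployment.yaml",
			"/etc/kubernetes/addons/metrics-server-rbac.yaml",
			"/etc/kubernetes/addons/metrics-server-service.yaml",
		},
	)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```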
	I0918 21:05:56.247539   61273 main.go:141] libmachine: Making call to close driver server
	I0918 21:05:56.247568   61273 main.go:141] libmachine: (no-preload-331658) Calling .Close
	I0918 21:05:56.247900   61273 main.go:141] libmachine: (no-preload-331658) DBG | Closing plugin on server side
	I0918 21:05:56.247922   61273 main.go:141] libmachine: Successfully made call to close driver server
	I0918 21:05:56.247937   61273 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 21:05:56.247948   61273 main.go:141] libmachine: Making call to close driver server
	I0918 21:05:56.247960   61273 main.go:141] libmachine: (no-preload-331658) Calling .Close
	I0918 21:05:56.248225   61273 main.go:141] libmachine: Successfully made call to close driver server
	I0918 21:05:56.248240   61273 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 21:05:56.248273   61273 main.go:141] libmachine: (no-preload-331658) DBG | Closing plugin on server side
	I0918 21:05:56.261942   61273 main.go:141] libmachine: Making call to close driver server
	I0918 21:05:56.261972   61273 main.go:141] libmachine: (no-preload-331658) Calling .Close
	I0918 21:05:56.262269   61273 main.go:141] libmachine: (no-preload-331658) DBG | Closing plugin on server side
	I0918 21:05:56.262344   61273 main.go:141] libmachine: Successfully made call to close driver server
	I0918 21:05:56.262361   61273 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 21:05:56.944008   61273 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.22895695s)
	I0918 21:05:56.944084   61273 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.205856091s)
	I0918 21:05:56.944121   61273 main.go:141] libmachine: Making call to close driver server
	I0918 21:05:56.944138   61273 main.go:141] libmachine: (no-preload-331658) Calling .Close
	I0918 21:05:56.944087   61273 main.go:141] libmachine: Making call to close driver server
	I0918 21:05:56.944186   61273 main.go:141] libmachine: (no-preload-331658) Calling .Close
	I0918 21:05:56.944489   61273 main.go:141] libmachine: (no-preload-331658) DBG | Closing plugin on server side
	I0918 21:05:56.944539   61273 main.go:141] libmachine: Successfully made call to close driver server
	I0918 21:05:56.944553   61273 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 21:05:56.944561   61273 main.go:141] libmachine: Making call to close driver server
	I0918 21:05:56.944572   61273 main.go:141] libmachine: (no-preload-331658) Calling .Close
	I0918 21:05:56.944559   61273 main.go:141] libmachine: (no-preload-331658) DBG | Closing plugin on server side
	I0918 21:05:56.944570   61273 main.go:141] libmachine: Successfully made call to close driver server
	I0918 21:05:56.944654   61273 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 21:05:56.944669   61273 main.go:141] libmachine: Making call to close driver server
	I0918 21:05:56.944678   61273 main.go:141] libmachine: (no-preload-331658) Calling .Close
	I0918 21:05:56.944794   61273 main.go:141] libmachine: Successfully made call to close driver server
	I0918 21:05:56.944808   61273 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 21:05:56.944823   61273 main.go:141] libmachine: (no-preload-331658) DBG | Closing plugin on server side
	I0918 21:05:56.944965   61273 main.go:141] libmachine: (no-preload-331658) DBG | Closing plugin on server side
	I0918 21:05:56.944988   61273 main.go:141] libmachine: Successfully made call to close driver server
	I0918 21:05:56.944998   61273 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 21:05:56.945010   61273 addons.go:475] Verifying addon metrics-server=true in "no-preload-331658"
	I0918 21:05:56.946962   61273 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0918 21:05:53.135068   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:55.633160   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:55.393859   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:57.888366   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:54.703227   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:55.204057   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:55.704178   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:56.203443   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:56.703517   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:57.203499   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:57.703598   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:58.203660   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:58.703897   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:59.203256   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:05:56.948595   61273 addons.go:510] duration metric: took 1.627989207s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0918 21:05:57.549092   61273 node_ready.go:53] node "no-preload-331658" has status "Ready":"False"
	I0918 21:06:00.050199   61273 node_ready.go:53] node "no-preload-331658" has status "Ready":"False"
	I0918 21:05:58.134289   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:00.632302   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:00.386644   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:02.387972   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:05:59.704149   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:00.203356   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:00.703750   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:01.203765   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:01.704295   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:02.203759   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:02.703342   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:03.204083   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:03.703777   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:04.203340   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:02.549111   61273 node_ready.go:49] node "no-preload-331658" has status "Ready":"True"
	I0918 21:06:02.549153   61273 node_ready.go:38] duration metric: took 7.003597589s for node "no-preload-331658" to be "Ready" ...
	I0918 21:06:02.549162   61273 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0918 21:06:02.554487   61273 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-dgnw2" in "kube-system" namespace to be "Ready" ...
	I0918 21:06:02.560130   61273 pod_ready.go:93] pod "coredns-7c65d6cfc9-dgnw2" in "kube-system" namespace has status "Ready":"True"
	I0918 21:06:02.560160   61273 pod_ready.go:82] duration metric: took 5.643145ms for pod "coredns-7c65d6cfc9-dgnw2" in "kube-system" namespace to be "Ready" ...
	I0918 21:06:02.560173   61273 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-331658" in "kube-system" namespace to be "Ready" ...
	I0918 21:06:02.567971   61273 pod_ready.go:93] pod "etcd-no-preload-331658" in "kube-system" namespace has status "Ready":"True"
	I0918 21:06:02.567992   61273 pod_ready.go:82] duration metric: took 7.811385ms for pod "etcd-no-preload-331658" in "kube-system" namespace to be "Ready" ...
	I0918 21:06:02.568001   61273 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-331658" in "kube-system" namespace to be "Ready" ...
	I0918 21:06:02.572606   61273 pod_ready.go:93] pod "kube-apiserver-no-preload-331658" in "kube-system" namespace has status "Ready":"True"
	I0918 21:06:02.572633   61273 pod_ready.go:82] duration metric: took 4.625414ms for pod "kube-apiserver-no-preload-331658" in "kube-system" namespace to be "Ready" ...
	I0918 21:06:02.572644   61273 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-331658" in "kube-system" namespace to be "Ready" ...
	I0918 21:06:02.577222   61273 pod_ready.go:93] pod "kube-controller-manager-no-preload-331658" in "kube-system" namespace has status "Ready":"True"
	I0918 21:06:02.577243   61273 pod_ready.go:82] duration metric: took 4.591499ms for pod "kube-controller-manager-no-preload-331658" in "kube-system" namespace to be "Ready" ...
	I0918 21:06:02.577252   61273 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-hx25w" in "kube-system" namespace to be "Ready" ...
	I0918 21:06:02.949682   61273 pod_ready.go:93] pod "kube-proxy-hx25w" in "kube-system" namespace has status "Ready":"True"
	I0918 21:06:02.949707   61273 pod_ready.go:82] duration metric: took 372.449094ms for pod "kube-proxy-hx25w" in "kube-system" namespace to be "Ready" ...
	I0918 21:06:02.949716   61273 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-331658" in "kube-system" namespace to be "Ready" ...
	I0918 21:06:03.350071   61273 pod_ready.go:93] pod "kube-scheduler-no-preload-331658" in "kube-system" namespace has status "Ready":"True"
	I0918 21:06:03.350104   61273 pod_ready.go:82] duration metric: took 400.380059ms for pod "kube-scheduler-no-preload-331658" in "kube-system" namespace to be "Ready" ...
	I0918 21:06:03.350118   61273 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace to be "Ready" ...
	I0918 21:06:05.357041   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:02.634105   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:05.132860   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:04.887184   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:06.887596   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:04.703762   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:05.203639   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:05.703335   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:06.204156   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:06.703735   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:07.203278   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:07.704072   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:08.203299   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:08.703528   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:09.203725   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:07.857844   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:10.356822   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:07.633985   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:10.133861   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:08.887695   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:11.387735   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:09.703712   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:10.203930   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:10.704216   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:11.203706   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:11.703494   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:12.204098   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:12.703927   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:13.203379   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:13.703248   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:14.204000   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:12.356878   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:14.360285   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:12.631731   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:15.132229   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:17.132802   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:13.887296   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:16.386306   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:14.704159   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:15.203401   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:15.703942   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:16.204043   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:16.703691   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:17.203508   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:17.703445   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:18.203689   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:18.704249   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:19.203290   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:06:19.203401   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:06:19.239033   62061 cri.go:89] found id: ""
	I0918 21:06:19.239065   62061 logs.go:276] 0 containers: []
	W0918 21:06:19.239073   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:06:19.239079   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:06:19.239141   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:06:19.274781   62061 cri.go:89] found id: ""
	I0918 21:06:19.274809   62061 logs.go:276] 0 containers: []
	W0918 21:06:19.274819   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:06:19.274833   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:06:19.274895   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:06:19.307894   62061 cri.go:89] found id: ""
	I0918 21:06:19.307928   62061 logs.go:276] 0 containers: []
	W0918 21:06:19.307940   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:06:19.307948   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:06:19.308002   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:06:19.340572   62061 cri.go:89] found id: ""
	I0918 21:06:19.340602   62061 logs.go:276] 0 containers: []
	W0918 21:06:19.340610   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:06:19.340615   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:06:19.340672   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:06:19.375448   62061 cri.go:89] found id: ""
	I0918 21:06:19.375475   62061 logs.go:276] 0 containers: []
	W0918 21:06:19.375483   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:06:19.375489   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:06:19.375536   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:06:19.413102   62061 cri.go:89] found id: ""
	I0918 21:06:19.413133   62061 logs.go:276] 0 containers: []
	W0918 21:06:19.413158   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:06:19.413166   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:06:19.413238   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:06:19.447497   62061 cri.go:89] found id: ""
	I0918 21:06:19.447526   62061 logs.go:276] 0 containers: []
	W0918 21:06:19.447536   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:06:19.447544   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:06:19.447605   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:06:19.480848   62061 cri.go:89] found id: ""
	I0918 21:06:19.480880   62061 logs.go:276] 0 containers: []
	W0918 21:06:19.480892   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:06:19.480903   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:06:19.480916   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:06:16.856845   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:19.358010   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:19.632608   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:22.132792   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:18.387488   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:20.887832   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:19.533573   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:06:19.533625   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:06:19.546578   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:06:19.546604   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:06:19.667439   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:06:19.667459   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:06:19.667472   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:06:19.739525   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:06:19.739563   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:06:22.280913   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:22.296045   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:06:22.296132   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:06:22.332361   62061 cri.go:89] found id: ""
	I0918 21:06:22.332401   62061 logs.go:276] 0 containers: []
	W0918 21:06:22.332413   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:06:22.332421   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:06:22.332485   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:06:22.392138   62061 cri.go:89] found id: ""
	I0918 21:06:22.392170   62061 logs.go:276] 0 containers: []
	W0918 21:06:22.392178   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:06:22.392184   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:06:22.392237   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:06:22.431265   62061 cri.go:89] found id: ""
	I0918 21:06:22.431296   62061 logs.go:276] 0 containers: []
	W0918 21:06:22.431306   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:06:22.431313   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:06:22.431376   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:06:22.463685   62061 cri.go:89] found id: ""
	I0918 21:06:22.463718   62061 logs.go:276] 0 containers: []
	W0918 21:06:22.463730   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:06:22.463738   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:06:22.463808   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:06:22.498031   62061 cri.go:89] found id: ""
	I0918 21:06:22.498058   62061 logs.go:276] 0 containers: []
	W0918 21:06:22.498069   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:06:22.498076   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:06:22.498140   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:06:22.537701   62061 cri.go:89] found id: ""
	I0918 21:06:22.537732   62061 logs.go:276] 0 containers: []
	W0918 21:06:22.537740   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:06:22.537746   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:06:22.537803   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:06:22.574085   62061 cri.go:89] found id: ""
	I0918 21:06:22.574118   62061 logs.go:276] 0 containers: []
	W0918 21:06:22.574130   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:06:22.574138   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:06:22.574202   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:06:22.606313   62061 cri.go:89] found id: ""
	I0918 21:06:22.606341   62061 logs.go:276] 0 containers: []
	W0918 21:06:22.606349   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:06:22.606359   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:06:22.606372   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:06:22.678484   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:06:22.678511   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:06:22.678524   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:06:22.756730   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:06:22.756767   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:06:22.793537   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:06:22.793573   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:06:22.845987   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:06:22.846021   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:06:21.857010   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:23.857823   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:26.358268   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:24.133063   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:26.632474   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:23.387764   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:25.886548   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:27.887108   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:25.360980   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:25.374930   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:06:25.375010   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:06:25.413100   62061 cri.go:89] found id: ""
	I0918 21:06:25.413135   62061 logs.go:276] 0 containers: []
	W0918 21:06:25.413147   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:06:25.413155   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:06:25.413221   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:06:25.450999   62061 cri.go:89] found id: ""
	I0918 21:06:25.451024   62061 logs.go:276] 0 containers: []
	W0918 21:06:25.451032   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:06:25.451038   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:06:25.451087   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:06:25.496114   62061 cri.go:89] found id: ""
	I0918 21:06:25.496147   62061 logs.go:276] 0 containers: []
	W0918 21:06:25.496155   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:06:25.496161   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:06:25.496218   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:06:25.529186   62061 cri.go:89] found id: ""
	I0918 21:06:25.529211   62061 logs.go:276] 0 containers: []
	W0918 21:06:25.529218   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:06:25.529223   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:06:25.529294   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:06:25.569710   62061 cri.go:89] found id: ""
	I0918 21:06:25.569735   62061 logs.go:276] 0 containers: []
	W0918 21:06:25.569743   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:06:25.569749   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:06:25.569796   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:06:25.606915   62061 cri.go:89] found id: ""
	I0918 21:06:25.606951   62061 logs.go:276] 0 containers: []
	W0918 21:06:25.606964   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:06:25.606972   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:06:25.607034   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:06:25.642705   62061 cri.go:89] found id: ""
	I0918 21:06:25.642736   62061 logs.go:276] 0 containers: []
	W0918 21:06:25.642744   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:06:25.642750   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:06:25.642801   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:06:25.676418   62061 cri.go:89] found id: ""
	I0918 21:06:25.676448   62061 logs.go:276] 0 containers: []
	W0918 21:06:25.676457   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:06:25.676466   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:06:25.676477   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:06:25.689913   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:06:25.689945   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:06:25.771579   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:06:25.771607   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:06:25.771622   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:06:25.850904   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:06:25.850945   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:06:25.890820   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:06:25.890849   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:06:28.438958   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:28.451961   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:06:28.452043   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:06:28.485000   62061 cri.go:89] found id: ""
	I0918 21:06:28.485059   62061 logs.go:276] 0 containers: []
	W0918 21:06:28.485070   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:06:28.485077   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:06:28.485138   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:06:28.517852   62061 cri.go:89] found id: ""
	I0918 21:06:28.517885   62061 logs.go:276] 0 containers: []
	W0918 21:06:28.517895   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:06:28.517903   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:06:28.517957   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:06:28.549914   62061 cri.go:89] found id: ""
	I0918 21:06:28.549940   62061 logs.go:276] 0 containers: []
	W0918 21:06:28.549949   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:06:28.549955   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:06:28.550000   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:06:28.584133   62061 cri.go:89] found id: ""
	I0918 21:06:28.584156   62061 logs.go:276] 0 containers: []
	W0918 21:06:28.584163   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:06:28.584169   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:06:28.584213   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:06:28.620031   62061 cri.go:89] found id: ""
	I0918 21:06:28.620061   62061 logs.go:276] 0 containers: []
	W0918 21:06:28.620071   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:06:28.620078   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:06:28.620143   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:06:28.653393   62061 cri.go:89] found id: ""
	I0918 21:06:28.653424   62061 logs.go:276] 0 containers: []
	W0918 21:06:28.653435   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:06:28.653443   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:06:28.653505   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:06:28.687696   62061 cri.go:89] found id: ""
	I0918 21:06:28.687729   62061 logs.go:276] 0 containers: []
	W0918 21:06:28.687741   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:06:28.687752   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:06:28.687809   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:06:28.724824   62061 cri.go:89] found id: ""
	I0918 21:06:28.724854   62061 logs.go:276] 0 containers: []
	W0918 21:06:28.724864   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:06:28.724875   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:06:28.724890   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:06:28.785489   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:06:28.785529   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:06:28.804170   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:06:28.804211   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:06:28.895508   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:06:28.895538   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:06:28.895554   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:06:28.974643   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:06:28.974685   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:06:28.858259   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:31.356644   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:28.633851   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:31.133612   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:30.392038   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:32.886708   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:31.517305   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:31.530374   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:06:31.530435   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:06:31.568335   62061 cri.go:89] found id: ""
	I0918 21:06:31.568374   62061 logs.go:276] 0 containers: []
	W0918 21:06:31.568385   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:06:31.568393   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:06:31.568462   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:06:31.603597   62061 cri.go:89] found id: ""
	I0918 21:06:31.603624   62061 logs.go:276] 0 containers: []
	W0918 21:06:31.603633   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:06:31.603639   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:06:31.603694   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:06:31.639355   62061 cri.go:89] found id: ""
	I0918 21:06:31.639380   62061 logs.go:276] 0 containers: []
	W0918 21:06:31.639388   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:06:31.639393   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:06:31.639445   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:06:31.674197   62061 cri.go:89] found id: ""
	I0918 21:06:31.674223   62061 logs.go:276] 0 containers: []
	W0918 21:06:31.674231   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:06:31.674237   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:06:31.674286   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:06:31.710563   62061 cri.go:89] found id: ""
	I0918 21:06:31.710592   62061 logs.go:276] 0 containers: []
	W0918 21:06:31.710606   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:06:31.710612   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:06:31.710663   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:06:31.746923   62061 cri.go:89] found id: ""
	I0918 21:06:31.746958   62061 logs.go:276] 0 containers: []
	W0918 21:06:31.746969   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:06:31.746976   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:06:31.747039   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:06:31.781913   62061 cri.go:89] found id: ""
	I0918 21:06:31.781946   62061 logs.go:276] 0 containers: []
	W0918 21:06:31.781961   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:06:31.781969   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:06:31.782023   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:06:31.817293   62061 cri.go:89] found id: ""
	I0918 21:06:31.817318   62061 logs.go:276] 0 containers: []
	W0918 21:06:31.817327   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:06:31.817338   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:06:31.817375   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:06:31.869479   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:06:31.869518   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:06:31.884187   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:06:31.884219   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:06:31.967159   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:06:31.967189   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:06:31.967202   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:06:32.050404   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:06:32.050458   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:06:33.357380   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:35.856960   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:33.633434   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:36.133740   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:34.888738   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:37.386351   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:34.593135   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:34.613917   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:06:34.614001   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:06:34.649649   62061 cri.go:89] found id: ""
	I0918 21:06:34.649676   62061 logs.go:276] 0 containers: []
	W0918 21:06:34.649684   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:06:34.649690   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:06:34.649738   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:06:34.684898   62061 cri.go:89] found id: ""
	I0918 21:06:34.684932   62061 logs.go:276] 0 containers: []
	W0918 21:06:34.684940   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:06:34.684948   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:06:34.685018   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:06:34.717519   62061 cri.go:89] found id: ""
	I0918 21:06:34.717549   62061 logs.go:276] 0 containers: []
	W0918 21:06:34.717559   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:06:34.717573   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:06:34.717626   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:06:34.751242   62061 cri.go:89] found id: ""
	I0918 21:06:34.751275   62061 logs.go:276] 0 containers: []
	W0918 21:06:34.751289   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:06:34.751298   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:06:34.751368   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:06:34.786700   62061 cri.go:89] found id: ""
	I0918 21:06:34.786729   62061 logs.go:276] 0 containers: []
	W0918 21:06:34.786737   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:06:34.786742   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:06:34.786792   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:06:34.822400   62061 cri.go:89] found id: ""
	I0918 21:06:34.822446   62061 logs.go:276] 0 containers: []
	W0918 21:06:34.822459   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:06:34.822468   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:06:34.822537   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:06:34.860509   62061 cri.go:89] found id: ""
	I0918 21:06:34.860540   62061 logs.go:276] 0 containers: []
	W0918 21:06:34.860549   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:06:34.860554   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:06:34.860626   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:06:34.900682   62061 cri.go:89] found id: ""
	I0918 21:06:34.900712   62061 logs.go:276] 0 containers: []
	W0918 21:06:34.900719   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:06:34.900727   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:06:34.900739   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:06:34.975975   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:06:34.976009   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:06:34.976043   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:06:35.058972   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:06:35.059012   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:06:35.101690   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:06:35.101716   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:06:35.154486   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:06:35.154525   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:06:37.669814   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:37.683235   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:06:37.683294   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:06:37.720666   62061 cri.go:89] found id: ""
	I0918 21:06:37.720697   62061 logs.go:276] 0 containers: []
	W0918 21:06:37.720708   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:06:37.720715   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:06:37.720777   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:06:37.758288   62061 cri.go:89] found id: ""
	I0918 21:06:37.758327   62061 logs.go:276] 0 containers: []
	W0918 21:06:37.758335   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:06:37.758341   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:06:37.758436   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:06:37.793394   62061 cri.go:89] found id: ""
	I0918 21:06:37.793421   62061 logs.go:276] 0 containers: []
	W0918 21:06:37.793430   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:06:37.793437   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:06:37.793501   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:06:37.828258   62061 cri.go:89] found id: ""
	I0918 21:06:37.828291   62061 logs.go:276] 0 containers: []
	W0918 21:06:37.828300   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:06:37.828307   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:06:37.828361   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:06:37.863164   62061 cri.go:89] found id: ""
	I0918 21:06:37.863189   62061 logs.go:276] 0 containers: []
	W0918 21:06:37.863197   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:06:37.863203   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:06:37.863251   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:06:37.899585   62061 cri.go:89] found id: ""
	I0918 21:06:37.899614   62061 logs.go:276] 0 containers: []
	W0918 21:06:37.899622   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:06:37.899628   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:06:37.899675   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:06:37.936246   62061 cri.go:89] found id: ""
	I0918 21:06:37.936282   62061 logs.go:276] 0 containers: []
	W0918 21:06:37.936292   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:06:37.936299   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:06:37.936362   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:06:37.969913   62061 cri.go:89] found id: ""
	I0918 21:06:37.969942   62061 logs.go:276] 0 containers: []
	W0918 21:06:37.969950   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:06:37.969958   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:06:37.969968   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:06:38.023055   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:06:38.023094   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:06:38.036006   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:06:38.036047   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:06:38.116926   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:06:38.116957   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:06:38.116972   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:06:38.192632   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:06:38.192668   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:06:37.860654   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:40.357107   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:38.633432   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:41.131957   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:39.387927   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:41.886904   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:40.729602   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:40.743291   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:06:40.743368   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:06:40.778138   62061 cri.go:89] found id: ""
	I0918 21:06:40.778166   62061 logs.go:276] 0 containers: []
	W0918 21:06:40.778176   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:06:40.778184   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:06:40.778248   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:06:40.813026   62061 cri.go:89] found id: ""
	I0918 21:06:40.813062   62061 logs.go:276] 0 containers: []
	W0918 21:06:40.813073   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:06:40.813081   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:06:40.813146   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:06:40.847609   62061 cri.go:89] found id: ""
	I0918 21:06:40.847641   62061 logs.go:276] 0 containers: []
	W0918 21:06:40.847651   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:06:40.847658   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:06:40.847727   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:06:40.885403   62061 cri.go:89] found id: ""
	I0918 21:06:40.885432   62061 logs.go:276] 0 containers: []
	W0918 21:06:40.885441   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:06:40.885448   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:06:40.885515   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:06:40.922679   62061 cri.go:89] found id: ""
	I0918 21:06:40.922705   62061 logs.go:276] 0 containers: []
	W0918 21:06:40.922714   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:06:40.922719   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:06:40.922776   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:06:40.957187   62061 cri.go:89] found id: ""
	I0918 21:06:40.957215   62061 logs.go:276] 0 containers: []
	W0918 21:06:40.957222   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:06:40.957228   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:06:40.957291   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:06:40.991722   62061 cri.go:89] found id: ""
	I0918 21:06:40.991751   62061 logs.go:276] 0 containers: []
	W0918 21:06:40.991762   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:06:40.991769   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:06:40.991835   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:06:41.027210   62061 cri.go:89] found id: ""
	I0918 21:06:41.027234   62061 logs.go:276] 0 containers: []
	W0918 21:06:41.027242   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:06:41.027250   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:06:41.027275   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:06:41.077183   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:06:41.077221   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:06:41.090707   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:06:41.090734   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:06:41.166206   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:06:41.166228   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:06:41.166241   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:06:41.240308   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:06:41.240346   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:06:43.777517   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:43.790901   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:06:43.790970   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:06:43.827127   62061 cri.go:89] found id: ""
	I0918 21:06:43.827156   62061 logs.go:276] 0 containers: []
	W0918 21:06:43.827164   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:06:43.827170   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:06:43.827218   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:06:43.861218   62061 cri.go:89] found id: ""
	I0918 21:06:43.861244   62061 logs.go:276] 0 containers: []
	W0918 21:06:43.861252   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:06:43.861257   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:06:43.861308   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:06:43.899669   62061 cri.go:89] found id: ""
	I0918 21:06:43.899694   62061 logs.go:276] 0 containers: []
	W0918 21:06:43.899701   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:06:43.899707   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:06:43.899755   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:06:43.934698   62061 cri.go:89] found id: ""
	I0918 21:06:43.934731   62061 logs.go:276] 0 containers: []
	W0918 21:06:43.934741   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:06:43.934748   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:06:43.934819   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:06:43.971715   62061 cri.go:89] found id: ""
	I0918 21:06:43.971742   62061 logs.go:276] 0 containers: []
	W0918 21:06:43.971755   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:06:43.971760   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:06:43.971817   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:06:44.005880   62061 cri.go:89] found id: ""
	I0918 21:06:44.005915   62061 logs.go:276] 0 containers: []
	W0918 21:06:44.005927   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:06:44.005935   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:06:44.006003   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:06:44.038144   62061 cri.go:89] found id: ""
	I0918 21:06:44.038171   62061 logs.go:276] 0 containers: []
	W0918 21:06:44.038180   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:06:44.038186   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:06:44.038239   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:06:44.073920   62061 cri.go:89] found id: ""
	I0918 21:06:44.073953   62061 logs.go:276] 0 containers: []
	W0918 21:06:44.073966   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:06:44.073978   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:06:44.073992   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:06:44.142881   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:06:44.142910   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:06:44.142926   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:06:44.222302   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:06:44.222341   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:06:44.265914   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:06:44.265952   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:06:44.316229   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:06:44.316266   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:06:42.856192   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:44.857673   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:43.132992   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:45.134509   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:43.888282   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:45.889414   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:46.830793   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:46.843829   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:06:46.843886   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:06:46.878566   62061 cri.go:89] found id: ""
	I0918 21:06:46.878607   62061 logs.go:276] 0 containers: []
	W0918 21:06:46.878621   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:06:46.878630   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:06:46.878710   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:06:46.915576   62061 cri.go:89] found id: ""
	I0918 21:06:46.915603   62061 logs.go:276] 0 containers: []
	W0918 21:06:46.915611   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:06:46.915616   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:06:46.915663   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:06:46.952982   62061 cri.go:89] found id: ""
	I0918 21:06:46.953014   62061 logs.go:276] 0 containers: []
	W0918 21:06:46.953032   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:06:46.953039   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:06:46.953104   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:06:46.987718   62061 cri.go:89] found id: ""
	I0918 21:06:46.987749   62061 logs.go:276] 0 containers: []
	W0918 21:06:46.987757   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:06:46.987763   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:06:46.987815   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:06:47.022695   62061 cri.go:89] found id: ""
	I0918 21:06:47.022726   62061 logs.go:276] 0 containers: []
	W0918 21:06:47.022735   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:06:47.022741   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:06:47.022799   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:06:47.058058   62061 cri.go:89] found id: ""
	I0918 21:06:47.058086   62061 logs.go:276] 0 containers: []
	W0918 21:06:47.058094   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:06:47.058101   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:06:47.058159   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:06:47.093314   62061 cri.go:89] found id: ""
	I0918 21:06:47.093352   62061 logs.go:276] 0 containers: []
	W0918 21:06:47.093361   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:06:47.093367   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:06:47.093424   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:06:47.126997   62061 cri.go:89] found id: ""
	I0918 21:06:47.127023   62061 logs.go:276] 0 containers: []
	W0918 21:06:47.127033   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:06:47.127041   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:06:47.127053   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:06:47.203239   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:06:47.203282   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:06:47.240982   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:06:47.241013   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:06:47.293553   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:06:47.293591   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:06:47.307462   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:06:47.307493   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:06:47.379500   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:06:47.357136   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:49.359981   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:47.633023   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:49.633350   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:52.134627   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:48.387568   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:50.886679   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:52.887065   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:49.880441   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:49.895816   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:06:49.895903   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:06:49.931916   62061 cri.go:89] found id: ""
	I0918 21:06:49.931941   62061 logs.go:276] 0 containers: []
	W0918 21:06:49.931950   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:06:49.931955   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:06:49.932007   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:06:49.967053   62061 cri.go:89] found id: ""
	I0918 21:06:49.967083   62061 logs.go:276] 0 containers: []
	W0918 21:06:49.967093   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:06:49.967101   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:06:49.967194   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:06:50.002943   62061 cri.go:89] found id: ""
	I0918 21:06:50.003004   62061 logs.go:276] 0 containers: []
	W0918 21:06:50.003015   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:06:50.003023   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:06:50.003085   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:06:50.039445   62061 cri.go:89] found id: ""
	I0918 21:06:50.039478   62061 logs.go:276] 0 containers: []
	W0918 21:06:50.039489   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:06:50.039497   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:06:50.039561   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:06:50.073529   62061 cri.go:89] found id: ""
	I0918 21:06:50.073562   62061 logs.go:276] 0 containers: []
	W0918 21:06:50.073572   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:06:50.073578   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:06:50.073645   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:06:50.109363   62061 cri.go:89] found id: ""
	I0918 21:06:50.109394   62061 logs.go:276] 0 containers: []
	W0918 21:06:50.109406   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:06:50.109413   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:06:50.109485   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:06:50.144387   62061 cri.go:89] found id: ""
	I0918 21:06:50.144418   62061 logs.go:276] 0 containers: []
	W0918 21:06:50.144429   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:06:50.144450   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:06:50.144524   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:06:50.178079   62061 cri.go:89] found id: ""
	I0918 21:06:50.178110   62061 logs.go:276] 0 containers: []
	W0918 21:06:50.178119   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:06:50.178128   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:06:50.178140   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:06:50.228857   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:06:50.228893   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:06:50.242077   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:06:50.242108   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:06:50.312868   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:06:50.312903   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:06:50.312920   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:06:50.392880   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:06:50.392924   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:06:52.931623   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:52.945298   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:06:52.945394   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:06:52.980129   62061 cri.go:89] found id: ""
	I0918 21:06:52.980162   62061 logs.go:276] 0 containers: []
	W0918 21:06:52.980171   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:06:52.980176   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:06:52.980224   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:06:53.017899   62061 cri.go:89] found id: ""
	I0918 21:06:53.017932   62061 logs.go:276] 0 containers: []
	W0918 21:06:53.017941   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:06:53.017947   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:06:53.018015   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:06:53.056594   62061 cri.go:89] found id: ""
	I0918 21:06:53.056619   62061 logs.go:276] 0 containers: []
	W0918 21:06:53.056627   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:06:53.056635   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:06:53.056684   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:06:53.089876   62061 cri.go:89] found id: ""
	I0918 21:06:53.089907   62061 logs.go:276] 0 containers: []
	W0918 21:06:53.089915   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:06:53.089920   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:06:53.089967   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:06:53.122845   62061 cri.go:89] found id: ""
	I0918 21:06:53.122881   62061 logs.go:276] 0 containers: []
	W0918 21:06:53.122890   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:06:53.122904   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:06:53.122956   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:06:53.162103   62061 cri.go:89] found id: ""
	I0918 21:06:53.162137   62061 logs.go:276] 0 containers: []
	W0918 21:06:53.162148   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:06:53.162156   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:06:53.162218   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:06:53.195034   62061 cri.go:89] found id: ""
	I0918 21:06:53.195067   62061 logs.go:276] 0 containers: []
	W0918 21:06:53.195078   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:06:53.195085   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:06:53.195144   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:06:53.229473   62061 cri.go:89] found id: ""
	I0918 21:06:53.229518   62061 logs.go:276] 0 containers: []
	W0918 21:06:53.229542   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:06:53.229556   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:06:53.229577   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:06:53.283929   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:06:53.283973   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:06:53.296932   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:06:53.296965   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:06:53.379339   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:06:53.379369   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:06:53.379410   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:06:53.469608   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:06:53.469652   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:06:51.855788   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:53.857284   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:55.860982   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:54.633423   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:56.633695   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:54.888052   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:57.387393   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:56.009227   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:56.023451   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:06:56.023510   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:06:56.056839   62061 cri.go:89] found id: ""
	I0918 21:06:56.056866   62061 logs.go:276] 0 containers: []
	W0918 21:06:56.056875   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:06:56.056881   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:06:56.056931   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:06:56.091893   62061 cri.go:89] found id: ""
	I0918 21:06:56.091919   62061 logs.go:276] 0 containers: []
	W0918 21:06:56.091928   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:06:56.091933   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:06:56.091980   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:06:56.127432   62061 cri.go:89] found id: ""
	I0918 21:06:56.127456   62061 logs.go:276] 0 containers: []
	W0918 21:06:56.127464   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:06:56.127470   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:06:56.127518   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:06:56.162085   62061 cri.go:89] found id: ""
	I0918 21:06:56.162109   62061 logs.go:276] 0 containers: []
	W0918 21:06:56.162118   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:06:56.162123   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:06:56.162176   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:06:56.195711   62061 cri.go:89] found id: ""
	I0918 21:06:56.195743   62061 logs.go:276] 0 containers: []
	W0918 21:06:56.195753   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:06:56.195759   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:06:56.195809   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:06:56.234481   62061 cri.go:89] found id: ""
	I0918 21:06:56.234507   62061 logs.go:276] 0 containers: []
	W0918 21:06:56.234515   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:06:56.234522   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:06:56.234567   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:06:56.268566   62061 cri.go:89] found id: ""
	I0918 21:06:56.268596   62061 logs.go:276] 0 containers: []
	W0918 21:06:56.268608   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:06:56.268617   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:06:56.268681   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:06:56.309785   62061 cri.go:89] found id: ""
	I0918 21:06:56.309813   62061 logs.go:276] 0 containers: []
	W0918 21:06:56.309824   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:06:56.309835   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:06:56.309874   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:06:56.364834   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:06:56.364880   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:06:56.378424   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:06:56.378451   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:06:56.450482   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:06:56.450511   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:06:56.450522   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:06:56.536261   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:06:56.536305   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:06:59.074494   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:06:59.087553   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:06:59.087622   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:06:59.128323   62061 cri.go:89] found id: ""
	I0918 21:06:59.128357   62061 logs.go:276] 0 containers: []
	W0918 21:06:59.128368   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:06:59.128375   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:06:59.128436   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:06:59.161135   62061 cri.go:89] found id: ""
	I0918 21:06:59.161161   62061 logs.go:276] 0 containers: []
	W0918 21:06:59.161170   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:06:59.161178   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:06:59.161240   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:06:59.201558   62061 cri.go:89] found id: ""
	I0918 21:06:59.201595   62061 logs.go:276] 0 containers: []
	W0918 21:06:59.201607   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:06:59.201614   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:06:59.201678   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:06:59.235330   62061 cri.go:89] found id: ""
	I0918 21:06:59.235356   62061 logs.go:276] 0 containers: []
	W0918 21:06:59.235378   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:06:59.235385   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:06:59.235450   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:06:59.270957   62061 cri.go:89] found id: ""
	I0918 21:06:59.270994   62061 logs.go:276] 0 containers: []
	W0918 21:06:59.271007   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:06:59.271016   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:06:59.271088   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:06:59.305068   62061 cri.go:89] found id: ""
	I0918 21:06:59.305103   62061 logs.go:276] 0 containers: []
	W0918 21:06:59.305115   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:06:59.305123   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:06:59.305177   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:06:59.338763   62061 cri.go:89] found id: ""
	I0918 21:06:59.338796   62061 logs.go:276] 0 containers: []
	W0918 21:06:59.338809   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:06:59.338818   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:06:59.338887   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:06:59.376557   62061 cri.go:89] found id: ""
	I0918 21:06:59.376585   62061 logs.go:276] 0 containers: []
	W0918 21:06:59.376593   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:06:59.376602   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:06:59.376615   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:06:59.455228   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:06:59.455264   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:06:58.356648   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:00.357253   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:59.133274   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:01.632548   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:59.388183   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:01.886834   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:06:59.494461   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:06:59.494488   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:06:59.543143   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:06:59.543177   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:06:59.556031   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:06:59.556062   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:06:59.623888   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:02.124743   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:07:02.139392   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:07:02.139451   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:07:02.172176   62061 cri.go:89] found id: ""
	I0918 21:07:02.172201   62061 logs.go:276] 0 containers: []
	W0918 21:07:02.172209   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:07:02.172215   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:07:02.172263   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:07:02.206480   62061 cri.go:89] found id: ""
	I0918 21:07:02.206507   62061 logs.go:276] 0 containers: []
	W0918 21:07:02.206518   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:07:02.206525   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:07:02.206586   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:07:02.240256   62061 cri.go:89] found id: ""
	I0918 21:07:02.240281   62061 logs.go:276] 0 containers: []
	W0918 21:07:02.240289   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:07:02.240295   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:07:02.240353   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:07:02.274001   62061 cri.go:89] found id: ""
	I0918 21:07:02.274034   62061 logs.go:276] 0 containers: []
	W0918 21:07:02.274046   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:07:02.274056   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:07:02.274115   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:07:02.307491   62061 cri.go:89] found id: ""
	I0918 21:07:02.307520   62061 logs.go:276] 0 containers: []
	W0918 21:07:02.307528   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:07:02.307534   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:07:02.307597   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:07:02.344688   62061 cri.go:89] found id: ""
	I0918 21:07:02.344720   62061 logs.go:276] 0 containers: []
	W0918 21:07:02.344731   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:07:02.344739   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:07:02.344805   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:07:02.384046   62061 cri.go:89] found id: ""
	I0918 21:07:02.384077   62061 logs.go:276] 0 containers: []
	W0918 21:07:02.384088   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:07:02.384095   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:07:02.384154   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:07:02.422530   62061 cri.go:89] found id: ""
	I0918 21:07:02.422563   62061 logs.go:276] 0 containers: []
	W0918 21:07:02.422575   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:07:02.422586   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:07:02.422604   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:07:02.461236   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:07:02.461266   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:07:02.512280   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:07:02.512320   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:07:02.525404   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:07:02.525435   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:07:02.599746   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:02.599773   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:07:02.599785   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:07:02.856077   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:04.858098   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:04.133240   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:06.135937   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:03.887306   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:05.888675   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:05.183459   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:07:05.197615   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:07:05.197681   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:07:05.234313   62061 cri.go:89] found id: ""
	I0918 21:07:05.234354   62061 logs.go:276] 0 containers: []
	W0918 21:07:05.234365   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:07:05.234371   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:07:05.234429   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:07:05.271436   62061 cri.go:89] found id: ""
	I0918 21:07:05.271466   62061 logs.go:276] 0 containers: []
	W0918 21:07:05.271477   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:07:05.271484   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:07:05.271554   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:07:05.307007   62061 cri.go:89] found id: ""
	I0918 21:07:05.307038   62061 logs.go:276] 0 containers: []
	W0918 21:07:05.307050   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:07:05.307056   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:07:05.307120   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:07:05.341764   62061 cri.go:89] found id: ""
	I0918 21:07:05.341810   62061 logs.go:276] 0 containers: []
	W0918 21:07:05.341831   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:07:05.341840   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:07:05.341908   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:07:05.381611   62061 cri.go:89] found id: ""
	I0918 21:07:05.381646   62061 logs.go:276] 0 containers: []
	W0918 21:07:05.381658   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:07:05.381666   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:07:05.381747   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:07:05.421468   62061 cri.go:89] found id: ""
	I0918 21:07:05.421499   62061 logs.go:276] 0 containers: []
	W0918 21:07:05.421520   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:07:05.421528   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:07:05.421590   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:07:05.457320   62061 cri.go:89] found id: ""
	I0918 21:07:05.457348   62061 logs.go:276] 0 containers: []
	W0918 21:07:05.457359   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:07:05.457367   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:07:05.457425   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:07:05.493879   62061 cri.go:89] found id: ""
	I0918 21:07:05.493915   62061 logs.go:276] 0 containers: []
	W0918 21:07:05.493928   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:07:05.493943   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:07:05.493955   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:07:05.543825   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:07:05.543867   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:07:05.558254   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:07:05.558285   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:07:05.627065   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:05.627091   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:07:05.627103   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:07:05.703838   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:07:05.703876   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:07:08.244087   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:07:08.257807   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:07:08.257897   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:07:08.292034   62061 cri.go:89] found id: ""
	I0918 21:07:08.292064   62061 logs.go:276] 0 containers: []
	W0918 21:07:08.292076   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:07:08.292084   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:07:08.292154   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:07:08.325858   62061 cri.go:89] found id: ""
	I0918 21:07:08.325897   62061 logs.go:276] 0 containers: []
	W0918 21:07:08.325910   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:07:08.325918   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:07:08.325987   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:07:08.369427   62061 cri.go:89] found id: ""
	I0918 21:07:08.369457   62061 logs.go:276] 0 containers: []
	W0918 21:07:08.369468   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:07:08.369475   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:07:08.369536   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:07:08.409402   62061 cri.go:89] found id: ""
	I0918 21:07:08.409434   62061 logs.go:276] 0 containers: []
	W0918 21:07:08.409444   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:07:08.409451   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:07:08.409515   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:07:08.445530   62061 cri.go:89] found id: ""
	I0918 21:07:08.445565   62061 logs.go:276] 0 containers: []
	W0918 21:07:08.445575   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:07:08.445584   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:07:08.445646   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:07:08.481905   62061 cri.go:89] found id: ""
	I0918 21:07:08.481935   62061 logs.go:276] 0 containers: []
	W0918 21:07:08.481945   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:07:08.481952   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:07:08.482020   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:07:08.516710   62061 cri.go:89] found id: ""
	I0918 21:07:08.516738   62061 logs.go:276] 0 containers: []
	W0918 21:07:08.516746   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:07:08.516752   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:07:08.516802   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:07:08.550707   62061 cri.go:89] found id: ""
	I0918 21:07:08.550740   62061 logs.go:276] 0 containers: []
	W0918 21:07:08.550760   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:07:08.550768   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:07:08.550782   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:07:08.624821   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:08.624843   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:07:08.624854   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:07:08.705347   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:07:08.705383   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:07:08.744394   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:07:08.744425   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:07:08.799336   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:07:08.799375   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:07:07.358154   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:09.857118   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:08.633211   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:11.132676   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:08.388884   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:10.887356   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:11.312920   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:07:11.327524   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:07:11.327598   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:07:11.366177   62061 cri.go:89] found id: ""
	I0918 21:07:11.366209   62061 logs.go:276] 0 containers: []
	W0918 21:07:11.366221   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:07:11.366227   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:07:11.366274   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:07:11.405296   62061 cri.go:89] found id: ""
	I0918 21:07:11.405326   62061 logs.go:276] 0 containers: []
	W0918 21:07:11.405344   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:07:11.405351   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:07:11.405413   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:07:11.441786   62061 cri.go:89] found id: ""
	I0918 21:07:11.441818   62061 logs.go:276] 0 containers: []
	W0918 21:07:11.441829   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:07:11.441837   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:07:11.441891   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:07:11.476144   62061 cri.go:89] found id: ""
	I0918 21:07:11.476167   62061 logs.go:276] 0 containers: []
	W0918 21:07:11.476175   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:07:11.476181   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:07:11.476224   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:07:11.512115   62061 cri.go:89] found id: ""
	I0918 21:07:11.512143   62061 logs.go:276] 0 containers: []
	W0918 21:07:11.512154   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:07:11.512160   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:07:11.512221   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:07:11.545466   62061 cri.go:89] found id: ""
	I0918 21:07:11.545494   62061 logs.go:276] 0 containers: []
	W0918 21:07:11.545504   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:07:11.545512   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:07:11.545575   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:07:11.579940   62061 cri.go:89] found id: ""
	I0918 21:07:11.579965   62061 logs.go:276] 0 containers: []
	W0918 21:07:11.579975   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:07:11.579994   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:07:11.580080   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:07:11.613454   62061 cri.go:89] found id: ""
	I0918 21:07:11.613480   62061 logs.go:276] 0 containers: []
	W0918 21:07:11.613491   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:07:11.613512   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:07:11.613535   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:07:11.669079   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:07:11.669140   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:07:11.682614   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:07:11.682639   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:07:11.756876   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:11.756901   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:07:11.756914   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:07:11.835046   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:07:11.835086   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:07:14.377541   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:07:14.392083   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:07:14.392161   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:07:14.431752   62061 cri.go:89] found id: ""
	I0918 21:07:14.431786   62061 logs.go:276] 0 containers: []
	W0918 21:07:14.431795   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:07:14.431800   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:07:14.431856   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:07:14.468530   62061 cri.go:89] found id: ""
	I0918 21:07:14.468562   62061 logs.go:276] 0 containers: []
	W0918 21:07:14.468573   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:07:14.468580   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:07:14.468643   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:07:11.857763   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:14.357253   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:13.132895   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:15.133426   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:13.386537   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:15.387844   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:17.888743   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:14.506510   62061 cri.go:89] found id: ""
	I0918 21:07:14.506540   62061 logs.go:276] 0 containers: []
	W0918 21:07:14.506550   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:07:14.506557   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:07:14.506625   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:07:14.538986   62061 cri.go:89] found id: ""
	I0918 21:07:14.539021   62061 logs.go:276] 0 containers: []
	W0918 21:07:14.539032   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:07:14.539039   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:07:14.539103   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:07:14.572390   62061 cri.go:89] found id: ""
	I0918 21:07:14.572421   62061 logs.go:276] 0 containers: []
	W0918 21:07:14.572432   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:07:14.572440   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:07:14.572499   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:07:14.607875   62061 cri.go:89] found id: ""
	I0918 21:07:14.607905   62061 logs.go:276] 0 containers: []
	W0918 21:07:14.607917   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:07:14.607924   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:07:14.607988   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:07:14.642584   62061 cri.go:89] found id: ""
	I0918 21:07:14.642616   62061 logs.go:276] 0 containers: []
	W0918 21:07:14.642625   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:07:14.642630   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:07:14.642685   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:07:14.682117   62061 cri.go:89] found id: ""
	I0918 21:07:14.682144   62061 logs.go:276] 0 containers: []
	W0918 21:07:14.682152   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:07:14.682163   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:07:14.682177   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:07:14.694780   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:07:14.694808   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:07:14.767988   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:14.768036   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:07:14.768055   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:07:14.860620   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:07:14.860655   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:07:14.905071   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:07:14.905102   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:07:17.455582   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:07:17.468713   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:07:17.468774   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:07:17.503682   62061 cri.go:89] found id: ""
	I0918 21:07:17.503709   62061 logs.go:276] 0 containers: []
	W0918 21:07:17.503721   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:07:17.503729   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:07:17.503794   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:07:17.536581   62061 cri.go:89] found id: ""
	I0918 21:07:17.536611   62061 logs.go:276] 0 containers: []
	W0918 21:07:17.536619   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:07:17.536625   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:07:17.536673   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:07:17.570483   62061 cri.go:89] found id: ""
	I0918 21:07:17.570510   62061 logs.go:276] 0 containers: []
	W0918 21:07:17.570518   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:07:17.570523   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:07:17.570591   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:07:17.604102   62061 cri.go:89] found id: ""
	I0918 21:07:17.604140   62061 logs.go:276] 0 containers: []
	W0918 21:07:17.604152   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:07:17.604160   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:07:17.604229   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:07:17.638633   62061 cri.go:89] found id: ""
	I0918 21:07:17.638661   62061 logs.go:276] 0 containers: []
	W0918 21:07:17.638672   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:07:17.638678   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:07:17.638725   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:07:17.673016   62061 cri.go:89] found id: ""
	I0918 21:07:17.673048   62061 logs.go:276] 0 containers: []
	W0918 21:07:17.673057   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:07:17.673063   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:07:17.673116   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:07:17.708576   62061 cri.go:89] found id: ""
	I0918 21:07:17.708601   62061 logs.go:276] 0 containers: []
	W0918 21:07:17.708609   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:07:17.708620   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:07:17.708672   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:07:17.742820   62061 cri.go:89] found id: ""
	I0918 21:07:17.742843   62061 logs.go:276] 0 containers: []
	W0918 21:07:17.742851   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:07:17.742859   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:07:17.742870   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:07:17.757091   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:07:17.757120   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:07:17.824116   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:17.824137   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:07:17.824151   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:07:17.911013   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:07:17.911065   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:07:17.950517   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:07:17.950553   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:07:16.857284   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:19.357336   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:17.635033   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:20.134331   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:20.388498   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:22.887115   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:20.502423   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:07:20.529214   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:07:20.529297   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:07:20.583618   62061 cri.go:89] found id: ""
	I0918 21:07:20.583660   62061 logs.go:276] 0 containers: []
	W0918 21:07:20.583672   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:07:20.583682   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:07:20.583751   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:07:20.633051   62061 cri.go:89] found id: ""
	I0918 21:07:20.633079   62061 logs.go:276] 0 containers: []
	W0918 21:07:20.633091   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:07:20.633099   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:07:20.633158   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:07:20.668513   62061 cri.go:89] found id: ""
	I0918 21:07:20.668543   62061 logs.go:276] 0 containers: []
	W0918 21:07:20.668552   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:07:20.668558   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:07:20.668611   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:07:20.702648   62061 cri.go:89] found id: ""
	I0918 21:07:20.702683   62061 logs.go:276] 0 containers: []
	W0918 21:07:20.702698   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:07:20.702706   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:07:20.702767   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:07:20.740639   62061 cri.go:89] found id: ""
	I0918 21:07:20.740666   62061 logs.go:276] 0 containers: []
	W0918 21:07:20.740674   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:07:20.740680   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:07:20.740731   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:07:20.771819   62061 cri.go:89] found id: ""
	I0918 21:07:20.771847   62061 logs.go:276] 0 containers: []
	W0918 21:07:20.771855   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:07:20.771863   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:07:20.771913   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:07:20.803613   62061 cri.go:89] found id: ""
	I0918 21:07:20.803639   62061 logs.go:276] 0 containers: []
	W0918 21:07:20.803647   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:07:20.803653   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:07:20.803708   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:07:20.835671   62061 cri.go:89] found id: ""
	I0918 21:07:20.835695   62061 logs.go:276] 0 containers: []
	W0918 21:07:20.835702   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:07:20.835710   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:07:20.835720   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:07:20.886159   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:07:20.886191   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:07:20.901412   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:07:20.901443   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:07:20.979009   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:20.979034   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:07:20.979045   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:07:21.059895   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:07:21.059930   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:07:23.598133   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:07:23.611038   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:07:23.611102   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:07:23.648037   62061 cri.go:89] found id: ""
	I0918 21:07:23.648067   62061 logs.go:276] 0 containers: []
	W0918 21:07:23.648075   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:07:23.648081   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:07:23.648135   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:07:23.683956   62061 cri.go:89] found id: ""
	I0918 21:07:23.683985   62061 logs.go:276] 0 containers: []
	W0918 21:07:23.683995   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:07:23.684003   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:07:23.684083   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:07:23.719274   62061 cri.go:89] found id: ""
	I0918 21:07:23.719301   62061 logs.go:276] 0 containers: []
	W0918 21:07:23.719309   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:07:23.719315   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:07:23.719378   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:07:23.757101   62061 cri.go:89] found id: ""
	I0918 21:07:23.757133   62061 logs.go:276] 0 containers: []
	W0918 21:07:23.757145   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:07:23.757152   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:07:23.757202   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:07:23.790115   62061 cri.go:89] found id: ""
	I0918 21:07:23.790149   62061 logs.go:276] 0 containers: []
	W0918 21:07:23.790160   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:07:23.790168   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:07:23.790231   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:07:23.823089   62061 cri.go:89] found id: ""
	I0918 21:07:23.823120   62061 logs.go:276] 0 containers: []
	W0918 21:07:23.823131   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:07:23.823140   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:07:23.823192   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:07:23.858566   62061 cri.go:89] found id: ""
	I0918 21:07:23.858591   62061 logs.go:276] 0 containers: []
	W0918 21:07:23.858601   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:07:23.858613   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:07:23.858674   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:07:23.892650   62061 cri.go:89] found id: ""
	I0918 21:07:23.892679   62061 logs.go:276] 0 containers: []
	W0918 21:07:23.892690   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:07:23.892699   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:07:23.892713   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:07:23.941087   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:07:23.941123   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:07:23.955674   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:07:23.955712   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:07:24.026087   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:24.026115   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:07:24.026132   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:07:24.105237   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:07:24.105272   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:07:21.857391   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:23.857954   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:26.356553   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:22.633058   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:25.133773   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:25.387123   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:27.886688   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
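	(The pod_ready lines from PIDs 61273, 61659 and 61740 interleaved through this section are other profiles polling their metrics-server pods, which never report Ready. A hand-run equivalent of that poll is sketched below; the kubectl invocation, the placeholder context name and the k8s-app=metrics-server label are assumptions about the standard metrics-server addon, not commands taken from this log.)

	# Assumed manual check mirroring the pod_ready loop; <profile> is a placeholder.
	kubectl --context <profile> -n kube-system get pods -l k8s-app=metrics-server \
	  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'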
	I0918 21:07:26.646822   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:07:26.659978   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:07:26.660087   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:07:26.695776   62061 cri.go:89] found id: ""
	I0918 21:07:26.695804   62061 logs.go:276] 0 containers: []
	W0918 21:07:26.695817   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:07:26.695824   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:07:26.695875   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:07:26.729717   62061 cri.go:89] found id: ""
	I0918 21:07:26.729747   62061 logs.go:276] 0 containers: []
	W0918 21:07:26.729756   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:07:26.729762   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:07:26.729830   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:07:26.763848   62061 cri.go:89] found id: ""
	I0918 21:07:26.763886   62061 logs.go:276] 0 containers: []
	W0918 21:07:26.763907   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:07:26.763915   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:07:26.763974   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:07:26.798670   62061 cri.go:89] found id: ""
	I0918 21:07:26.798703   62061 logs.go:276] 0 containers: []
	W0918 21:07:26.798713   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:07:26.798720   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:07:26.798789   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:07:26.833778   62061 cri.go:89] found id: ""
	I0918 21:07:26.833804   62061 logs.go:276] 0 containers: []
	W0918 21:07:26.833815   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:07:26.833822   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:07:26.833905   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:07:26.869657   62061 cri.go:89] found id: ""
	I0918 21:07:26.869688   62061 logs.go:276] 0 containers: []
	W0918 21:07:26.869699   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:07:26.869707   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:07:26.869772   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:07:26.908163   62061 cri.go:89] found id: ""
	I0918 21:07:26.908194   62061 logs.go:276] 0 containers: []
	W0918 21:07:26.908205   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:07:26.908212   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:07:26.908269   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:07:26.943416   62061 cri.go:89] found id: ""
	I0918 21:07:26.943442   62061 logs.go:276] 0 containers: []
	W0918 21:07:26.943451   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:07:26.943459   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:07:26.943471   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:07:26.993796   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:07:26.993833   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:07:27.007619   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:07:27.007661   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:07:27.072880   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:27.072904   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:07:27.072919   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:07:27.148984   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:07:27.149031   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:07:28.357006   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:30.857527   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:27.632697   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:30.133718   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:29.887981   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:32.387478   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:29.690106   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:07:29.702853   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:07:29.702932   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:07:29.736427   62061 cri.go:89] found id: ""
	I0918 21:07:29.736461   62061 logs.go:276] 0 containers: []
	W0918 21:07:29.736473   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:07:29.736480   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:07:29.736537   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:07:29.771287   62061 cri.go:89] found id: ""
	I0918 21:07:29.771317   62061 logs.go:276] 0 containers: []
	W0918 21:07:29.771328   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:07:29.771334   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:07:29.771398   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:07:29.804826   62061 cri.go:89] found id: ""
	I0918 21:07:29.804861   62061 logs.go:276] 0 containers: []
	W0918 21:07:29.804875   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:07:29.804882   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:07:29.804934   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:07:29.838570   62061 cri.go:89] found id: ""
	I0918 21:07:29.838598   62061 logs.go:276] 0 containers: []
	W0918 21:07:29.838608   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:07:29.838614   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:07:29.838659   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:07:29.871591   62061 cri.go:89] found id: ""
	I0918 21:07:29.871621   62061 logs.go:276] 0 containers: []
	W0918 21:07:29.871631   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:07:29.871638   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:07:29.871699   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:07:29.905789   62061 cri.go:89] found id: ""
	I0918 21:07:29.905824   62061 logs.go:276] 0 containers: []
	W0918 21:07:29.905835   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:07:29.905846   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:07:29.905910   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:07:29.945914   62061 cri.go:89] found id: ""
	I0918 21:07:29.945941   62061 logs.go:276] 0 containers: []
	W0918 21:07:29.945950   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:07:29.945955   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:07:29.946004   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:07:29.979365   62061 cri.go:89] found id: ""
	I0918 21:07:29.979396   62061 logs.go:276] 0 containers: []
	W0918 21:07:29.979405   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:07:29.979413   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:07:29.979425   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:07:30.026925   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:07:30.026956   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:07:30.040589   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:07:30.040623   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:07:30.112900   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:30.112928   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:07:30.112948   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:07:30.195208   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:07:30.195256   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:07:32.735787   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:07:32.748969   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:07:32.749049   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:07:32.782148   62061 cri.go:89] found id: ""
	I0918 21:07:32.782179   62061 logs.go:276] 0 containers: []
	W0918 21:07:32.782189   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:07:32.782196   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:07:32.782262   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:07:32.816119   62061 cri.go:89] found id: ""
	I0918 21:07:32.816144   62061 logs.go:276] 0 containers: []
	W0918 21:07:32.816152   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:07:32.816158   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:07:32.816203   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:07:32.849817   62061 cri.go:89] found id: ""
	I0918 21:07:32.849844   62061 logs.go:276] 0 containers: []
	W0918 21:07:32.849853   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:07:32.849859   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:07:32.849911   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:07:32.884464   62061 cri.go:89] found id: ""
	I0918 21:07:32.884494   62061 logs.go:276] 0 containers: []
	W0918 21:07:32.884506   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:07:32.884513   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:07:32.884576   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:07:32.917942   62061 cri.go:89] found id: ""
	I0918 21:07:32.917973   62061 logs.go:276] 0 containers: []
	W0918 21:07:32.917983   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:07:32.917990   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:07:32.918051   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:07:32.950748   62061 cri.go:89] found id: ""
	I0918 21:07:32.950780   62061 logs.go:276] 0 containers: []
	W0918 21:07:32.950791   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:07:32.950800   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:07:32.950864   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:07:32.985059   62061 cri.go:89] found id: ""
	I0918 21:07:32.985092   62061 logs.go:276] 0 containers: []
	W0918 21:07:32.985100   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:07:32.985106   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:07:32.985167   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:07:33.021496   62061 cri.go:89] found id: ""
	I0918 21:07:33.021526   62061 logs.go:276] 0 containers: []
	W0918 21:07:33.021536   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:07:33.021546   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:07:33.021560   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:07:33.071744   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:07:33.071793   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:07:33.086533   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:07:33.086565   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:07:33.155274   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:33.155307   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:07:33.155326   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:07:33.238301   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:07:33.238342   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:07:33.356874   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:35.357445   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:32.631814   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:34.631954   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:36.633057   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:34.387725   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:36.887031   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:35.777905   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:07:35.792442   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:07:35.792510   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:07:35.827310   62061 cri.go:89] found id: ""
	I0918 21:07:35.827339   62061 logs.go:276] 0 containers: []
	W0918 21:07:35.827350   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:07:35.827357   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:07:35.827422   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:07:35.861873   62061 cri.go:89] found id: ""
	I0918 21:07:35.861897   62061 logs.go:276] 0 containers: []
	W0918 21:07:35.861905   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:07:35.861916   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:07:35.861966   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:07:35.895295   62061 cri.go:89] found id: ""
	I0918 21:07:35.895324   62061 logs.go:276] 0 containers: []
	W0918 21:07:35.895345   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:07:35.895353   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:07:35.895410   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:07:35.927690   62061 cri.go:89] found id: ""
	I0918 21:07:35.927717   62061 logs.go:276] 0 containers: []
	W0918 21:07:35.927728   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:07:35.927736   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:07:35.927788   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:07:35.963954   62061 cri.go:89] found id: ""
	I0918 21:07:35.963979   62061 logs.go:276] 0 containers: []
	W0918 21:07:35.963986   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:07:35.963992   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:07:35.964059   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:07:36.002287   62061 cri.go:89] found id: ""
	I0918 21:07:36.002313   62061 logs.go:276] 0 containers: []
	W0918 21:07:36.002322   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:07:36.002328   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:07:36.002380   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:07:36.036748   62061 cri.go:89] found id: ""
	I0918 21:07:36.036778   62061 logs.go:276] 0 containers: []
	W0918 21:07:36.036790   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:07:36.036797   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:07:36.036861   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:07:36.072616   62061 cri.go:89] found id: ""
	I0918 21:07:36.072647   62061 logs.go:276] 0 containers: []
	W0918 21:07:36.072658   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:07:36.072668   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:07:36.072683   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:07:36.121723   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:07:36.121762   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:07:36.136528   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:07:36.136560   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:07:36.202878   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:36.202911   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:07:36.202926   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:07:36.287882   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:07:36.287924   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:07:38.825888   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:07:38.838437   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:07:38.838510   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:07:38.872312   62061 cri.go:89] found id: ""
	I0918 21:07:38.872348   62061 logs.go:276] 0 containers: []
	W0918 21:07:38.872358   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:07:38.872366   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:07:38.872417   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:07:38.905927   62061 cri.go:89] found id: ""
	I0918 21:07:38.905962   62061 logs.go:276] 0 containers: []
	W0918 21:07:38.905975   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:07:38.905982   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:07:38.906042   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:07:38.938245   62061 cri.go:89] found id: ""
	I0918 21:07:38.938274   62061 logs.go:276] 0 containers: []
	W0918 21:07:38.938286   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:07:38.938293   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:07:38.938358   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:07:38.971497   62061 cri.go:89] found id: ""
	I0918 21:07:38.971536   62061 logs.go:276] 0 containers: []
	W0918 21:07:38.971548   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:07:38.971555   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:07:38.971625   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:07:39.004755   62061 cri.go:89] found id: ""
	I0918 21:07:39.004784   62061 logs.go:276] 0 containers: []
	W0918 21:07:39.004793   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:07:39.004799   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:07:39.004854   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:07:39.036237   62061 cri.go:89] found id: ""
	I0918 21:07:39.036265   62061 logs.go:276] 0 containers: []
	W0918 21:07:39.036273   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:07:39.036279   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:07:39.036328   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:07:39.071504   62061 cri.go:89] found id: ""
	I0918 21:07:39.071537   62061 logs.go:276] 0 containers: []
	W0918 21:07:39.071549   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:07:39.071557   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:07:39.071623   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:07:39.107035   62061 cri.go:89] found id: ""
	I0918 21:07:39.107063   62061 logs.go:276] 0 containers: []
	W0918 21:07:39.107071   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:07:39.107080   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:07:39.107090   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:07:39.158078   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:07:39.158113   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:07:39.172846   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:07:39.172875   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:07:39.240577   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:39.240602   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:07:39.240618   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:07:39.319762   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:07:39.319797   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:07:37.857371   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:40.356710   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:39.133586   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:41.632538   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:38.887485   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:41.386252   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:41.856586   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:07:41.870308   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:07:41.870388   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:07:41.905657   62061 cri.go:89] found id: ""
	I0918 21:07:41.905688   62061 logs.go:276] 0 containers: []
	W0918 21:07:41.905699   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:07:41.905706   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:07:41.905766   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:07:41.944507   62061 cri.go:89] found id: ""
	I0918 21:07:41.944544   62061 logs.go:276] 0 containers: []
	W0918 21:07:41.944555   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:07:41.944566   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:07:41.944634   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:07:41.979217   62061 cri.go:89] found id: ""
	I0918 21:07:41.979252   62061 logs.go:276] 0 containers: []
	W0918 21:07:41.979271   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:07:41.979279   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:07:41.979346   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:07:42.013613   62061 cri.go:89] found id: ""
	I0918 21:07:42.013641   62061 logs.go:276] 0 containers: []
	W0918 21:07:42.013652   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:07:42.013659   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:07:42.013725   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:07:42.049225   62061 cri.go:89] found id: ""
	I0918 21:07:42.049259   62061 logs.go:276] 0 containers: []
	W0918 21:07:42.049271   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:07:42.049279   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:07:42.049364   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:07:42.085737   62061 cri.go:89] found id: ""
	I0918 21:07:42.085763   62061 logs.go:276] 0 containers: []
	W0918 21:07:42.085775   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:07:42.085782   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:07:42.085843   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:07:42.121326   62061 cri.go:89] found id: ""
	I0918 21:07:42.121356   62061 logs.go:276] 0 containers: []
	W0918 21:07:42.121365   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:07:42.121371   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:07:42.121428   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:07:42.157070   62061 cri.go:89] found id: ""
	I0918 21:07:42.157097   62061 logs.go:276] 0 containers: []
	W0918 21:07:42.157107   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:07:42.157118   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:07:42.157133   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:07:42.232110   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:07:42.232162   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:07:42.270478   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:07:42.270517   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:07:42.324545   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:07:42.324586   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:07:42.339928   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:07:42.339962   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:07:42.415124   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:42.356847   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:44.856845   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:43.633029   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:46.134786   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:43.387596   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:45.887071   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:44.915316   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:07:44.928261   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:07:44.928331   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:07:44.959641   62061 cri.go:89] found id: ""
	I0918 21:07:44.959673   62061 logs.go:276] 0 containers: []
	W0918 21:07:44.959682   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:07:44.959689   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:07:44.959791   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:07:44.991744   62061 cri.go:89] found id: ""
	I0918 21:07:44.991778   62061 logs.go:276] 0 containers: []
	W0918 21:07:44.991787   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:07:44.991803   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:07:44.991877   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:07:45.024228   62061 cri.go:89] found id: ""
	I0918 21:07:45.024261   62061 logs.go:276] 0 containers: []
	W0918 21:07:45.024272   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:07:45.024280   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:07:45.024355   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:07:45.061905   62061 cri.go:89] found id: ""
	I0918 21:07:45.061931   62061 logs.go:276] 0 containers: []
	W0918 21:07:45.061940   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:07:45.061946   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:07:45.061994   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:07:45.099224   62061 cri.go:89] found id: ""
	I0918 21:07:45.099251   62061 logs.go:276] 0 containers: []
	W0918 21:07:45.099259   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:07:45.099266   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:07:45.099327   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:07:45.137235   62061 cri.go:89] found id: ""
	I0918 21:07:45.137262   62061 logs.go:276] 0 containers: []
	W0918 21:07:45.137270   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:07:45.137278   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:07:45.137333   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:07:45.174343   62061 cri.go:89] found id: ""
	I0918 21:07:45.174370   62061 logs.go:276] 0 containers: []
	W0918 21:07:45.174379   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:07:45.174385   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:07:45.174432   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:07:45.209278   62061 cri.go:89] found id: ""
	I0918 21:07:45.209306   62061 logs.go:276] 0 containers: []
	W0918 21:07:45.209316   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:07:45.209326   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:07:45.209341   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:07:45.222376   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:07:45.222409   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:07:45.294562   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:45.294595   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:07:45.294607   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:07:45.382582   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:07:45.382631   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:07:45.424743   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:07:45.424780   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:07:47.978171   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:07:47.990485   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:07:47.990554   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:07:48.024465   62061 cri.go:89] found id: ""
	I0918 21:07:48.024494   62061 logs.go:276] 0 containers: []
	W0918 21:07:48.024505   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:07:48.024512   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:07:48.024575   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:07:48.058838   62061 cri.go:89] found id: ""
	I0918 21:07:48.058871   62061 logs.go:276] 0 containers: []
	W0918 21:07:48.058899   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:07:48.058905   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:07:48.058959   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:07:48.094307   62061 cri.go:89] found id: ""
	I0918 21:07:48.094335   62061 logs.go:276] 0 containers: []
	W0918 21:07:48.094343   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:07:48.094349   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:07:48.094398   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:07:48.127497   62061 cri.go:89] found id: ""
	I0918 21:07:48.127529   62061 logs.go:276] 0 containers: []
	W0918 21:07:48.127538   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:07:48.127544   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:07:48.127590   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:07:48.160440   62061 cri.go:89] found id: ""
	I0918 21:07:48.160465   62061 logs.go:276] 0 containers: []
	W0918 21:07:48.160473   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:07:48.160478   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:07:48.160527   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:07:48.195581   62061 cri.go:89] found id: ""
	I0918 21:07:48.195615   62061 logs.go:276] 0 containers: []
	W0918 21:07:48.195624   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:07:48.195631   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:07:48.195675   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:07:48.226478   62061 cri.go:89] found id: ""
	I0918 21:07:48.226506   62061 logs.go:276] 0 containers: []
	W0918 21:07:48.226514   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:07:48.226519   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:07:48.226566   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:07:48.259775   62061 cri.go:89] found id: ""
	I0918 21:07:48.259804   62061 logs.go:276] 0 containers: []
	W0918 21:07:48.259815   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:07:48.259825   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:07:48.259839   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:07:48.274364   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:07:48.274391   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:07:48.347131   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:48.347151   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:07:48.347163   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:07:48.430569   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:07:48.430607   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:07:48.472972   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:07:48.473007   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:07:47.356907   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:49.857984   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:48.633550   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:51.133639   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:48.388136   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:50.888317   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:51.024429   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:07:51.037249   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:07:51.037328   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:07:51.070137   62061 cri.go:89] found id: ""
	I0918 21:07:51.070173   62061 logs.go:276] 0 containers: []
	W0918 21:07:51.070195   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:07:51.070203   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:07:51.070276   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:07:51.105014   62061 cri.go:89] found id: ""
	I0918 21:07:51.105039   62061 logs.go:276] 0 containers: []
	W0918 21:07:51.105048   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:07:51.105054   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:07:51.105101   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:07:51.143287   62061 cri.go:89] found id: ""
	I0918 21:07:51.143310   62061 logs.go:276] 0 containers: []
	W0918 21:07:51.143318   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:07:51.143325   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:07:51.143372   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:07:51.176811   62061 cri.go:89] found id: ""
	I0918 21:07:51.176838   62061 logs.go:276] 0 containers: []
	W0918 21:07:51.176846   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:07:51.176852   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:07:51.176898   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:07:51.210817   62061 cri.go:89] found id: ""
	I0918 21:07:51.210842   62061 logs.go:276] 0 containers: []
	W0918 21:07:51.210850   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:07:51.210856   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:07:51.210916   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:07:51.243968   62061 cri.go:89] found id: ""
	I0918 21:07:51.244002   62061 logs.go:276] 0 containers: []
	W0918 21:07:51.244035   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:07:51.244043   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:07:51.244104   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:07:51.279078   62061 cri.go:89] found id: ""
	I0918 21:07:51.279108   62061 logs.go:276] 0 containers: []
	W0918 21:07:51.279117   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:07:51.279122   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:07:51.279173   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:07:51.315033   62061 cri.go:89] found id: ""
	I0918 21:07:51.315082   62061 logs.go:276] 0 containers: []
	W0918 21:07:51.315091   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:07:51.315100   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:07:51.315111   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:07:51.391927   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:07:51.391976   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:07:51.435515   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:07:51.435542   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:07:51.488404   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:07:51.488442   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:07:51.502019   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:07:51.502065   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:07:51.567893   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:54.068142   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:07:54.082544   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:07:54.082619   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:07:54.118600   62061 cri.go:89] found id: ""
	I0918 21:07:54.118629   62061 logs.go:276] 0 containers: []
	W0918 21:07:54.118638   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:07:54.118644   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:07:54.118695   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:07:54.154086   62061 cri.go:89] found id: ""
	I0918 21:07:54.154115   62061 logs.go:276] 0 containers: []
	W0918 21:07:54.154127   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:07:54.154135   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:07:54.154187   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:07:54.188213   62061 cri.go:89] found id: ""
	I0918 21:07:54.188246   62061 logs.go:276] 0 containers: []
	W0918 21:07:54.188257   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:07:54.188266   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:07:54.188329   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:07:54.221682   62061 cri.go:89] found id: ""
	I0918 21:07:54.221712   62061 logs.go:276] 0 containers: []
	W0918 21:07:54.221721   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:07:54.221728   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:07:54.221791   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:07:54.254746   62061 cri.go:89] found id: ""
	I0918 21:07:54.254770   62061 logs.go:276] 0 containers: []
	W0918 21:07:54.254778   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:07:54.254783   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:07:54.254844   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:07:54.288249   62061 cri.go:89] found id: ""
	I0918 21:07:54.288273   62061 logs.go:276] 0 containers: []
	W0918 21:07:54.288281   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:07:54.288288   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:07:54.288349   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:07:54.323390   62061 cri.go:89] found id: ""
	I0918 21:07:54.323422   62061 logs.go:276] 0 containers: []
	W0918 21:07:54.323433   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:07:54.323440   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:07:54.323507   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:07:54.357680   62061 cri.go:89] found id: ""
	I0918 21:07:54.357709   62061 logs.go:276] 0 containers: []
	W0918 21:07:54.357719   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:07:54.357734   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:07:54.357748   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:07:54.396644   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:07:54.396683   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:07:54.469136   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:54.469175   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:07:54.469191   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:07:52.357187   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:54.857437   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:53.633161   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:56.132554   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:53.386646   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:55.387377   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:57.387524   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:54.547848   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:07:54.547884   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:07:54.585758   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:07:54.585788   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:07:57.139198   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:07:57.152342   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:07:57.152429   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:07:57.186747   62061 cri.go:89] found id: ""
	I0918 21:07:57.186778   62061 logs.go:276] 0 containers: []
	W0918 21:07:57.186789   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:07:57.186796   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:07:57.186855   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:07:57.219211   62061 cri.go:89] found id: ""
	I0918 21:07:57.219239   62061 logs.go:276] 0 containers: []
	W0918 21:07:57.219248   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:07:57.219256   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:07:57.219315   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:07:57.251361   62061 cri.go:89] found id: ""
	I0918 21:07:57.251388   62061 logs.go:276] 0 containers: []
	W0918 21:07:57.251396   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:07:57.251401   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:07:57.251450   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:07:57.285467   62061 cri.go:89] found id: ""
	I0918 21:07:57.285501   62061 logs.go:276] 0 containers: []
	W0918 21:07:57.285512   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:07:57.285519   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:07:57.285580   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:07:57.317973   62061 cri.go:89] found id: ""
	I0918 21:07:57.317999   62061 logs.go:276] 0 containers: []
	W0918 21:07:57.318006   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:07:57.318012   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:07:57.318071   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:07:57.352172   62061 cri.go:89] found id: ""
	I0918 21:07:57.352202   62061 logs.go:276] 0 containers: []
	W0918 21:07:57.352215   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:07:57.352223   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:07:57.352277   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:07:57.388117   62061 cri.go:89] found id: ""
	I0918 21:07:57.388137   62061 logs.go:276] 0 containers: []
	W0918 21:07:57.388144   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:07:57.388150   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:07:57.388205   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:07:57.424814   62061 cri.go:89] found id: ""
	I0918 21:07:57.424846   62061 logs.go:276] 0 containers: []
	W0918 21:07:57.424857   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:07:57.424868   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:07:57.424882   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:07:57.437317   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:07:57.437351   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:07:57.503393   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:07:57.503417   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:07:57.503429   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:07:57.584167   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:07:57.584203   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:07:57.620761   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:07:57.620792   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:07:57.357989   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:59.856413   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:58.133077   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:00.633233   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:07:59.886455   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:01.887882   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:00.174933   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:00.188098   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:00.188177   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:00.221852   62061 cri.go:89] found id: ""
	I0918 21:08:00.221877   62061 logs.go:276] 0 containers: []
	W0918 21:08:00.221890   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:00.221895   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:00.221947   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:00.255947   62061 cri.go:89] found id: ""
	I0918 21:08:00.255974   62061 logs.go:276] 0 containers: []
	W0918 21:08:00.255982   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:00.255987   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:00.256056   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:00.290919   62061 cri.go:89] found id: ""
	I0918 21:08:00.290953   62061 logs.go:276] 0 containers: []
	W0918 21:08:00.290961   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:00.290966   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:00.291017   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:00.328175   62061 cri.go:89] found id: ""
	I0918 21:08:00.328200   62061 logs.go:276] 0 containers: []
	W0918 21:08:00.328208   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:00.328214   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:00.328261   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:00.364863   62061 cri.go:89] found id: ""
	I0918 21:08:00.364892   62061 logs.go:276] 0 containers: []
	W0918 21:08:00.364903   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:00.364912   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:00.364967   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:00.400368   62061 cri.go:89] found id: ""
	I0918 21:08:00.400397   62061 logs.go:276] 0 containers: []
	W0918 21:08:00.400408   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:00.400415   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:00.400480   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:00.435341   62061 cri.go:89] found id: ""
	I0918 21:08:00.435375   62061 logs.go:276] 0 containers: []
	W0918 21:08:00.435386   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:00.435394   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:00.435459   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:00.469981   62061 cri.go:89] found id: ""
	I0918 21:08:00.470010   62061 logs.go:276] 0 containers: []
	W0918 21:08:00.470019   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:00.470028   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:00.470041   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:00.538006   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:00.538037   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:00.538053   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:00.618497   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:00.618543   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:00.656884   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:00.656912   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:00.708836   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:00.708870   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:03.222489   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:03.236904   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:03.236972   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:03.270559   62061 cri.go:89] found id: ""
	I0918 21:08:03.270588   62061 logs.go:276] 0 containers: []
	W0918 21:08:03.270596   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:03.270602   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:03.270649   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:03.305908   62061 cri.go:89] found id: ""
	I0918 21:08:03.305933   62061 logs.go:276] 0 containers: []
	W0918 21:08:03.305940   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:03.305946   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:03.306004   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:03.339442   62061 cri.go:89] found id: ""
	I0918 21:08:03.339468   62061 logs.go:276] 0 containers: []
	W0918 21:08:03.339476   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:03.339482   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:03.339550   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:03.377460   62061 cri.go:89] found id: ""
	I0918 21:08:03.377486   62061 logs.go:276] 0 containers: []
	W0918 21:08:03.377495   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:03.377501   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:03.377552   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:03.414815   62061 cri.go:89] found id: ""
	I0918 21:08:03.414850   62061 logs.go:276] 0 containers: []
	W0918 21:08:03.414861   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:03.414869   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:03.414930   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:03.448654   62061 cri.go:89] found id: ""
	I0918 21:08:03.448680   62061 logs.go:276] 0 containers: []
	W0918 21:08:03.448690   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:03.448698   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:03.448759   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:03.483598   62061 cri.go:89] found id: ""
	I0918 21:08:03.483628   62061 logs.go:276] 0 containers: []
	W0918 21:08:03.483639   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:03.483646   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:03.483717   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:03.518557   62061 cri.go:89] found id: ""
	I0918 21:08:03.518585   62061 logs.go:276] 0 containers: []
	W0918 21:08:03.518601   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:03.518612   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:03.518627   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:03.555922   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:03.555958   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:03.608173   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:03.608208   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:03.621251   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:03.621278   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:03.688773   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:03.688796   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:03.688812   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:01.857289   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:03.857768   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:06.356504   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:03.132376   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:05.134169   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:04.386905   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:06.891459   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:06.272727   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:06.286033   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:06.286115   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:06.319058   62061 cri.go:89] found id: ""
	I0918 21:08:06.319084   62061 logs.go:276] 0 containers: []
	W0918 21:08:06.319092   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:06.319099   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:06.319167   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:06.352572   62061 cri.go:89] found id: ""
	I0918 21:08:06.352606   62061 logs.go:276] 0 containers: []
	W0918 21:08:06.352627   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:06.352638   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:06.352709   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:06.389897   62061 cri.go:89] found id: ""
	I0918 21:08:06.389922   62061 logs.go:276] 0 containers: []
	W0918 21:08:06.389929   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:06.389935   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:06.389993   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:06.427265   62061 cri.go:89] found id: ""
	I0918 21:08:06.427294   62061 logs.go:276] 0 containers: []
	W0918 21:08:06.427306   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:06.427314   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:06.427375   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:06.462706   62061 cri.go:89] found id: ""
	I0918 21:08:06.462738   62061 logs.go:276] 0 containers: []
	W0918 21:08:06.462746   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:06.462753   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:06.462816   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:06.499672   62061 cri.go:89] found id: ""
	I0918 21:08:06.499707   62061 logs.go:276] 0 containers: []
	W0918 21:08:06.499719   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:06.499727   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:06.499781   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:06.540386   62061 cri.go:89] found id: ""
	I0918 21:08:06.540415   62061 logs.go:276] 0 containers: []
	W0918 21:08:06.540426   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:06.540433   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:06.540492   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:06.582919   62061 cri.go:89] found id: ""
	I0918 21:08:06.582946   62061 logs.go:276] 0 containers: []
	W0918 21:08:06.582957   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:06.582966   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:06.582982   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:06.637315   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:06.637355   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:06.650626   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:06.650662   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:06.725605   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:06.725641   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:06.725656   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:06.805431   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:06.805471   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:09.343498   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:09.357396   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:09.357472   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:09.391561   62061 cri.go:89] found id: ""
	I0918 21:08:09.391590   62061 logs.go:276] 0 containers: []
	W0918 21:08:09.391600   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:09.391605   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:09.391655   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:09.429154   62061 cri.go:89] found id: ""
	I0918 21:08:09.429181   62061 logs.go:276] 0 containers: []
	W0918 21:08:09.429190   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:09.429196   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:09.429259   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:09.464807   62061 cri.go:89] found id: ""
	I0918 21:08:09.464848   62061 logs.go:276] 0 containers: []
	W0918 21:08:09.464859   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:09.464866   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:09.464927   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:08.856578   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:10.856650   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:07.633438   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:10.132651   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:12.132903   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:09.387482   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:11.886885   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:09.499512   62061 cri.go:89] found id: ""
	I0918 21:08:09.499540   62061 logs.go:276] 0 containers: []
	W0918 21:08:09.499549   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:09.499556   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:09.499619   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:09.534569   62061 cri.go:89] found id: ""
	I0918 21:08:09.534593   62061 logs.go:276] 0 containers: []
	W0918 21:08:09.534601   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:09.534607   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:09.534660   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:09.570387   62061 cri.go:89] found id: ""
	I0918 21:08:09.570414   62061 logs.go:276] 0 containers: []
	W0918 21:08:09.570422   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:09.570428   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:09.570489   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:09.603832   62061 cri.go:89] found id: ""
	I0918 21:08:09.603863   62061 logs.go:276] 0 containers: []
	W0918 21:08:09.603871   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:09.603877   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:09.603923   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:09.638887   62061 cri.go:89] found id: ""
	I0918 21:08:09.638918   62061 logs.go:276] 0 containers: []
	W0918 21:08:09.638930   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:09.638940   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:09.638953   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:09.691197   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:09.691237   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:09.705470   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:09.705500   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:09.776826   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:09.776858   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:09.776871   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:09.851287   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:09.851321   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:12.390895   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:12.406734   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:12.406809   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:12.459302   62061 cri.go:89] found id: ""
	I0918 21:08:12.459331   62061 logs.go:276] 0 containers: []
	W0918 21:08:12.459349   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:12.459360   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:12.459420   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:12.517527   62061 cri.go:89] found id: ""
	I0918 21:08:12.517557   62061 logs.go:276] 0 containers: []
	W0918 21:08:12.517567   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:12.517573   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:12.517628   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:12.549661   62061 cri.go:89] found id: ""
	I0918 21:08:12.549689   62061 logs.go:276] 0 containers: []
	W0918 21:08:12.549696   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:12.549702   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:12.549747   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:12.582970   62061 cri.go:89] found id: ""
	I0918 21:08:12.582996   62061 logs.go:276] 0 containers: []
	W0918 21:08:12.583004   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:12.583009   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:12.583054   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:12.617059   62061 cri.go:89] found id: ""
	I0918 21:08:12.617089   62061 logs.go:276] 0 containers: []
	W0918 21:08:12.617098   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:12.617103   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:12.617153   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:12.651115   62061 cri.go:89] found id: ""
	I0918 21:08:12.651143   62061 logs.go:276] 0 containers: []
	W0918 21:08:12.651156   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:12.651164   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:12.651217   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:12.684727   62061 cri.go:89] found id: ""
	I0918 21:08:12.684758   62061 logs.go:276] 0 containers: []
	W0918 21:08:12.684765   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:12.684771   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:12.684829   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:12.718181   62061 cri.go:89] found id: ""
	I0918 21:08:12.718215   62061 logs.go:276] 0 containers: []
	W0918 21:08:12.718226   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:12.718237   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:12.718252   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:12.760350   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:12.760379   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:12.810028   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:12.810067   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:12.823785   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:12.823815   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:12.898511   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:12.898532   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:12.898545   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:12.856697   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:15.356381   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:14.632694   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:17.131888   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:13.887157   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:15.887190   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:17.890618   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:15.476840   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:15.489386   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:15.489470   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:15.524549   62061 cri.go:89] found id: ""
	I0918 21:08:15.524578   62061 logs.go:276] 0 containers: []
	W0918 21:08:15.524585   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:15.524591   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:15.524642   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:15.557834   62061 cri.go:89] found id: ""
	I0918 21:08:15.557867   62061 logs.go:276] 0 containers: []
	W0918 21:08:15.557876   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:15.557883   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:15.557933   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:15.598774   62061 cri.go:89] found id: ""
	I0918 21:08:15.598805   62061 logs.go:276] 0 containers: []
	W0918 21:08:15.598818   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:15.598828   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:15.598882   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:15.633123   62061 cri.go:89] found id: ""
	I0918 21:08:15.633147   62061 logs.go:276] 0 containers: []
	W0918 21:08:15.633155   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:15.633161   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:15.633208   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:15.667281   62061 cri.go:89] found id: ""
	I0918 21:08:15.667307   62061 logs.go:276] 0 containers: []
	W0918 21:08:15.667317   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:15.667323   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:15.667381   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:15.705945   62061 cri.go:89] found id: ""
	I0918 21:08:15.705977   62061 logs.go:276] 0 containers: []
	W0918 21:08:15.705990   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:15.706015   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:15.706093   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:15.739795   62061 cri.go:89] found id: ""
	I0918 21:08:15.739826   62061 logs.go:276] 0 containers: []
	W0918 21:08:15.739836   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:15.739843   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:15.739904   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:15.778520   62061 cri.go:89] found id: ""
	I0918 21:08:15.778556   62061 logs.go:276] 0 containers: []
	W0918 21:08:15.778567   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:15.778579   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:15.778592   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:15.829357   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:15.829394   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:15.842852   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:15.842882   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:15.922438   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:15.922471   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:15.922483   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:16.001687   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:16.001726   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:18.542067   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:18.554783   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:18.554870   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:18.589555   62061 cri.go:89] found id: ""
	I0918 21:08:18.589581   62061 logs.go:276] 0 containers: []
	W0918 21:08:18.589592   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:18.589604   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:18.589667   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:18.623035   62061 cri.go:89] found id: ""
	I0918 21:08:18.623059   62061 logs.go:276] 0 containers: []
	W0918 21:08:18.623067   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:18.623073   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:18.623127   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:18.655875   62061 cri.go:89] found id: ""
	I0918 21:08:18.655901   62061 logs.go:276] 0 containers: []
	W0918 21:08:18.655909   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:18.655915   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:18.655973   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:18.688964   62061 cri.go:89] found id: ""
	I0918 21:08:18.688997   62061 logs.go:276] 0 containers: []
	W0918 21:08:18.689008   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:18.689016   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:18.689080   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:18.723164   62061 cri.go:89] found id: ""
	I0918 21:08:18.723186   62061 logs.go:276] 0 containers: []
	W0918 21:08:18.723196   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:18.723201   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:18.723246   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:18.755022   62061 cri.go:89] found id: ""
	I0918 21:08:18.755048   62061 logs.go:276] 0 containers: []
	W0918 21:08:18.755057   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:18.755063   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:18.755113   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:18.791614   62061 cri.go:89] found id: ""
	I0918 21:08:18.791645   62061 logs.go:276] 0 containers: []
	W0918 21:08:18.791655   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:18.791663   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:18.791731   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:18.823553   62061 cri.go:89] found id: ""
	I0918 21:08:18.823583   62061 logs.go:276] 0 containers: []
	W0918 21:08:18.823590   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:18.823597   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:18.823609   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:18.864528   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:18.864567   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:18.916590   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:18.916628   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:18.930077   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:18.930107   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:18.999958   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:18.999982   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:18.999997   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:17.358190   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:19.856605   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:19.132382   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:21.634433   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:20.387223   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:22.387374   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:21.573718   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:21.588164   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:21.588252   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:21.630044   62061 cri.go:89] found id: ""
	I0918 21:08:21.630075   62061 logs.go:276] 0 containers: []
	W0918 21:08:21.630086   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:21.630094   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:21.630154   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:21.666992   62061 cri.go:89] found id: ""
	I0918 21:08:21.667021   62061 logs.go:276] 0 containers: []
	W0918 21:08:21.667029   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:21.667035   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:21.667083   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:21.702379   62061 cri.go:89] found id: ""
	I0918 21:08:21.702403   62061 logs.go:276] 0 containers: []
	W0918 21:08:21.702411   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:21.702416   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:21.702463   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:21.739877   62061 cri.go:89] found id: ""
	I0918 21:08:21.739908   62061 logs.go:276] 0 containers: []
	W0918 21:08:21.739918   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:21.739923   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:21.739974   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:21.777536   62061 cri.go:89] found id: ""
	I0918 21:08:21.777573   62061 logs.go:276] 0 containers: []
	W0918 21:08:21.777584   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:21.777592   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:21.777652   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:21.812284   62061 cri.go:89] found id: ""
	I0918 21:08:21.812316   62061 logs.go:276] 0 containers: []
	W0918 21:08:21.812325   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:21.812332   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:21.812401   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:21.848143   62061 cri.go:89] found id: ""
	I0918 21:08:21.848176   62061 logs.go:276] 0 containers: []
	W0918 21:08:21.848185   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:21.848191   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:21.848250   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:21.887151   62061 cri.go:89] found id: ""
	I0918 21:08:21.887177   62061 logs.go:276] 0 containers: []
	W0918 21:08:21.887188   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:21.887199   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:21.887213   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:21.939969   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:21.940008   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:21.954128   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:21.954164   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:22.022827   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:22.022853   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:22.022865   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:22.103131   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:22.103172   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:22.356641   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:24.358204   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:24.133101   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:26.633701   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:24.888715   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:27.386901   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:24.642045   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:24.655273   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:24.655343   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:24.687816   62061 cri.go:89] found id: ""
	I0918 21:08:24.687847   62061 logs.go:276] 0 containers: []
	W0918 21:08:24.687858   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:24.687865   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:24.687927   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:24.721276   62061 cri.go:89] found id: ""
	I0918 21:08:24.721303   62061 logs.go:276] 0 containers: []
	W0918 21:08:24.721311   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:24.721316   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:24.721366   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:24.753874   62061 cri.go:89] found id: ""
	I0918 21:08:24.753904   62061 logs.go:276] 0 containers: []
	W0918 21:08:24.753911   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:24.753917   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:24.753967   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:24.789107   62061 cri.go:89] found id: ""
	I0918 21:08:24.789148   62061 logs.go:276] 0 containers: []
	W0918 21:08:24.789163   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:24.789170   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:24.789219   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:24.826283   62061 cri.go:89] found id: ""
	I0918 21:08:24.826316   62061 logs.go:276] 0 containers: []
	W0918 21:08:24.826329   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:24.826337   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:24.826401   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:24.859878   62061 cri.go:89] found id: ""
	I0918 21:08:24.859907   62061 logs.go:276] 0 containers: []
	W0918 21:08:24.859917   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:24.859924   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:24.859982   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:24.895717   62061 cri.go:89] found id: ""
	I0918 21:08:24.895747   62061 logs.go:276] 0 containers: []
	W0918 21:08:24.895758   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:24.895766   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:24.895830   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:24.930207   62061 cri.go:89] found id: ""
	I0918 21:08:24.930239   62061 logs.go:276] 0 containers: []
	W0918 21:08:24.930250   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:24.930262   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:24.930279   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:24.967939   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:24.967981   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:25.022526   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:25.022569   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:25.036175   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:25.036214   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:25.104251   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:25.104277   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:25.104294   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:27.686943   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:27.700071   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:27.700147   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:27.735245   62061 cri.go:89] found id: ""
	I0918 21:08:27.735277   62061 logs.go:276] 0 containers: []
	W0918 21:08:27.735286   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:27.735291   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:27.735349   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:27.768945   62061 cri.go:89] found id: ""
	I0918 21:08:27.768974   62061 logs.go:276] 0 containers: []
	W0918 21:08:27.768985   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:27.768993   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:27.769055   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:27.805425   62061 cri.go:89] found id: ""
	I0918 21:08:27.805457   62061 logs.go:276] 0 containers: []
	W0918 21:08:27.805468   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:27.805474   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:27.805523   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:27.838049   62061 cri.go:89] found id: ""
	I0918 21:08:27.838081   62061 logs.go:276] 0 containers: []
	W0918 21:08:27.838091   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:27.838098   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:27.838163   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:27.873961   62061 cri.go:89] found id: ""
	I0918 21:08:27.873986   62061 logs.go:276] 0 containers: []
	W0918 21:08:27.873994   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:27.874001   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:27.874064   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:27.908822   62061 cri.go:89] found id: ""
	I0918 21:08:27.908846   62061 logs.go:276] 0 containers: []
	W0918 21:08:27.908854   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:27.908860   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:27.908915   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:27.943758   62061 cri.go:89] found id: ""
	I0918 21:08:27.943794   62061 logs.go:276] 0 containers: []
	W0918 21:08:27.943802   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:27.943808   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:27.943875   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:27.978136   62061 cri.go:89] found id: ""
	I0918 21:08:27.978168   62061 logs.go:276] 0 containers: []
	W0918 21:08:27.978179   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:27.978189   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:27.978202   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:27.992744   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:27.992773   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:28.070339   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:28.070362   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:28.070374   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:28.152405   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:28.152452   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:28.191190   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:28.191220   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:26.857256   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:29.356662   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:29.132577   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:31.133108   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:29.387068   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:31.886962   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:30.742507   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:30.756451   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:30.756553   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:30.790746   62061 cri.go:89] found id: ""
	I0918 21:08:30.790771   62061 logs.go:276] 0 containers: []
	W0918 21:08:30.790781   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:30.790787   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:30.790851   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:30.823633   62061 cri.go:89] found id: ""
	I0918 21:08:30.823670   62061 logs.go:276] 0 containers: []
	W0918 21:08:30.823682   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:30.823689   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:30.823754   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:30.861912   62061 cri.go:89] found id: ""
	I0918 21:08:30.861936   62061 logs.go:276] 0 containers: []
	W0918 21:08:30.861943   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:30.861949   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:30.862000   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:30.899452   62061 cri.go:89] found id: ""
	I0918 21:08:30.899481   62061 logs.go:276] 0 containers: []
	W0918 21:08:30.899489   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:30.899495   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:30.899562   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:30.935871   62061 cri.go:89] found id: ""
	I0918 21:08:30.935898   62061 logs.go:276] 0 containers: []
	W0918 21:08:30.935906   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:30.935912   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:30.935969   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:30.974608   62061 cri.go:89] found id: ""
	I0918 21:08:30.974643   62061 logs.go:276] 0 containers: []
	W0918 21:08:30.974655   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:30.974663   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:30.974722   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:31.008249   62061 cri.go:89] found id: ""
	I0918 21:08:31.008279   62061 logs.go:276] 0 containers: []
	W0918 21:08:31.008290   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:31.008297   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:31.008366   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:31.047042   62061 cri.go:89] found id: ""
	I0918 21:08:31.047075   62061 logs.go:276] 0 containers: []
	W0918 21:08:31.047083   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:31.047093   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:31.047107   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:31.098961   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:31.099001   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:31.113116   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:31.113147   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:31.179609   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:31.179650   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:31.179664   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:31.258299   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:31.258335   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:33.798360   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:33.812105   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:33.812172   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:33.848120   62061 cri.go:89] found id: ""
	I0918 21:08:33.848149   62061 logs.go:276] 0 containers: []
	W0918 21:08:33.848160   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:33.848169   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:33.848231   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:33.884974   62061 cri.go:89] found id: ""
	I0918 21:08:33.885007   62061 logs.go:276] 0 containers: []
	W0918 21:08:33.885019   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:33.885028   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:33.885104   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:33.919613   62061 cri.go:89] found id: ""
	I0918 21:08:33.919658   62061 logs.go:276] 0 containers: []
	W0918 21:08:33.919667   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:33.919673   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:33.919721   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:33.953074   62061 cri.go:89] found id: ""
	I0918 21:08:33.953112   62061 logs.go:276] 0 containers: []
	W0918 21:08:33.953120   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:33.953125   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:33.953185   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:33.985493   62061 cri.go:89] found id: ""
	I0918 21:08:33.985521   62061 logs.go:276] 0 containers: []
	W0918 21:08:33.985532   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:33.985539   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:33.985630   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:34.020938   62061 cri.go:89] found id: ""
	I0918 21:08:34.020962   62061 logs.go:276] 0 containers: []
	W0918 21:08:34.020972   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:34.020981   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:34.021047   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:34.053947   62061 cri.go:89] found id: ""
	I0918 21:08:34.053977   62061 logs.go:276] 0 containers: []
	W0918 21:08:34.053988   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:34.053996   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:34.054060   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:34.090072   62061 cri.go:89] found id: ""
	I0918 21:08:34.090110   62061 logs.go:276] 0 containers: []
	W0918 21:08:34.090123   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:34.090133   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:34.090145   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:34.142069   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:34.142107   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:34.156709   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:34.156740   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:34.232644   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:34.232672   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:34.232685   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:34.311833   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:34.311878   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:31.859360   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:34.357056   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:33.133212   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:35.632885   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:33.888487   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:36.386571   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:36.850811   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:36.864479   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:36.864558   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:36.902595   62061 cri.go:89] found id: ""
	I0918 21:08:36.902628   62061 logs.go:276] 0 containers: []
	W0918 21:08:36.902640   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:36.902649   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:36.902706   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:36.940336   62061 cri.go:89] found id: ""
	I0918 21:08:36.940385   62061 logs.go:276] 0 containers: []
	W0918 21:08:36.940394   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:36.940400   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:36.940465   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:36.973909   62061 cri.go:89] found id: ""
	I0918 21:08:36.973942   62061 logs.go:276] 0 containers: []
	W0918 21:08:36.973952   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:36.973958   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:36.974013   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:37.008766   62061 cri.go:89] found id: ""
	I0918 21:08:37.008791   62061 logs.go:276] 0 containers: []
	W0918 21:08:37.008799   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:37.008805   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:37.008852   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:37.041633   62061 cri.go:89] found id: ""
	I0918 21:08:37.041669   62061 logs.go:276] 0 containers: []
	W0918 21:08:37.041681   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:37.041688   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:37.041750   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:37.075154   62061 cri.go:89] found id: ""
	I0918 21:08:37.075188   62061 logs.go:276] 0 containers: []
	W0918 21:08:37.075197   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:37.075204   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:37.075268   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:37.111083   62061 cri.go:89] found id: ""
	I0918 21:08:37.111119   62061 logs.go:276] 0 containers: []
	W0918 21:08:37.111130   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:37.111138   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:37.111192   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:37.147894   62061 cri.go:89] found id: ""
	I0918 21:08:37.147925   62061 logs.go:276] 0 containers: []
	W0918 21:08:37.147936   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:37.147948   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:37.147962   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:37.200102   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:37.200141   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:37.213511   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:37.213537   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:37.286978   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:37.287012   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:37.287027   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:37.368153   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:37.368191   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:36.857508   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:39.357177   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:41.357329   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:38.134332   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:40.633274   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:38.387121   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:40.387310   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:42.887614   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:39.907600   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:39.922325   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:39.922395   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:39.958150   62061 cri.go:89] found id: ""
	I0918 21:08:39.958175   62061 logs.go:276] 0 containers: []
	W0918 21:08:39.958183   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:39.958189   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:39.958254   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:39.993916   62061 cri.go:89] found id: ""
	I0918 21:08:39.993945   62061 logs.go:276] 0 containers: []
	W0918 21:08:39.993956   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:39.993963   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:39.994026   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:40.031078   62061 cri.go:89] found id: ""
	I0918 21:08:40.031114   62061 logs.go:276] 0 containers: []
	W0918 21:08:40.031126   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:40.031133   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:40.031194   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:40.065028   62061 cri.go:89] found id: ""
	I0918 21:08:40.065054   62061 logs.go:276] 0 containers: []
	W0918 21:08:40.065065   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:40.065072   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:40.065129   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:40.099437   62061 cri.go:89] found id: ""
	I0918 21:08:40.099466   62061 logs.go:276] 0 containers: []
	W0918 21:08:40.099474   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:40.099480   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:40.099544   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:40.139841   62061 cri.go:89] found id: ""
	I0918 21:08:40.139866   62061 logs.go:276] 0 containers: []
	W0918 21:08:40.139874   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:40.139880   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:40.139936   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:40.175287   62061 cri.go:89] found id: ""
	I0918 21:08:40.175316   62061 logs.go:276] 0 containers: []
	W0918 21:08:40.175327   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:40.175334   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:40.175397   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:40.208646   62061 cri.go:89] found id: ""
	I0918 21:08:40.208677   62061 logs.go:276] 0 containers: []
	W0918 21:08:40.208690   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:40.208701   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:40.208712   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:40.262944   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:40.262982   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:40.276192   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:40.276222   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:40.346393   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:40.346414   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:40.346426   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:40.433797   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:40.433848   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:42.976574   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:42.990198   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:42.990263   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:43.031744   62061 cri.go:89] found id: ""
	I0918 21:08:43.031774   62061 logs.go:276] 0 containers: []
	W0918 21:08:43.031784   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:43.031791   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:43.031854   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:43.065198   62061 cri.go:89] found id: ""
	I0918 21:08:43.065240   62061 logs.go:276] 0 containers: []
	W0918 21:08:43.065248   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:43.065261   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:43.065319   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:43.099292   62061 cri.go:89] found id: ""
	I0918 21:08:43.099320   62061 logs.go:276] 0 containers: []
	W0918 21:08:43.099328   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:43.099333   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:43.099388   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:43.135085   62061 cri.go:89] found id: ""
	I0918 21:08:43.135110   62061 logs.go:276] 0 containers: []
	W0918 21:08:43.135119   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:43.135131   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:43.135190   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:43.173255   62061 cri.go:89] found id: ""
	I0918 21:08:43.173312   62061 logs.go:276] 0 containers: []
	W0918 21:08:43.173326   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:43.173335   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:43.173433   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:43.209249   62061 cri.go:89] found id: ""
	I0918 21:08:43.209282   62061 logs.go:276] 0 containers: []
	W0918 21:08:43.209294   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:43.209303   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:43.209377   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:43.242333   62061 cri.go:89] found id: ""
	I0918 21:08:43.242366   62061 logs.go:276] 0 containers: []
	W0918 21:08:43.242376   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:43.242383   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:43.242449   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:43.278099   62061 cri.go:89] found id: ""
	I0918 21:08:43.278128   62061 logs.go:276] 0 containers: []
	W0918 21:08:43.278136   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:43.278146   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:43.278164   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:43.329621   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:43.329661   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:43.343357   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:43.343402   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:43.415392   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:43.415419   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:43.415435   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:43.501634   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:43.501670   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:43.357675   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:45.857212   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:43.133389   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:45.134057   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:44.887763   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:47.387221   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:46.043925   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:46.057457   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:46.057540   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:46.091987   62061 cri.go:89] found id: ""
	I0918 21:08:46.092036   62061 logs.go:276] 0 containers: []
	W0918 21:08:46.092047   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:46.092055   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:46.092118   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:46.131171   62061 cri.go:89] found id: ""
	I0918 21:08:46.131195   62061 logs.go:276] 0 containers: []
	W0918 21:08:46.131203   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:46.131209   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:46.131266   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:46.167324   62061 cri.go:89] found id: ""
	I0918 21:08:46.167360   62061 logs.go:276] 0 containers: []
	W0918 21:08:46.167369   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:46.167375   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:46.167433   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:46.200940   62061 cri.go:89] found id: ""
	I0918 21:08:46.200969   62061 logs.go:276] 0 containers: []
	W0918 21:08:46.200978   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:46.200983   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:46.201042   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:46.233736   62061 cri.go:89] found id: ""
	I0918 21:08:46.233765   62061 logs.go:276] 0 containers: []
	W0918 21:08:46.233773   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:46.233779   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:46.233841   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:46.267223   62061 cri.go:89] found id: ""
	I0918 21:08:46.267250   62061 logs.go:276] 0 containers: []
	W0918 21:08:46.267261   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:46.267268   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:46.267331   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:46.302218   62061 cri.go:89] found id: ""
	I0918 21:08:46.302245   62061 logs.go:276] 0 containers: []
	W0918 21:08:46.302255   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:46.302262   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:46.302326   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:46.334979   62061 cri.go:89] found id: ""
	I0918 21:08:46.335005   62061 logs.go:276] 0 containers: []
	W0918 21:08:46.335015   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:46.335024   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:46.335038   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:46.422905   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:46.422929   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:46.422948   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:46.504831   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:46.504884   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:46.543954   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:46.543981   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:46.595817   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:46.595854   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
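The recurring "The connection to the server localhost:8443 was refused" output is consistent with the empty crictl listings: no kube-apiserver container is running on this node, so nothing answers on the API port. A manual spot check along these lines (not something the test runs, shown only as an illustration) would confirm it:

    # hypothetical check, not part of the report: is anything listening on 8443?
    sudo ss -ltnp | grep ':8443' || echo 'no listener on 8443'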
	I0918 21:08:49.109984   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:49.124966   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:49.125052   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:49.163272   62061 cri.go:89] found id: ""
	I0918 21:08:49.163302   62061 logs.go:276] 0 containers: []
	W0918 21:08:49.163334   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:49.163342   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:49.163411   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:49.199973   62061 cri.go:89] found id: ""
	I0918 21:08:49.200003   62061 logs.go:276] 0 containers: []
	W0918 21:08:49.200037   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:49.200046   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:49.200111   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:49.235980   62061 cri.go:89] found id: ""
	I0918 21:08:49.236036   62061 logs.go:276] 0 containers: []
	W0918 21:08:49.236050   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:49.236061   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:49.236123   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:49.271162   62061 cri.go:89] found id: ""
	I0918 21:08:49.271386   62061 logs.go:276] 0 containers: []
	W0918 21:08:49.271404   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:49.271413   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:49.271483   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:49.305799   62061 cri.go:89] found id: ""
	I0918 21:08:49.305831   62061 logs.go:276] 0 containers: []
	W0918 21:08:49.305842   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:49.305848   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:49.305899   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:49.342170   62061 cri.go:89] found id: ""
	I0918 21:08:49.342195   62061 logs.go:276] 0 containers: []
	W0918 21:08:49.342202   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:49.342208   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:49.342265   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:49.374071   62061 cri.go:89] found id: ""
	I0918 21:08:49.374098   62061 logs.go:276] 0 containers: []
	W0918 21:08:49.374108   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:49.374116   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:49.374186   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:49.407614   62061 cri.go:89] found id: ""
	I0918 21:08:49.407645   62061 logs.go:276] 0 containers: []
	W0918 21:08:49.407657   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:49.407669   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:49.407685   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:49.458433   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:49.458473   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:49.471490   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:49.471519   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 21:08:47.857798   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:50.355748   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:47.634158   61659 pod_ready.go:103] pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:49.627085   61659 pod_ready.go:82] duration metric: took 4m0.000936582s for pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace to be "Ready" ...
	E0918 21:08:49.627133   61659 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-cqp47" in "kube-system" namespace to be "Ready" (will not retry!)
	I0918 21:08:49.627156   61659 pod_ready.go:39] duration metric: took 4m7.542795536s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0918 21:08:49.627192   61659 kubeadm.go:597] duration metric: took 4m15.452827752s to restartPrimaryControlPlane
	W0918 21:08:49.627251   61659 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0918 21:08:49.627290   61659 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
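The pod_ready lines interleaved through this section come from separate test processes (61273, 61659, 61740), each polling the Ready condition of its own metrics-server pod. For process 61659 the 4m0s budget expires above, so minikube gives up on restarting the existing control plane and falls back to the kubeadm reset issued in the line above, followed by a fresh kubeadm init later in the log. The condition being polled can be inspected by hand roughly as follows; the profile placeholder and the jsonpath expression are illustrative and not taken from the report:

    # hypothetical manual check of the condition pod_ready.go waits on
    kubectl --context <profile> -n kube-system \
      get pod metrics-server-6867b74b74-cqp47 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'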
	I0918 21:08:49.387560   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:51.887591   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	W0918 21:08:49.533874   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:49.533901   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:49.533916   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:49.611711   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:49.611744   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:52.157839   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:52.170690   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:52.170770   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:52.208737   62061 cri.go:89] found id: ""
	I0918 21:08:52.208767   62061 logs.go:276] 0 containers: []
	W0918 21:08:52.208779   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:52.208787   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:52.208846   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:52.242624   62061 cri.go:89] found id: ""
	I0918 21:08:52.242658   62061 logs.go:276] 0 containers: []
	W0918 21:08:52.242669   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:52.242677   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:52.242742   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:52.280686   62061 cri.go:89] found id: ""
	I0918 21:08:52.280717   62061 logs.go:276] 0 containers: []
	W0918 21:08:52.280728   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:52.280736   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:52.280798   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:52.313748   62061 cri.go:89] found id: ""
	I0918 21:08:52.313777   62061 logs.go:276] 0 containers: []
	W0918 21:08:52.313785   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:52.313791   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:52.313840   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:52.353072   62061 cri.go:89] found id: ""
	I0918 21:08:52.353102   62061 logs.go:276] 0 containers: []
	W0918 21:08:52.353124   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:52.353132   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:52.353195   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:52.390358   62061 cri.go:89] found id: ""
	I0918 21:08:52.390384   62061 logs.go:276] 0 containers: []
	W0918 21:08:52.390392   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:52.390398   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:52.390448   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:52.430039   62061 cri.go:89] found id: ""
	I0918 21:08:52.430068   62061 logs.go:276] 0 containers: []
	W0918 21:08:52.430081   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:52.430088   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:52.430146   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:52.466096   62061 cri.go:89] found id: ""
	I0918 21:08:52.466125   62061 logs.go:276] 0 containers: []
	W0918 21:08:52.466137   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:52.466149   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:52.466162   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:52.518643   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:52.518678   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:52.531768   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:52.531797   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:52.605130   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:52.605163   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:52.605181   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:52.686510   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:52.686553   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:52.356535   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:54.356671   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:54.387306   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:56.887745   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:55.225867   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:55.240537   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:55.240618   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:55.276461   62061 cri.go:89] found id: ""
	I0918 21:08:55.276490   62061 logs.go:276] 0 containers: []
	W0918 21:08:55.276498   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:55.276504   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:55.276564   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:55.313449   62061 cri.go:89] found id: ""
	I0918 21:08:55.313482   62061 logs.go:276] 0 containers: []
	W0918 21:08:55.313493   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:55.313499   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:55.313551   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:55.352436   62061 cri.go:89] found id: ""
	I0918 21:08:55.352475   62061 logs.go:276] 0 containers: []
	W0918 21:08:55.352485   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:55.352492   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:55.352560   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:55.390433   62061 cri.go:89] found id: ""
	I0918 21:08:55.390458   62061 logs.go:276] 0 containers: []
	W0918 21:08:55.390466   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:55.390472   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:55.390529   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:55.428426   62061 cri.go:89] found id: ""
	I0918 21:08:55.428455   62061 logs.go:276] 0 containers: []
	W0918 21:08:55.428465   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:55.428473   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:55.428540   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:55.465587   62061 cri.go:89] found id: ""
	I0918 21:08:55.465622   62061 logs.go:276] 0 containers: []
	W0918 21:08:55.465633   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:55.465641   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:55.465710   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:55.502137   62061 cri.go:89] found id: ""
	I0918 21:08:55.502185   62061 logs.go:276] 0 containers: []
	W0918 21:08:55.502196   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:55.502203   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:55.502266   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:55.535992   62061 cri.go:89] found id: ""
	I0918 21:08:55.536037   62061 logs.go:276] 0 containers: []
	W0918 21:08:55.536050   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:55.536060   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:55.536078   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:55.549267   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:55.549296   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:55.616522   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:55.616556   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:55.616572   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:55.698822   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:55.698874   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:55.740234   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:55.740264   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:58.294876   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:08:58.307543   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:08:58.307630   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:08:58.340435   62061 cri.go:89] found id: ""
	I0918 21:08:58.340467   62061 logs.go:276] 0 containers: []
	W0918 21:08:58.340479   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:08:58.340486   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:08:58.340545   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:08:58.374759   62061 cri.go:89] found id: ""
	I0918 21:08:58.374792   62061 logs.go:276] 0 containers: []
	W0918 21:08:58.374804   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:08:58.374810   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:08:58.374863   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:08:58.411756   62061 cri.go:89] found id: ""
	I0918 21:08:58.411787   62061 logs.go:276] 0 containers: []
	W0918 21:08:58.411797   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:08:58.411804   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:08:58.411867   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:08:58.449787   62061 cri.go:89] found id: ""
	I0918 21:08:58.449820   62061 logs.go:276] 0 containers: []
	W0918 21:08:58.449832   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:08:58.449839   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:08:58.449903   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:08:58.485212   62061 cri.go:89] found id: ""
	I0918 21:08:58.485243   62061 logs.go:276] 0 containers: []
	W0918 21:08:58.485254   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:08:58.485261   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:08:58.485331   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:08:58.528667   62061 cri.go:89] found id: ""
	I0918 21:08:58.528696   62061 logs.go:276] 0 containers: []
	W0918 21:08:58.528706   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:08:58.528714   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:08:58.528775   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:08:58.568201   62061 cri.go:89] found id: ""
	I0918 21:08:58.568231   62061 logs.go:276] 0 containers: []
	W0918 21:08:58.568241   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:08:58.568247   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:08:58.568305   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:08:58.623952   62061 cri.go:89] found id: ""
	I0918 21:08:58.623982   62061 logs.go:276] 0 containers: []
	W0918 21:08:58.623991   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:08:58.624000   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:08:58.624011   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:08:58.665418   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:08:58.665457   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:08:58.713464   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:08:58.713504   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:08:58.727511   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:08:58.727552   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:08:58.799004   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:08:58.799035   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:08:58.799050   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:08:56.856428   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:58.856632   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:00.857301   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:08:59.386076   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:01.387016   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:01.389457   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:09:01.404658   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:09:01.404734   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:09:01.439139   62061 cri.go:89] found id: ""
	I0918 21:09:01.439170   62061 logs.go:276] 0 containers: []
	W0918 21:09:01.439180   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:09:01.439187   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:09:01.439251   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:09:01.473866   62061 cri.go:89] found id: ""
	I0918 21:09:01.473896   62061 logs.go:276] 0 containers: []
	W0918 21:09:01.473907   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:09:01.473915   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:09:01.473978   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:09:01.506734   62061 cri.go:89] found id: ""
	I0918 21:09:01.506767   62061 logs.go:276] 0 containers: []
	W0918 21:09:01.506777   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:09:01.506783   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:09:01.506836   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:09:01.540123   62061 cri.go:89] found id: ""
	I0918 21:09:01.540152   62061 logs.go:276] 0 containers: []
	W0918 21:09:01.540162   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:09:01.540169   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:09:01.540236   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:09:01.575002   62061 cri.go:89] found id: ""
	I0918 21:09:01.575037   62061 logs.go:276] 0 containers: []
	W0918 21:09:01.575048   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:09:01.575071   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:09:01.575159   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:09:01.612285   62061 cri.go:89] found id: ""
	I0918 21:09:01.612316   62061 logs.go:276] 0 containers: []
	W0918 21:09:01.612327   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:09:01.612335   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:09:01.612399   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:09:01.648276   62061 cri.go:89] found id: ""
	I0918 21:09:01.648304   62061 logs.go:276] 0 containers: []
	W0918 21:09:01.648318   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:09:01.648325   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:09:01.648397   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:09:01.686192   62061 cri.go:89] found id: ""
	I0918 21:09:01.686220   62061 logs.go:276] 0 containers: []
	W0918 21:09:01.686228   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:09:01.686236   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:09:01.686247   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:09:01.738366   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:09:01.738408   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:09:01.752650   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:09:01.752683   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:09:01.825010   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:09:01.825114   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:09:01.825156   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:09:01.907401   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:09:01.907448   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:09:04.448316   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:09:04.461242   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:09:04.461304   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:09:03.357089   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:05.856126   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:03.387563   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:05.389665   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:07.886523   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:04.495951   62061 cri.go:89] found id: ""
	I0918 21:09:04.495984   62061 logs.go:276] 0 containers: []
	W0918 21:09:04.495997   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:09:04.496006   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:09:04.496090   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:09:04.536874   62061 cri.go:89] found id: ""
	I0918 21:09:04.536906   62061 logs.go:276] 0 containers: []
	W0918 21:09:04.536917   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:09:04.536935   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:09:04.537010   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:09:04.572601   62061 cri.go:89] found id: ""
	I0918 21:09:04.572634   62061 logs.go:276] 0 containers: []
	W0918 21:09:04.572646   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:09:04.572653   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:09:04.572716   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:09:04.609783   62061 cri.go:89] found id: ""
	I0918 21:09:04.609817   62061 logs.go:276] 0 containers: []
	W0918 21:09:04.609826   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:09:04.609832   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:09:04.609891   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:09:04.645124   62061 cri.go:89] found id: ""
	I0918 21:09:04.645156   62061 logs.go:276] 0 containers: []
	W0918 21:09:04.645167   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:09:04.645175   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:09:04.645241   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:09:04.680927   62061 cri.go:89] found id: ""
	I0918 21:09:04.680959   62061 logs.go:276] 0 containers: []
	W0918 21:09:04.680971   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:09:04.680978   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:09:04.681038   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:09:04.718920   62061 cri.go:89] found id: ""
	I0918 21:09:04.718954   62061 logs.go:276] 0 containers: []
	W0918 21:09:04.718972   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:09:04.718979   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:09:04.719039   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:09:04.751450   62061 cri.go:89] found id: ""
	I0918 21:09:04.751488   62061 logs.go:276] 0 containers: []
	W0918 21:09:04.751500   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:09:04.751511   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:09:04.751529   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:09:04.788969   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:09:04.789001   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:09:04.837638   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:09:04.837673   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:09:04.853673   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:09:04.853706   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:09:04.923670   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:09:04.923703   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:09:04.923717   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:09:07.507979   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:09:07.521581   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:09:07.521656   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:09:07.556280   62061 cri.go:89] found id: ""
	I0918 21:09:07.556310   62061 logs.go:276] 0 containers: []
	W0918 21:09:07.556321   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:09:07.556329   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:09:07.556397   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:09:07.590775   62061 cri.go:89] found id: ""
	I0918 21:09:07.590802   62061 logs.go:276] 0 containers: []
	W0918 21:09:07.590810   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:09:07.590815   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:09:07.590862   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:09:07.625971   62061 cri.go:89] found id: ""
	I0918 21:09:07.626000   62061 logs.go:276] 0 containers: []
	W0918 21:09:07.626010   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:09:07.626018   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:09:07.626080   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:09:07.660083   62061 cri.go:89] found id: ""
	I0918 21:09:07.660116   62061 logs.go:276] 0 containers: []
	W0918 21:09:07.660128   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:09:07.660136   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:09:07.660201   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:09:07.694165   62061 cri.go:89] found id: ""
	I0918 21:09:07.694195   62061 logs.go:276] 0 containers: []
	W0918 21:09:07.694204   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:09:07.694211   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:09:07.694269   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:09:07.728298   62061 cri.go:89] found id: ""
	I0918 21:09:07.728328   62061 logs.go:276] 0 containers: []
	W0918 21:09:07.728338   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:09:07.728349   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:09:07.728409   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:09:07.762507   62061 cri.go:89] found id: ""
	I0918 21:09:07.762546   62061 logs.go:276] 0 containers: []
	W0918 21:09:07.762555   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:09:07.762565   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:09:07.762745   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:09:07.798006   62061 cri.go:89] found id: ""
	I0918 21:09:07.798038   62061 logs.go:276] 0 containers: []
	W0918 21:09:07.798049   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:09:07.798059   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:09:07.798074   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:09:07.849222   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:09:07.849267   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:09:07.865023   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:09:07.865055   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:09:07.938810   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:09:07.938830   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:09:07.938842   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:09:08.018885   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:09:08.018924   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:09:07.856987   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:10.356244   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:09.886563   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:12.386922   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:10.562565   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:09:10.575854   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:09:10.575941   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:09:10.612853   62061 cri.go:89] found id: ""
	I0918 21:09:10.612884   62061 logs.go:276] 0 containers: []
	W0918 21:09:10.612896   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:09:10.612906   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:09:10.612966   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:09:10.648672   62061 cri.go:89] found id: ""
	I0918 21:09:10.648703   62061 logs.go:276] 0 containers: []
	W0918 21:09:10.648713   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:09:10.648720   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:09:10.648780   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:09:10.682337   62061 cri.go:89] found id: ""
	I0918 21:09:10.682370   62061 logs.go:276] 0 containers: []
	W0918 21:09:10.682388   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:09:10.682394   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:09:10.682445   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:09:10.718221   62061 cri.go:89] found id: ""
	I0918 21:09:10.718257   62061 logs.go:276] 0 containers: []
	W0918 21:09:10.718269   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:09:10.718277   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:09:10.718345   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:09:10.751570   62061 cri.go:89] found id: ""
	I0918 21:09:10.751600   62061 logs.go:276] 0 containers: []
	W0918 21:09:10.751609   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:09:10.751615   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:09:10.751706   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:09:10.792125   62061 cri.go:89] found id: ""
	I0918 21:09:10.792158   62061 logs.go:276] 0 containers: []
	W0918 21:09:10.792170   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:09:10.792178   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:09:10.792245   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:09:10.830699   62061 cri.go:89] found id: ""
	I0918 21:09:10.830733   62061 logs.go:276] 0 containers: []
	W0918 21:09:10.830742   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:09:10.830748   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:09:10.830820   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:09:10.869625   62061 cri.go:89] found id: ""
	I0918 21:09:10.869655   62061 logs.go:276] 0 containers: []
	W0918 21:09:10.869663   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:09:10.869672   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:09:10.869684   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:09:10.921340   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:09:10.921378   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:09:10.937032   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:09:10.937071   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:09:11.006248   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:09:11.006276   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:09:11.006291   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:09:11.086458   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:09:11.086496   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:09:13.628824   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:09:13.642499   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:09:13.642578   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:09:13.677337   62061 cri.go:89] found id: ""
	I0918 21:09:13.677368   62061 logs.go:276] 0 containers: []
	W0918 21:09:13.677378   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:09:13.677385   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:09:13.677449   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:09:13.717317   62061 cri.go:89] found id: ""
	I0918 21:09:13.717341   62061 logs.go:276] 0 containers: []
	W0918 21:09:13.717353   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:09:13.717358   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:09:13.717419   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:09:13.752151   62061 cri.go:89] found id: ""
	I0918 21:09:13.752181   62061 logs.go:276] 0 containers: []
	W0918 21:09:13.752189   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:09:13.752195   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:09:13.752253   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:09:13.791946   62061 cri.go:89] found id: ""
	I0918 21:09:13.791975   62061 logs.go:276] 0 containers: []
	W0918 21:09:13.791983   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:09:13.791989   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:09:13.792069   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:09:13.825173   62061 cri.go:89] found id: ""
	I0918 21:09:13.825198   62061 logs.go:276] 0 containers: []
	W0918 21:09:13.825209   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:09:13.825216   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:09:13.825276   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:09:13.859801   62061 cri.go:89] found id: ""
	I0918 21:09:13.859834   62061 logs.go:276] 0 containers: []
	W0918 21:09:13.859846   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:09:13.859853   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:09:13.859907   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:09:13.895413   62061 cri.go:89] found id: ""
	I0918 21:09:13.895445   62061 logs.go:276] 0 containers: []
	W0918 21:09:13.895456   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:09:13.895463   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:09:13.895515   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:09:13.929048   62061 cri.go:89] found id: ""
	I0918 21:09:13.929075   62061 logs.go:276] 0 containers: []
	W0918 21:09:13.929083   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:09:13.929092   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:09:13.929104   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:09:13.981579   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:09:13.981613   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:09:13.995642   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:09:13.995679   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:09:14.061762   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:09:14.061782   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:09:14.061793   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:09:14.139623   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:09:14.139659   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:09:16.001617   61659 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.374302262s)
	I0918 21:09:16.001692   61659 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0918 21:09:16.019307   61659 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0918 21:09:16.029547   61659 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0918 21:09:16.039132   61659 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0918 21:09:16.039154   61659 kubeadm.go:157] found existing configuration files:
	
	I0918 21:09:16.039196   61659 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0918 21:09:16.048506   61659 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0918 21:09:16.048567   61659 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0918 21:09:16.058120   61659 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0918 21:09:16.067686   61659 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0918 21:09:16.067746   61659 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0918 21:09:16.077707   61659 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0918 21:09:16.087089   61659 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0918 21:09:16.087149   61659 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0918 21:09:16.097040   61659 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0918 21:09:16.106448   61659 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0918 21:09:16.106514   61659 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
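The grep/rm sequence above is the stale-config cleanup: for each kubeconfig under /etc/kubernetes, minikube checks whether the file points at the expected control-plane endpoint (https://control-plane.minikube.internal:8444 for this profile) and removes it otherwise, so the kubeadm init that follows starts from clean files. A compressed sketch of the same check (illustrative only; minikube issues each grep and rm as a separate SSH command, as logged above):

  for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
    sudo grep -q "https://control-plane.minikube.internal:8444" "/etc/kubernetes/$f" \
      || sudo rm -f "/etc/kubernetes/$f"   # endpoint missing or file absent: remove so kubeadm regenerates it
  done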
	I0918 21:09:16.116060   61659 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0918 21:09:16.159721   61659 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0918 21:09:16.159797   61659 kubeadm.go:310] [preflight] Running pre-flight checks
	I0918 21:09:16.266821   61659 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0918 21:09:16.266968   61659 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0918 21:09:16.267122   61659 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0918 21:09:16.275249   61659 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0918 21:09:12.855996   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:14.857296   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:16.277228   61659 out.go:235]   - Generating certificates and keys ...
	I0918 21:09:16.277333   61659 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0918 21:09:16.277419   61659 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0918 21:09:16.277534   61659 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0918 21:09:16.277617   61659 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0918 21:09:16.277709   61659 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0918 21:09:16.277790   61659 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0918 21:09:16.277904   61659 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0918 21:09:16.278013   61659 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0918 21:09:16.278131   61659 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0918 21:09:16.278265   61659 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0918 21:09:16.278331   61659 kubeadm.go:310] [certs] Using the existing "sa" key
	I0918 21:09:16.278401   61659 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0918 21:09:16.516263   61659 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0918 21:09:16.708220   61659 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0918 21:09:17.009820   61659 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0918 21:09:17.108871   61659 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0918 21:09:17.211014   61659 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0918 21:09:17.211658   61659 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0918 21:09:17.216626   61659 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
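The [control-plane] phase above writes static Pod manifests that the kubelet launches directly from disk, which is why the next step is simply starting the kubelet and polling its health endpoint. To inspect them on the node (file names as referenced in the --ignore-preflight-errors list earlier in this run):

  sudo ls -l /etc/kubernetes/manifests
  # etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml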
	I0918 21:09:14.887068   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:16.888350   61740 pod_ready.go:103] pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:16.686071   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:09:16.699769   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:09:16.699844   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:09:16.735242   62061 cri.go:89] found id: ""
	I0918 21:09:16.735277   62061 logs.go:276] 0 containers: []
	W0918 21:09:16.735288   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:09:16.735298   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:09:16.735371   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:09:16.770027   62061 cri.go:89] found id: ""
	I0918 21:09:16.770052   62061 logs.go:276] 0 containers: []
	W0918 21:09:16.770060   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:09:16.770066   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:09:16.770114   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:09:16.806525   62061 cri.go:89] found id: ""
	I0918 21:09:16.806555   62061 logs.go:276] 0 containers: []
	W0918 21:09:16.806563   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:09:16.806569   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:09:16.806636   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:09:16.851146   62061 cri.go:89] found id: ""
	I0918 21:09:16.851183   62061 logs.go:276] 0 containers: []
	W0918 21:09:16.851194   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:09:16.851204   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:09:16.851271   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:09:16.890718   62061 cri.go:89] found id: ""
	I0918 21:09:16.890748   62061 logs.go:276] 0 containers: []
	W0918 21:09:16.890760   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:09:16.890767   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:09:16.890824   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:09:16.928971   62061 cri.go:89] found id: ""
	I0918 21:09:16.929002   62061 logs.go:276] 0 containers: []
	W0918 21:09:16.929012   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:09:16.929020   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:09:16.929079   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:09:16.970965   62061 cri.go:89] found id: ""
	I0918 21:09:16.970999   62061 logs.go:276] 0 containers: []
	W0918 21:09:16.971011   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:09:16.971019   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:09:16.971089   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:09:17.006427   62061 cri.go:89] found id: ""
	I0918 21:09:17.006453   62061 logs.go:276] 0 containers: []
	W0918 21:09:17.006461   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:09:17.006468   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:09:17.006480   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:09:17.058690   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:09:17.058733   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:09:17.072593   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:09:17.072623   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:09:17.143046   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:09:17.143071   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:09:17.143082   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:09:17.236943   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:09:17.236989   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:09:17.357978   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:19.858268   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:17.218406   61659 out.go:235]   - Booting up control plane ...
	I0918 21:09:17.218544   61659 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0918 21:09:17.218662   61659 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0918 21:09:17.218765   61659 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0918 21:09:17.238076   61659 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0918 21:09:17.248123   61659 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0918 21:09:17.248226   61659 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0918 21:09:17.379685   61659 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0918 21:09:17.379840   61659 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0918 21:09:18.380791   61659 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001279947s
	I0918 21:09:18.380906   61659 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0918 21:09:18.380783   61740 pod_ready.go:82] duration metric: took 4m0.000205104s for pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace to be "Ready" ...
	E0918 21:09:18.380812   61740 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-z8rm7" in "kube-system" namespace to be "Ready" (will not retry!)
	I0918 21:09:18.380832   61740 pod_ready.go:39] duration metric: took 4m15.618837854s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0918 21:09:18.380875   61740 kubeadm.go:597] duration metric: took 4m23.646410044s to restartPrimaryControlPlane
	W0918 21:09:18.380936   61740 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0918 21:09:18.380966   61740 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0918 21:09:23.386705   61659 kubeadm.go:310] [api-check] The API server is healthy after 5.005706581s
	I0918 21:09:23.402316   61659 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0918 21:09:23.422786   61659 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0918 21:09:23.462099   61659 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0918 21:09:23.462373   61659 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-828868 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0918 21:09:23.484276   61659 kubeadm.go:310] [bootstrap-token] Using token: 2vcil8.e13zhc1806da8knq
	I0918 21:09:19.782266   62061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:09:19.799433   62061 kubeadm.go:597] duration metric: took 4m2.126311085s to restartPrimaryControlPlane
	W0918 21:09:19.799513   62061 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0918 21:09:19.799543   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0918 21:09:20.910192   62061 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.110625463s)
	I0918 21:09:20.910273   62061 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0918 21:09:20.925992   62061 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0918 21:09:20.936876   62061 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0918 21:09:20.947170   62061 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0918 21:09:20.947199   62061 kubeadm.go:157] found existing configuration files:
	
	I0918 21:09:20.947255   62061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0918 21:09:20.958140   62061 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0918 21:09:20.958240   62061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0918 21:09:20.968351   62061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0918 21:09:20.978669   62061 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0918 21:09:20.978735   62061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0918 21:09:20.989765   62061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0918 21:09:20.999842   62061 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0918 21:09:20.999903   62061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0918 21:09:21.009945   62061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0918 21:09:21.020229   62061 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0918 21:09:21.020289   62061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0918 21:09:21.030583   62061 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0918 21:09:21.271399   62061 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
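The Service-Kubelet preflight warning above is kubeadm's own hint and is harmless in this flow (minikube starts the kubelet itself); on a regular host the fix the warning suggests is simply:

  sudo systemctl enable kubelet.service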
	I0918 21:09:23.485978   61659 out.go:235]   - Configuring RBAC rules ...
	I0918 21:09:23.486112   61659 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0918 21:09:23.499163   61659 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0918 21:09:23.510754   61659 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0918 21:09:23.514794   61659 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0918 21:09:23.519247   61659 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0918 21:09:23.530424   61659 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0918 21:09:23.799778   61659 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0918 21:09:24.223469   61659 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0918 21:09:24.794852   61659 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0918 21:09:24.794886   61659 kubeadm.go:310] 
	I0918 21:09:24.794951   61659 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0918 21:09:24.794963   61659 kubeadm.go:310] 
	I0918 21:09:24.795058   61659 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0918 21:09:24.795073   61659 kubeadm.go:310] 
	I0918 21:09:24.795105   61659 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0918 21:09:24.795192   61659 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0918 21:09:24.795255   61659 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0918 21:09:24.795285   61659 kubeadm.go:310] 
	I0918 21:09:24.795366   61659 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0918 21:09:24.795376   61659 kubeadm.go:310] 
	I0918 21:09:24.795416   61659 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0918 21:09:24.795425   61659 kubeadm.go:310] 
	I0918 21:09:24.795497   61659 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0918 21:09:24.795580   61659 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0918 21:09:24.795678   61659 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0918 21:09:24.795692   61659 kubeadm.go:310] 
	I0918 21:09:24.795779   61659 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0918 21:09:24.795891   61659 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0918 21:09:24.795901   61659 kubeadm.go:310] 
	I0918 21:09:24.796174   61659 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token 2vcil8.e13zhc1806da8knq \
	I0918 21:09:24.796299   61659 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:8ad46bf288e8cc2226f5a929a768e6c95cddecc97ffc4016be27ef0987b1e36f \
	I0918 21:09:24.796350   61659 kubeadm.go:310] 	--control-plane 
	I0918 21:09:24.796367   61659 kubeadm.go:310] 
	I0918 21:09:24.796479   61659 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0918 21:09:24.796487   61659 kubeadm.go:310] 
	I0918 21:09:24.796594   61659 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token 2vcil8.e13zhc1806da8knq \
	I0918 21:09:24.796738   61659 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:8ad46bf288e8cc2226f5a929a768e6c95cddecc97ffc4016be27ef0987b1e36f 
	I0918 21:09:24.797359   61659 kubeadm.go:310] W0918 21:09:16.134048    2561 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0918 21:09:24.797679   61659 kubeadm.go:310] W0918 21:09:16.134873    2561 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0918 21:09:24.797832   61659 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
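Of the warnings kubeadm prints at the end of init, the two W-lines about the deprecated kubeadm.k8s.io/v1beta3 spec are informational for this run; the migration kubeadm itself suggests in the warning text is:

  kubeadm config migrate --old-config old.yaml --new-config new.yaml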
	I0918 21:09:24.797858   61659 cni.go:84] Creating CNI manager for ""
	I0918 21:09:24.797872   61659 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0918 21:09:24.799953   61659 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0918 21:09:22.357582   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:24.857037   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:24.801259   61659 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0918 21:09:24.812277   61659 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
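Only the size of the bridge CNI config is logged (496 bytes copied to /etc/cni/net.d/1-k8s.conflist), not its contents. If needed, the file can be read back on the node, for example over minikube ssh for this profile:

  sudo cat /etc/cni/net.d/1-k8s.conflist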
	I0918 21:09:24.834749   61659 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0918 21:09:24.834855   61659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 21:09:24.834871   61659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-828868 minikube.k8s.io/updated_at=2024_09_18T21_09_24_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=85073601a832bd4bbda5d11fa91feafff6ec6b91 minikube.k8s.io/name=default-k8s-diff-port-828868 minikube.k8s.io/primary=true
	I0918 21:09:25.022861   61659 ops.go:34] apiserver oom_adj: -16
	I0918 21:09:25.022930   61659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 21:09:25.523400   61659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 21:09:26.023075   61659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 21:09:26.523330   61659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 21:09:27.023179   61659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 21:09:27.523363   61659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 21:09:28.023150   61659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 21:09:28.523941   61659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 21:09:29.023542   61659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 21:09:29.143581   61659 kubeadm.go:1113] duration metric: took 4.308796493s to wait for elevateKubeSystemPrivileges
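The burst of repeated `kubectl get sa default` calls above is minikube polling (about every half second here) until the default service account exists, right after it grants kube-system:default cluster-admin through the minikube-rbac clusterrolebinding and labels the node as primary. The two one-shot commands behind that, as logged:

  sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac \
    --clusterrole=cluster-admin --serviceaccount=kube-system:default \
    --kubeconfig=/var/lib/minikube/kubeconfig
  sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig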
	I0918 21:09:29.143614   61659 kubeadm.go:394] duration metric: took 4m55.024616229s to StartCluster
	I0918 21:09:29.143632   61659 settings.go:142] acquiring lock: {Name:mk6ae95a69dcbe00c28846409e9f46945a88de2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 21:09:29.143727   61659 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19667-7671/kubeconfig
	I0918 21:09:29.145397   61659 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/kubeconfig: {Name:mk3a3eecadb0164dfdd5d3c4392b5b473a2a9bb0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 21:09:29.145680   61659 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.109 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0918 21:09:29.145767   61659 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0918 21:09:29.145851   61659 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-828868"
	I0918 21:09:29.145869   61659 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-828868"
	I0918 21:09:29.145877   61659 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-828868"
	W0918 21:09:29.145885   61659 addons.go:243] addon storage-provisioner should already be in state true
	I0918 21:09:29.145896   61659 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-828868"
	I0918 21:09:29.145898   61659 config.go:182] Loaded profile config "default-k8s-diff-port-828868": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0918 21:09:29.145900   61659 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-828868"
	I0918 21:09:29.145920   61659 host.go:66] Checking if "default-k8s-diff-port-828868" exists ...
	I0918 21:09:29.145932   61659 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-828868"
	W0918 21:09:29.145946   61659 addons.go:243] addon metrics-server should already be in state true
	I0918 21:09:29.145980   61659 host.go:66] Checking if "default-k8s-diff-port-828868" exists ...
	I0918 21:09:29.146234   61659 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:09:29.146238   61659 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:09:29.146282   61659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:09:29.146297   61659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:09:29.146372   61659 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:09:29.146389   61659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:09:29.147645   61659 out.go:177] * Verifying Kubernetes components...
	I0918 21:09:29.149574   61659 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 21:09:29.164779   61659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43359
	I0918 21:09:29.165002   61659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37857
	I0918 21:09:29.165390   61659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44573
	I0918 21:09:29.165682   61659 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:09:29.165749   61659 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:09:29.166233   61659 main.go:141] libmachine: Using API Version  1
	I0918 21:09:29.166254   61659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:09:29.166270   61659 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:09:29.166388   61659 main.go:141] libmachine: Using API Version  1
	I0918 21:09:29.166414   61659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:09:29.166544   61659 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:09:29.166711   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetState
	I0918 21:09:29.166730   61659 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:09:29.166894   61659 main.go:141] libmachine: Using API Version  1
	I0918 21:09:29.166918   61659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:09:29.167381   61659 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:09:29.167425   61659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:09:29.168144   61659 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:09:29.168578   61659 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:09:29.168614   61659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:09:29.171072   61659 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-828868"
	W0918 21:09:29.171101   61659 addons.go:243] addon default-storageclass should already be in state true
	I0918 21:09:29.171133   61659 host.go:66] Checking if "default-k8s-diff-port-828868" exists ...
	I0918 21:09:29.171534   61659 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:09:29.171597   61659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:09:29.186305   61659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42211
	I0918 21:09:29.186318   61659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45153
	I0918 21:09:29.186838   61659 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:09:29.186847   61659 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:09:29.187353   61659 main.go:141] libmachine: Using API Version  1
	I0918 21:09:29.187367   61659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:09:29.187373   61659 main.go:141] libmachine: Using API Version  1
	I0918 21:09:29.187403   61659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:09:29.187840   61659 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:09:29.187855   61659 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:09:29.188085   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetState
	I0918 21:09:29.188106   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetState
	I0918 21:09:29.193453   61659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33471
	I0918 21:09:29.193905   61659 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:09:29.194477   61659 main.go:141] libmachine: Using API Version  1
	I0918 21:09:29.194513   61659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:09:29.194981   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .DriverName
	I0918 21:09:29.195155   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .DriverName
	I0918 21:09:29.195254   61659 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:09:29.195807   61659 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:09:29.195839   61659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:09:29.197102   61659 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0918 21:09:29.197111   61659 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 21:09:29.198425   61659 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0918 21:09:29.198458   61659 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0918 21:09:29.198486   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHHostname
	I0918 21:09:29.198589   61659 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0918 21:09:29.198605   61659 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0918 21:09:29.198622   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHHostname
	I0918 21:09:29.202110   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:09:29.202236   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:09:29.202634   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:39:06", ip: ""} in network mk-default-k8s-diff-port-828868: {Iface:virbr4 ExpiryTime:2024-09-18 22:04:19 +0000 UTC Type:0 Mac:52:54:00:c0:39:06 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:default-k8s-diff-port-828868 Clientid:01:52:54:00:c0:39:06}
	I0918 21:09:29.202656   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:39:06", ip: ""} in network mk-default-k8s-diff-port-828868: {Iface:virbr4 ExpiryTime:2024-09-18 22:04:19 +0000 UTC Type:0 Mac:52:54:00:c0:39:06 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:default-k8s-diff-port-828868 Clientid:01:52:54:00:c0:39:06}
	I0918 21:09:29.202661   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined IP address 192.168.50.109 and MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:09:29.202677   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined IP address 192.168.50.109 and MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:09:29.202895   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHPort
	I0918 21:09:29.202942   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHPort
	I0918 21:09:29.203084   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHKeyPath
	I0918 21:09:29.203129   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHKeyPath
	I0918 21:09:29.203268   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHUsername
	I0918 21:09:29.203275   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHUsername
	I0918 21:09:29.203393   61659 sshutil.go:53] new ssh client: &{IP:192.168.50.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/default-k8s-diff-port-828868/id_rsa Username:docker}
	I0918 21:09:29.203407   61659 sshutil.go:53] new ssh client: &{IP:192.168.50.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/default-k8s-diff-port-828868/id_rsa Username:docker}
	I0918 21:09:29.215178   61659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41565
	I0918 21:09:29.215727   61659 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:09:29.216301   61659 main.go:141] libmachine: Using API Version  1
	I0918 21:09:29.216325   61659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:09:29.216669   61659 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:09:29.216873   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetState
	I0918 21:09:29.218689   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .DriverName
	I0918 21:09:29.218980   61659 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0918 21:09:29.218994   61659 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0918 21:09:29.219009   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHHostname
	I0918 21:09:29.222542   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:09:29.222963   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:39:06", ip: ""} in network mk-default-k8s-diff-port-828868: {Iface:virbr4 ExpiryTime:2024-09-18 22:04:19 +0000 UTC Type:0 Mac:52:54:00:c0:39:06 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:default-k8s-diff-port-828868 Clientid:01:52:54:00:c0:39:06}
	I0918 21:09:29.222985   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | domain default-k8s-diff-port-828868 has defined IP address 192.168.50.109 and MAC address 52:54:00:c0:39:06 in network mk-default-k8s-diff-port-828868
	I0918 21:09:29.223398   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHPort
	I0918 21:09:29.223632   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHKeyPath
	I0918 21:09:29.223820   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .GetSSHUsername
	I0918 21:09:29.224004   61659 sshutil.go:53] new ssh client: &{IP:192.168.50.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/default-k8s-diff-port-828868/id_rsa Username:docker}
	I0918 21:09:29.360595   61659 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0918 21:09:29.381254   61659 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-828868" to be "Ready" ...
	I0918 21:09:29.390526   61659 node_ready.go:49] node "default-k8s-diff-port-828868" has status "Ready":"True"
	I0918 21:09:29.390554   61659 node_ready.go:38] duration metric: took 9.264338ms for node "default-k8s-diff-port-828868" to be "Ready" ...
	I0918 21:09:29.390565   61659 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0918 21:09:29.395433   61659 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-828868" in "kube-system" namespace to be "Ready" ...
	I0918 21:09:29.468492   61659 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0918 21:09:29.526515   61659 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0918 21:09:29.527137   61659 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0918 21:09:29.527162   61659 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0918 21:09:29.570619   61659 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0918 21:09:29.570651   61659 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0918 21:09:29.631944   61659 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0918 21:09:29.631975   61659 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0918 21:09:29.653905   61659 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0918 21:09:30.402107   61659 main.go:141] libmachine: Making call to close driver server
	I0918 21:09:30.402145   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .Close
	I0918 21:09:30.402142   61659 main.go:141] libmachine: Making call to close driver server
	I0918 21:09:30.402167   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .Close
	I0918 21:09:30.402466   61659 main.go:141] libmachine: Successfully made call to close driver server
	I0918 21:09:30.402480   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) DBG | Closing plugin on server side
	I0918 21:09:30.402493   61659 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 21:09:30.402503   61659 main.go:141] libmachine: Making call to close driver server
	I0918 21:09:30.402512   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .Close
	I0918 21:09:30.402537   61659 main.go:141] libmachine: Successfully made call to close driver server
	I0918 21:09:30.402546   61659 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 21:09:30.402555   61659 main.go:141] libmachine: Making call to close driver server
	I0918 21:09:30.402571   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .Close
	I0918 21:09:30.402733   61659 main.go:141] libmachine: Successfully made call to close driver server
	I0918 21:09:30.402773   61659 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 21:09:30.402921   61659 main.go:141] libmachine: Successfully made call to close driver server
	I0918 21:09:30.402941   61659 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 21:09:30.435323   61659 main.go:141] libmachine: Making call to close driver server
	I0918 21:09:30.435366   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .Close
	I0918 21:09:30.435659   61659 main.go:141] libmachine: Successfully made call to close driver server
	I0918 21:09:30.435683   61659 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 21:09:30.975630   61659 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.321677798s)
	I0918 21:09:30.975716   61659 main.go:141] libmachine: Making call to close driver server
	I0918 21:09:30.975733   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .Close
	I0918 21:09:30.976074   61659 main.go:141] libmachine: Successfully made call to close driver server
	I0918 21:09:30.976094   61659 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 21:09:30.976105   61659 main.go:141] libmachine: Making call to close driver server
	I0918 21:09:30.976113   61659 main.go:141] libmachine: (default-k8s-diff-port-828868) Calling .Close
	I0918 21:09:30.976369   61659 main.go:141] libmachine: Successfully made call to close driver server
	I0918 21:09:30.976395   61659 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 21:09:30.976406   61659 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-828868"
	I0918 21:09:30.978345   61659 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0918 21:09:26.857486   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:29.356533   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:31.358269   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:30.979731   61659 addons.go:510] duration metric: took 1.833970994s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
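With storage-provisioner, default-storageclass, and metrics-server enabled, the state the test waits for below can also be checked by hand; assuming the usual minikube convention that the kubectl context is named after the profile, for example:

  kubectl --context default-k8s-diff-port-828868 -n kube-system get pods
  kubectl --context default-k8s-diff-port-828868 get storageclass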
	I0918 21:09:31.403620   61659 pod_ready.go:103] pod "etcd-default-k8s-diff-port-828868" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:33.857960   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:36.357454   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:33.902436   61659 pod_ready.go:103] pod "etcd-default-k8s-diff-port-828868" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:36.401889   61659 pod_ready.go:103] pod "etcd-default-k8s-diff-port-828868" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:36.902002   61659 pod_ready.go:93] pod "etcd-default-k8s-diff-port-828868" in "kube-system" namespace has status "Ready":"True"
	I0918 21:09:36.902026   61659 pod_ready.go:82] duration metric: took 7.506563242s for pod "etcd-default-k8s-diff-port-828868" in "kube-system" namespace to be "Ready" ...
	I0918 21:09:36.902035   61659 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-828868" in "kube-system" namespace to be "Ready" ...
	I0918 21:09:36.907689   61659 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-828868" in "kube-system" namespace has status "Ready":"True"
	I0918 21:09:36.907713   61659 pod_ready.go:82] duration metric: took 5.672631ms for pod "kube-apiserver-default-k8s-diff-port-828868" in "kube-system" namespace to be "Ready" ...
	I0918 21:09:36.907722   61659 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-828868" in "kube-system" namespace to be "Ready" ...
	I0918 21:09:38.914521   61659 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-828868" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:39.414168   61659 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-828868" in "kube-system" namespace has status "Ready":"True"
	I0918 21:09:39.414196   61659 pod_ready.go:82] duration metric: took 2.506467297s for pod "kube-controller-manager-default-k8s-diff-port-828868" in "kube-system" namespace to be "Ready" ...
	I0918 21:09:39.414207   61659 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-hf5mm" in "kube-system" namespace to be "Ready" ...
	I0918 21:09:39.419030   61659 pod_ready.go:93] pod "kube-proxy-hf5mm" in "kube-system" namespace has status "Ready":"True"
	I0918 21:09:39.419053   61659 pod_ready.go:82] duration metric: took 4.838856ms for pod "kube-proxy-hf5mm" in "kube-system" namespace to be "Ready" ...
	I0918 21:09:39.419061   61659 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-828868" in "kube-system" namespace to be "Ready" ...
	I0918 21:09:39.423321   61659 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-828868" in "kube-system" namespace has status "Ready":"True"
	I0918 21:09:39.423341   61659 pod_ready.go:82] duration metric: took 4.274601ms for pod "kube-scheduler-default-k8s-diff-port-828868" in "kube-system" namespace to be "Ready" ...
	I0918 21:09:39.423348   61659 pod_ready.go:39] duration metric: took 10.03277208s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0918 21:09:39.423360   61659 api_server.go:52] waiting for apiserver process to appear ...
	I0918 21:09:39.423407   61659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:09:39.438272   61659 api_server.go:72] duration metric: took 10.292559807s to wait for apiserver process to appear ...
	I0918 21:09:39.438297   61659 api_server.go:88] waiting for apiserver healthz status ...
	I0918 21:09:39.438315   61659 api_server.go:253] Checking apiserver healthz at https://192.168.50.109:8444/healthz ...
	I0918 21:09:39.443342   61659 api_server.go:279] https://192.168.50.109:8444/healthz returned 200:
	ok
	I0918 21:09:39.444238   61659 api_server.go:141] control plane version: v1.31.1
	I0918 21:09:39.444262   61659 api_server.go:131] duration metric: took 5.958748ms to wait for apiserver health ...
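The healthz probe above is a plain HTTPS GET against the apiserver port for this profile (8444); the same check by hand, skipping CA verification for a quick look:

  curl -k https://192.168.50.109:8444/healthz
  # expected body: ok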
	I0918 21:09:39.444270   61659 system_pods.go:43] waiting for kube-system pods to appear ...
	I0918 21:09:39.449914   61659 system_pods.go:59] 9 kube-system pods found
	I0918 21:09:39.449938   61659 system_pods.go:61] "coredns-7c65d6cfc9-8gz5v" [bfcbe99d-9a3d-4da8-a976-58cb9d9bf7ac] Running
	I0918 21:09:39.449942   61659 system_pods.go:61] "coredns-7c65d6cfc9-shx5p" [2d6d25ab-9a90-490a-911b-bf396605fa88] Running
	I0918 21:09:39.449947   61659 system_pods.go:61] "etcd-default-k8s-diff-port-828868" [d9772e35-df33-4b90-af88-7479a81776a2] Running
	I0918 21:09:39.449950   61659 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-828868" [9ca05dc8-1e15-4f6e-9855-1e1b6376fbfc] Running
	I0918 21:09:39.449954   61659 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-828868" [b4196fd3-4882-4e31-b418-2f5f24d2610d] Running
	I0918 21:09:39.449957   61659 system_pods.go:61] "kube-proxy-hf5mm" [3fb0a166-a925-4486-9695-6db05ae704b8] Running
	I0918 21:09:39.449962   61659 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-828868" [161b8693-3400-43a4-b361-cce5749dd2eb] Running
	I0918 21:09:39.449969   61659 system_pods.go:61] "metrics-server-6867b74b74-hdt52" [bf3aa7d0-a121-4a2c-92dd-2c79c6cbaa4a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0918 21:09:39.449976   61659 system_pods.go:61] "storage-provisioner" [8b4e1077-e23f-4262-b83e-989506798531] Running
	I0918 21:09:39.449983   61659 system_pods.go:74] duration metric: took 5.708013ms to wait for pod list to return data ...
	I0918 21:09:39.449992   61659 default_sa.go:34] waiting for default service account to be created ...
	I0918 21:09:39.453256   61659 default_sa.go:45] found service account: "default"
	I0918 21:09:39.453278   61659 default_sa.go:55] duration metric: took 3.281012ms for default service account to be created ...
	I0918 21:09:39.453287   61659 system_pods.go:116] waiting for k8s-apps to be running ...
	I0918 21:09:39.502200   61659 system_pods.go:86] 9 kube-system pods found
	I0918 21:09:39.502231   61659 system_pods.go:89] "coredns-7c65d6cfc9-8gz5v" [bfcbe99d-9a3d-4da8-a976-58cb9d9bf7ac] Running
	I0918 21:09:39.502237   61659 system_pods.go:89] "coredns-7c65d6cfc9-shx5p" [2d6d25ab-9a90-490a-911b-bf396605fa88] Running
	I0918 21:09:39.502241   61659 system_pods.go:89] "etcd-default-k8s-diff-port-828868" [d9772e35-df33-4b90-af88-7479a81776a2] Running
	I0918 21:09:39.502246   61659 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-828868" [9ca05dc8-1e15-4f6e-9855-1e1b6376fbfc] Running
	I0918 21:09:39.502250   61659 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-828868" [b4196fd3-4882-4e31-b418-2f5f24d2610d] Running
	I0918 21:09:39.502253   61659 system_pods.go:89] "kube-proxy-hf5mm" [3fb0a166-a925-4486-9695-6db05ae704b8] Running
	I0918 21:09:39.502256   61659 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-828868" [161b8693-3400-43a4-b361-cce5749dd2eb] Running
	I0918 21:09:39.502262   61659 system_pods.go:89] "metrics-server-6867b74b74-hdt52" [bf3aa7d0-a121-4a2c-92dd-2c79c6cbaa4a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0918 21:09:39.502266   61659 system_pods.go:89] "storage-provisioner" [8b4e1077-e23f-4262-b83e-989506798531] Running
	I0918 21:09:39.502276   61659 system_pods.go:126] duration metric: took 48.981872ms to wait for k8s-apps to be running ...
	I0918 21:09:39.502291   61659 system_svc.go:44] waiting for kubelet service to be running ....
	I0918 21:09:39.502367   61659 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0918 21:09:39.517514   61659 system_svc.go:56] duration metric: took 15.213443ms WaitForService to wait for kubelet
	I0918 21:09:39.517549   61659 kubeadm.go:582] duration metric: took 10.37183977s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0918 21:09:39.517573   61659 node_conditions.go:102] verifying NodePressure condition ...
	I0918 21:09:39.700593   61659 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0918 21:09:39.700616   61659 node_conditions.go:123] node cpu capacity is 2
	I0918 21:09:39.700626   61659 node_conditions.go:105] duration metric: took 183.048537ms to run NodePressure ...
	I0918 21:09:39.700637   61659 start.go:241] waiting for startup goroutines ...
	I0918 21:09:39.700643   61659 start.go:246] waiting for cluster config update ...
	I0918 21:09:39.700653   61659 start.go:255] writing updated cluster config ...
	I0918 21:09:39.700899   61659 ssh_runner.go:195] Run: rm -f paused
	I0918 21:09:39.750890   61659 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0918 21:09:39.753015   61659 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-828868" cluster and "default" namespace by default
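	(Not part of the captured log.) After the "Done!" line above, the context that minikube reports configuring can be double-checked with ordinary kubectl commands; this is only a sketch, with the profile name default-k8s-diff-port-828868 taken from the log and everything else being standard kubectl usage.

	  # confirm the active context and default namespace minikube just configured
	  kubectl config current-context                               # expect: default-k8s-diff-port-828868
	  kubectl config view --minify -o jsonpath='{..namespace}'     # empty or "default"
	  kubectl get nodes -o wide                                    # the control-plane node should report Ready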
	I0918 21:09:38.857481   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:41.356307   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:44.581125   61740 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.200138695s)
	I0918 21:09:44.581198   61740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0918 21:09:44.597051   61740 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0918 21:09:44.607195   61740 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0918 21:09:44.617135   61740 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0918 21:09:44.617160   61740 kubeadm.go:157] found existing configuration files:
	
	I0918 21:09:44.617203   61740 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0918 21:09:44.626216   61740 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0918 21:09:44.626278   61740 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0918 21:09:44.635161   61740 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0918 21:09:44.643767   61740 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0918 21:09:44.643828   61740 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0918 21:09:44.652663   61740 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0918 21:09:44.662045   61740 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0918 21:09:44.662107   61740 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0918 21:09:44.671165   61740 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0918 21:09:44.680397   61740 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0918 21:09:44.680469   61740 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0918 21:09:44.689168   61740 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0918 21:09:44.733425   61740 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0918 21:09:44.733528   61740 kubeadm.go:310] [preflight] Running pre-flight checks
	I0918 21:09:44.846369   61740 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0918 21:09:44.846477   61740 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0918 21:09:44.846612   61740 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0918 21:09:44.855581   61740 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0918 21:09:44.857599   61740 out.go:235]   - Generating certificates and keys ...
	I0918 21:09:44.857709   61740 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0918 21:09:44.857777   61740 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0918 21:09:44.857851   61740 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0918 21:09:44.857942   61740 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0918 21:09:44.858061   61740 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0918 21:09:44.858137   61740 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0918 21:09:44.858243   61740 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0918 21:09:44.858339   61740 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0918 21:09:44.858409   61740 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0918 21:09:44.858509   61740 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0918 21:09:44.858547   61740 kubeadm.go:310] [certs] Using the existing "sa" key
	I0918 21:09:44.858615   61740 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0918 21:09:45.048967   61740 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0918 21:09:45.229640   61740 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0918 21:09:45.397078   61740 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0918 21:09:45.722116   61740 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0918 21:09:45.850285   61740 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0918 21:09:45.850902   61740 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0918 21:09:45.853909   61740 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0918 21:09:43.357136   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:45.858056   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:45.855803   61740 out.go:235]   - Booting up control plane ...
	I0918 21:09:45.855931   61740 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0918 21:09:45.857227   61740 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0918 21:09:45.858855   61740 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0918 21:09:45.877299   61740 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0918 21:09:45.883953   61740 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0918 21:09:45.884043   61740 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0918 21:09:46.015368   61740 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0918 21:09:46.015509   61740 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0918 21:09:47.016371   61740 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001062473s
	I0918 21:09:47.016465   61740 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0918 21:09:48.357057   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:50.856124   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:51.518808   61740 kubeadm.go:310] [api-check] The API server is healthy after 4.502250914s
	I0918 21:09:51.532148   61740 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0918 21:09:51.549560   61740 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0918 21:09:51.579801   61740 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0918 21:09:51.580053   61740 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-255556 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0918 21:09:51.598605   61740 kubeadm.go:310] [bootstrap-token] Using token: iilbxo.n0c6mbjmeqehlfso
	I0918 21:09:51.600035   61740 out.go:235]   - Configuring RBAC rules ...
	I0918 21:09:51.600200   61740 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0918 21:09:51.614672   61740 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0918 21:09:51.626186   61740 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0918 21:09:51.629722   61740 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0918 21:09:51.634757   61740 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0918 21:09:51.642778   61740 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0918 21:09:51.931051   61740 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0918 21:09:52.359085   61740 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0918 21:09:52.930191   61740 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0918 21:09:52.931033   61740 kubeadm.go:310] 
	I0918 21:09:52.931100   61740 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0918 21:09:52.931108   61740 kubeadm.go:310] 
	I0918 21:09:52.931178   61740 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0918 21:09:52.931186   61740 kubeadm.go:310] 
	I0918 21:09:52.931208   61740 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0918 21:09:52.931313   61740 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0918 21:09:52.931400   61740 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0918 21:09:52.931435   61740 kubeadm.go:310] 
	I0918 21:09:52.931524   61740 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0918 21:09:52.931537   61740 kubeadm.go:310] 
	I0918 21:09:52.931601   61740 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0918 21:09:52.931627   61740 kubeadm.go:310] 
	I0918 21:09:52.931721   61740 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0918 21:09:52.931825   61740 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0918 21:09:52.931896   61740 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0918 21:09:52.931903   61740 kubeadm.go:310] 
	I0918 21:09:52.931974   61740 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0918 21:09:52.932073   61740 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0918 21:09:52.932081   61740 kubeadm.go:310] 
	I0918 21:09:52.932154   61740 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token iilbxo.n0c6mbjmeqehlfso \
	I0918 21:09:52.932243   61740 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:8ad46bf288e8cc2226f5a929a768e6c95cddecc97ffc4016be27ef0987b1e36f \
	I0918 21:09:52.932289   61740 kubeadm.go:310] 	--control-plane 
	I0918 21:09:52.932296   61740 kubeadm.go:310] 
	I0918 21:09:52.932365   61740 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0918 21:09:52.932372   61740 kubeadm.go:310] 
	I0918 21:09:52.932438   61740 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token iilbxo.n0c6mbjmeqehlfso \
	I0918 21:09:52.932568   61740 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:8ad46bf288e8cc2226f5a929a768e6c95cddecc97ffc4016be27ef0987b1e36f 
	I0918 21:09:52.934280   61740 kubeadm.go:310] W0918 21:09:44.705437    2512 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0918 21:09:52.934656   61740 kubeadm.go:310] W0918 21:09:44.706219    2512 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0918 21:09:52.934841   61740 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0918 21:09:52.934861   61740 cni.go:84] Creating CNI manager for ""
	I0918 21:09:52.934871   61740 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0918 21:09:52.937656   61740 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0918 21:09:52.939150   61740 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0918 21:09:52.950774   61740 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
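	(Not part of the captured log.) The 496-byte bridge conflist copied above is not reproduced in the output; assuming the embed-certs-255556 profile is still running, its contents can be inspected on the node itself with minikube ssh — a sketch, with only the profile name and file path taken from the log.

	  # dump the bridge CNI config that was written to the node
	  minikube -p embed-certs-255556 ssh "sudo cat /etc/cni/net.d/1-k8s.conflist"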
	I0918 21:09:52.973081   61740 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0918 21:09:52.973161   61740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 21:09:52.973210   61740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-255556 minikube.k8s.io/updated_at=2024_09_18T21_09_52_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=85073601a832bd4bbda5d11fa91feafff6ec6b91 minikube.k8s.io/name=embed-certs-255556 minikube.k8s.io/primary=true
	I0918 21:09:53.012402   61740 ops.go:34] apiserver oom_adj: -16
	I0918 21:09:53.180983   61740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 21:09:52.857161   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:55.357515   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:53.681852   61740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 21:09:54.181892   61740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 21:09:54.681768   61740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 21:09:55.181353   61740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 21:09:55.681336   61740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 21:09:56.181389   61740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 21:09:56.681574   61740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 21:09:57.181050   61740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 21:09:57.258766   61740 kubeadm.go:1113] duration metric: took 4.285672952s to wait for elevateKubeSystemPrivileges
	I0918 21:09:57.258809   61740 kubeadm.go:394] duration metric: took 5m2.572577294s to StartCluster
	I0918 21:09:57.258831   61740 settings.go:142] acquiring lock: {Name:mk6ae95a69dcbe00c28846409e9f46945a88de2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 21:09:57.258925   61740 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19667-7671/kubeconfig
	I0918 21:09:57.260757   61740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/kubeconfig: {Name:mk3a3eecadb0164dfdd5d3c4392b5b473a2a9bb0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 21:09:57.261072   61740 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.21 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0918 21:09:57.261168   61740 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0918 21:09:57.261275   61740 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-255556"
	I0918 21:09:57.261302   61740 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-255556"
	W0918 21:09:57.261314   61740 addons.go:243] addon storage-provisioner should already be in state true
	I0918 21:09:57.261344   61740 host.go:66] Checking if "embed-certs-255556" exists ...
	I0918 21:09:57.261337   61740 addons.go:69] Setting default-storageclass=true in profile "embed-certs-255556"
	I0918 21:09:57.261366   61740 config.go:182] Loaded profile config "embed-certs-255556": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0918 21:09:57.261363   61740 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-255556"
	I0918 21:09:57.261354   61740 addons.go:69] Setting metrics-server=true in profile "embed-certs-255556"
	I0918 21:09:57.261413   61740 addons.go:234] Setting addon metrics-server=true in "embed-certs-255556"
	W0918 21:09:57.261423   61740 addons.go:243] addon metrics-server should already be in state true
	I0918 21:09:57.261450   61740 host.go:66] Checking if "embed-certs-255556" exists ...
	I0918 21:09:57.261751   61740 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:09:57.261773   61740 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:09:57.261797   61740 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:09:57.261805   61740 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:09:57.261827   61740 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:09:57.261913   61740 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:09:57.263016   61740 out.go:177] * Verifying Kubernetes components...
	I0918 21:09:57.264732   61740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 21:09:57.279143   61740 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41499
	I0918 21:09:57.279741   61740 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38525
	I0918 21:09:57.279948   61740 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:09:57.280150   61740 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:09:57.280518   61740 main.go:141] libmachine: Using API Version  1
	I0918 21:09:57.280536   61740 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:09:57.280662   61740 main.go:141] libmachine: Using API Version  1
	I0918 21:09:57.280699   61740 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:09:57.280899   61740 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:09:57.281014   61740 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:09:57.281224   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetState
	I0918 21:09:57.281401   61740 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45081
	I0918 21:09:57.281609   61740 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:09:57.281669   61740 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:09:57.281824   61740 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:09:57.282291   61740 main.go:141] libmachine: Using API Version  1
	I0918 21:09:57.282316   61740 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:09:57.282655   61740 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:09:57.283166   61740 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:09:57.283198   61740 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:09:57.284993   61740 addons.go:234] Setting addon default-storageclass=true in "embed-certs-255556"
	W0918 21:09:57.285013   61740 addons.go:243] addon default-storageclass should already be in state true
	I0918 21:09:57.285042   61740 host.go:66] Checking if "embed-certs-255556" exists ...
	I0918 21:09:57.285400   61740 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:09:57.285441   61740 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:09:57.298996   61740 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39709
	I0918 21:09:57.299572   61740 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:09:57.300427   61740 main.go:141] libmachine: Using API Version  1
	I0918 21:09:57.300453   61740 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:09:57.300865   61740 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:09:57.301062   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetState
	I0918 21:09:57.301827   61740 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40441
	I0918 21:09:57.302410   61740 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:09:57.302948   61740 main.go:141] libmachine: Using API Version  1
	I0918 21:09:57.302968   61740 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:09:57.303284   61740 main.go:141] libmachine: (embed-certs-255556) Calling .DriverName
	I0918 21:09:57.303333   61740 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:09:57.303512   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetState
	I0918 21:09:57.304409   61740 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43125
	I0918 21:09:57.304836   61740 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:09:57.305379   61740 main.go:141] libmachine: Using API Version  1
	I0918 21:09:57.305393   61740 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:09:57.305423   61740 main.go:141] libmachine: (embed-certs-255556) Calling .DriverName
	I0918 21:09:57.305449   61740 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 21:09:57.305705   61740 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:09:57.306221   61740 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19667-7671/.minikube/bin/docker-machine-driver-kvm2
	I0918 21:09:57.306270   61740 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 21:09:57.306972   61740 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0918 21:09:57.307226   61740 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0918 21:09:57.307247   61740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0918 21:09:57.307261   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHHostname
	I0918 21:09:57.308757   61740 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0918 21:09:57.308778   61740 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0918 21:09:57.308798   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHHostname
	I0918 21:09:57.311608   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:09:57.312311   61740 main.go:141] libmachine: (embed-certs-255556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:c2:b7", ip: ""} in network mk-embed-certs-255556: {Iface:virbr1 ExpiryTime:2024-09-18 22:04:39 +0000 UTC Type:0 Mac:52:54:00:e8:c2:b7 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:embed-certs-255556 Clientid:01:52:54:00:e8:c2:b7}
	I0918 21:09:57.312346   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined IP address 192.168.39.21 and MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:09:57.312529   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHPort
	I0918 21:09:57.313308   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:09:57.313344   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHKeyPath
	I0918 21:09:57.313533   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHUsername
	I0918 21:09:57.313707   61740 sshutil.go:53] new ssh client: &{IP:192.168.39.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/embed-certs-255556/id_rsa Username:docker}
	I0918 21:09:57.313964   61740 main.go:141] libmachine: (embed-certs-255556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:c2:b7", ip: ""} in network mk-embed-certs-255556: {Iface:virbr1 ExpiryTime:2024-09-18 22:04:39 +0000 UTC Type:0 Mac:52:54:00:e8:c2:b7 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:embed-certs-255556 Clientid:01:52:54:00:e8:c2:b7}
	I0918 21:09:57.313991   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined IP address 192.168.39.21 and MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:09:57.314181   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHPort
	I0918 21:09:57.314357   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHKeyPath
	I0918 21:09:57.314517   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHUsername
	I0918 21:09:57.314644   61740 sshutil.go:53] new ssh client: &{IP:192.168.39.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/embed-certs-255556/id_rsa Username:docker}
	I0918 21:09:57.325307   61740 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45641
	I0918 21:09:57.325800   61740 main.go:141] libmachine: () Calling .GetVersion
	I0918 21:09:57.326390   61740 main.go:141] libmachine: Using API Version  1
	I0918 21:09:57.326416   61740 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 21:09:57.326850   61740 main.go:141] libmachine: () Calling .GetMachineName
	I0918 21:09:57.327116   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetState
	I0918 21:09:57.328954   61740 main.go:141] libmachine: (embed-certs-255556) Calling .DriverName
	I0918 21:09:57.329179   61740 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0918 21:09:57.329197   61740 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0918 21:09:57.329216   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHHostname
	I0918 21:09:57.332176   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:09:57.332608   61740 main.go:141] libmachine: (embed-certs-255556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:c2:b7", ip: ""} in network mk-embed-certs-255556: {Iface:virbr1 ExpiryTime:2024-09-18 22:04:39 +0000 UTC Type:0 Mac:52:54:00:e8:c2:b7 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:embed-certs-255556 Clientid:01:52:54:00:e8:c2:b7}
	I0918 21:09:57.332633   61740 main.go:141] libmachine: (embed-certs-255556) DBG | domain embed-certs-255556 has defined IP address 192.168.39.21 and MAC address 52:54:00:e8:c2:b7 in network mk-embed-certs-255556
	I0918 21:09:57.332803   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHPort
	I0918 21:09:57.332991   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHKeyPath
	I0918 21:09:57.333132   61740 main.go:141] libmachine: (embed-certs-255556) Calling .GetSSHUsername
	I0918 21:09:57.333254   61740 sshutil.go:53] new ssh client: &{IP:192.168.39.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/embed-certs-255556/id_rsa Username:docker}
	I0918 21:09:57.463767   61740 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0918 21:09:57.480852   61740 node_ready.go:35] waiting up to 6m0s for node "embed-certs-255556" to be "Ready" ...
	I0918 21:09:57.492198   61740 node_ready.go:49] node "embed-certs-255556" has status "Ready":"True"
	I0918 21:09:57.492221   61740 node_ready.go:38] duration metric: took 11.335784ms for node "embed-certs-255556" to be "Ready" ...
	I0918 21:09:57.492229   61740 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0918 21:09:57.496607   61740 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-255556" in "kube-system" namespace to be "Ready" ...
	I0918 21:09:57.627581   61740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0918 21:09:57.631704   61740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0918 21:09:57.647778   61740 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0918 21:09:57.647799   61740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0918 21:09:57.686558   61740 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0918 21:09:57.686589   61740 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0918 21:09:57.726206   61740 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0918 21:09:57.726230   61740 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0918 21:09:57.831932   61740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0918 21:09:58.026530   61740 main.go:141] libmachine: Making call to close driver server
	I0918 21:09:58.026554   61740 main.go:141] libmachine: (embed-certs-255556) Calling .Close
	I0918 21:09:58.026862   61740 main.go:141] libmachine: Successfully made call to close driver server
	I0918 21:09:58.026885   61740 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 21:09:58.026895   61740 main.go:141] libmachine: Making call to close driver server
	I0918 21:09:58.026903   61740 main.go:141] libmachine: (embed-certs-255556) Calling .Close
	I0918 21:09:58.027205   61740 main.go:141] libmachine: Successfully made call to close driver server
	I0918 21:09:58.027260   61740 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 21:09:58.027269   61740 main.go:141] libmachine: (embed-certs-255556) DBG | Closing plugin on server side
	I0918 21:09:58.038140   61740 main.go:141] libmachine: Making call to close driver server
	I0918 21:09:58.038172   61740 main.go:141] libmachine: (embed-certs-255556) Calling .Close
	I0918 21:09:58.038506   61740 main.go:141] libmachine: Successfully made call to close driver server
	I0918 21:09:58.038555   61740 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 21:09:58.038512   61740 main.go:141] libmachine: (embed-certs-255556) DBG | Closing plugin on server side
	I0918 21:09:58.551479   61740 main.go:141] libmachine: Making call to close driver server
	I0918 21:09:58.551518   61740 main.go:141] libmachine: (embed-certs-255556) Calling .Close
	I0918 21:09:58.551851   61740 main.go:141] libmachine: Successfully made call to close driver server
	I0918 21:09:58.551870   61740 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 21:09:58.551885   61740 main.go:141] libmachine: Making call to close driver server
	I0918 21:09:58.551893   61740 main.go:141] libmachine: (embed-certs-255556) Calling .Close
	I0918 21:09:58.552242   61740 main.go:141] libmachine: Successfully made call to close driver server
	I0918 21:09:58.552307   61740 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 21:09:58.552326   61740 main.go:141] libmachine: (embed-certs-255556) DBG | Closing plugin on server side
	I0918 21:09:59.078469   61740 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.246485041s)
	I0918 21:09:59.078532   61740 main.go:141] libmachine: Making call to close driver server
	I0918 21:09:59.078550   61740 main.go:141] libmachine: (embed-certs-255556) Calling .Close
	I0918 21:09:59.078883   61740 main.go:141] libmachine: Successfully made call to close driver server
	I0918 21:09:59.078906   61740 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 21:09:59.078917   61740 main.go:141] libmachine: Making call to close driver server
	I0918 21:09:59.078924   61740 main.go:141] libmachine: (embed-certs-255556) Calling .Close
	I0918 21:09:59.079143   61740 main.go:141] libmachine: Successfully made call to close driver server
	I0918 21:09:59.079157   61740 main.go:141] libmachine: Making call to close connection to plugin binary
	I0918 21:09:59.079168   61740 addons.go:475] Verifying addon metrics-server=true in "embed-certs-255556"
	I0918 21:09:59.080861   61740 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0918 21:09:57.357619   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:59.357838   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:09:59.082145   61740 addons.go:510] duration metric: took 1.82098849s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
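	(Not part of the captured log.) Throughout these logs the metrics-server pod stays Pending / ContainersNotReady, and the addon was pointed at the image fake.domain/registry.k8s.io/echoserver:1.4 earlier in the output, which presumably cannot be pulled. Outside the log, the usual way to confirm what is blocking it looks like the sketch below; the k8s-app=metrics-server label and the v1beta1.metrics.k8s.io APIService name are the standard metrics-server ones, not values taken from this log.

	  # inspect the stuck metrics-server addon
	  kubectl -n kube-system get pods -l k8s-app=metrics-server
	  kubectl -n kube-system describe deploy metrics-server | grep -i image
	  kubectl get apiservice v1beta1.metrics.k8s.io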
	I0918 21:09:59.526424   61740 pod_ready.go:93] pod "etcd-embed-certs-255556" in "kube-system" namespace has status "Ready":"True"
	I0918 21:09:59.526445   61740 pod_ready.go:82] duration metric: took 2.02981732s for pod "etcd-embed-certs-255556" in "kube-system" namespace to be "Ready" ...
	I0918 21:09:59.526455   61740 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-255556" in "kube-system" namespace to be "Ready" ...
	I0918 21:10:00.033589   61740 pod_ready.go:93] pod "kube-apiserver-embed-certs-255556" in "kube-system" namespace has status "Ready":"True"
	I0918 21:10:00.033616   61740 pod_ready.go:82] duration metric: took 507.155125ms for pod "kube-apiserver-embed-certs-255556" in "kube-system" namespace to be "Ready" ...
	I0918 21:10:00.033630   61740 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-255556" in "kube-system" namespace to be "Ready" ...
	I0918 21:10:02.039884   61740 pod_ready.go:103] pod "kube-controller-manager-embed-certs-255556" in "kube-system" namespace has status "Ready":"False"
	I0918 21:10:04.040760   61740 pod_ready.go:103] pod "kube-controller-manager-embed-certs-255556" in "kube-system" namespace has status "Ready":"False"
	I0918 21:10:04.541799   61740 pod_ready.go:93] pod "kube-controller-manager-embed-certs-255556" in "kube-system" namespace has status "Ready":"True"
	I0918 21:10:04.541821   61740 pod_ready.go:82] duration metric: took 4.508184279s for pod "kube-controller-manager-embed-certs-255556" in "kube-system" namespace to be "Ready" ...
	I0918 21:10:04.541830   61740 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-255556" in "kube-system" namespace to be "Ready" ...
	I0918 21:10:04.550008   61740 pod_ready.go:93] pod "kube-scheduler-embed-certs-255556" in "kube-system" namespace has status "Ready":"True"
	I0918 21:10:04.550038   61740 pod_ready.go:82] duration metric: took 8.201765ms for pod "kube-scheduler-embed-certs-255556" in "kube-system" namespace to be "Ready" ...
	I0918 21:10:04.550046   61740 pod_ready.go:39] duration metric: took 7.057808243s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0918 21:10:04.550060   61740 api_server.go:52] waiting for apiserver process to appear ...
	I0918 21:10:04.550110   61740 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:10:04.566882   61740 api_server.go:72] duration metric: took 7.305767858s to wait for apiserver process to appear ...
	I0918 21:10:04.566914   61740 api_server.go:88] waiting for apiserver healthz status ...
	I0918 21:10:04.566937   61740 api_server.go:253] Checking apiserver healthz at https://192.168.39.21:8443/healthz ...
	I0918 21:10:04.571495   61740 api_server.go:279] https://192.168.39.21:8443/healthz returned 200:
	ok
	I0918 21:10:04.572590   61740 api_server.go:141] control plane version: v1.31.1
	I0918 21:10:04.572618   61740 api_server.go:131] duration metric: took 5.69747ms to wait for apiserver health ...
	I0918 21:10:04.572625   61740 system_pods.go:43] waiting for kube-system pods to appear ...
	I0918 21:10:04.578979   61740 system_pods.go:59] 9 kube-system pods found
	I0918 21:10:04.579019   61740 system_pods.go:61] "coredns-7c65d6cfc9-ptxbt" [798665e6-6f4a-4ba5-b4f9-3192d3f76f03] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0918 21:10:04.579030   61740 system_pods.go:61] "coredns-7c65d6cfc9-vgmtd" [1224ebf9-1b24-413a-b779-093acfcfb61e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0918 21:10:04.579039   61740 system_pods.go:61] "etcd-embed-certs-255556" [a796cd6d-5b0c-4f73-9dfe-23c552cb2a43] Running
	I0918 21:10:04.579046   61740 system_pods.go:61] "kube-apiserver-embed-certs-255556" [f1926542-940b-4ba9-94cf-23265d42874c] Running
	I0918 21:10:04.579051   61740 system_pods.go:61] "kube-controller-manager-embed-certs-255556" [cd0bc9bb-ca2c-435f-81d5-2d4ea9f32d85] Running
	I0918 21:10:04.579057   61740 system_pods.go:61] "kube-proxy-m7gxh" [47d72a32-7efc-4155-a890-0ddc620af6e0] Running
	I0918 21:10:04.579067   61740 system_pods.go:61] "kube-scheduler-embed-certs-255556" [091c79cd-bc76-42b3-9635-96fdbe3ecfff] Running
	I0918 21:10:04.579076   61740 system_pods.go:61] "metrics-server-6867b74b74-sr6hq" [8867f8fa-687b-4105-8ace-18af50195726] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0918 21:10:04.579085   61740 system_pods.go:61] "storage-provisioner" [dcc9b789-237c-4d92-96c6-2c23d2c401c0] Running
	I0918 21:10:04.579095   61740 system_pods.go:74] duration metric: took 6.462809ms to wait for pod list to return data ...
	I0918 21:10:04.579106   61740 default_sa.go:34] waiting for default service account to be created ...
	I0918 21:10:04.583020   61740 default_sa.go:45] found service account: "default"
	I0918 21:10:04.583059   61740 default_sa.go:55] duration metric: took 3.946388ms for default service account to be created ...
	I0918 21:10:04.583072   61740 system_pods.go:116] waiting for k8s-apps to be running ...
	I0918 21:10:04.589946   61740 system_pods.go:86] 9 kube-system pods found
	I0918 21:10:04.589991   61740 system_pods.go:89] "coredns-7c65d6cfc9-ptxbt" [798665e6-6f4a-4ba5-b4f9-3192d3f76f03] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0918 21:10:04.590004   61740 system_pods.go:89] "coredns-7c65d6cfc9-vgmtd" [1224ebf9-1b24-413a-b779-093acfcfb61e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0918 21:10:04.590012   61740 system_pods.go:89] "etcd-embed-certs-255556" [a796cd6d-5b0c-4f73-9dfe-23c552cb2a43] Running
	I0918 21:10:04.590019   61740 system_pods.go:89] "kube-apiserver-embed-certs-255556" [f1926542-940b-4ba9-94cf-23265d42874c] Running
	I0918 21:10:04.590025   61740 system_pods.go:89] "kube-controller-manager-embed-certs-255556" [cd0bc9bb-ca2c-435f-81d5-2d4ea9f32d85] Running
	I0918 21:10:04.590030   61740 system_pods.go:89] "kube-proxy-m7gxh" [47d72a32-7efc-4155-a890-0ddc620af6e0] Running
	I0918 21:10:04.590035   61740 system_pods.go:89] "kube-scheduler-embed-certs-255556" [091c79cd-bc76-42b3-9635-96fdbe3ecfff] Running
	I0918 21:10:04.590044   61740 system_pods.go:89] "metrics-server-6867b74b74-sr6hq" [8867f8fa-687b-4105-8ace-18af50195726] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0918 21:10:04.590051   61740 system_pods.go:89] "storage-provisioner" [dcc9b789-237c-4d92-96c6-2c23d2c401c0] Running
	I0918 21:10:04.590061   61740 system_pods.go:126] duration metric: took 6.981726ms to wait for k8s-apps to be running ...
	I0918 21:10:04.590070   61740 system_svc.go:44] waiting for kubelet service to be running ....
	I0918 21:10:04.590127   61740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0918 21:10:04.605893   61740 system_svc.go:56] duration metric: took 15.815591ms WaitForService to wait for kubelet
	I0918 21:10:04.605921   61740 kubeadm.go:582] duration metric: took 7.344815015s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0918 21:10:04.605939   61740 node_conditions.go:102] verifying NodePressure condition ...
	I0918 21:10:04.609551   61740 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0918 21:10:04.609577   61740 node_conditions.go:123] node cpu capacity is 2
	I0918 21:10:04.609588   61740 node_conditions.go:105] duration metric: took 3.645116ms to run NodePressure ...
	I0918 21:10:04.609598   61740 start.go:241] waiting for startup goroutines ...
	I0918 21:10:04.609605   61740 start.go:246] waiting for cluster config update ...
	I0918 21:10:04.609614   61740 start.go:255] writing updated cluster config ...
	I0918 21:10:04.609870   61740 ssh_runner.go:195] Run: rm -f paused
	I0918 21:10:04.664479   61740 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0918 21:10:04.666589   61740 out.go:177] * Done! kubectl is now configured to use "embed-certs-255556" cluster and "default" namespace by default
	I0918 21:10:01.858109   61273 pod_ready.go:103] pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace has status "Ready":"False"
	I0918 21:10:03.356912   61273 pod_ready.go:82] duration metric: took 4m0.006778464s for pod "metrics-server-6867b74b74-n27vc" in "kube-system" namespace to be "Ready" ...
	E0918 21:10:03.356944   61273 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0918 21:10:03.356952   61273 pod_ready.go:39] duration metric: took 4m0.807781101s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0918 21:10:03.356967   61273 api_server.go:52] waiting for apiserver process to appear ...
	I0918 21:10:03.356994   61273 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:10:03.357047   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:10:03.410066   61273 cri.go:89] found id: "a70652dce4d80c6fac020d63cff7054a835d183ac4b910157a5bf4eb8cabcaa1"
	I0918 21:10:03.410096   61273 cri.go:89] found id: ""
	I0918 21:10:03.410104   61273 logs.go:276] 1 containers: [a70652dce4d80c6fac020d63cff7054a835d183ac4b910157a5bf4eb8cabcaa1]
	I0918 21:10:03.410168   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:03.414236   61273 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:10:03.414309   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:10:03.449405   61273 cri.go:89] found id: "a913074a00723bc43f97ba625ebbdfded7dca1ce664ace9a13173adb96d2068f"
	I0918 21:10:03.449426   61273 cri.go:89] found id: ""
	I0918 21:10:03.449434   61273 logs.go:276] 1 containers: [a913074a00723bc43f97ba625ebbdfded7dca1ce664ace9a13173adb96d2068f]
	I0918 21:10:03.449492   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:03.453335   61273 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:10:03.453403   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:10:03.487057   61273 cri.go:89] found id: "76b9e08a213468abdd23d7b3998b55d6544e2fbae9205ec6076ef0cd7a80e7be"
	I0918 21:10:03.487081   61273 cri.go:89] found id: ""
	I0918 21:10:03.487089   61273 logs.go:276] 1 containers: [76b9e08a213468abdd23d7b3998b55d6544e2fbae9205ec6076ef0cd7a80e7be]
	I0918 21:10:03.487137   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:03.491027   61273 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:10:03.491101   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:10:03.529636   61273 cri.go:89] found id: "c372970fdf265433e1668377d0214de6608cb39ebfd706cfad9624cba2f9b481"
	I0918 21:10:03.529665   61273 cri.go:89] found id: ""
	I0918 21:10:03.529675   61273 logs.go:276] 1 containers: [c372970fdf265433e1668377d0214de6608cb39ebfd706cfad9624cba2f9b481]
	I0918 21:10:03.529738   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:03.535042   61273 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:10:03.535121   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:10:03.572913   61273 cri.go:89] found id: "0257280a0d21d52e2a996c2f717880bdb9d9f6fe90a4b82cc62222c941ba7d84"
	I0918 21:10:03.572942   61273 cri.go:89] found id: ""
	I0918 21:10:03.572952   61273 logs.go:276] 1 containers: [0257280a0d21d52e2a996c2f717880bdb9d9f6fe90a4b82cc62222c941ba7d84]
	I0918 21:10:03.573012   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:03.576945   61273 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:10:03.577021   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:10:03.612785   61273 cri.go:89] found id: "785dc83056153e5eeb22216ac7520f529b71f1b3ae42e9540a5c8930f1fe62c2"
	I0918 21:10:03.612805   61273 cri.go:89] found id: ""
	I0918 21:10:03.612812   61273 logs.go:276] 1 containers: [785dc83056153e5eeb22216ac7520f529b71f1b3ae42e9540a5c8930f1fe62c2]
	I0918 21:10:03.612868   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:03.616855   61273 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:10:03.616924   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:10:03.650330   61273 cri.go:89] found id: ""
	I0918 21:10:03.650359   61273 logs.go:276] 0 containers: []
	W0918 21:10:03.650370   61273 logs.go:278] No container was found matching "kindnet"
	I0918 21:10:03.650378   61273 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0918 21:10:03.650446   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0918 21:10:03.698078   61273 cri.go:89] found id: "b44d6f4b44928aa498c4375f783c165714b4a5959a1fa709712ad39a412d6d9f"
	I0918 21:10:03.698106   61273 cri.go:89] found id: "38c14df0554157969a97390590d6e9c46765fd6fc3e286f7d2e3094838e0aff5"
	I0918 21:10:03.698113   61273 cri.go:89] found id: ""
	I0918 21:10:03.698122   61273 logs.go:276] 2 containers: [b44d6f4b44928aa498c4375f783c165714b4a5959a1fa709712ad39a412d6d9f 38c14df0554157969a97390590d6e9c46765fd6fc3e286f7d2e3094838e0aff5]
	I0918 21:10:03.698184   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:03.702311   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:03.705974   61273 logs.go:123] Gathering logs for kubelet ...
	I0918 21:10:03.705996   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:10:03.771043   61273 logs.go:123] Gathering logs for kube-scheduler [c372970fdf265433e1668377d0214de6608cb39ebfd706cfad9624cba2f9b481] ...
	I0918 21:10:03.771097   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c372970fdf265433e1668377d0214de6608cb39ebfd706cfad9624cba2f9b481"
	I0918 21:10:03.813148   61273 logs.go:123] Gathering logs for storage-provisioner [b44d6f4b44928aa498c4375f783c165714b4a5959a1fa709712ad39a412d6d9f] ...
	I0918 21:10:03.813175   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b44d6f4b44928aa498c4375f783c165714b4a5959a1fa709712ad39a412d6d9f"
	I0918 21:10:03.864553   61273 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:10:03.864580   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:10:04.345484   61273 logs.go:123] Gathering logs for container status ...
	I0918 21:10:04.345531   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:10:04.390777   61273 logs.go:123] Gathering logs for dmesg ...
	I0918 21:10:04.390818   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:10:04.409877   61273 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:10:04.409918   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 21:10:04.536579   61273 logs.go:123] Gathering logs for kube-apiserver [a70652dce4d80c6fac020d63cff7054a835d183ac4b910157a5bf4eb8cabcaa1] ...
	I0918 21:10:04.536609   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a70652dce4d80c6fac020d63cff7054a835d183ac4b910157a5bf4eb8cabcaa1"
	I0918 21:10:04.595640   61273 logs.go:123] Gathering logs for etcd [a913074a00723bc43f97ba625ebbdfded7dca1ce664ace9a13173adb96d2068f] ...
	I0918 21:10:04.595680   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a913074a00723bc43f97ba625ebbdfded7dca1ce664ace9a13173adb96d2068f"
	I0918 21:10:04.642332   61273 logs.go:123] Gathering logs for coredns [76b9e08a213468abdd23d7b3998b55d6544e2fbae9205ec6076ef0cd7a80e7be] ...
	I0918 21:10:04.642377   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 76b9e08a213468abdd23d7b3998b55d6544e2fbae9205ec6076ef0cd7a80e7be"
	I0918 21:10:04.679525   61273 logs.go:123] Gathering logs for kube-proxy [0257280a0d21d52e2a996c2f717880bdb9d9f6fe90a4b82cc62222c941ba7d84] ...
	I0918 21:10:04.679551   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0257280a0d21d52e2a996c2f717880bdb9d9f6fe90a4b82cc62222c941ba7d84"
	I0918 21:10:04.721130   61273 logs.go:123] Gathering logs for kube-controller-manager [785dc83056153e5eeb22216ac7520f529b71f1b3ae42e9540a5c8930f1fe62c2] ...
	I0918 21:10:04.721164   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 785dc83056153e5eeb22216ac7520f529b71f1b3ae42e9540a5c8930f1fe62c2"
	I0918 21:10:04.789527   61273 logs.go:123] Gathering logs for storage-provisioner [38c14df0554157969a97390590d6e9c46765fd6fc3e286f7d2e3094838e0aff5] ...
	I0918 21:10:04.789558   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38c14df0554157969a97390590d6e9c46765fd6fc3e286f7d2e3094838e0aff5"
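The gathering cycle above repeats the same two-step pattern for every control-plane component: discover candidate containers with `sudo crictl ps -a --quiet --name=<component>`, then pull the last 400 lines of each match with `sudo crictl logs --tail 400 <id>`. Below is a minimal Go sketch of that pattern, assuming a node with crictl installed; it shells out locally with os/exec instead of minikube's ssh_runner, and the helper names are illustrative rather than minikube's actual API.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs mirrors the log's `crictl ps -a --quiet --name=<name>` call:
// it returns every container ID (running or exited) whose name matches.
func listContainerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

// tailLogs mirrors `crictl logs --tail 400 <id>` and returns whatever the runtime prints.
func tailLogs(id string) (string, error) {
	out, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
	return string(out), err
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "storage-provisioner"}
	for _, name := range components {
		ids, err := listContainerIDs(name)
		if err != nil || len(ids) == 0 {
			fmt.Printf("no container was found matching %q\n", name)
			continue
		}
		for _, id := range ids {
			logs, _ := tailLogs(id)
			fmt.Printf("=== %s [%s] ===\n%s\n", name, id, logs)
		}
	}
}

Run on the node, this would approximate the per-component log dump interleaved through this report.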
	I0918 21:10:07.334989   61273 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 21:10:07.352382   61273 api_server.go:72] duration metric: took 4m12.031791528s to wait for apiserver process to appear ...
	I0918 21:10:07.352411   61273 api_server.go:88] waiting for apiserver healthz status ...
	I0918 21:10:07.352446   61273 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:10:07.352494   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:10:07.404709   61273 cri.go:89] found id: "a70652dce4d80c6fac020d63cff7054a835d183ac4b910157a5bf4eb8cabcaa1"
	I0918 21:10:07.404739   61273 cri.go:89] found id: ""
	I0918 21:10:07.404748   61273 logs.go:276] 1 containers: [a70652dce4d80c6fac020d63cff7054a835d183ac4b910157a5bf4eb8cabcaa1]
	I0918 21:10:07.404815   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:07.409205   61273 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:10:07.409273   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:10:07.450409   61273 cri.go:89] found id: "a913074a00723bc43f97ba625ebbdfded7dca1ce664ace9a13173adb96d2068f"
	I0918 21:10:07.450429   61273 cri.go:89] found id: ""
	I0918 21:10:07.450438   61273 logs.go:276] 1 containers: [a913074a00723bc43f97ba625ebbdfded7dca1ce664ace9a13173adb96d2068f]
	I0918 21:10:07.450498   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:07.454623   61273 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:10:07.454692   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:10:07.498344   61273 cri.go:89] found id: "76b9e08a213468abdd23d7b3998b55d6544e2fbae9205ec6076ef0cd7a80e7be"
	I0918 21:10:07.498370   61273 cri.go:89] found id: ""
	I0918 21:10:07.498379   61273 logs.go:276] 1 containers: [76b9e08a213468abdd23d7b3998b55d6544e2fbae9205ec6076ef0cd7a80e7be]
	I0918 21:10:07.498443   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:07.503900   61273 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:10:07.503986   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:10:07.543438   61273 cri.go:89] found id: "c372970fdf265433e1668377d0214de6608cb39ebfd706cfad9624cba2f9b481"
	I0918 21:10:07.543469   61273 cri.go:89] found id: ""
	I0918 21:10:07.543478   61273 logs.go:276] 1 containers: [c372970fdf265433e1668377d0214de6608cb39ebfd706cfad9624cba2f9b481]
	I0918 21:10:07.543538   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:07.548439   61273 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:10:07.548518   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:10:07.592109   61273 cri.go:89] found id: "0257280a0d21d52e2a996c2f717880bdb9d9f6fe90a4b82cc62222c941ba7d84"
	I0918 21:10:07.592140   61273 cri.go:89] found id: ""
	I0918 21:10:07.592150   61273 logs.go:276] 1 containers: [0257280a0d21d52e2a996c2f717880bdb9d9f6fe90a4b82cc62222c941ba7d84]
	I0918 21:10:07.592202   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:07.596127   61273 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:10:07.596200   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:10:07.630588   61273 cri.go:89] found id: "785dc83056153e5eeb22216ac7520f529b71f1b3ae42e9540a5c8930f1fe62c2"
	I0918 21:10:07.630623   61273 cri.go:89] found id: ""
	I0918 21:10:07.630633   61273 logs.go:276] 1 containers: [785dc83056153e5eeb22216ac7520f529b71f1b3ae42e9540a5c8930f1fe62c2]
	I0918 21:10:07.630699   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:07.635130   61273 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:10:07.635214   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:10:07.672446   61273 cri.go:89] found id: ""
	I0918 21:10:07.672475   61273 logs.go:276] 0 containers: []
	W0918 21:10:07.672487   61273 logs.go:278] No container was found matching "kindnet"
	I0918 21:10:07.672494   61273 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0918 21:10:07.672554   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0918 21:10:07.710660   61273 cri.go:89] found id: "b44d6f4b44928aa498c4375f783c165714b4a5959a1fa709712ad39a412d6d9f"
	I0918 21:10:07.710693   61273 cri.go:89] found id: "38c14df0554157969a97390590d6e9c46765fd6fc3e286f7d2e3094838e0aff5"
	I0918 21:10:07.710700   61273 cri.go:89] found id: ""
	I0918 21:10:07.710709   61273 logs.go:276] 2 containers: [b44d6f4b44928aa498c4375f783c165714b4a5959a1fa709712ad39a412d6d9f 38c14df0554157969a97390590d6e9c46765fd6fc3e286f7d2e3094838e0aff5]
	I0918 21:10:07.710761   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:07.714772   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:07.718402   61273 logs.go:123] Gathering logs for etcd [a913074a00723bc43f97ba625ebbdfded7dca1ce664ace9a13173adb96d2068f] ...
	I0918 21:10:07.718423   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a913074a00723bc43f97ba625ebbdfded7dca1ce664ace9a13173adb96d2068f"
	I0918 21:10:07.756682   61273 logs.go:123] Gathering logs for coredns [76b9e08a213468abdd23d7b3998b55d6544e2fbae9205ec6076ef0cd7a80e7be] ...
	I0918 21:10:07.756717   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 76b9e08a213468abdd23d7b3998b55d6544e2fbae9205ec6076ef0cd7a80e7be"
	I0918 21:10:07.792784   61273 logs.go:123] Gathering logs for kube-proxy [0257280a0d21d52e2a996c2f717880bdb9d9f6fe90a4b82cc62222c941ba7d84] ...
	I0918 21:10:07.792813   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0257280a0d21d52e2a996c2f717880bdb9d9f6fe90a4b82cc62222c941ba7d84"
	I0918 21:10:07.829746   61273 logs.go:123] Gathering logs for kube-controller-manager [785dc83056153e5eeb22216ac7520f529b71f1b3ae42e9540a5c8930f1fe62c2] ...
	I0918 21:10:07.829779   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 785dc83056153e5eeb22216ac7520f529b71f1b3ae42e9540a5c8930f1fe62c2"
	I0918 21:10:07.882151   61273 logs.go:123] Gathering logs for storage-provisioner [38c14df0554157969a97390590d6e9c46765fd6fc3e286f7d2e3094838e0aff5] ...
	I0918 21:10:07.882190   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38c14df0554157969a97390590d6e9c46765fd6fc3e286f7d2e3094838e0aff5"
	I0918 21:10:07.921948   61273 logs.go:123] Gathering logs for container status ...
	I0918 21:10:07.921973   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:10:07.969080   61273 logs.go:123] Gathering logs for kubelet ...
	I0918 21:10:07.969110   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:10:08.036341   61273 logs.go:123] Gathering logs for dmesg ...
	I0918 21:10:08.036376   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:10:08.050690   61273 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:10:08.050722   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 21:10:08.177111   61273 logs.go:123] Gathering logs for kube-apiserver [a70652dce4d80c6fac020d63cff7054a835d183ac4b910157a5bf4eb8cabcaa1] ...
	I0918 21:10:08.177154   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a70652dce4d80c6fac020d63cff7054a835d183ac4b910157a5bf4eb8cabcaa1"
	I0918 21:10:08.224169   61273 logs.go:123] Gathering logs for kube-scheduler [c372970fdf265433e1668377d0214de6608cb39ebfd706cfad9624cba2f9b481] ...
	I0918 21:10:08.224203   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c372970fdf265433e1668377d0214de6608cb39ebfd706cfad9624cba2f9b481"
	I0918 21:10:08.264412   61273 logs.go:123] Gathering logs for storage-provisioner [b44d6f4b44928aa498c4375f783c165714b4a5959a1fa709712ad39a412d6d9f] ...
	I0918 21:10:08.264437   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b44d6f4b44928aa498c4375f783c165714b4a5959a1fa709712ad39a412d6d9f"
	I0918 21:10:08.309190   61273 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:10:08.309215   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:10:11.209439   61273 api_server.go:253] Checking apiserver healthz at https://192.168.61.31:8443/healthz ...
	I0918 21:10:11.214345   61273 api_server.go:279] https://192.168.61.31:8443/healthz returned 200:
	ok
	I0918 21:10:11.215424   61273 api_server.go:141] control plane version: v1.31.1
	I0918 21:10:11.215446   61273 api_server.go:131] duration metric: took 3.863027585s to wait for apiserver health ...
	I0918 21:10:11.215456   61273 system_pods.go:43] waiting for kube-system pods to appear ...
	I0918 21:10:11.215485   61273 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:10:11.215545   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:10:11.251158   61273 cri.go:89] found id: "a70652dce4d80c6fac020d63cff7054a835d183ac4b910157a5bf4eb8cabcaa1"
	I0918 21:10:11.251182   61273 cri.go:89] found id: ""
	I0918 21:10:11.251190   61273 logs.go:276] 1 containers: [a70652dce4d80c6fac020d63cff7054a835d183ac4b910157a5bf4eb8cabcaa1]
	I0918 21:10:11.251246   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:11.255090   61273 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:10:11.255177   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:10:11.290504   61273 cri.go:89] found id: "a913074a00723bc43f97ba625ebbdfded7dca1ce664ace9a13173adb96d2068f"
	I0918 21:10:11.290526   61273 cri.go:89] found id: ""
	I0918 21:10:11.290534   61273 logs.go:276] 1 containers: [a913074a00723bc43f97ba625ebbdfded7dca1ce664ace9a13173adb96d2068f]
	I0918 21:10:11.290593   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:11.295141   61273 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:10:11.295224   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:10:11.340273   61273 cri.go:89] found id: "76b9e08a213468abdd23d7b3998b55d6544e2fbae9205ec6076ef0cd7a80e7be"
	I0918 21:10:11.340300   61273 cri.go:89] found id: ""
	I0918 21:10:11.340310   61273 logs.go:276] 1 containers: [76b9e08a213468abdd23d7b3998b55d6544e2fbae9205ec6076ef0cd7a80e7be]
	I0918 21:10:11.340362   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:11.344823   61273 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:10:11.344903   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:10:11.384145   61273 cri.go:89] found id: "c372970fdf265433e1668377d0214de6608cb39ebfd706cfad9624cba2f9b481"
	I0918 21:10:11.384172   61273 cri.go:89] found id: ""
	I0918 21:10:11.384187   61273 logs.go:276] 1 containers: [c372970fdf265433e1668377d0214de6608cb39ebfd706cfad9624cba2f9b481]
	I0918 21:10:11.384251   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:11.388594   61273 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:10:11.388673   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:10:11.434881   61273 cri.go:89] found id: "0257280a0d21d52e2a996c2f717880bdb9d9f6fe90a4b82cc62222c941ba7d84"
	I0918 21:10:11.434915   61273 cri.go:89] found id: ""
	I0918 21:10:11.434925   61273 logs.go:276] 1 containers: [0257280a0d21d52e2a996c2f717880bdb9d9f6fe90a4b82cc62222c941ba7d84]
	I0918 21:10:11.434984   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:11.439048   61273 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:10:11.439124   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:10:11.474786   61273 cri.go:89] found id: "785dc83056153e5eeb22216ac7520f529b71f1b3ae42e9540a5c8930f1fe62c2"
	I0918 21:10:11.474812   61273 cri.go:89] found id: ""
	I0918 21:10:11.474820   61273 logs.go:276] 1 containers: [785dc83056153e5eeb22216ac7520f529b71f1b3ae42e9540a5c8930f1fe62c2]
	I0918 21:10:11.474871   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:11.478907   61273 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:10:11.478961   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:10:11.521522   61273 cri.go:89] found id: ""
	I0918 21:10:11.521550   61273 logs.go:276] 0 containers: []
	W0918 21:10:11.521561   61273 logs.go:278] No container was found matching "kindnet"
	I0918 21:10:11.521568   61273 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0918 21:10:11.521642   61273 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0918 21:10:11.560406   61273 cri.go:89] found id: "b44d6f4b44928aa498c4375f783c165714b4a5959a1fa709712ad39a412d6d9f"
	I0918 21:10:11.560428   61273 cri.go:89] found id: "38c14df0554157969a97390590d6e9c46765fd6fc3e286f7d2e3094838e0aff5"
	I0918 21:10:11.560432   61273 cri.go:89] found id: ""
	I0918 21:10:11.560439   61273 logs.go:276] 2 containers: [b44d6f4b44928aa498c4375f783c165714b4a5959a1fa709712ad39a412d6d9f 38c14df0554157969a97390590d6e9c46765fd6fc3e286f7d2e3094838e0aff5]
	I0918 21:10:11.560489   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:11.564559   61273 ssh_runner.go:195] Run: which crictl
	I0918 21:10:11.568380   61273 logs.go:123] Gathering logs for kube-proxy [0257280a0d21d52e2a996c2f717880bdb9d9f6fe90a4b82cc62222c941ba7d84] ...
	I0918 21:10:11.568405   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0257280a0d21d52e2a996c2f717880bdb9d9f6fe90a4b82cc62222c941ba7d84"
	I0918 21:10:11.614927   61273 logs.go:123] Gathering logs for kube-controller-manager [785dc83056153e5eeb22216ac7520f529b71f1b3ae42e9540a5c8930f1fe62c2] ...
	I0918 21:10:11.614959   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 785dc83056153e5eeb22216ac7520f529b71f1b3ae42e9540a5c8930f1fe62c2"
	I0918 21:10:11.668337   61273 logs.go:123] Gathering logs for storage-provisioner [b44d6f4b44928aa498c4375f783c165714b4a5959a1fa709712ad39a412d6d9f] ...
	I0918 21:10:11.668372   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b44d6f4b44928aa498c4375f783c165714b4a5959a1fa709712ad39a412d6d9f"
	I0918 21:10:11.705574   61273 logs.go:123] Gathering logs for kubelet ...
	I0918 21:10:11.705604   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 21:10:11.772691   61273 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:10:11.772731   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 21:10:11.885001   61273 logs.go:123] Gathering logs for etcd [a913074a00723bc43f97ba625ebbdfded7dca1ce664ace9a13173adb96d2068f] ...
	I0918 21:10:11.885043   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a913074a00723bc43f97ba625ebbdfded7dca1ce664ace9a13173adb96d2068f"
	I0918 21:10:11.929585   61273 logs.go:123] Gathering logs for coredns [76b9e08a213468abdd23d7b3998b55d6544e2fbae9205ec6076ef0cd7a80e7be] ...
	I0918 21:10:11.929623   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 76b9e08a213468abdd23d7b3998b55d6544e2fbae9205ec6076ef0cd7a80e7be"
	I0918 21:10:11.967540   61273 logs.go:123] Gathering logs for kube-scheduler [c372970fdf265433e1668377d0214de6608cb39ebfd706cfad9624cba2f9b481] ...
	I0918 21:10:11.967566   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c372970fdf265433e1668377d0214de6608cb39ebfd706cfad9624cba2f9b481"
	I0918 21:10:12.007037   61273 logs.go:123] Gathering logs for storage-provisioner [38c14df0554157969a97390590d6e9c46765fd6fc3e286f7d2e3094838e0aff5] ...
	I0918 21:10:12.007076   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38c14df0554157969a97390590d6e9c46765fd6fc3e286f7d2e3094838e0aff5"
	I0918 21:10:12.045764   61273 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:10:12.045805   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:10:12.434993   61273 logs.go:123] Gathering logs for dmesg ...
	I0918 21:10:12.435042   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:10:12.449422   61273 logs.go:123] Gathering logs for kube-apiserver [a70652dce4d80c6fac020d63cff7054a835d183ac4b910157a5bf4eb8cabcaa1] ...
	I0918 21:10:12.449453   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a70652dce4d80c6fac020d63cff7054a835d183ac4b910157a5bf4eb8cabcaa1"
	I0918 21:10:12.500491   61273 logs.go:123] Gathering logs for container status ...
	I0918 21:10:12.500522   61273 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:10:15.053164   61273 system_pods.go:59] 8 kube-system pods found
	I0918 21:10:15.053203   61273 system_pods.go:61] "coredns-7c65d6cfc9-dgnw2" [085d5a98-0a61-4678-830f-384780a0d7ef] Running
	I0918 21:10:15.053211   61273 system_pods.go:61] "etcd-no-preload-331658" [d6a53ff4-fcf0-46c0-9e77-9af6d0363e08] Running
	I0918 21:10:15.053218   61273 system_pods.go:61] "kube-apiserver-no-preload-331658" [1953598f-4c3f-4a98-ab75-b8ca1b8093ad] Running
	I0918 21:10:15.053223   61273 system_pods.go:61] "kube-controller-manager-no-preload-331658" [8fc655be-a192-4250-a31d-2e533a2b5e41] Running
	I0918 21:10:15.053228   61273 system_pods.go:61] "kube-proxy-hx25w" [a26512ff-f695-4452-8974-577479257160] Running
	I0918 21:10:15.053232   61273 system_pods.go:61] "kube-scheduler-no-preload-331658" [64e5347e-e7fd-4a0d-ae41-539c3989c29e] Running
	I0918 21:10:15.053243   61273 system_pods.go:61] "metrics-server-6867b74b74-n27vc" [b1de76ec-8987-49ce-ae66-eedda2705cde] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0918 21:10:15.053254   61273 system_pods.go:61] "storage-provisioner" [e110aeb3-e9bc-4bb9-9b49-5579558bdda2] Running
	I0918 21:10:15.053264   61273 system_pods.go:74] duration metric: took 3.837800115s to wait for pod list to return data ...
	I0918 21:10:15.053273   61273 default_sa.go:34] waiting for default service account to be created ...
	I0918 21:10:15.056865   61273 default_sa.go:45] found service account: "default"
	I0918 21:10:15.056900   61273 default_sa.go:55] duration metric: took 3.619144ms for default service account to be created ...
	I0918 21:10:15.056912   61273 system_pods.go:116] waiting for k8s-apps to be running ...
	I0918 21:10:15.061835   61273 system_pods.go:86] 8 kube-system pods found
	I0918 21:10:15.061864   61273 system_pods.go:89] "coredns-7c65d6cfc9-dgnw2" [085d5a98-0a61-4678-830f-384780a0d7ef] Running
	I0918 21:10:15.061870   61273 system_pods.go:89] "etcd-no-preload-331658" [d6a53ff4-fcf0-46c0-9e77-9af6d0363e08] Running
	I0918 21:10:15.061875   61273 system_pods.go:89] "kube-apiserver-no-preload-331658" [1953598f-4c3f-4a98-ab75-b8ca1b8093ad] Running
	I0918 21:10:15.061880   61273 system_pods.go:89] "kube-controller-manager-no-preload-331658" [8fc655be-a192-4250-a31d-2e533a2b5e41] Running
	I0918 21:10:15.061884   61273 system_pods.go:89] "kube-proxy-hx25w" [a26512ff-f695-4452-8974-577479257160] Running
	I0918 21:10:15.061888   61273 system_pods.go:89] "kube-scheduler-no-preload-331658" [64e5347e-e7fd-4a0d-ae41-539c3989c29e] Running
	I0918 21:10:15.061894   61273 system_pods.go:89] "metrics-server-6867b74b74-n27vc" [b1de76ec-8987-49ce-ae66-eedda2705cde] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0918 21:10:15.061898   61273 system_pods.go:89] "storage-provisioner" [e110aeb3-e9bc-4bb9-9b49-5579558bdda2] Running
	I0918 21:10:15.061906   61273 system_pods.go:126] duration metric: took 4.987508ms to wait for k8s-apps to be running ...
	I0918 21:10:15.061912   61273 system_svc.go:44] waiting for kubelet service to be running ....
	I0918 21:10:15.061966   61273 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0918 21:10:15.079834   61273 system_svc.go:56] duration metric: took 17.908997ms WaitForService to wait for kubelet
	I0918 21:10:15.079875   61273 kubeadm.go:582] duration metric: took 4m19.759287892s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0918 21:10:15.079897   61273 node_conditions.go:102] verifying NodePressure condition ...
	I0918 21:10:15.083307   61273 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0918 21:10:15.083390   61273 node_conditions.go:123] node cpu capacity is 2
	I0918 21:10:15.083407   61273 node_conditions.go:105] duration metric: took 3.503352ms to run NodePressure ...
	I0918 21:10:15.083421   61273 start.go:241] waiting for startup goroutines ...
	I0918 21:10:15.083431   61273 start.go:246] waiting for cluster config update ...
	I0918 21:10:15.083444   61273 start.go:255] writing updated cluster config ...
	I0918 21:10:15.083788   61273 ssh_runner.go:195] Run: rm -f paused
	I0918 21:10:15.139144   61273 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0918 21:10:15.141198   61273 out.go:177] * Done! kubectl is now configured to use "no-preload-331658" cluster and "default" namespace by default
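The start sequence above finishes by polling https://192.168.61.31:8443/healthz until the apiserver answers 200 "ok", then verifying kube-system pods, the default service account, and node conditions before declaring the cluster ready. A hedged Go sketch of just the healthz wait follows, using the endpoint from the log; the timeout and the skipped certificate verification are illustrative choices for a bootstrap probe, and minikube's actual client setup may differ.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver's /healthz endpoint until it answers 200 OK
// or the deadline passes. TLS verification is skipped here only because this
// illustrative probe runs before any client certificates are wired up.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("apiserver at %s did not report healthy within %s", url, timeout)
}

func main() {
	// Endpoint taken from the log above; the 4-minute budget is an assumption.
	if err := waitForHealthz("https://192.168.61.31:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("apiserver healthz returned 200: ok")
}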
	I0918 21:11:17.441368   62061 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0918 21:11:17.441607   62061 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0918 21:11:17.442921   62061 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0918 21:11:17.443036   62061 kubeadm.go:310] [preflight] Running pre-flight checks
	I0918 21:11:17.443221   62061 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0918 21:11:17.443500   62061 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0918 21:11:17.443818   62061 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0918 21:11:17.444099   62061 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0918 21:11:17.446858   62061 out.go:235]   - Generating certificates and keys ...
	I0918 21:11:17.446965   62061 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0918 21:11:17.447048   62061 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0918 21:11:17.447160   62061 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0918 21:11:17.447248   62061 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0918 21:11:17.447349   62061 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0918 21:11:17.447425   62061 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0918 21:11:17.447507   62061 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0918 21:11:17.447587   62061 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0918 21:11:17.447742   62061 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0918 21:11:17.447847   62061 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0918 21:11:17.447911   62061 kubeadm.go:310] [certs] Using the existing "sa" key
	I0918 21:11:17.447984   62061 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0918 21:11:17.448085   62061 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0918 21:11:17.448163   62061 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0918 21:11:17.448255   62061 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0918 21:11:17.448339   62061 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0918 21:11:17.448486   62061 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0918 21:11:17.448590   62061 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0918 21:11:17.448645   62061 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0918 21:11:17.448733   62061 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0918 21:11:17.450203   62061 out.go:235]   - Booting up control plane ...
	I0918 21:11:17.450299   62061 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0918 21:11:17.450434   62061 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0918 21:11:17.450506   62061 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0918 21:11:17.450578   62061 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0918 21:11:17.450752   62061 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0918 21:11:17.450805   62061 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0918 21:11:17.450863   62061 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0918 21:11:17.451034   62061 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0918 21:11:17.451090   62061 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0918 21:11:17.451259   62061 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0918 21:11:17.451325   62061 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0918 21:11:17.451498   62061 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0918 21:11:17.451562   62061 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0918 21:11:17.451765   62061 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0918 21:11:17.451845   62061 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0918 21:11:17.452027   62061 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0918 21:11:17.452041   62061 kubeadm.go:310] 
	I0918 21:11:17.452083   62061 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0918 21:11:17.452118   62061 kubeadm.go:310] 		timed out waiting for the condition
	I0918 21:11:17.452127   62061 kubeadm.go:310] 
	I0918 21:11:17.452160   62061 kubeadm.go:310] 	This error is likely caused by:
	I0918 21:11:17.452189   62061 kubeadm.go:310] 		- The kubelet is not running
	I0918 21:11:17.452282   62061 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0918 21:11:17.452289   62061 kubeadm.go:310] 
	I0918 21:11:17.452385   62061 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0918 21:11:17.452425   62061 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0918 21:11:17.452473   62061 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0918 21:11:17.452482   62061 kubeadm.go:310] 
	I0918 21:11:17.452578   62061 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0918 21:11:17.452649   62061 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0918 21:11:17.452656   62061 kubeadm.go:310] 
	I0918 21:11:17.452770   62061 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0918 21:11:17.452849   62061 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0918 21:11:17.452920   62061 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0918 21:11:17.452983   62061 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0918 21:11:17.453029   62061 kubeadm.go:310] 
	W0918 21:11:17.453100   62061 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0918 21:11:17.453139   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0918 21:11:17.917211   62061 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0918 21:11:17.933265   62061 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0918 21:11:17.943909   62061 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0918 21:11:17.943937   62061 kubeadm.go:157] found existing configuration files:
	
	I0918 21:11:17.943980   62061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0918 21:11:17.953745   62061 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0918 21:11:17.953817   62061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0918 21:11:17.964411   62061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0918 21:11:17.974601   62061 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0918 21:11:17.974661   62061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0918 21:11:17.984631   62061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0918 21:11:17.994280   62061 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0918 21:11:17.994341   62061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0918 21:11:18.004341   62061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0918 21:11:18.014066   62061 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0918 21:11:18.014127   62061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
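The cleanup above checks each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and removes any file that does not reference it, so the retried `kubeadm init` starts from a clean state. A small Go sketch of the same check, with the endpoint string taken from the log and the file handling simplified (the run above performs the equivalent grep and rm over SSH):

package main

import (
	"fmt"
	"os"
	"strings"
)

// Endpoint copied from the log; any kubeconfig that does not mention it is treated as stale.
const endpoint = "https://control-plane.minikube.internal:8443"

func main() {
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing file or stale endpoint: remove it and ignore the error if it never existed.
			os.Remove(f)
			fmt.Printf("removed stale config %s\n", f)
			continue
		}
		fmt.Printf("keeping %s\n", f)
	}
}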
	I0918 21:11:18.023592   62061 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0918 21:11:18.098831   62061 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0918 21:11:18.098908   62061 kubeadm.go:310] [preflight] Running pre-flight checks
	I0918 21:11:18.254904   62061 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0918 21:11:18.255012   62061 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0918 21:11:18.255102   62061 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0918 21:11:18.444152   62061 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0918 21:11:18.445967   62061 out.go:235]   - Generating certificates and keys ...
	I0918 21:11:18.446082   62061 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0918 21:11:18.446185   62061 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0918 21:11:18.446316   62061 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0918 21:11:18.446403   62061 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0918 21:11:18.446508   62061 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0918 21:11:18.446600   62061 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0918 21:11:18.446706   62061 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0918 21:11:18.446767   62061 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0918 21:11:18.446852   62061 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0918 21:11:18.446936   62061 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0918 21:11:18.446971   62061 kubeadm.go:310] [certs] Using the existing "sa" key
	I0918 21:11:18.447025   62061 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0918 21:11:18.621414   62061 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0918 21:11:18.973691   62061 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0918 21:11:19.177522   62061 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0918 21:11:19.351312   62061 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0918 21:11:19.375169   62061 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0918 21:11:19.376274   62061 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0918 21:11:19.376351   62061 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0918 21:11:19.505030   62061 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0918 21:11:19.507065   62061 out.go:235]   - Booting up control plane ...
	I0918 21:11:19.507197   62061 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0918 21:11:19.519523   62061 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0918 21:11:19.520747   62061 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0918 21:11:19.522723   62061 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0918 21:11:19.524851   62061 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0918 21:11:59.527084   62061 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0918 21:11:59.527421   62061 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0918 21:11:59.527674   62061 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0918 21:12:04.528288   62061 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0918 21:12:04.528530   62061 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0918 21:12:14.529180   62061 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0918 21:12:14.529367   62061 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0918 21:12:34.530066   62061 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0918 21:12:34.530335   62061 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0918 21:13:14.529030   62061 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0918 21:13:14.529305   62061 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0918 21:13:14.529326   62061 kubeadm.go:310] 
	I0918 21:13:14.529374   62061 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0918 21:13:14.529447   62061 kubeadm.go:310] 		timed out waiting for the condition
	I0918 21:13:14.529466   62061 kubeadm.go:310] 
	I0918 21:13:14.529521   62061 kubeadm.go:310] 	This error is likely caused by:
	I0918 21:13:14.529569   62061 kubeadm.go:310] 		- The kubelet is not running
	I0918 21:13:14.529716   62061 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0918 21:13:14.529726   62061 kubeadm.go:310] 
	I0918 21:13:14.529866   62061 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0918 21:13:14.529917   62061 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0918 21:13:14.529967   62061 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0918 21:13:14.529976   62061 kubeadm.go:310] 
	I0918 21:13:14.530120   62061 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0918 21:13:14.530220   62061 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0918 21:13:14.530230   62061 kubeadm.go:310] 
	I0918 21:13:14.530352   62061 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0918 21:13:14.530480   62061 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0918 21:13:14.530579   62061 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0918 21:13:14.530678   62061 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0918 21:13:14.530688   62061 kubeadm.go:310] 
	I0918 21:13:14.531356   62061 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0918 21:13:14.531463   62061 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0918 21:13:14.531520   62061 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0918 21:13:14.531592   62061 kubeadm.go:394] duration metric: took 7m56.917405378s to StartCluster
	I0918 21:13:14.531633   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0918 21:13:14.531689   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0918 21:13:14.578851   62061 cri.go:89] found id: ""
	I0918 21:13:14.578883   62061 logs.go:276] 0 containers: []
	W0918 21:13:14.578893   62061 logs.go:278] No container was found matching "kube-apiserver"
	I0918 21:13:14.578901   62061 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0918 21:13:14.578960   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0918 21:13:14.627137   62061 cri.go:89] found id: ""
	I0918 21:13:14.627168   62061 logs.go:276] 0 containers: []
	W0918 21:13:14.627179   62061 logs.go:278] No container was found matching "etcd"
	I0918 21:13:14.627187   62061 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0918 21:13:14.627245   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0918 21:13:14.670678   62061 cri.go:89] found id: ""
	I0918 21:13:14.670707   62061 logs.go:276] 0 containers: []
	W0918 21:13:14.670717   62061 logs.go:278] No container was found matching "coredns"
	I0918 21:13:14.670724   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0918 21:13:14.670788   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0918 21:13:14.709610   62061 cri.go:89] found id: ""
	I0918 21:13:14.709641   62061 logs.go:276] 0 containers: []
	W0918 21:13:14.709651   62061 logs.go:278] No container was found matching "kube-scheduler"
	I0918 21:13:14.709658   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0918 21:13:14.709722   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0918 21:13:14.743492   62061 cri.go:89] found id: ""
	I0918 21:13:14.743522   62061 logs.go:276] 0 containers: []
	W0918 21:13:14.743534   62061 logs.go:278] No container was found matching "kube-proxy"
	I0918 21:13:14.743540   62061 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0918 21:13:14.743601   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0918 21:13:14.777577   62061 cri.go:89] found id: ""
	I0918 21:13:14.777602   62061 logs.go:276] 0 containers: []
	W0918 21:13:14.777610   62061 logs.go:278] No container was found matching "kube-controller-manager"
	I0918 21:13:14.777616   62061 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0918 21:13:14.777679   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0918 21:13:14.815884   62061 cri.go:89] found id: ""
	I0918 21:13:14.815913   62061 logs.go:276] 0 containers: []
	W0918 21:13:14.815922   62061 logs.go:278] No container was found matching "kindnet"
	I0918 21:13:14.815937   62061 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0918 21:13:14.815997   62061 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0918 21:13:14.855426   62061 cri.go:89] found id: ""
	I0918 21:13:14.855456   62061 logs.go:276] 0 containers: []
	W0918 21:13:14.855464   62061 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0918 21:13:14.855473   62061 logs.go:123] Gathering logs for dmesg ...
	I0918 21:13:14.855484   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 21:13:14.868812   62061 logs.go:123] Gathering logs for describe nodes ...
	I0918 21:13:14.868843   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0918 21:13:14.955093   62061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0918 21:13:14.955120   62061 logs.go:123] Gathering logs for CRI-O ...
	I0918 21:13:14.955151   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0918 21:13:15.064722   62061 logs.go:123] Gathering logs for container status ...
	I0918 21:13:15.064760   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 21:13:15.105430   62061 logs.go:123] Gathering logs for kubelet ...
	I0918 21:13:15.105466   62061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0918 21:13:15.157878   62061 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0918 21:13:15.157956   62061 out.go:270] * 
	W0918 21:13:15.158036   62061 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0918 21:13:15.158052   62061 out.go:270] * 
	W0918 21:13:15.158934   62061 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0918 21:13:15.161604   62061 out.go:201] 
	W0918 21:13:15.162606   62061 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0918 21:13:15.162664   62061 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0918 21:13:15.162685   62061 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0918 21:13:15.163831   62061 out.go:201] 
	
	
	==> CRI-O <==
	Sep 18 21:24:26 old-k8s-version-740194 crio[636]: time="2024-09-18 21:24:26.961936297Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726694666961912509,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0f4cafe0-ae8d-43f5-b3de-59993fb9d964 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 21:24:26 old-k8s-version-740194 crio[636]: time="2024-09-18 21:24:26.962509820Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9643be62-610a-435d-95c4-654b69572342 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 21:24:26 old-k8s-version-740194 crio[636]: time="2024-09-18 21:24:26.962560398Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9643be62-610a-435d-95c4-654b69572342 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 21:24:26 old-k8s-version-740194 crio[636]: time="2024-09-18 21:24:26.962595243Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=9643be62-610a-435d-95c4-654b69572342 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 21:24:26 old-k8s-version-740194 crio[636]: time="2024-09-18 21:24:26.997929004Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f59d6778-dc95-42dc-aef2-aa1a69e3a09e name=/runtime.v1.RuntimeService/Version
	Sep 18 21:24:26 old-k8s-version-740194 crio[636]: time="2024-09-18 21:24:26.998046032Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f59d6778-dc95-42dc-aef2-aa1a69e3a09e name=/runtime.v1.RuntimeService/Version
	Sep 18 21:24:26 old-k8s-version-740194 crio[636]: time="2024-09-18 21:24:26.999556646Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=496fa880-6456-4b76-94c5-6139c5751c4b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 21:24:27 old-k8s-version-740194 crio[636]: time="2024-09-18 21:24:27.000214165Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726694667000186273,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=496fa880-6456-4b76-94c5-6139c5751c4b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 21:24:27 old-k8s-version-740194 crio[636]: time="2024-09-18 21:24:27.000750856Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=822f08da-d24b-4e45-8e34-30558f6d380e name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 21:24:27 old-k8s-version-740194 crio[636]: time="2024-09-18 21:24:27.000813043Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=822f08da-d24b-4e45-8e34-30558f6d380e name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 21:24:27 old-k8s-version-740194 crio[636]: time="2024-09-18 21:24:27.000855077Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=822f08da-d24b-4e45-8e34-30558f6d380e name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 21:24:27 old-k8s-version-740194 crio[636]: time="2024-09-18 21:24:27.030854573Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=936b4a72-37e3-4c72-9734-104059e4ead5 name=/runtime.v1.RuntimeService/Version
	Sep 18 21:24:27 old-k8s-version-740194 crio[636]: time="2024-09-18 21:24:27.030932875Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=936b4a72-37e3-4c72-9734-104059e4ead5 name=/runtime.v1.RuntimeService/Version
	Sep 18 21:24:27 old-k8s-version-740194 crio[636]: time="2024-09-18 21:24:27.032295517Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e4f3c8e3-a752-420f-b2bf-4bf8f629bcb7 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 21:24:27 old-k8s-version-740194 crio[636]: time="2024-09-18 21:24:27.032696398Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726694667032666659,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e4f3c8e3-a752-420f-b2bf-4bf8f629bcb7 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 21:24:27 old-k8s-version-740194 crio[636]: time="2024-09-18 21:24:27.033241446Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0379063a-2518-4b55-9860-702e9f4686d6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 21:24:27 old-k8s-version-740194 crio[636]: time="2024-09-18 21:24:27.033292887Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0379063a-2518-4b55-9860-702e9f4686d6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 21:24:27 old-k8s-version-740194 crio[636]: time="2024-09-18 21:24:27.033323559Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=0379063a-2518-4b55-9860-702e9f4686d6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 21:24:27 old-k8s-version-740194 crio[636]: time="2024-09-18 21:24:27.063591810Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9b5fd525-6f3f-473a-9eb6-2220990dd62c name=/runtime.v1.RuntimeService/Version
	Sep 18 21:24:27 old-k8s-version-740194 crio[636]: time="2024-09-18 21:24:27.063684487Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9b5fd525-6f3f-473a-9eb6-2220990dd62c name=/runtime.v1.RuntimeService/Version
	Sep 18 21:24:27 old-k8s-version-740194 crio[636]: time="2024-09-18 21:24:27.064753233Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3501b32e-a305-44fa-9a88-bb0f65a4e2bf name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 21:24:27 old-k8s-version-740194 crio[636]: time="2024-09-18 21:24:27.065202276Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726694667065171735,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3501b32e-a305-44fa-9a88-bb0f65a4e2bf name=/runtime.v1.ImageService/ImageFsInfo
	Sep 18 21:24:27 old-k8s-version-740194 crio[636]: time="2024-09-18 21:24:27.065697551Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6198f485-7009-4d20-b19b-904913d11fdc name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 21:24:27 old-k8s-version-740194 crio[636]: time="2024-09-18 21:24:27.065763464Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6198f485-7009-4d20-b19b-904913d11fdc name=/runtime.v1.RuntimeService/ListContainers
	Sep 18 21:24:27 old-k8s-version-740194 crio[636]: time="2024-09-18 21:24:27.065810828Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=6198f485-7009-4d20-b19b-904913d11fdc name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Sep18 21:04] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052758] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039829] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.960759] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.971252] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[Sep18 21:05] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.490123] systemd-fstab-generator[559]: Ignoring "noauto" option for root device
	[  +0.066663] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.070792] systemd-fstab-generator[571]: Ignoring "noauto" option for root device
	[  +0.218400] systemd-fstab-generator[585]: Ignoring "noauto" option for root device
	[  +0.116531] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.277213] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +6.543535] systemd-fstab-generator[885]: Ignoring "noauto" option for root device
	[  +0.067666] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.830893] systemd-fstab-generator[1009]: Ignoring "noauto" option for root device
	[ +11.620626] kauditd_printk_skb: 46 callbacks suppressed
	[Sep18 21:09] systemd-fstab-generator[5030]: Ignoring "noauto" option for root device
	[Sep18 21:11] systemd-fstab-generator[5310]: Ignoring "noauto" option for root device
	[  +0.067380] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 21:24:27 up 19 min,  0 users,  load average: 0.06, 0.06, 0.05
	Linux old-k8s-version-740194 5.10.207 #1 SMP Mon Sep 16 15:00:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Sep 18 21:24:21 old-k8s-version-740194 kubelet[6779]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run.func1(0xc0001020c0, 0xc0000d0ab0)
	Sep 18 21:24:21 old-k8s-version-740194 kubelet[6779]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:130 +0x34
	Sep 18 21:24:21 old-k8s-version-740194 kubelet[6779]: created by k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run
	Sep 18 21:24:21 old-k8s-version-740194 kubelet[6779]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:129 +0xa5
	Sep 18 21:24:21 old-k8s-version-740194 kubelet[6779]: goroutine 152 [select]:
	Sep 18 21:24:21 old-k8s-version-740194 kubelet[6779]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0007e9ef0, 0x4f0ac20, 0xc000051590, 0x1, 0xc0001020c0)
	Sep 18 21:24:21 old-k8s-version-740194 kubelet[6779]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
	Sep 18 21:24:21 old-k8s-version-740194 kubelet[6779]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc0000d8ee0, 0xc0001020c0)
	Sep 18 21:24:21 old-k8s-version-740194 kubelet[6779]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Sep 18 21:24:21 old-k8s-version-740194 kubelet[6779]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Sep 18 21:24:21 old-k8s-version-740194 kubelet[6779]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Sep 18 21:24:21 old-k8s-version-740194 kubelet[6779]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc00095b7e0, 0xc00098c200)
	Sep 18 21:24:21 old-k8s-version-740194 kubelet[6779]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Sep 18 21:24:21 old-k8s-version-740194 kubelet[6779]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Sep 18 21:24:21 old-k8s-version-740194 kubelet[6779]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Sep 18 21:24:21 old-k8s-version-740194 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Sep 18 21:24:21 old-k8s-version-740194 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Sep 18 21:24:22 old-k8s-version-740194 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 136.
	Sep 18 21:24:22 old-k8s-version-740194 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Sep 18 21:24:22 old-k8s-version-740194 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Sep 18 21:24:22 old-k8s-version-740194 kubelet[6788]: I0918 21:24:22.327962    6788 server.go:416] Version: v1.20.0
	Sep 18 21:24:22 old-k8s-version-740194 kubelet[6788]: I0918 21:24:22.328295    6788 server.go:837] Client rotation is on, will bootstrap in background
	Sep 18 21:24:22 old-k8s-version-740194 kubelet[6788]: I0918 21:24:22.330170    6788 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Sep 18 21:24:22 old-k8s-version-740194 kubelet[6788]: W0918 21:24:22.330950    6788 manager.go:159] Cannot detect current cgroup on cgroup v2
	Sep 18 21:24:22 old-k8s-version-740194 kubelet[6788]: I0918 21:24:22.331336    6788 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-740194 -n old-k8s-version-740194
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-740194 -n old-k8s-version-740194: exit status 2 (225.708007ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-740194" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (126.40s)
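The kubeadm output captured above fails at the wait-control-plane phase because the kubelet never answers its health check on port 10248, and minikube's own suggestion in the log is to inspect the kubelet journal and retry with an explicit cgroup driver. A minimal sketch of that retry, assuming the profile name, driver, runtime, and Kubernetes version shown in this run (other environments may need different flags):

	# Inspect the kubelet journal on the node; profile name taken from the log above.
	minikube ssh -p old-k8s-version-740194 "sudo journalctl -xeu kubelet | tail -n 50"

	# Retry the start with the cgroup driver pinned to systemd, per the suggestion in the log.
	minikube start -p old-k8s-version-740194 --driver=kvm2 --container-runtime=crio \
	  --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd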

                                                
                                    

Test pass (244/312)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 35.4
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.39
9 TestDownloadOnly/v1.20.0/DeleteAll 0.13
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.31.1/json-events 18.57
13 TestDownloadOnly/v1.31.1/preload-exists 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.06
18 TestDownloadOnly/v1.31.1/DeleteAll 0.14
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.12
21 TestBinaryMirror 0.59
22 TestOffline 134.32
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 138.74
31 TestAddons/serial/GCPAuth/Namespaces 0.15
35 TestAddons/parallel/InspektorGadget 11.79
37 TestAddons/parallel/HelmTiller 11.47
39 TestAddons/parallel/CSI 67.1
40 TestAddons/parallel/Headlamp 13.56
41 TestAddons/parallel/CloudSpanner 5.63
42 TestAddons/parallel/LocalPath 13.13
43 TestAddons/parallel/NvidiaDevicePlugin 5.5
44 TestAddons/parallel/Yakd 11.47
45 TestAddons/StoppedEnableDisable 92.72
46 TestCertOptions 44.51
47 TestCertExpiration 266.44
49 TestForceSystemdFlag 46.44
50 TestForceSystemdEnv 67.77
52 TestKVMDriverInstallOrUpdate 7.49
56 TestErrorSpam/setup 41.94
57 TestErrorSpam/start 0.34
58 TestErrorSpam/status 0.73
59 TestErrorSpam/pause 1.53
60 TestErrorSpam/unpause 1.68
61 TestErrorSpam/stop 5.3
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 85.36
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 40.43
68 TestFunctional/serial/KubeContext 0.04
69 TestFunctional/serial/KubectlGetPods 0.08
72 TestFunctional/serial/CacheCmd/cache/add_remote 4.28
73 TestFunctional/serial/CacheCmd/cache/add_local 2.14
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
75 TestFunctional/serial/CacheCmd/cache/list 0.05
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.22
77 TestFunctional/serial/CacheCmd/cache/cache_reload 1.77
78 TestFunctional/serial/CacheCmd/cache/delete 0.09
79 TestFunctional/serial/MinikubeKubectlCmd 0.1
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
81 TestFunctional/serial/ExtraConfig 32.16
82 TestFunctional/serial/ComponentHealth 0.07
83 TestFunctional/serial/LogsCmd 1.37
84 TestFunctional/serial/LogsFileCmd 1.32
85 TestFunctional/serial/InvalidService 3.96
87 TestFunctional/parallel/ConfigCmd 0.36
88 TestFunctional/parallel/DashboardCmd 38.72
89 TestFunctional/parallel/DryRun 0.28
90 TestFunctional/parallel/InternationalLanguage 0.14
91 TestFunctional/parallel/StatusCmd 0.96
95 TestFunctional/parallel/ServiceCmdConnect 7.49
96 TestFunctional/parallel/AddonsCmd 0.14
97 TestFunctional/parallel/PersistentVolumeClaim 26.85
99 TestFunctional/parallel/SSHCmd 0.5
100 TestFunctional/parallel/CpCmd 1.48
101 TestFunctional/parallel/MySQL 26.77
102 TestFunctional/parallel/FileSync 0.23
103 TestFunctional/parallel/CertSync 1.44
107 TestFunctional/parallel/NodeLabels 0.09
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.42
111 TestFunctional/parallel/License 0.43
112 TestFunctional/parallel/ServiceCmd/DeployApp 12.25
113 TestFunctional/parallel/ProfileCmd/profile_not_create 0.44
114 TestFunctional/parallel/MountCmd/any-port 24.5
115 TestFunctional/parallel/ProfileCmd/profile_list 0.39
116 TestFunctional/parallel/ProfileCmd/profile_json_output 0.33
117 TestFunctional/parallel/ServiceCmd/List 0.83
118 TestFunctional/parallel/ServiceCmd/JSONOutput 0.94
119 TestFunctional/parallel/ServiceCmd/HTTPS 0.39
120 TestFunctional/parallel/ServiceCmd/Format 0.94
121 TestFunctional/parallel/ServiceCmd/URL 0.35
122 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
123 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.09
124 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.1
125 TestFunctional/parallel/Version/short 0.05
126 TestFunctional/parallel/Version/components 0.81
127 TestFunctional/parallel/ImageCommands/ImageListShort 0.29
128 TestFunctional/parallel/ImageCommands/ImageListTable 0.22
129 TestFunctional/parallel/ImageCommands/ImageListJson 0.22
130 TestFunctional/parallel/ImageCommands/ImageListYaml 0.35
131 TestFunctional/parallel/ImageCommands/ImageBuild 4.78
132 TestFunctional/parallel/ImageCommands/Setup 1.96
133 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.7
134 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.89
135 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 2.45
136 TestFunctional/parallel/MountCmd/specific-port 1.75
137 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.52
138 TestFunctional/parallel/ImageCommands/ImageRemove 0.57
139 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.12
140 TestFunctional/parallel/MountCmd/VerifyCleanup 1.43
141 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.88
151 TestFunctional/delete_echo-server_images 0.04
152 TestFunctional/delete_my-image_image 0.02
153 TestFunctional/delete_minikube_cached_images 0.02
157 TestMultiControlPlane/serial/StartCluster 205.18
158 TestMultiControlPlane/serial/DeployApp 7.37
159 TestMultiControlPlane/serial/PingHostFromPods 1.18
160 TestMultiControlPlane/serial/AddWorkerNode 56.95
161 TestMultiControlPlane/serial/NodeLabels 0.07
162 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.9
163 TestMultiControlPlane/serial/CopyFile 12.82
167 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 3.93
169 TestMultiControlPlane/serial/DeleteSecondaryNode 16.94
170 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.62
172 TestMultiControlPlane/serial/RestartCluster 329.12
173 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.66
174 TestMultiControlPlane/serial/AddSecondaryNode 76.17
175 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.86
179 TestJSONOutput/start/Command 86.27
180 TestJSONOutput/start/Audit 0
182 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
183 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
185 TestJSONOutput/pause/Command 0.69
186 TestJSONOutput/pause/Audit 0
188 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/unpause/Command 0.61
192 TestJSONOutput/unpause/Audit 0
194 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/stop/Command 7.37
198 TestJSONOutput/stop/Audit 0
200 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
202 TestErrorJSONOutput 0.19
207 TestMainNoArgs 0.04
208 TestMinikubeProfile 85.6
211 TestMountStart/serial/StartWithMountFirst 29.11
212 TestMountStart/serial/VerifyMountFirst 0.37
213 TestMountStart/serial/StartWithMountSecond 25.31
214 TestMountStart/serial/VerifyMountSecond 0.37
215 TestMountStart/serial/DeleteFirst 0.7
216 TestMountStart/serial/VerifyMountPostDelete 0.37
217 TestMountStart/serial/Stop 1.27
218 TestMountStart/serial/RestartStopped 22.57
219 TestMountStart/serial/VerifyMountPostStop 0.38
222 TestMultiNode/serial/FreshStart2Nodes 106.25
223 TestMultiNode/serial/DeployApp2Nodes 6.64
224 TestMultiNode/serial/PingHostFrom2Pods 0.77
225 TestMultiNode/serial/AddNode 51.54
226 TestMultiNode/serial/MultiNodeLabels 0.06
227 TestMultiNode/serial/ProfileList 0.57
228 TestMultiNode/serial/CopyFile 7.11
229 TestMultiNode/serial/StopNode 2.36
230 TestMultiNode/serial/StartAfterStop 39.43
232 TestMultiNode/serial/DeleteNode 2.32
234 TestMultiNode/serial/RestartMultiNode 179.56
235 TestMultiNode/serial/ValidateNameConflict 46.77
242 TestScheduledStopUnix 110.6
246 TestRunningBinaryUpgrade 146.98
251 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
252 TestNoKubernetes/serial/StartWithK8s 117.94
253 TestNoKubernetes/serial/StartWithStopK8s 17.17
254 TestStoppedBinaryUpgrade/Setup 2.31
255 TestNoKubernetes/serial/Start 27.78
256 TestStoppedBinaryUpgrade/Upgrade 159.33
257 TestNoKubernetes/serial/VerifyK8sNotRunning 0.2
258 TestNoKubernetes/serial/ProfileList 1.26
259 TestNoKubernetes/serial/Stop 1.29
260 TestNoKubernetes/serial/StartNoArgs 22.42
261 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.21
270 TestPause/serial/Start 91.14
271 TestStoppedBinaryUpgrade/MinikubeLogs 0.94
279 TestNetworkPlugins/group/false 3.19
287 TestStartStop/group/no-preload/serial/FirstStart 105.1
289 TestStartStop/group/embed-certs/serial/FirstStart 91.78
291 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 58.82
292 TestStartStop/group/no-preload/serial/DeployApp 10.32
293 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.08
295 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 11.3
296 TestStartStop/group/embed-certs/serial/DeployApp 10.28
297 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.97
299 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.99
304 TestStartStop/group/no-preload/serial/SecondStart 643.95
307 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 567.87
308 TestStartStop/group/embed-certs/serial/SecondStart 586.64
309 TestStartStop/group/old-k8s-version/serial/Stop 5.34
310 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.18
321 TestStartStop/group/newest-cni/serial/FirstStart 43.82
322 TestNetworkPlugins/group/auto/Start 87.84
323 TestStartStop/group/newest-cni/serial/DeployApp 0
324 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.11
325 TestStartStop/group/newest-cni/serial/Stop 10.39
326 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
327 TestStartStop/group/newest-cni/serial/SecondStart 45.11
328 TestNetworkPlugins/group/kindnet/Start 67.13
329 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
330 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
331 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.25
332 TestStartStop/group/newest-cni/serial/Pause 2.47
333 TestNetworkPlugins/group/calico/Start 100.83
334 TestNetworkPlugins/group/custom-flannel/Start 105.86
335 TestNetworkPlugins/group/auto/KubeletFlags 0.21
336 TestNetworkPlugins/group/auto/NetCatPod 14.27
337 TestNetworkPlugins/group/auto/DNS 0.16
338 TestNetworkPlugins/group/auto/Localhost 0.17
339 TestNetworkPlugins/group/auto/HairPin 0.13
340 TestNetworkPlugins/group/enable-default-cni/Start 88.05
341 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
342 TestNetworkPlugins/group/kindnet/KubeletFlags 0.2
343 TestNetworkPlugins/group/kindnet/NetCatPod 10.22
344 TestNetworkPlugins/group/kindnet/DNS 0.2
345 TestNetworkPlugins/group/kindnet/Localhost 0.17
346 TestNetworkPlugins/group/kindnet/HairPin 0.17
347 TestNetworkPlugins/group/flannel/Start 81.16
348 TestNetworkPlugins/group/calico/ControllerPod 6.01
349 TestNetworkPlugins/group/calico/KubeletFlags 0.25
350 TestNetworkPlugins/group/calico/NetCatPod 14.36
351 TestNetworkPlugins/group/calico/DNS 0.19
352 TestNetworkPlugins/group/calico/Localhost 0.14
353 TestNetworkPlugins/group/calico/HairPin 0.13
354 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.22
355 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.25
356 TestNetworkPlugins/group/custom-flannel/DNS 0.19
357 TestNetworkPlugins/group/custom-flannel/Localhost 0.18
358 TestNetworkPlugins/group/custom-flannel/HairPin 0.16
359 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.24
360 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.26
361 TestNetworkPlugins/group/bridge/Start 53.62
362 TestNetworkPlugins/group/enable-default-cni/DNS 0.18
363 TestNetworkPlugins/group/enable-default-cni/Localhost 0.17
364 TestNetworkPlugins/group/enable-default-cni/HairPin 0.14
365 TestNetworkPlugins/group/flannel/ControllerPod 6.01
366 TestNetworkPlugins/group/flannel/KubeletFlags 0.21
367 TestNetworkPlugins/group/flannel/NetCatPod 11.21
368 TestNetworkPlugins/group/flannel/DNS 0.16
369 TestNetworkPlugins/group/flannel/Localhost 0.15
370 TestNetworkPlugins/group/flannel/HairPin 0.18
371 TestNetworkPlugins/group/bridge/KubeletFlags 0.21
372 TestNetworkPlugins/group/bridge/NetCatPod 11.24
373 TestNetworkPlugins/group/bridge/DNS 0.15
374 TestNetworkPlugins/group/bridge/Localhost 0.13
375 TestNetworkPlugins/group/bridge/HairPin 0.14
x
+
TestDownloadOnly/v1.20.0/json-events (35.4s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-228031 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-228031 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (35.397472138s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (35.40s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0918 19:38:32.661112   14878 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
I0918 19:38:32.661206   14878 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)
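The check above only asserts that the preload tarball for v1.20.0 on cri-o is already present in the cache. A hand-run equivalent of the same check, assuming the default ~/.minikube location rather than the Jenkins MINIKUBE_HOME visible in the log:

	# Mirrors the "Found local preload" path above; the Jenkins run used a different MINIKUBE_HOME.
	ls -lh "$HOME/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4"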

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.39s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-228031
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-228031: exit status 85 (391.739675ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-228031 | jenkins | v1.34.0 | 18 Sep 24 19:37 UTC |          |
	|         | -p download-only-228031        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/18 19:37:57
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0918 19:37:57.300291   14890 out.go:345] Setting OutFile to fd 1 ...
	I0918 19:37:57.300411   14890 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 19:37:57.300421   14890 out.go:358] Setting ErrFile to fd 2...
	I0918 19:37:57.300425   14890 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 19:37:57.300609   14890 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19667-7671/.minikube/bin
	W0918 19:37:57.300735   14890 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19667-7671/.minikube/config/config.json: open /home/jenkins/minikube-integration/19667-7671/.minikube/config/config.json: no such file or directory
	I0918 19:37:57.301349   14890 out.go:352] Setting JSON to true
	I0918 19:37:57.302278   14890 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1221,"bootTime":1726687056,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0918 19:37:57.302386   14890 start.go:139] virtualization: kvm guest
	I0918 19:37:57.304788   14890 out.go:97] [download-only-228031] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0918 19:37:57.304891   14890 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19667-7671/.minikube/cache/preloaded-tarball: no such file or directory
	I0918 19:37:57.304925   14890 notify.go:220] Checking for updates...
	I0918 19:37:57.306027   14890 out.go:169] MINIKUBE_LOCATION=19667
	I0918 19:37:57.307304   14890 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 19:37:57.308311   14890 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19667-7671/kubeconfig
	I0918 19:37:57.309226   14890 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19667-7671/.minikube
	I0918 19:37:57.310042   14890 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0918 19:37:57.312104   14890 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0918 19:37:57.312377   14890 driver.go:394] Setting default libvirt URI to qemu:///system
	I0918 19:37:57.406070   14890 out.go:97] Using the kvm2 driver based on user configuration
	I0918 19:37:57.406094   14890 start.go:297] selected driver: kvm2
	I0918 19:37:57.406100   14890 start.go:901] validating driver "kvm2" against <nil>
	I0918 19:37:57.406462   14890 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 19:37:57.406621   14890 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19667-7671/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0918 19:37:57.421508   14890 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0918 19:37:57.421556   14890 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0918 19:37:57.422023   14890 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0918 19:37:57.422188   14890 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0918 19:37:57.422220   14890 cni.go:84] Creating CNI manager for ""
	I0918 19:37:57.422261   14890 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0918 19:37:57.422269   14890 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0918 19:37:57.422322   14890 start.go:340] cluster config:
	{Name:download-only-228031 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-228031 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 19:37:57.422489   14890 iso.go:125] acquiring lock: {Name:mkbcf4d84f3091567a39802cd5e773cb10b8c545 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 19:37:57.424456   14890 out.go:97] Downloading VM boot image ...
	I0918 19:37:57.424498   14890 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso.sha256 -> /home/jenkins/minikube-integration/19667-7671/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso
	I0918 19:38:11.530887   14890 out.go:97] Starting "download-only-228031" primary control-plane node in "download-only-228031" cluster
	I0918 19:38:11.530921   14890 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0918 19:38:11.635774   14890 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0918 19:38:11.635800   14890 cache.go:56] Caching tarball of preloaded images
	I0918 19:38:11.635962   14890 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0918 19:38:11.637754   14890 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0918 19:38:11.637782   14890 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0918 19:38:11.739670   14890 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/19667-7671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0918 19:38:30.583654   14890 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0918 19:38:30.583756   14890 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19667-7671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0918 19:38:31.617813   14890 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0918 19:38:31.618212   14890 profile.go:143] Saving config to /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/download-only-228031/config.json ...
	I0918 19:38:31.618256   14890 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/download-only-228031/config.json: {Name:mke19a3d13619cb3cc70c9ca025bf896c6a1a448 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 19:38:31.618438   14890 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0918 19:38:31.618655   14890 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/19667-7671/.minikube/cache/linux/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-228031 host does not exist
	  To start a cluster, run: "minikube start -p download-only-228031"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.39s)
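As the Last Start log above shows, a --download-only start only fills the cache (the VM boot ISO, the v1.20.0 preload tarball, and the kubectl binary) and never creates the VM, so minikube logs has no running host to report on and exits non-zero with the hint printed at the end of the stdout block. A sketch of continuing from that cached state; the extra flags mirror the test invocation and are assumptions, not part of the hint itself (the later Delete tests remove this profile again):

	# Create a real cluster from the artifacts cached by the download-only run.
	minikube start -p download-only-228031 --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0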

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-228031
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/json-events (18.57s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-226542 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-226542 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (18.567989067s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (18.57s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/preload-exists
I0918 19:38:51.877587   14878 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
I0918 19:38:51.877628   14878 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19667-7671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-226542
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-226542: exit status 85 (60.168589ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-228031 | jenkins | v1.34.0 | 18 Sep 24 19:37 UTC |                     |
	|         | -p download-only-228031        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 18 Sep 24 19:38 UTC | 18 Sep 24 19:38 UTC |
	| delete  | -p download-only-228031        | download-only-228031 | jenkins | v1.34.0 | 18 Sep 24 19:38 UTC | 18 Sep 24 19:38 UTC |
	| start   | -o=json --download-only        | download-only-226542 | jenkins | v1.34.0 | 18 Sep 24 19:38 UTC |                     |
	|         | -p download-only-226542        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/18 19:38:33
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0918 19:38:33.346690   15180 out.go:345] Setting OutFile to fd 1 ...
	I0918 19:38:33.346937   15180 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 19:38:33.346946   15180 out.go:358] Setting ErrFile to fd 2...
	I0918 19:38:33.346950   15180 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 19:38:33.347119   15180 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19667-7671/.minikube/bin
	I0918 19:38:33.347728   15180 out.go:352] Setting JSON to true
	I0918 19:38:33.348623   15180 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1257,"bootTime":1726687056,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0918 19:38:33.348714   15180 start.go:139] virtualization: kvm guest
	I0918 19:38:33.350623   15180 out.go:97] [download-only-226542] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0918 19:38:33.350755   15180 notify.go:220] Checking for updates...
	I0918 19:38:33.352038   15180 out.go:169] MINIKUBE_LOCATION=19667
	I0918 19:38:33.353120   15180 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 19:38:33.354516   15180 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19667-7671/kubeconfig
	I0918 19:38:33.355704   15180 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19667-7671/.minikube
	I0918 19:38:33.356941   15180 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0918 19:38:33.359144   15180 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0918 19:38:33.359358   15180 driver.go:394] Setting default libvirt URI to qemu:///system
	I0918 19:38:33.391631   15180 out.go:97] Using the kvm2 driver based on user configuration
	I0918 19:38:33.391652   15180 start.go:297] selected driver: kvm2
	I0918 19:38:33.391658   15180 start.go:901] validating driver "kvm2" against <nil>
	I0918 19:38:33.391958   15180 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 19:38:33.392076   15180 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19667-7671/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0918 19:38:33.408490   15180 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0918 19:38:33.408556   15180 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0918 19:38:33.409065   15180 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0918 19:38:33.409216   15180 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0918 19:38:33.409246   15180 cni.go:84] Creating CNI manager for ""
	I0918 19:38:33.409291   15180 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0918 19:38:33.409309   15180 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0918 19:38:33.409363   15180 start.go:340] cluster config:
	{Name:download-only-226542 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-226542 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 19:38:33.409453   15180 iso.go:125] acquiring lock: {Name:mkbcf4d84f3091567a39802cd5e773cb10b8c545 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 19:38:33.410951   15180 out.go:97] Starting "download-only-226542" primary control-plane node in "download-only-226542" cluster
	I0918 19:38:33.410964   15180 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0918 19:38:33.577950   15180 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0918 19:38:33.577983   15180 cache.go:56] Caching tarball of preloaded images
	I0918 19:38:33.578141   15180 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0918 19:38:33.579796   15180 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I0918 19:38:33.579814   15180 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 ...
	I0918 19:38:33.680104   15180 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4?checksum=md5:aa79045e4550b9510ee496fee0d50abb -> /home/jenkins/minikube-integration/19667-7671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-226542 host does not exist
	  To start a cluster, run: "minikube start -p download-only-226542"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-226542
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
x
+
TestBinaryMirror (0.59s)

                                                
                                                
=== RUN   TestBinaryMirror
I0918 19:38:52.434436   14878 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-930383 --alsologtostderr --binary-mirror http://127.0.0.1:32853 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-930383" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-930383
--- PASS: TestBinaryMirror (0.59s)
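
TestBinaryMirror points minikube at a local HTTP mirror (http://127.0.0.1:32853) instead of dl.k8s.io when fetching Kubernetes binaries. A stand-in mirror can be as small as a static file server; this is only a sketch under the assumption that the mirror serves the expected release paths out of a local directory (./mirror is hypothetical and not the test's actual server):

    // Minimal sketch: serve a local directory over HTTP as a binary mirror stand-in.
    package main

    import (
        "log"
        "net/http"
    )

    func main() {
        // Port taken from the --binary-mirror flag in the log; directory layout is assumed.
        log.Println("serving ./mirror on http://127.0.0.1:32853")
        log.Fatal(http.ListenAndServe("127.0.0.1:32853", http.FileServer(http.Dir("./mirror"))))
    }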

                                                
                                    
x
+
TestOffline (134.32s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-339909 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-339909 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (2m13.309589695s)
helpers_test.go:175: Cleaning up "offline-crio-339909" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-339909
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-339909: (1.011057819s)
--- PASS: TestOffline (134.32s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-815929
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-815929: exit status 85 (47.633566ms)

                                                
                                                
-- stdout --
	* Profile "addons-815929" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-815929"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-815929
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-815929: exit status 85 (51.826254ms)

                                                
                                                
-- stdout --
	* Profile "addons-815929" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-815929"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
x
+
TestAddons/Setup (138.74s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-815929 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-815929 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m18.743961606s)
--- PASS: TestAddons/Setup (138.74s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.15s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-815929 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-815929 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.15s)

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (11.79s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-56gjj" [b2fb4cc9-48d6-4d85-ac30-90b91428d15b] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004537211s
addons_test.go:851: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-815929
addons_test.go:851: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-815929: (5.78852429s)
--- PASS: TestAddons/parallel/InspektorGadget (11.79s)

                                                
                                    
x
+
TestAddons/parallel/HelmTiller (11.47s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 2.262906ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-b48cc5f79-8nbwq" [4d5df6b3-4cf1-4ce5-9cb4-2983f9aa2728] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.005990053s
addons_test.go:475: (dbg) Run:  kubectl --context addons-815929 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-815929 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.861118191s)
addons_test.go:492: (dbg) Run:  out/minikube-linux-amd64 -p addons-815929 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (11.47s)

                                                
                                    
x
+
TestAddons/parallel/CSI (67.1s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 24.537413ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-815929 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-815929 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-815929 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-815929 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-815929 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-815929 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-815929 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-815929 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-815929 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-815929 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-815929 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-815929 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-815929 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-815929 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-815929 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-815929 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-815929 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-815929 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-815929 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-815929 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-815929 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [8d4ed3f8-665f-4ba3-8eab-a8f478ff37e9] Pending
helpers_test.go:344: "task-pv-pod" [8d4ed3f8-665f-4ba3-8eab-a8f478ff37e9] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [8d4ed3f8-665f-4ba3-8eab-a8f478ff37e9] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 13.005756551s
addons_test.go:590: (dbg) Run:  kubectl --context addons-815929 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-815929 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-815929 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-815929 delete pod task-pv-pod
addons_test.go:600: (dbg) Done: kubectl --context addons-815929 delete pod task-pv-pod: (1.017009151s)
addons_test.go:606: (dbg) Run:  kubectl --context addons-815929 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-815929 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-815929 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-815929 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-815929 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-815929 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-815929 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-815929 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-815929 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-815929 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-815929 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-815929 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-815929 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-815929 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-815929 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-815929 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-815929 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-815929 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-815929 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-815929 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [5abe520a-0b38-4ee0-80cf-7f8da8898d54] Pending
helpers_test.go:344: "task-pv-pod-restore" [5abe520a-0b38-4ee0-80cf-7f8da8898d54] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [5abe520a-0b38-4ee0-80cf-7f8da8898d54] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.004433137s
addons_test.go:632: (dbg) Run:  kubectl --context addons-815929 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-815929 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-815929 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-amd64 -p addons-815929 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-amd64 -p addons-815929 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.198918756s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-amd64 -p addons-815929 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (67.10s)
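
Throughout the CSI test, helpers_test.go repeatedly runs kubectl with a JSONPath query until the PVC reports the expected phase. A minimal sketch of that polling pattern, shelling out to kubectl the same way (the 2-second interval and the values in main are assumptions, not the helper's actual implementation):

    // Minimal sketch: poll a PVC's phase via kubectl until it matches or a deadline passes.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // waitForPVCPhase polls `kubectl get pvc` until the claim reports the wanted phase.
    func waitForPVCPhase(context, namespace, name, want string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            out, err := exec.Command("kubectl", "--context", context, "get", "pvc", name,
                "-o", "jsonpath={.status.phase}", "-n", namespace).Output()
            if err == nil && strings.TrimSpace(string(out)) == want {
                return nil
            }
            time.Sleep(2 * time.Second) // assumed interval
        }
        return fmt.Errorf("pvc %s/%s did not reach phase %q within %v", namespace, name, want, timeout)
    }

    func main() {
        // Mirrors the 6m0s wait for "hpvc" in "default" shown in the log.
        if err := waitForPVCPhase("addons-815929", "default", "hpvc", "Bound", 6*time.Minute); err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("pvc is Bound")
    }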

                                                
                                    
x
+
TestAddons/parallel/Headlamp (13.56s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-815929 --alsologtostderr -v=1
addons_test.go:830: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-815929 --alsologtostderr -v=1: (1.094098152s)
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-6t8xs" [f6f2d55f-b3e7-44c7-a00d-e99861a3846e] Pending
helpers_test.go:344: "headlamp-7b5c95b59d-6t8xs" [f6f2d55f-b3e7-44c7-a00d-e99861a3846e] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-6t8xs" [f6f2d55f-b3e7-44c7-a00d-e99861a3846e] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.011417573s
addons_test.go:839: (dbg) Run:  out/minikube-linux-amd64 -p addons-815929 addons disable headlamp --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Headlamp (13.56s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (5.63s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b77f747-6gr2k" [ad62acda-3966-4438-ac4d-53c02019711a] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004332041s
addons_test.go:870: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-815929
--- PASS: TestAddons/parallel/CloudSpanner (5.63s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (13.13s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-815929 apply -f testdata/storage-provisioner-rancher/pvc.yaml
I0918 19:49:14.742690   14878 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
addons_test.go:988: (dbg) Run:  kubectl --context addons-815929 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-815929 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-815929 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-815929 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-815929 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-815929 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-815929 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-815929 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [02ef3180-365f-48d0-9576-769e2616aa1e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [02ef3180-365f-48d0-9576-769e2616aa1e] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [02ef3180-365f-48d0-9576-769e2616aa1e] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 6.004758096s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-815929 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-amd64 -p addons-815929 ssh "cat /opt/local-path-provisioner/pvc-640ef54b-981f-4e43-8493-c1fa2c048453_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-815929 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-815929 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-amd64 -p addons-815929 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (13.13s)

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (5.5s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-rvssn" [eed41d4f-f686-4590-959b-344ece686560] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004952523s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-815929
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.50s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (11.47s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-5zgzg" [2f0a4941-d664-4905-bacf-d238f446547f] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.009708295s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-amd64 -p addons-815929 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-amd64 -p addons-815929 addons disable yakd --alsologtostderr -v=1: (6.463430118s)
--- PASS: TestAddons/parallel/Yakd (11.47s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (92.72s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-815929
addons_test.go:174: (dbg) Done: out/minikube-linux-amd64 stop -p addons-815929: (1m32.456973325s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-815929
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-815929
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-815929
--- PASS: TestAddons/StoppedEnableDisable (92.72s)

                                                
                                    
x
+
TestCertOptions (44.51s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-347585 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
I0918 20:54:20.920562   14878 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0918 20:54:23.098271   14878 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0918 20:54:23.128938   14878 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W0918 20:54:23.128983   14878 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W0918 20:54:23.129063   14878 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0918 20:54:23.129106   14878 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2644031165/002/docker-machine-driver-kvm2
I0918 20:54:23.212891   14878 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate2644031165/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x4668640 0x4668640 0x4668640 0x4668640 0x4668640 0x4668640 0x4668640] Decompressors:map[bz2:0xc000684d30 gz:0xc000684d38 tar:0xc000684c30 tar.bz2:0xc000684c40 tar.gz:0xc000684c60 tar.xz:0xc000684cc0 tar.zst:0xc000684cd0 tbz2:0xc000684c40 tgz:0xc000684c60 txz:0xc000684cc0 tzst:0xc000684cd0 xz:0xc000684d40 zip:0xc000684d80 zst:0xc000684d48] Getters:map[file:0xc0007a0710 http:0xc000710870 https:0xc0007108c0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0918 20:54:23.212955   14878 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2644031165/002/docker-machine-driver-kvm2
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-347585 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (43.285186033s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-347585 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-347585 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-347585 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-347585" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-347585
--- PASS: TestCertOptions (44.51s)
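
TestCertOptions starts a cluster with extra --apiserver-ips and --apiserver-names and then inspects the apiserver certificate with openssl to confirm those entries made it into the SANs. An equivalent check can be done with Go's crypto/x509; this sketch assumes a local copy of apiserver.crt rather than reading it over SSH as the test does:

    // Minimal sketch: list the DNS and IP SANs of a PEM-encoded certificate.
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    // sanList decodes a PEM certificate and returns its DNS-name and IP SANs.
    func sanList(path string) ([]string, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return nil, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return nil, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return nil, err
        }
        names := append([]string{}, cert.DNSNames...)
        for _, ip := range cert.IPAddresses {
            names = append(names, ip.String())
        }
        return names, nil
    }

    func main() {
        // Hypothetical local copy of the certificate the test reads from the node.
        names, err := sanList("apiserver.crt")
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("SANs:", names) // expect entries such as 192.168.15.15 and www.google.com
    }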

                                                
                                    
x
+
TestCertExpiration (266.44s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-456762 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-456762 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (39.780988063s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-456762 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-456762 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (45.668564724s)
helpers_test.go:175: Cleaning up "cert-expiration-456762" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-456762
--- PASS: TestCertExpiration (266.44s)

                                                
                                    
x
+
TestForceSystemdFlag (46.44s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-108667 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-108667 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (45.167719809s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-108667 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-108667" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-108667
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-108667: (1.056151345s)
--- PASS: TestForceSystemdFlag (46.44s)

                                                
                                    
x
+
TestForceSystemdEnv (67.77s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-347286 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-347286 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m6.975195263s)
helpers_test.go:175: Cleaning up "force-systemd-env-347286" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-347286
--- PASS: TestForceSystemdEnv (67.77s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (7.49s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
I0918 20:54:17.680114   14878 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0918 20:54:17.680268   14878 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W0918 20:54:17.709963   14878 install.go:62] docker-machine-driver-kvm2: exit status 1
W0918 20:54:17.710330   14878 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0918 20:54:17.710386   14878 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2644031165/001/docker-machine-driver-kvm2
I0918 20:54:17.960171   14878 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate2644031165/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x4668640 0x4668640 0x4668640 0x4668640 0x4668640 0x4668640 0x4668640] Decompressors:map[bz2:0xc000684d30 gz:0xc000684d38 tar:0xc000684c30 tar.bz2:0xc000684c40 tar.gz:0xc000684c60 tar.xz:0xc000684cc0 tar.zst:0xc000684cd0 tbz2:0xc000684c40 tgz:0xc000684c60 txz:0xc000684cc0 tzst:0xc000684cd0 xz:0xc000684d40 zip:0xc000684d80 zst:0xc000684d48] Getters:map[file:0xc0015c6990 http:0xc000553c20 https:0xc000553c70] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0918 20:54:17.960227   14878 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2644031165/001/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (7.49s)
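
TestKVMDriverInstallOrUpdate exercises the driver-download fallback visible in the log: the arch-specific asset (…-amd64) fails its checksum download with a 404, so the common asset is fetched instead. A stripped-down sketch of that try-then-fall-back pattern (it omits the .sha256 verification the real flow performs; the /tmp destination is hypothetical):

    // Minimal sketch: download the arch-specific driver, falling back to the common asset.
    package main

    import (
        "fmt"
        "io"
        "net/http"
        "os"
    )

    // fetch downloads url to dst and fails on any non-200 response.
    func fetch(url, dst string) error {
        resp, err := http.Get(url)
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        if resp.StatusCode != http.StatusOK {
            return fmt.Errorf("bad response code: %d", resp.StatusCode)
        }
        out, err := os.Create(dst)
        if err != nil {
            return err
        }
        defer out.Close()
        _, err = io.Copy(out, resp.Body)
        return err
    }

    func main() {
        base := "https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2"
        dst := "/tmp/docker-machine-driver-kvm2" // hypothetical destination
        // Prefer the arch-specific asset, then fall back to the common one, as the log shows.
        if err := fetch(base+"-amd64", dst); err != nil {
            fmt.Println("arch-specific download failed:", err, "- trying the common version")
            if err := fetch(base, dst); err != nil {
                fmt.Println("download failed:", err)
            }
        }
    }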

                                                
                                    
x
+
TestErrorSpam/setup (41.94s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-983345 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-983345 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-983345 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-983345 --driver=kvm2  --container-runtime=crio: (41.943911568s)
--- PASS: TestErrorSpam/setup (41.94s)

                                                
                                    
x
+
TestErrorSpam/start (0.34s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-983345 --log_dir /tmp/nospam-983345 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-983345 --log_dir /tmp/nospam-983345 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-983345 --log_dir /tmp/nospam-983345 start --dry-run
--- PASS: TestErrorSpam/start (0.34s)

                                                
                                    
x
+
TestErrorSpam/status (0.73s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-983345 --log_dir /tmp/nospam-983345 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-983345 --log_dir /tmp/nospam-983345 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-983345 --log_dir /tmp/nospam-983345 status
--- PASS: TestErrorSpam/status (0.73s)

                                                
                                    
x
+
TestErrorSpam/pause (1.53s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-983345 --log_dir /tmp/nospam-983345 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-983345 --log_dir /tmp/nospam-983345 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-983345 --log_dir /tmp/nospam-983345 pause
--- PASS: TestErrorSpam/pause (1.53s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.68s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-983345 --log_dir /tmp/nospam-983345 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-983345 --log_dir /tmp/nospam-983345 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-983345 --log_dir /tmp/nospam-983345 unpause
--- PASS: TestErrorSpam/unpause (1.68s)

                                                
                                    
x
+
TestErrorSpam/stop (5.3s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-983345 --log_dir /tmp/nospam-983345 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-983345 --log_dir /tmp/nospam-983345 stop: (2.273414691s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-983345 --log_dir /tmp/nospam-983345 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-983345 --log_dir /tmp/nospam-983345 stop: (1.474217465s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-983345 --log_dir /tmp/nospam-983345 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-983345 --log_dir /tmp/nospam-983345 stop: (1.553823524s)
--- PASS: TestErrorSpam/stop (5.30s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19667-7671/.minikube/files/etc/test/nested/copy/14878/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (85.36s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-790989 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-790989 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m25.361086435s)
--- PASS: TestFunctional/serial/StartWithProxy (85.36s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (40.43s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I0918 19:58:32.836002   14878 config.go:182] Loaded profile config "functional-790989": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-790989 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-790989 --alsologtostderr -v=8: (40.427000397s)
functional_test.go:663: soft start took 40.427784977s for "functional-790989" cluster.
I0918 19:59:13.263393   14878 config.go:182] Loaded profile config "functional-790989": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/SoftStart (40.43s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-790989 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (4.28s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-790989 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-790989 cache add registry.k8s.io/pause:3.1: (1.450747765s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-790989 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-790989 cache add registry.k8s.io/pause:3.3: (1.396071356s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-790989 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-790989 cache add registry.k8s.io/pause:latest: (1.428557023s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.28s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (2.14s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-790989 /tmp/TestFunctionalserialCacheCmdcacheadd_local4232433462/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-790989 cache add minikube-local-cache-test:functional-790989
functional_test.go:1089: (dbg) Done: out/minikube-linux-amd64 -p functional-790989 cache add minikube-local-cache-test:functional-790989: (1.801630444s)
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-790989 cache delete minikube-local-cache-test:functional-790989
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-790989
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.14s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-790989 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.77s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-790989 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-790989 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-790989 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (203.539943ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-790989 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-amd64 -p functional-790989 cache reload: (1.102962537s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-790989 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.77s)
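
The sequence above (remove the cached image on the node, confirm crictl no longer finds it, run `cache reload`, confirm it is present again) can be replayed outside the test harness. A minimal sketch in Go, assuming the out/minikube-linux-amd64 binary and the functional-790989 profile from this log are still available; the image name and commands are copied from the steps shown above.

// cache_reload_sketch.go: replay the cache reload check from this log.
// Assumes out/minikube-linux-amd64 and the functional-790989 profile exist.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

const (
	minikube = "out/minikube-linux-amd64"
	profile  = "functional-790989"
	image    = "registry.k8s.io/pause:latest"
)

// run invokes the minikube binary and echoes its combined output.
func run(args ...string) error {
	out, err := exec.Command(minikube, args...).CombinedOutput()
	fmt.Printf("$ %s %v\n%s\n", minikube, args, out)
	return err
}

func main() {
	// Remove the cached image from the node's container runtime.
	_ = run("-p", profile, "ssh", "sudo crictl rmi "+image)

	// inspecti should now fail (the log above shows exit status 1 here).
	if err := run("-p", profile, "ssh", "sudo crictl inspecti "+image); err == nil {
		log.Fatal("expected inspecti to fail after rmi")
	}

	// Reload the cache and confirm the image is present again.
	if err := run("-p", profile, "cache", "reload"); err != nil {
		log.Fatal(err)
	}
	if err := run("-p", profile, "ssh", "sudo crictl inspecti "+image); err != nil {
		log.Fatal("image still missing after cache reload: ", err)
	}
}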

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-790989 kubectl -- --context functional-790989 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-790989 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (32.16s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-790989 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-790989 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (32.159324478s)
functional_test.go:761: restart took 32.159439775s for "functional-790989" cluster.
I0918 19:59:54.327017   14878 config.go:182] Loaded profile config "functional-790989": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/ExtraConfig (32.16s)

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-790989 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.37s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-790989 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-790989 logs: (1.370516214s)
--- PASS: TestFunctional/serial/LogsCmd (1.37s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.32s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-790989 logs --file /tmp/TestFunctionalserialLogsFileCmd1971225708/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-790989 logs --file /tmp/TestFunctionalserialLogsFileCmd1971225708/001/logs.txt: (1.314112009s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.32s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (3.96s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-790989 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-790989
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-790989: exit status 115 (270.272089ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.248:30475 |
	|-----------|-------------|-------------|-----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-790989 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.96s)

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-790989 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-790989 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-790989 config get cpus: exit status 14 (75.602979ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-790989 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-790989 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-790989 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-790989 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-790989 config get cpus: exit status 14 (45.875228ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.36s)
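
The config steps above show that `config get cpus` exits with status 14 when the key has never been set or has been unset. A small Go sketch of the same set/get/unset round trip, under the same binary-path and profile assumptions as the log; the helper name is hypothetical.

// config_cmd_sketch.go: exercise `minikube config set/get/unset cpus`.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// getCPUs returns the configured value, or the exit code when the key is missing
// (the log above shows exit status 14 for that case).
func getCPUs() (string, int) {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-790989",
		"config", "get", "cpus").Output()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		return "", exitErr.ExitCode()
	}
	return string(out), 0
}

func main() {
	_ = exec.Command("out/minikube-linux-amd64", "-p", "functional-790989",
		"config", "set", "cpus", "2").Run()
	val, code := getCPUs()
	fmt.Printf("after set:   value=%q exit=%d\n", val, code)

	_ = exec.Command("out/minikube-linux-amd64", "-p", "functional-790989",
		"config", "unset", "cpus").Run()
	val, code = getCPUs()
	fmt.Printf("after unset: value=%q exit=%d\n", val, code)
}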

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (38.72s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-790989 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-790989 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 24562: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (38.72s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-790989 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-790989 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (143.679754ms)

                                                
                                                
-- stdout --
	* [functional-790989] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19667
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19667-7671/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19667-7671/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0918 20:00:03.441266   24473 out.go:345] Setting OutFile to fd 1 ...
	I0918 20:00:03.441394   24473 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 20:00:03.441401   24473 out.go:358] Setting ErrFile to fd 2...
	I0918 20:00:03.441406   24473 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 20:00:03.441604   24473 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19667-7671/.minikube/bin
	I0918 20:00:03.442106   24473 out.go:352] Setting JSON to false
	I0918 20:00:03.442970   24473 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2547,"bootTime":1726687056,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0918 20:00:03.443034   24473 start.go:139] virtualization: kvm guest
	I0918 20:00:03.444992   24473 out.go:177] * [functional-790989] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0918 20:00:03.446231   24473 out.go:177]   - MINIKUBE_LOCATION=19667
	I0918 20:00:03.446302   24473 notify.go:220] Checking for updates...
	I0918 20:00:03.448724   24473 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 20:00:03.449976   24473 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19667-7671/kubeconfig
	I0918 20:00:03.451324   24473 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19667-7671/.minikube
	I0918 20:00:03.452623   24473 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0918 20:00:03.453770   24473 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0918 20:00:03.455258   24473 config.go:182] Loaded profile config "functional-790989": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0918 20:00:03.455674   24473 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 20:00:03.455740   24473 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 20:00:03.474202   24473 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40255
	I0918 20:00:03.474703   24473 main.go:141] libmachine: () Calling .GetVersion
	I0918 20:00:03.475305   24473 main.go:141] libmachine: Using API Version  1
	I0918 20:00:03.475336   24473 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 20:00:03.475677   24473 main.go:141] libmachine: () Calling .GetMachineName
	I0918 20:00:03.475831   24473 main.go:141] libmachine: (functional-790989) Calling .DriverName
	I0918 20:00:03.476093   24473 driver.go:394] Setting default libvirt URI to qemu:///system
	I0918 20:00:03.476408   24473 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 20:00:03.476454   24473 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 20:00:03.491755   24473 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38799
	I0918 20:00:03.492288   24473 main.go:141] libmachine: () Calling .GetVersion
	I0918 20:00:03.492987   24473 main.go:141] libmachine: Using API Version  1
	I0918 20:00:03.493026   24473 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 20:00:03.493385   24473 main.go:141] libmachine: () Calling .GetMachineName
	I0918 20:00:03.493603   24473 main.go:141] libmachine: (functional-790989) Calling .DriverName
	I0918 20:00:03.528069   24473 out.go:177] * Using the kvm2 driver based on existing profile
	I0918 20:00:03.529402   24473 start.go:297] selected driver: kvm2
	I0918 20:00:03.529421   24473 start.go:901] validating driver "kvm2" against &{Name:functional-790989 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:functional-790989 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.248 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 20:00:03.529561   24473 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 20:00:03.532076   24473 out.go:201] 
	W0918 20:00:03.533387   24473 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0918 20:00:03.534734   24473 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-790989 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.28s)
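
The dry run above is rejected with RSRC_INSUFFICIENT_REQ_MEMORY (exit status 23) because 250MB is below the 1800MB minimum. A minimal Go sketch of the same check, with the binary path, profile, and flags copied from the command shown in the log:

// dry_run_sketch.go: confirm that a --dry-run start with too little memory fails fast.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "start",
		"-p", "functional-790989", "--dry-run", "--memory", "250MB",
		"--alsologtostderr", "--driver=kvm2", "--container-runtime=crio")
	err := cmd.Run()

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// The log above shows exit status 23 for this request.
		fmt.Println("dry run rejected as expected, exit code:", exitErr.ExitCode())
		return
	}
	fmt.Println("unexpected result:", err)
}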

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-790989 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-790989 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (138.131934ms)

                                                
                                                
-- stdout --
	* [functional-790989] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19667
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19667-7671/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19667-7671/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0918 20:00:03.302165   24446 out.go:345] Setting OutFile to fd 1 ...
	I0918 20:00:03.302290   24446 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 20:00:03.302299   24446 out.go:358] Setting ErrFile to fd 2...
	I0918 20:00:03.302303   24446 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 20:00:03.302558   24446 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19667-7671/.minikube/bin
	I0918 20:00:03.303093   24446 out.go:352] Setting JSON to false
	I0918 20:00:03.303999   24446 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2547,"bootTime":1726687056,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0918 20:00:03.304121   24446 start.go:139] virtualization: kvm guest
	I0918 20:00:03.306521   24446 out.go:177] * [functional-790989] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	I0918 20:00:03.307928   24446 out.go:177]   - MINIKUBE_LOCATION=19667
	I0918 20:00:03.307960   24446 notify.go:220] Checking for updates...
	I0918 20:00:03.310790   24446 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 20:00:03.312049   24446 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19667-7671/kubeconfig
	I0918 20:00:03.313174   24446 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19667-7671/.minikube
	I0918 20:00:03.314365   24446 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0918 20:00:03.315661   24446 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0918 20:00:03.317202   24446 config.go:182] Loaded profile config "functional-790989": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0918 20:00:03.317626   24446 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 20:00:03.317691   24446 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 20:00:03.332684   24446 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39815
	I0918 20:00:03.333199   24446 main.go:141] libmachine: () Calling .GetVersion
	I0918 20:00:03.333765   24446 main.go:141] libmachine: Using API Version  1
	I0918 20:00:03.333785   24446 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 20:00:03.334115   24446 main.go:141] libmachine: () Calling .GetMachineName
	I0918 20:00:03.334320   24446 main.go:141] libmachine: (functional-790989) Calling .DriverName
	I0918 20:00:03.334546   24446 driver.go:394] Setting default libvirt URI to qemu:///system
	I0918 20:00:03.334837   24446 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 20:00:03.334869   24446 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 20:00:03.350211   24446 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34053
	I0918 20:00:03.350683   24446 main.go:141] libmachine: () Calling .GetVersion
	I0918 20:00:03.351149   24446 main.go:141] libmachine: Using API Version  1
	I0918 20:00:03.351175   24446 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 20:00:03.351485   24446 main.go:141] libmachine: () Calling .GetMachineName
	I0918 20:00:03.351665   24446 main.go:141] libmachine: (functional-790989) Calling .DriverName
	I0918 20:00:03.385672   24446 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0918 20:00:03.386969   24446 start.go:297] selected driver: kvm2
	I0918 20:00:03.386984   24446 start.go:901] validating driver "kvm2" against &{Name:functional-790989 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:functional-790989 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.248 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 20:00:03.387125   24446 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 20:00:03.389607   24446 out.go:201] 
	W0918 20:00:03.390877   24446 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0918 20:00:03.391961   24446 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.14s)

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (0.96s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-790989 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-790989 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-790989 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.96s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (7.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-790989 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-790989 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-2wzdw" [a44ada62-94ac-46f8-8f89-b2ddfb7aef62] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-2wzdw" [a44ada62-94ac-46f8-8f89-b2ddfb7aef62] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.004549629s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-790989 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.39.248:31977
functional_test.go:1675: http://192.168.39.248:31977: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-67bdd5bbb4-2wzdw

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.248:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.248:31977
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.49s)
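
The steps above (create a deployment, expose it as a NodePort service, resolve the URL with `minikube service --url`, then issue an HTTP GET) can be scripted directly. A minimal sketch in Go under the same kubectl-context and binary-path assumptions; it waits with `kubectl wait` on the deployment instead of the label polling the test does.

// service_connect_sketch.go: deploy, expose, resolve the NodePort URL, and probe it.
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"os/exec"
	"strings"
)

func kubectl(args ...string) {
	full := append([]string{"--context", "functional-790989"}, args...)
	if out, err := exec.Command("kubectl", full...).CombinedOutput(); err != nil {
		log.Fatalf("kubectl %v: %v\n%s", args, err, out)
	}
}

func main() {
	kubectl("create", "deployment", "hello-node-connect", "--image=registry.k8s.io/echoserver:1.8")
	kubectl("expose", "deployment", "hello-node-connect", "--type=NodePort", "--port=8080")
	kubectl("wait", "--for=condition=Available", "deployment/hello-node-connect", "--timeout=2m")

	// Resolve the NodePort URL the same way the test does, then probe it once.
	urlBytes, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-790989",
		"service", "hello-node-connect", "--url").Output()
	if err != nil {
		log.Fatal(err)
	}
	url := strings.TrimSpace(string(urlBytes))

	resp, err := http.Get(url)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("GET %s -> %s\n%s\n", url, resp.Status, body)
}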

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-790989 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-790989 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (26.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [ec5e63f9-7ace-47fa-a1a1-2262ad1128ec] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004391608s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-790989 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-790989 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-790989 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-790989 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [3d0cbd29-9c63-4a45-8d45-fffabcaccd8d] Pending
helpers_test.go:344: "sp-pod" [3d0cbd29-9c63-4a45-8d45-fffabcaccd8d] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [3d0cbd29-9c63-4a45-8d45-fffabcaccd8d] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.004957387s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-790989 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-790989 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-790989 delete -f testdata/storage-provisioner/pod.yaml: (1.131958108s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-790989 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [9f422f69-70d6-4405-988c-6df27610352f] Pending
helpers_test.go:344: "sp-pod" [9f422f69-70d6-4405-988c-6df27610352f] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [9f422f69-70d6-4405-988c-6df27610352f] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.005042669s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-790989 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (26.85s)
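
The point of the sequence above is that data written through the first sp-pod survives deleting and re-creating the pod, because it lives on the claimed volume. A short Go sketch of the same write/recreate/read cycle, assuming the same testdata manifests and kubectl context as the test; it uses `kubectl wait` rather than polling pod phase as the test does.

// pvc_persistence_sketch.go: write through one pod, recreate it, and read the file back.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func kubectl(args ...string) string {
	full := append([]string{"--context", "functional-790989"}, args...)
	out, err := exec.Command("kubectl", full...).CombinedOutput()
	if err != nil {
		log.Fatalf("kubectl %v: %v\n%s", args, err, out)
	}
	return string(out)
}

func main() {
	kubectl("apply", "-f", "testdata/storage-provisioner/pvc.yaml")
	kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	kubectl("wait", "--for=condition=Ready", "pod/sp-pod", "--timeout=3m")

	// Write a marker file, then recreate the pod and confirm the file survived.
	kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
	kubectl("delete", "-f", "testdata/storage-provisioner/pod.yaml")
	kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	kubectl("wait", "--for=condition=Ready", "pod/sp-pod", "--timeout=3m")
	fmt.Println(kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount"))
}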

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-790989 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-790989 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.50s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-790989 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-790989 ssh -n functional-790989 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-790989 cp functional-790989:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd407111941/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-790989 ssh -n functional-790989 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-790989 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-790989 ssh -n functional-790989 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.48s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (26.77s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-790989 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-vwn6x" [25ec0b66-e866-4f97-87ab-85d00d0e39ac] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-vwn6x" [25ec0b66-e866-4f97-87ab-85d00d0e39ac] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 25.005590271s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-790989 exec mysql-6cdb49bbb-vwn6x -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-790989 exec mysql-6cdb49bbb-vwn6x -- mysql -ppassword -e "show databases;": exit status 1 (134.270335ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0918 20:00:26.441039   14878 retry.go:31] will retry after 1.17800614s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-790989 exec mysql-6cdb49bbb-vwn6x -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (26.77s)
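
The single retry above happens because mysqld's unix socket is not yet accepting connections right after the pod reports Running (ERROR 2002). A small Go retry loop for the same probe, assuming the kubectl context from this log; the pod name is the one recorded above and will differ per run, and the retry count and delay are arbitrary.

// mysql_retry_sketch.go: retry `show databases;` until mysqld accepts connections.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"time"
)

func main() {
	pod := "mysql-6cdb49bbb-vwn6x" // taken from this log; look yours up with `kubectl get po -l app=mysql`
	for attempt := 1; attempt <= 5; attempt++ {
		out, err := exec.Command("kubectl", "--context", "functional-790989",
			"exec", pod, "--", "mysql", "-ppassword", "-e", "show databases;").CombinedOutput()
		if err == nil {
			fmt.Printf("%s", out)
			return
		}
		// ERROR 2002 (socket not ready) is expected briefly after startup.
		log.Printf("attempt %d failed: %v; retrying", attempt, err)
		time.Sleep(2 * time.Second)
	}
	log.Fatal("mysql never became reachable")
}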

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/14878/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-790989 ssh "sudo cat /etc/test/nested/copy/14878/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.23s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/14878.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-790989 ssh "sudo cat /etc/ssl/certs/14878.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/14878.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-790989 ssh "sudo cat /usr/share/ca-certificates/14878.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-790989 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/148782.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-790989 ssh "sudo cat /etc/ssl/certs/148782.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/148782.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-790989 ssh "sudo cat /usr/share/ca-certificates/148782.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-790989 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.44s)
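
The checks above confirm that the host certificate was synced to both /etc/ssl/certs and /usr/share/ca-certificates, including its hash-named copy (51391683.0). A minimal Go sketch that probes the same paths, assuming the binary and profile from this log; it uses `test -f` over minikube ssh instead of the `sudo cat` the test runs.

// cert_sync_sketch.go: verify the synced certificate paths exist inside the VM.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	paths := []string{
		"/etc/ssl/certs/14878.pem",
		"/usr/share/ca-certificates/14878.pem",
		"/etc/ssl/certs/51391683.0", // hash-named copy of the same certificate
	}
	for _, p := range paths {
		err := exec.Command("out/minikube-linux-amd64", "-p", "functional-790989",
			"ssh", "sudo test -f "+p).Run()
		fmt.Printf("%-45s present=%v\n", p, err == nil)
	}
}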

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-790989 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.09s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-790989 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-790989 ssh "sudo systemctl is-active docker": exit status 1 (214.2016ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-790989 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-790989 ssh "sudo systemctl is-active containerd": exit status 1 (201.595666ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.42s)
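
With crio as the container runtime, the docker and containerd units should be stopped; `systemctl is-active` prints "inactive" and exits non-zero for them, which the log records as ssh status 3 and an overall exit status 1. A small Go sketch of the same check under the usual binary and profile assumptions:

// runtime_disabled_sketch.go: confirm docker and containerd are not running alongside crio.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	for _, unit := range []string{"docker", "containerd"} {
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-790989",
			"ssh", "sudo systemctl is-active "+unit).CombinedOutput()
		state := strings.TrimSpace(string(out))
		// A non-nil err is expected here: is-active exits non-zero for inactive units.
		fmt.Printf("%s: %s (err=%v)\n", unit, state, err)
		if state == "active" {
			fmt.Printf("unexpected: %s is running alongside crio\n", unit)
		}
	}
}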

                                                
                                    
x
+
TestFunctional/parallel/License (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.43s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (12.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-790989 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-790989 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-4rbr8" [6eceac02-43f0-4c60-8871-5b909e2c725d] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-4rbr8" [6eceac02-43f0-4c60-8871-5b909e2c725d] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 12.004231782s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (12.25s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (24.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-790989 /tmp/TestFunctionalparallelMountCmdany-port810708459/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1726689601408750634" to /tmp/TestFunctionalparallelMountCmdany-port810708459/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1726689601408750634" to /tmp/TestFunctionalparallelMountCmdany-port810708459/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1726689601408750634" to /tmp/TestFunctionalparallelMountCmdany-port810708459/001/test-1726689601408750634
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-790989 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-790989 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (238.613456ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0918 20:00:01.648344   14878 retry.go:31] will retry after 356.472651ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-790989 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-790989 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 18 20:00 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 18 20:00 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 18 20:00 test-1726689601408750634
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-790989 ssh cat /mount-9p/test-1726689601408750634
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-790989 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [14c69e49-a76d-4e97-8c54-d89d0766cf85] Pending
helpers_test.go:344: "busybox-mount" [14c69e49-a76d-4e97-8c54-d89d0766cf85] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [14c69e49-a76d-4e97-8c54-d89d0766cf85] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [14c69e49-a76d-4e97-8c54-d89d0766cf85] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 22.021594352s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-790989 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-790989 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-790989 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-790989 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-790989 /tmp/TestFunctionalparallelMountCmdany-port810708459/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (24.50s)
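A rough sketch of the same 9p mount round-trip run manually (the host path here is illustrative; the mount command blocks, so it is backgrounded, and the ssh/umount steps mirror the log above):

# Prepare a host directory and start the 9p mount in the background.
mkdir -p /tmp/mount-src && date > /tmp/mount-src/created-by-test
out/minikube-linux-amd64 mount -p functional-790989 /tmp/mount-src:/mount-9p &
sleep 5   # give the mount a moment; the test retries findmnt instead
# Verify the mount from inside the guest, then clean up.
out/minikube-linux-amd64 -p functional-790989 ssh "findmnt -T /mount-9p | grep 9p"
out/minikube-linux-amd64 -p functional-790989 ssh -- ls -la /mount-9p
out/minikube-linux-amd64 -p functional-790989 ssh "sudo umount -f /mount-9p"
kill %1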

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "345.184393ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "46.285195ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.39s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "285.307215ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "46.404315ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.33s)
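The JSON form of profile list is machine-readable; a small sketch of extracting profile names from it (jq is assumed to be installed, and the valid/invalid arrays are assumed to follow minikube's usual profile-list schema):

# Print the names of all valid profiles.
out/minikube-linux-amd64 profile list -o json | jq -r '.valid[].Name'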

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.83s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-790989 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.83s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.94s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-790989 service list -o json
functional_test.go:1494: Took "939.729302ms" to run "out/minikube-linux-amd64 -p functional-790989 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.94s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-790989 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.39.248:31151
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.39s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.94s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-790989 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.94s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-790989 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.39.248:31151
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.35s)
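The ServiceCmd checks above all resolve the same NodePort endpoint; a compact sketch of doing it by hand (curl on the host is assumed; the hello-node service comes from the DeployApp step earlier in this report):

# Resolve the service URL and hit the endpoint directly.
URL=$(out/minikube-linux-amd64 -p functional-790989 service hello-node --url)
echo "endpoint: $URL"
curl -s "$URL" | head -n 5   # echoserver reflects the request back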

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-790989 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-790989 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-790989 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.10s)
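update-context rewrites the kubeconfig entry for the profile so kubectl points at the current VM address; a short sketch of verifying that by hand (kubectl assumed on PATH):

# Refresh the kubeconfig entry, then confirm kubectl can reach the cluster.
out/minikube-linux-amd64 -p functional-790989 update-context
kubectl config current-context                      # should print functional-790989
kubectl --context functional-790989 get nodes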

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-790989 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.81s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-790989 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.81s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-790989 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-790989 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
localhost/minikube-local-cache-test:functional-790989
localhost/kicbase/echo-server:functional-790989
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20240813-c6f155d6
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-790989 image ls --format short --alsologtostderr:
I0918 20:00:31.053268   26316 out.go:345] Setting OutFile to fd 1 ...
I0918 20:00:31.053417   26316 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0918 20:00:31.053427   26316 out.go:358] Setting ErrFile to fd 2...
I0918 20:00:31.053431   26316 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0918 20:00:31.053624   26316 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19667-7671/.minikube/bin
I0918 20:00:31.054233   26316 config.go:182] Loaded profile config "functional-790989": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0918 20:00:31.054344   26316 config.go:182] Loaded profile config "functional-790989": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0918 20:00:31.054749   26316 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0918 20:00:31.054794   26316 main.go:141] libmachine: Launching plugin server for driver kvm2
I0918 20:00:31.070434   26316 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44815
I0918 20:00:31.070948   26316 main.go:141] libmachine: () Calling .GetVersion
I0918 20:00:31.071564   26316 main.go:141] libmachine: Using API Version  1
I0918 20:00:31.071597   26316 main.go:141] libmachine: () Calling .SetConfigRaw
I0918 20:00:31.071955   26316 main.go:141] libmachine: () Calling .GetMachineName
I0918 20:00:31.072144   26316 main.go:141] libmachine: (functional-790989) Calling .GetState
I0918 20:00:31.074026   26316 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0918 20:00:31.074064   26316 main.go:141] libmachine: Launching plugin server for driver kvm2
I0918 20:00:31.092406   26316 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38005
I0918 20:00:31.092887   26316 main.go:141] libmachine: () Calling .GetVersion
I0918 20:00:31.093443   26316 main.go:141] libmachine: Using API Version  1
I0918 20:00:31.093468   26316 main.go:141] libmachine: () Calling .SetConfigRaw
I0918 20:00:31.093828   26316 main.go:141] libmachine: () Calling .GetMachineName
I0918 20:00:31.094004   26316 main.go:141] libmachine: (functional-790989) Calling .DriverName
I0918 20:00:31.094212   26316 ssh_runner.go:195] Run: systemctl --version
I0918 20:00:31.094259   26316 main.go:141] libmachine: (functional-790989) Calling .GetSSHHostname
I0918 20:00:31.097793   26316 main.go:141] libmachine: (functional-790989) DBG | domain functional-790989 has defined MAC address 52:54:00:55:09:dc in network mk-functional-790989
I0918 20:00:31.098296   26316 main.go:141] libmachine: (functional-790989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:09:dc", ip: ""} in network mk-functional-790989: {Iface:virbr1 ExpiryTime:2024-09-18 20:57:21 +0000 UTC Type:0 Mac:52:54:00:55:09:dc Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:functional-790989 Clientid:01:52:54:00:55:09:dc}
I0918 20:00:31.098346   26316 main.go:141] libmachine: (functional-790989) DBG | domain functional-790989 has defined IP address 192.168.39.248 and MAC address 52:54:00:55:09:dc in network mk-functional-790989
I0918 20:00:31.098520   26316 main.go:141] libmachine: (functional-790989) Calling .GetSSHPort
I0918 20:00:31.098711   26316 main.go:141] libmachine: (functional-790989) Calling .GetSSHKeyPath
I0918 20:00:31.098868   26316 main.go:141] libmachine: (functional-790989) Calling .GetSSHUsername
I0918 20:00:31.099009   26316 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/functional-790989/id_rsa Username:docker}
I0918 20:00:31.195003   26316 ssh_runner.go:195] Run: sudo crictl images --output json
I0918 20:00:31.291778   26316 main.go:141] libmachine: Making call to close driver server
I0918 20:00:31.291796   26316 main.go:141] libmachine: (functional-790989) Calling .Close
I0918 20:00:31.292089   26316 main.go:141] libmachine: Successfully made call to close driver server
I0918 20:00:31.292098   26316 main.go:141] libmachine: (functional-790989) DBG | Closing plugin on server side
I0918 20:00:31.292109   26316 main.go:141] libmachine: Making call to close connection to plugin binary
I0918 20:00:31.292137   26316 main.go:141] libmachine: Making call to close driver server
I0918 20:00:31.292146   26316 main.go:141] libmachine: (functional-790989) Calling .Close
I0918 20:00:31.292418   26316 main.go:141] libmachine: Successfully made call to close driver server
I0918 20:00:31.292434   26316 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.29s)
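As the stderr above shows, image ls ultimately sshes into the node and runs crictl; a sketch of the same listing done directly (crictl ships in the node image; jq on the host is an assumption):

# List image repo:tags straight from CRI-O inside the node.
out/minikube-linux-amd64 -p functional-790989 ssh -- sudo crictl images --output json \
  | jq -r '.images[].repoTags[]'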

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-790989 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-790989 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| docker.io/kindest/kindnetd              | v20240813-c6f155d6 | 12968670680f4 | 87.2MB |
| registry.k8s.io/etcd                    | 3.5.15-0           | 2e96e5913fc06 | 149MB  |
| registry.k8s.io/kube-scheduler          | v1.31.1            | 9aa1fad941575 | 68.4MB |
| registry.k8s.io/coredns/coredns         | v1.11.3            | c69fa2e9cbf5f | 63.3MB |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| localhost/minikube-local-cache-test     | functional-790989  | 7b923dfa3f166 | 3.33kB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-apiserver          | v1.31.1            | 6bab7719df100 | 95.2MB |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| gcr.io/k8s-minikube/busybox             | latest             | beae173ccac6a | 1.46MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| localhost/kicbase/echo-server           | functional-790989  | 9056ab77afb8e | 4.94MB |
| localhost/my-image                      | functional-790989  | 83e0d86cbd5cc | 1.47MB |
| registry.k8s.io/kube-controller-manager | v1.31.1            | 175ffd71cce3d | 89.4MB |
| registry.k8s.io/kube-proxy              | v1.31.1            | 60c005f310ff3 | 92.7MB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-790989 image ls --format table --alsologtostderr:
I0918 20:00:36.620042   26574 out.go:345] Setting OutFile to fd 1 ...
I0918 20:00:36.620161   26574 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0918 20:00:36.620166   26574 out.go:358] Setting ErrFile to fd 2...
I0918 20:00:36.620171   26574 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0918 20:00:36.620375   26574 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19667-7671/.minikube/bin
I0918 20:00:36.620961   26574 config.go:182] Loaded profile config "functional-790989": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0918 20:00:36.621056   26574 config.go:182] Loaded profile config "functional-790989": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0918 20:00:36.621420   26574 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0918 20:00:36.621461   26574 main.go:141] libmachine: Launching plugin server for driver kvm2
I0918 20:00:36.637171   26574 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33843
I0918 20:00:36.637712   26574 main.go:141] libmachine: () Calling .GetVersion
I0918 20:00:36.638410   26574 main.go:141] libmachine: Using API Version  1
I0918 20:00:36.638447   26574 main.go:141] libmachine: () Calling .SetConfigRaw
I0918 20:00:36.638808   26574 main.go:141] libmachine: () Calling .GetMachineName
I0918 20:00:36.639006   26574 main.go:141] libmachine: (functional-790989) Calling .GetState
I0918 20:00:36.641060   26574 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0918 20:00:36.641100   26574 main.go:141] libmachine: Launching plugin server for driver kvm2
I0918 20:00:36.659251   26574 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38657
I0918 20:00:36.659879   26574 main.go:141] libmachine: () Calling .GetVersion
I0918 20:00:36.660539   26574 main.go:141] libmachine: Using API Version  1
I0918 20:00:36.660564   26574 main.go:141] libmachine: () Calling .SetConfigRaw
I0918 20:00:36.660937   26574 main.go:141] libmachine: () Calling .GetMachineName
I0918 20:00:36.661148   26574 main.go:141] libmachine: (functional-790989) Calling .DriverName
I0918 20:00:36.661332   26574 ssh_runner.go:195] Run: systemctl --version
I0918 20:00:36.661356   26574 main.go:141] libmachine: (functional-790989) Calling .GetSSHHostname
I0918 20:00:36.664004   26574 main.go:141] libmachine: (functional-790989) DBG | domain functional-790989 has defined MAC address 52:54:00:55:09:dc in network mk-functional-790989
I0918 20:00:36.664453   26574 main.go:141] libmachine: (functional-790989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:09:dc", ip: ""} in network mk-functional-790989: {Iface:virbr1 ExpiryTime:2024-09-18 20:57:21 +0000 UTC Type:0 Mac:52:54:00:55:09:dc Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:functional-790989 Clientid:01:52:54:00:55:09:dc}
I0918 20:00:36.664492   26574 main.go:141] libmachine: (functional-790989) DBG | domain functional-790989 has defined IP address 192.168.39.248 and MAC address 52:54:00:55:09:dc in network mk-functional-790989
I0918 20:00:36.664650   26574 main.go:141] libmachine: (functional-790989) Calling .GetSSHPort
I0918 20:00:36.664832   26574 main.go:141] libmachine: (functional-790989) Calling .GetSSHKeyPath
I0918 20:00:36.664974   26574 main.go:141] libmachine: (functional-790989) Calling .GetSSHUsername
I0918 20:00:36.665119   26574 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/functional-790989/id_rsa Username:docker}
I0918 20:00:36.750389   26574 ssh_runner.go:195] Run: sudo crictl images --output json
I0918 20:00:36.789867   26574 main.go:141] libmachine: Making call to close driver server
I0918 20:00:36.789888   26574 main.go:141] libmachine: (functional-790989) Calling .Close
I0918 20:00:36.790198   26574 main.go:141] libmachine: Successfully made call to close driver server
I0918 20:00:36.790221   26574 main.go:141] libmachine: Making call to close connection to plugin binary
I0918 20:00:36.790236   26574 main.go:141] libmachine: Making call to close driver server
I0918 20:00:36.790244   26574 main.go:141] libmachine: (functional-790989) Calling .Close
I0918 20:00:36.790225   26574 main.go:141] libmachine: (functional-790989) DBG | Closing plugin on server side
I0918 20:00:36.790493   26574 main.go:141] libmachine: (functional-790989) DBG | Closing plugin on server side
I0918 20:00:36.790525   26574 main.go:141] libmachine: Successfully made call to close driver server
I0918 20:00:36.790547   26574 main.go:141] libmachine: Making call to close connection to plugin binary
2024/09/18 20:00:41 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-790989 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-790989 image ls --format json --alsologtostderr:
[{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"742080"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee","repoDigests":["registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771","registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"95237600"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48
bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-790989"],"size":"4943877"},{"id":"83e0d86cbd5cc0bd2e6aabae5dfaadb54431a8bf88523d9c500183014ef868d8","repoDigests":["localhost/my-image@sha256:39909c2fe0a6cb9ecf3ac3b196a0e697ca30bce892373973fcae32746aead217"],"repoTags":["localhost/my-image:functional-790989"],"size":"1468599"},{"id":"60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561","repoDigests":["registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e63
4b4c6d44","registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"92733849"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f","repoDigests":["docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b","docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"],"repoTags":["docker.io/kindest/kindnetd:v20240813-c6f155d6"],"size":"87190579"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboa
rd@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","g
cr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"7b923dfa3f166115bdba5aac6c79a6b0a5ee95238416c002e2ac5cca7ba78c3d","repoDigests":["localhost/minikube-local-cache-test@sha256:c2da147ff5ef2bedc7b4d4f33729594ccf112141754cffa41cdd2dcd36273407"],"repoTags":["localhost/minikube-local-cache-test:functional-790989"],"size":"3330"},{"id":"9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b","repoDigests":["registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0","registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"68420934"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea9
29e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"588bd29b62b28a90f03a720eac5f30cae49d23022b09bf750313683e2e10d98c","repoDigests":["docker.io/library/c163b2e5351627493ec32c59170068e1a8f0f6308765eeb47c8593c4d9780066-tmp@sha256:a9837c4da3e8ec256e24fba91de54bd292cc69e16d18daa57880856b177fb292"],"repoTags":[],"size":"1466018"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1","repoDigests":
["registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1","registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"89437508"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e","registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"63273227"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":["registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d","registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"14900
9664"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-790989 image ls --format json --alsologtostderr:
I0918 20:00:36.477043   26539 out.go:345] Setting OutFile to fd 1 ...
I0918 20:00:36.477471   26539 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0918 20:00:36.477557   26539 out.go:358] Setting ErrFile to fd 2...
I0918 20:00:36.477573   26539 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0918 20:00:36.477827   26539 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19667-7671/.minikube/bin
I0918 20:00:36.478527   26539 config.go:182] Loaded profile config "functional-790989": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0918 20:00:36.478658   26539 config.go:182] Loaded profile config "functional-790989": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0918 20:00:36.479063   26539 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0918 20:00:36.479116   26539 main.go:141] libmachine: Launching plugin server for driver kvm2
I0918 20:00:36.494641   26539 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42781
I0918 20:00:36.495099   26539 main.go:141] libmachine: () Calling .GetVersion
I0918 20:00:36.495656   26539 main.go:141] libmachine: Using API Version  1
I0918 20:00:36.495676   26539 main.go:141] libmachine: () Calling .SetConfigRaw
I0918 20:00:36.496138   26539 main.go:141] libmachine: () Calling .GetMachineName
I0918 20:00:36.496423   26539 main.go:141] libmachine: (functional-790989) Calling .GetState
I0918 20:00:36.498192   26539 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0918 20:00:36.498236   26539 main.go:141] libmachine: Launching plugin server for driver kvm2
I0918 20:00:36.514275   26539 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43027
I0918 20:00:36.514744   26539 main.go:141] libmachine: () Calling .GetVersion
I0918 20:00:36.515310   26539 main.go:141] libmachine: Using API Version  1
I0918 20:00:36.515334   26539 main.go:141] libmachine: () Calling .SetConfigRaw
I0918 20:00:36.515684   26539 main.go:141] libmachine: () Calling .GetMachineName
I0918 20:00:36.515866   26539 main.go:141] libmachine: (functional-790989) Calling .DriverName
I0918 20:00:36.516064   26539 ssh_runner.go:195] Run: systemctl --version
I0918 20:00:36.516087   26539 main.go:141] libmachine: (functional-790989) Calling .GetSSHHostname
I0918 20:00:36.519229   26539 main.go:141] libmachine: (functional-790989) DBG | domain functional-790989 has defined MAC address 52:54:00:55:09:dc in network mk-functional-790989
I0918 20:00:36.519810   26539 main.go:141] libmachine: (functional-790989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:09:dc", ip: ""} in network mk-functional-790989: {Iface:virbr1 ExpiryTime:2024-09-18 20:57:21 +0000 UTC Type:0 Mac:52:54:00:55:09:dc Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:functional-790989 Clientid:01:52:54:00:55:09:dc}
I0918 20:00:36.519844   26539 main.go:141] libmachine: (functional-790989) DBG | domain functional-790989 has defined IP address 192.168.39.248 and MAC address 52:54:00:55:09:dc in network mk-functional-790989
I0918 20:00:36.520040   26539 main.go:141] libmachine: (functional-790989) Calling .GetSSHPort
I0918 20:00:36.520230   26539 main.go:141] libmachine: (functional-790989) Calling .GetSSHKeyPath
I0918 20:00:36.520383   26539 main.go:141] libmachine: (functional-790989) Calling .GetSSHUsername
I0918 20:00:36.520536   26539 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/functional-790989/id_rsa Username:docker}
I0918 20:00:36.606845   26539 ssh_runner.go:195] Run: sudo crictl images --output json
I0918 20:00:36.648569   26539 main.go:141] libmachine: Making call to close driver server
I0918 20:00:36.648584   26539 main.go:141] libmachine: (functional-790989) Calling .Close
I0918 20:00:36.648988   26539 main.go:141] libmachine: (functional-790989) DBG | Closing plugin on server side
I0918 20:00:36.649004   26539 main.go:141] libmachine: Successfully made call to close driver server
I0918 20:00:36.649020   26539 main.go:141] libmachine: Making call to close connection to plugin binary
I0918 20:00:36.649042   26539 main.go:141] libmachine: Making call to close driver server
I0918 20:00:36.649182   26539 main.go:141] libmachine: (functional-790989) Calling .Close
I0918 20:00:36.649442   26539 main.go:141] libmachine: Successfully made call to close driver server
I0918 20:00:36.649456   26539 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)
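The JSON format is an array of image objects (id, repoDigests, repoTags, size); a short sketch of filtering it on the host (jq assumed installed):

# Show only the images tagged for this functional-790989 test run.
out/minikube-linux-amd64 -p functional-790989 image ls --format json \
  | jq -r '.[] | select(any(.repoTags[]?; test("functional-790989"))) | .repoTags[]'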

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-790989 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-790989 image ls --format yaml --alsologtostderr:
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771
- registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "95237600"
- id: 60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44
- registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "92733849"
- id: 9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0
- registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "68420934"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
- registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "63273227"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-790989
size: "4943877"
- id: 7b923dfa3f166115bdba5aac6c79a6b0a5ee95238416c002e2ac5cca7ba78c3d
repoDigests:
- localhost/minikube-local-cache-test@sha256:c2da147ff5ef2bedc7b4d4f33729594ccf112141754cffa41cdd2dcd36273407
repoTags:
- localhost/minikube-local-cache-test:functional-790989
size: "3330"
- id: 12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f
repoDigests:
- docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b
- docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166
repoTags:
- docker.io/kindest/kindnetd:v20240813-c6f155d6
size: "87190579"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests:
- registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "149009664"
- id: 175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1
- registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "89437508"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-790989 image ls --format yaml --alsologtostderr:
I0918 20:00:31.348429   26356 out.go:345] Setting OutFile to fd 1 ...
I0918 20:00:31.348557   26356 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0918 20:00:31.348567   26356 out.go:358] Setting ErrFile to fd 2...
I0918 20:00:31.348574   26356 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0918 20:00:31.348873   26356 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19667-7671/.minikube/bin
I0918 20:00:31.349776   26356 config.go:182] Loaded profile config "functional-790989": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0918 20:00:31.349911   26356 config.go:182] Loaded profile config "functional-790989": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0918 20:00:31.350490   26356 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0918 20:00:31.350541   26356 main.go:141] libmachine: Launching plugin server for driver kvm2
I0918 20:00:31.365793   26356 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36617
I0918 20:00:31.366306   26356 main.go:141] libmachine: () Calling .GetVersion
I0918 20:00:31.366915   26356 main.go:141] libmachine: Using API Version  1
I0918 20:00:31.366936   26356 main.go:141] libmachine: () Calling .SetConfigRaw
I0918 20:00:31.367415   26356 main.go:141] libmachine: () Calling .GetMachineName
I0918 20:00:31.367649   26356 main.go:141] libmachine: (functional-790989) Calling .GetState
I0918 20:00:31.369865   26356 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0918 20:00:31.369916   26356 main.go:141] libmachine: Launching plugin server for driver kvm2
I0918 20:00:31.384876   26356 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33879
I0918 20:00:31.385402   26356 main.go:141] libmachine: () Calling .GetVersion
I0918 20:00:31.386049   26356 main.go:141] libmachine: Using API Version  1
I0918 20:00:31.386118   26356 main.go:141] libmachine: () Calling .SetConfigRaw
I0918 20:00:31.386440   26356 main.go:141] libmachine: () Calling .GetMachineName
I0918 20:00:31.386605   26356 main.go:141] libmachine: (functional-790989) Calling .DriverName
I0918 20:00:31.386838   26356 ssh_runner.go:195] Run: systemctl --version
I0918 20:00:31.386875   26356 main.go:141] libmachine: (functional-790989) Calling .GetSSHHostname
I0918 20:00:31.389993   26356 main.go:141] libmachine: (functional-790989) DBG | domain functional-790989 has defined MAC address 52:54:00:55:09:dc in network mk-functional-790989
I0918 20:00:31.390400   26356 main.go:141] libmachine: (functional-790989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:09:dc", ip: ""} in network mk-functional-790989: {Iface:virbr1 ExpiryTime:2024-09-18 20:57:21 +0000 UTC Type:0 Mac:52:54:00:55:09:dc Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:functional-790989 Clientid:01:52:54:00:55:09:dc}
I0918 20:00:31.390435   26356 main.go:141] libmachine: (functional-790989) DBG | domain functional-790989 has defined IP address 192.168.39.248 and MAC address 52:54:00:55:09:dc in network mk-functional-790989
I0918 20:00:31.390598   26356 main.go:141] libmachine: (functional-790989) Calling .GetSSHPort
I0918 20:00:31.390762   26356 main.go:141] libmachine: (functional-790989) Calling .GetSSHKeyPath
I0918 20:00:31.390924   26356 main.go:141] libmachine: (functional-790989) Calling .GetSSHUsername
I0918 20:00:31.391065   26356 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/functional-790989/id_rsa Username:docker}
I0918 20:00:31.522648   26356 ssh_runner.go:195] Run: sudo crictl images --output json
I0918 20:00:31.645275   26356 main.go:141] libmachine: Making call to close driver server
I0918 20:00:31.645292   26356 main.go:141] libmachine: (functional-790989) Calling .Close
I0918 20:00:31.645622   26356 main.go:141] libmachine: Successfully made call to close driver server
I0918 20:00:31.645640   26356 main.go:141] libmachine: Making call to close connection to plugin binary
I0918 20:00:31.645738   26356 main.go:141] libmachine: Making call to close driver server
I0918 20:00:31.645753   26356 main.go:141] libmachine: (functional-790989) Calling .Close
I0918 20:00:31.645695   26356 main.go:141] libmachine: (functional-790989) DBG | Closing plugin on server side
I0918 20:00:31.646030   26356 main.go:141] libmachine: Successfully made call to close driver server
I0918 20:00:31.646049   26356 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.35s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (4.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-790989 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-790989 ssh pgrep buildkitd: exit status 1 (236.173269ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-790989 image build -t localhost/my-image:functional-790989 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-790989 image build -t localhost/my-image:functional-790989 testdata/build --alsologtostderr: (4.320459777s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-amd64 -p functional-790989 image build -t localhost/my-image:functional-790989 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 588bd29b62b
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-790989
--> 83e0d86cbd5
Successfully tagged localhost/my-image:functional-790989
83e0d86cbd5cc0bd2e6aabae5dfaadb54431a8bf88523d9c500183014ef868d8
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-790989 image build -t localhost/my-image:functional-790989 testdata/build --alsologtostderr:
I0918 20:00:31.933259   26408 out.go:345] Setting OutFile to fd 1 ...
I0918 20:00:31.933455   26408 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0918 20:00:31.933465   26408 out.go:358] Setting ErrFile to fd 2...
I0918 20:00:31.933469   26408 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0918 20:00:31.933680   26408 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19667-7671/.minikube/bin
I0918 20:00:31.934334   26408 config.go:182] Loaded profile config "functional-790989": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0918 20:00:31.934991   26408 config.go:182] Loaded profile config "functional-790989": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0918 20:00:31.935439   26408 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0918 20:00:31.935483   26408 main.go:141] libmachine: Launching plugin server for driver kvm2
I0918 20:00:31.951877   26408 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36913
I0918 20:00:31.952546   26408 main.go:141] libmachine: () Calling .GetVersion
I0918 20:00:31.953154   26408 main.go:141] libmachine: Using API Version  1
I0918 20:00:31.953171   26408 main.go:141] libmachine: () Calling .SetConfigRaw
I0918 20:00:31.953563   26408 main.go:141] libmachine: () Calling .GetMachineName
I0918 20:00:31.953754   26408 main.go:141] libmachine: (functional-790989) Calling .GetState
I0918 20:00:31.956033   26408 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0918 20:00:31.956085   26408 main.go:141] libmachine: Launching plugin server for driver kvm2
I0918 20:00:31.971737   26408 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37589
I0918 20:00:31.972202   26408 main.go:141] libmachine: () Calling .GetVersion
I0918 20:00:31.972657   26408 main.go:141] libmachine: Using API Version  1
I0918 20:00:31.972680   26408 main.go:141] libmachine: () Calling .SetConfigRaw
I0918 20:00:31.973013   26408 main.go:141] libmachine: () Calling .GetMachineName
I0918 20:00:31.973210   26408 main.go:141] libmachine: (functional-790989) Calling .DriverName
I0918 20:00:31.973438   26408 ssh_runner.go:195] Run: systemctl --version
I0918 20:00:31.973469   26408 main.go:141] libmachine: (functional-790989) Calling .GetSSHHostname
I0918 20:00:31.976387   26408 main.go:141] libmachine: (functional-790989) DBG | domain functional-790989 has defined MAC address 52:54:00:55:09:dc in network mk-functional-790989
I0918 20:00:31.976886   26408 main.go:141] libmachine: (functional-790989) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:09:dc", ip: ""} in network mk-functional-790989: {Iface:virbr1 ExpiryTime:2024-09-18 20:57:21 +0000 UTC Type:0 Mac:52:54:00:55:09:dc Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:functional-790989 Clientid:01:52:54:00:55:09:dc}
I0918 20:00:31.976912   26408 main.go:141] libmachine: (functional-790989) DBG | domain functional-790989 has defined IP address 192.168.39.248 and MAC address 52:54:00:55:09:dc in network mk-functional-790989
I0918 20:00:31.977017   26408 main.go:141] libmachine: (functional-790989) Calling .GetSSHPort
I0918 20:00:31.977178   26408 main.go:141] libmachine: (functional-790989) Calling .GetSSHKeyPath
I0918 20:00:31.977344   26408 main.go:141] libmachine: (functional-790989) Calling .GetSSHUsername
I0918 20:00:31.977546   26408 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/functional-790989/id_rsa Username:docker}
I0918 20:00:32.099511   26408 build_images.go:161] Building image from path: /tmp/build.623725787.tar
I0918 20:00:32.099592   26408 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0918 20:00:32.122933   26408 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.623725787.tar
I0918 20:00:32.127687   26408 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.623725787.tar: stat -c "%s %y" /var/lib/minikube/build/build.623725787.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.623725787.tar': No such file or directory
I0918 20:00:32.127731   26408 ssh_runner.go:362] scp /tmp/build.623725787.tar --> /var/lib/minikube/build/build.623725787.tar (3072 bytes)
I0918 20:00:32.157041   26408 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.623725787
I0918 20:00:32.214863   26408 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.623725787 -xf /var/lib/minikube/build/build.623725787.tar
I0918 20:00:32.254124   26408 crio.go:315] Building image: /var/lib/minikube/build/build.623725787
I0918 20:00:32.254222   26408 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-790989 /var/lib/minikube/build/build.623725787 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0918 20:00:36.179497   26408 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-790989 /var/lib/minikube/build/build.623725787 --cgroup-manager=cgroupfs: (3.925239003s)
I0918 20:00:36.179576   26408 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.623725787
I0918 20:00:36.192060   26408 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.623725787.tar
I0918 20:00:36.202827   26408 build_images.go:217] Built localhost/my-image:functional-790989 from /tmp/build.623725787.tar
I0918 20:00:36.202859   26408 build_images.go:133] succeeded building to: functional-790989
I0918 20:00:36.202863   26408 build_images.go:134] failed building to: 
I0918 20:00:36.202890   26408 main.go:141] libmachine: Making call to close driver server
I0918 20:00:36.202898   26408 main.go:141] libmachine: (functional-790989) Calling .Close
I0918 20:00:36.203237   26408 main.go:141] libmachine: (functional-790989) DBG | Closing plugin on server side
I0918 20:00:36.203270   26408 main.go:141] libmachine: Successfully made call to close driver server
I0918 20:00:36.203286   26408 main.go:141] libmachine: Making call to close connection to plugin binary
I0918 20:00:36.203299   26408 main.go:141] libmachine: Making call to close driver server
I0918 20:00:36.203310   26408 main.go:141] libmachine: (functional-790989) Calling .Close
I0918 20:00:36.203525   26408 main.go:141] libmachine: Successfully made call to close driver server
I0918 20:00:36.203545   26408 main.go:141] libmachine: Making call to close connection to plugin binary
I0918 20:00:36.203564   26408 main.go:141] libmachine: (functional-790989) DBG | Closing plugin on server side
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-790989 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.78s)
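
The image-build pass above follows minikube's build path end to end: the CLI tars the local build context, copies it to /var/lib/minikube/build on the node over SSH, and runs podman build against the unpacked directory. A minimal way to drive the same flow by hand, assuming a build context directory ./test-context (that directory name is hypothetical; the profile and tag are the ones from the log):

    # build directly into the node's CRI-O/podman image store
    out/minikube-linux-amd64 -p functional-790989 image build -t localhost/my-image:functional-790989 ./test-context
    # confirm the new tag is visible to the container runtime
    out/minikube-linux-amd64 -p functional-790989 image ls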

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (1.96s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.94451894s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-790989
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.96s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-790989 image load --daemon kicbase/echo-server:functional-790989 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-amd64 -p functional-790989 image load --daemon kicbase/echo-server:functional-790989 --alsologtostderr: (1.472966251s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-790989 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.70s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.89s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-790989 image load --daemon kicbase/echo-server:functional-790989 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-790989 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.89s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-790989
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-790989 image load --daemon kicbase/echo-server:functional-790989 --alsologtostderr
functional_test.go:245: (dbg) Done: out/minikube-linux-amd64 -p functional-790989 image load --daemon kicbase/echo-server:functional-790989 --alsologtostderr: (1.387927906s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-790989 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.45s)
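
Taken together, the Setup and load-daemon passes above are a tag-and-load round trip from the host Docker daemon into the cluster runtime; every command below is one the tests logged:

    # pull and retag on the host docker daemon
    docker pull kicbase/echo-server:latest
    docker tag kicbase/echo-server:latest kicbase/echo-server:functional-790989
    # copy the host-side image into the cluster's container runtime
    out/minikube-linux-amd64 -p functional-790989 image load --daemon kicbase/echo-server:functional-790989
    # verify CRI-O now lists it inside the node
    out/minikube-linux-amd64 -p functional-790989 image ls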

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (1.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-790989 /tmp/TestFunctionalparallelMountCmdspecific-port3618078650/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-790989 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-790989 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (218.15306ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0918 20:00:26.129645   14878 retry.go:31] will retry after 480.497655ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-790989 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-790989 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-790989 /tmp/TestFunctionalparallelMountCmdspecific-port3618078650/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-790989 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-790989 ssh "sudo umount -f /mount-9p": exit status 1 (214.927929ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-790989 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-790989 /tmp/TestFunctionalparallelMountCmdspecific-port3618078650/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.75s)
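
The first non-zero findmnt exit above is expected; the 9p server takes a moment to come up and the test simply retries. The mount/verify/unmount cycle it drives looks roughly like this (commands and port are from the log; /tmp/mount-src is a placeholder host directory):

    # serve a host directory into the guest on a fixed 9p port
    out/minikube-linux-amd64 mount -p functional-790989 /tmp/mount-src:/mount-9p --port 46464 &
    # confirm the guest sees a 9p filesystem at the mount point
    out/minikube-linux-amd64 -p functional-790989 ssh "findmnt -T /mount-9p | grep 9p"
    # tear it down; "not mounted" only means the mount process already cleaned up
    out/minikube-linux-amd64 -p functional-790989 ssh "sudo umount -f /mount-9p"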

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-790989 image save kicbase/echo-server:functional-790989 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.52s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-790989 image rm kicbase/echo-server:functional-790989 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-790989 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.57s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-790989 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-790989 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.12s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (1.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-790989 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3664297920/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-790989 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3664297920/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-790989 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3664297920/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-790989 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-790989 ssh "findmnt -T" /mount1: exit status 1 (268.959156ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0918 20:00:27.929285   14878 retry.go:31] will retry after 369.246649ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-790989 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-790989 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-790989 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-790989 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-790989 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3664297920/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-790989 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3664297920/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-790989 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3664297920/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.43s)
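
The cleanup path above does not unmount each target individually; it relies on minikube's kill switch for all mount helpers of a profile. A sketch, with /tmp/src standing in for the host directory:

    out/minikube-linux-amd64 mount -p functional-790989 /tmp/src:/mount1 &
    out/minikube-linux-amd64 mount -p functional-790989 /tmp/src:/mount2 &
    out/minikube-linux-amd64 -p functional-790989 ssh "findmnt -T" /mount1
    # terminate every mount process for this profile in one call
    out/minikube-linux-amd64 mount -p functional-790989 --kill=true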

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.88s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-790989
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-790989 image save --daemon kicbase/echo-server:functional-790989 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-790989
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.88s)
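
The four image tests above amount to a save/remove/restore round trip; stitched together from the logged commands (the tarball path is shortened here, any writable host path works):

    # export the image from the cluster runtime to a tarball on the host
    out/minikube-linux-amd64 -p functional-790989 image save kicbase/echo-server:functional-790989 ./echo-server-save.tar
    # remove it from the cluster runtime, then restore it from the tarball
    out/minikube-linux-amd64 -p functional-790989 image rm kicbase/echo-server:functional-790989
    out/minikube-linux-amd64 -p functional-790989 image load ./echo-server-save.tar
    # or copy it back into the host docker daemon instead
    out/minikube-linux-amd64 -p functional-790989 image save --daemon kicbase/echo-server:functional-790989
    docker image inspect localhost/kicbase/echo-server:functional-790989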

                                                
                                    
x
+
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-790989
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-790989
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-790989
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (205.18s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-091565 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0918 20:01:12.176145   14878 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/addons-815929/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:01:12.182556   14878 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/addons-815929/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:01:12.194030   14878 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/addons-815929/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:01:12.215420   14878 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/addons-815929/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:01:12.256827   14878 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/addons-815929/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:01:12.338301   14878 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/addons-815929/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:01:12.499843   14878 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/addons-815929/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:01:12.821206   14878 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/addons-815929/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:01:13.463290   14878 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/addons-815929/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:01:14.745236   14878 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/addons-815929/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:01:17.306821   14878 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/addons-815929/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:01:22.428388   14878 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/addons-815929/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:01:32.670447   14878 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/addons-815929/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:01:53.152235   14878 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/addons-815929/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:02:34.115231   14878 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/addons-815929/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:03:56.037157   14878 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/addons-815929/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-091565 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m24.52238751s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-091565 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (205.18s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (7.37s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-091565 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-091565 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-091565 -- rollout status deployment/busybox: (5.216945608s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-091565 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-091565 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-091565 -- exec busybox-7dff88458-45phf -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-091565 -- exec busybox-7dff88458-jjr2n -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-091565 -- exec busybox-7dff88458-xhmzx -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-091565 -- exec busybox-7dff88458-45phf -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-091565 -- exec busybox-7dff88458-jjr2n -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-091565 -- exec busybox-7dff88458-xhmzx -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-091565 -- exec busybox-7dff88458-45phf -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-091565 -- exec busybox-7dff88458-jjr2n -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-091565 -- exec busybox-7dff88458-xhmzx -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.37s)
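
The DeployApp pass runs the same lookups against every busybox replica. A compact equivalent, assuming the busybox deployment is the only workload in the default namespace (as it is in this test):

    for pod in $(kubectl --context ha-091565 get pods -o jsonpath='{.items[*].metadata.name}'); do
      # each replica must resolve both a public name and the in-cluster service name
      kubectl --context ha-091565 exec "$pod" -- nslookup kubernetes.io
      kubectl --context ha-091565 exec "$pod" -- nslookup kubernetes.default.svc.cluster.local
    done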

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.18s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-091565 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-091565 -- exec busybox-7dff88458-45phf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-091565 -- exec busybox-7dff88458-45phf -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-091565 -- exec busybox-7dff88458-jjr2n -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-091565 -- exec busybox-7dff88458-jjr2n -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-091565 -- exec busybox-7dff88458-xhmzx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-091565 -- exec busybox-7dff88458-xhmzx -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.18s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (56.95s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-091565 -v=7 --alsologtostderr
E0918 20:05:01.286715   14878 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/functional-790989/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:05:01.293178   14878 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/functional-790989/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:05:01.304648   14878 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/functional-790989/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:05:01.326145   14878 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/functional-790989/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:05:01.367662   14878 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/functional-790989/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:05:01.449231   14878 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/functional-790989/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:05:01.610869   14878 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/functional-790989/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:05:01.933085   14878 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/functional-790989/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:05:02.574715   14878 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/functional-790989/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:05:03.856927   14878 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/functional-790989/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:05:06.418865   14878 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/functional-790989/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:05:11.540170   14878 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/functional-790989/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:05:21.781449   14878 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/functional-790989/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-091565 -v=7 --alsologtostderr: (56.070587128s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-091565 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (56.95s)

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-091565 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.9s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.90s)

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (12.82s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-091565 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-091565 cp testdata/cp-test.txt ha-091565:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-091565 ssh -n ha-091565 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-091565 cp ha-091565:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3131576438/001/cp-test_ha-091565.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-091565 ssh -n ha-091565 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-091565 cp ha-091565:/home/docker/cp-test.txt ha-091565-m02:/home/docker/cp-test_ha-091565_ha-091565-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-091565 ssh -n ha-091565 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-091565 ssh -n ha-091565-m02 "sudo cat /home/docker/cp-test_ha-091565_ha-091565-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-091565 cp ha-091565:/home/docker/cp-test.txt ha-091565-m03:/home/docker/cp-test_ha-091565_ha-091565-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-091565 ssh -n ha-091565 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-091565 ssh -n ha-091565-m03 "sudo cat /home/docker/cp-test_ha-091565_ha-091565-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-091565 cp ha-091565:/home/docker/cp-test.txt ha-091565-m04:/home/docker/cp-test_ha-091565_ha-091565-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-091565 ssh -n ha-091565 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-091565 ssh -n ha-091565-m04 "sudo cat /home/docker/cp-test_ha-091565_ha-091565-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-091565 cp testdata/cp-test.txt ha-091565-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-091565 ssh -n ha-091565-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-091565 cp ha-091565-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3131576438/001/cp-test_ha-091565-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-091565 ssh -n ha-091565-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-091565 cp ha-091565-m02:/home/docker/cp-test.txt ha-091565:/home/docker/cp-test_ha-091565-m02_ha-091565.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-091565 ssh -n ha-091565-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-091565 ssh -n ha-091565 "sudo cat /home/docker/cp-test_ha-091565-m02_ha-091565.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-091565 cp ha-091565-m02:/home/docker/cp-test.txt ha-091565-m03:/home/docker/cp-test_ha-091565-m02_ha-091565-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-091565 ssh -n ha-091565-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-091565 ssh -n ha-091565-m03 "sudo cat /home/docker/cp-test_ha-091565-m02_ha-091565-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-091565 cp ha-091565-m02:/home/docker/cp-test.txt ha-091565-m04:/home/docker/cp-test_ha-091565-m02_ha-091565-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-091565 ssh -n ha-091565-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-091565 ssh -n ha-091565-m04 "sudo cat /home/docker/cp-test_ha-091565-m02_ha-091565-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-091565 cp testdata/cp-test.txt ha-091565-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-091565 ssh -n ha-091565-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-091565 cp ha-091565-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3131576438/001/cp-test_ha-091565-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-091565 ssh -n ha-091565-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-091565 cp ha-091565-m03:/home/docker/cp-test.txt ha-091565:/home/docker/cp-test_ha-091565-m03_ha-091565.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-091565 ssh -n ha-091565-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-091565 ssh -n ha-091565 "sudo cat /home/docker/cp-test_ha-091565-m03_ha-091565.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-091565 cp ha-091565-m03:/home/docker/cp-test.txt ha-091565-m02:/home/docker/cp-test_ha-091565-m03_ha-091565-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-091565 ssh -n ha-091565-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-091565 ssh -n ha-091565-m02 "sudo cat /home/docker/cp-test_ha-091565-m03_ha-091565-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-091565 cp ha-091565-m03:/home/docker/cp-test.txt ha-091565-m04:/home/docker/cp-test_ha-091565-m03_ha-091565-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-091565 ssh -n ha-091565-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-091565 ssh -n ha-091565-m04 "sudo cat /home/docker/cp-test_ha-091565-m03_ha-091565-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-091565 cp testdata/cp-test.txt ha-091565-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-091565 ssh -n ha-091565-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-091565 cp ha-091565-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3131576438/001/cp-test_ha-091565-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-091565 ssh -n ha-091565-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-091565 cp ha-091565-m04:/home/docker/cp-test.txt ha-091565:/home/docker/cp-test_ha-091565-m04_ha-091565.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-091565 ssh -n ha-091565-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-091565 ssh -n ha-091565 "sudo cat /home/docker/cp-test_ha-091565-m04_ha-091565.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-091565 cp ha-091565-m04:/home/docker/cp-test.txt ha-091565-m02:/home/docker/cp-test_ha-091565-m04_ha-091565-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-091565 ssh -n ha-091565-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-091565 ssh -n ha-091565-m02 "sudo cat /home/docker/cp-test_ha-091565-m04_ha-091565-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-091565 cp ha-091565-m04:/home/docker/cp-test.txt ha-091565-m03:/home/docker/cp-test_ha-091565-m04_ha-091565-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-091565 ssh -n ha-091565-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-091565 ssh -n ha-091565-m03 "sudo cat /home/docker/cp-test_ha-091565-m04_ha-091565-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.82s)
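
Every copy step above is the same two-part pattern: cp a file onto a node (or from node to node), then read it back over ssh on the destination. One round of that pattern, with names taken from the log:

    # host -> primary node, then verify
    out/minikube-linux-amd64 -p ha-091565 cp testdata/cp-test.txt ha-091565:/home/docker/cp-test.txt
    out/minikube-linux-amd64 -p ha-091565 ssh -n ha-091565 "sudo cat /home/docker/cp-test.txt"
    # node -> node, then verify on the destination
    out/minikube-linux-amd64 -p ha-091565 cp ha-091565:/home/docker/cp-test.txt ha-091565-m02:/home/docker/cp-test_ha-091565_ha-091565-m02.txt
    out/minikube-linux-amd64 -p ha-091565 ssh -n ha-091565-m02 "sudo cat /home/docker/cp-test_ha-091565_ha-091565-m02.txt"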

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (3.93s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.92972206s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (3.93s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (16.94s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-091565 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-091565 node delete m03 -v=7 --alsologtostderr: (16.14639475s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-091565 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (16.94s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.62s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.62s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (329.12s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-091565 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0918 20:17:35.240254   14878 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/addons-815929/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:20:01.289805   14878 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/functional-790989/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:21:12.176188   14878 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/addons-815929/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:21:24.352902   14878 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/functional-790989/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-091565 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (5m28.342395926s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-091565 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (329.12s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.66s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.66s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (76.17s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-091565 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-091565 --control-plane -v=7 --alsologtostderr: (1m15.292208358s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-091565 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (76.17s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.86s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.86s)

                                                
                                    
x
+
TestJSONOutput/start/Command (86.27s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-235983 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E0918 20:25:01.289999   14878 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/functional-790989/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-235983 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m26.270818818s)
--- PASS: TestJSONOutput/start/Command (86.27s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.69s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-235983 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.69s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.61s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-235983 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.61s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (7.37s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-235983 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-235983 --output=json --user=testUser: (7.36941576s)
--- PASS: TestJSONOutput/stop/Command (7.37s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.19s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-336759 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-336759 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (58.991143ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"68423b61-14bf-4861-89f3-a793b4357bd8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-336759] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"04e19343-281f-4361-941f-f44096914761","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19667"}}
	{"specversion":"1.0","id":"bacde193-88f1-444d-9581-3038e4ce4fcb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"095286c0-365a-44f1-b6b0-96c70e0017f0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19667-7671/kubeconfig"}}
	{"specversion":"1.0","id":"5928e027-42d0-42de-9b7e-b0512d1910e9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19667-7671/.minikube"}}
	{"specversion":"1.0","id":"0c95fffa-088b-4758-a8fa-6889871d3d59","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"2ff55508-5248-446c-9a3e-b727985c1940","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"acdd7703-6c7e-4d62-ab0a-7bd773f926aa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-336759" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-336759
--- PASS: TestErrorJSONOutput (0.19s)
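
Each stdout line above is a CloudEvents-style JSON object, so the error event can be pulled out of the stream mechanically; a sketch with jq (the filter is illustrative, the field names are the ones visible in the output above):

    # the 'fail' driver is intentionally unsupported, so minikube exits 56;
    # the filter keeps only the error event and prints its name and message
    out/minikube-linux-amd64 start -p json-output-error-336759 --memory=2200 --output=json --wait=true --driver=fail \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name + ": " + .data.message'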

                                                
                                    
x
+
TestMainNoArgs (0.04s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

                                                
                                    
x
+
TestMinikubeProfile (85.6s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-982562 --driver=kvm2  --container-runtime=crio
E0918 20:26:12.175308   14878 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/addons-815929/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-982562 --driver=kvm2  --container-runtime=crio: (39.879628072s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-997853 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-997853 --driver=kvm2  --container-runtime=crio: (42.908201541s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-982562
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-997853
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-997853" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-997853
helpers_test.go:175: Cleaning up "first-982562" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-982562
--- PASS: TestMinikubeProfile (85.60s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (29.11s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-095834 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-095834 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (28.104715773s)
--- PASS: TestMountStart/serial/StartWithMountFirst (29.11s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-095834 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-095834 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.37s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (25.31s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-108447 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-108447 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (24.313332787s)
--- PASS: TestMountStart/serial/StartWithMountSecond (25.31s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-108447 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-108447 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.37s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.7s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-095834 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.70s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-108447 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-108447 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.37s)

                                                
                                    
TestMountStart/serial/Stop (1.27s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-108447
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-108447: (1.274418721s)
--- PASS: TestMountStart/serial/Stop (1.27s)

                                                
                                    
TestMountStart/serial/RestartStopped (22.57s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-108447
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-108447: (21.56966829s)
--- PASS: TestMountStart/serial/RestartStopped (22.57s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-108447 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-108447 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.38s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (106.25s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-622675 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0918 20:30:01.286258   14878 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/functional-790989/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-622675 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m45.854061054s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-622675 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (106.25s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (6.64s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-622675 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-622675 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-622675 -- rollout status deployment/busybox: (5.06705063s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-622675 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-622675 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-622675 -- exec busybox-7dff88458-sxchh -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-622675 -- exec busybox-7dff88458-wj2cd -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-622675 -- exec busybox-7dff88458-sxchh -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-622675 -- exec busybox-7dff88458-wj2cd -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-622675 -- exec busybox-7dff88458-sxchh -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-622675 -- exec busybox-7dff88458-wj2cd -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.64s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.77s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-622675 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-622675 -- exec busybox-7dff88458-sxchh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-622675 -- exec busybox-7dff88458-sxchh -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-622675 -- exec busybox-7dff88458-wj2cd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-622675 -- exec busybox-7dff88458-wj2cd -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.77s)

                                                
                                    
TestMultiNode/serial/AddNode (51.54s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-622675 -v 3 --alsologtostderr
E0918 20:31:12.175951   14878 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/addons-815929/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-622675 -v 3 --alsologtostderr: (50.973862127s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-622675 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (51.54s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-622675 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.57s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.57s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.11s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-622675 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-622675 cp testdata/cp-test.txt multinode-622675:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-622675 ssh -n multinode-622675 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-622675 cp multinode-622675:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2019276691/001/cp-test_multinode-622675.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-622675 ssh -n multinode-622675 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-622675 cp multinode-622675:/home/docker/cp-test.txt multinode-622675-m02:/home/docker/cp-test_multinode-622675_multinode-622675-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-622675 ssh -n multinode-622675 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-622675 ssh -n multinode-622675-m02 "sudo cat /home/docker/cp-test_multinode-622675_multinode-622675-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-622675 cp multinode-622675:/home/docker/cp-test.txt multinode-622675-m03:/home/docker/cp-test_multinode-622675_multinode-622675-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-622675 ssh -n multinode-622675 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-622675 ssh -n multinode-622675-m03 "sudo cat /home/docker/cp-test_multinode-622675_multinode-622675-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-622675 cp testdata/cp-test.txt multinode-622675-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-622675 ssh -n multinode-622675-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-622675 cp multinode-622675-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2019276691/001/cp-test_multinode-622675-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-622675 ssh -n multinode-622675-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-622675 cp multinode-622675-m02:/home/docker/cp-test.txt multinode-622675:/home/docker/cp-test_multinode-622675-m02_multinode-622675.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-622675 ssh -n multinode-622675-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-622675 ssh -n multinode-622675 "sudo cat /home/docker/cp-test_multinode-622675-m02_multinode-622675.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-622675 cp multinode-622675-m02:/home/docker/cp-test.txt multinode-622675-m03:/home/docker/cp-test_multinode-622675-m02_multinode-622675-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-622675 ssh -n multinode-622675-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-622675 ssh -n multinode-622675-m03 "sudo cat /home/docker/cp-test_multinode-622675-m02_multinode-622675-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-622675 cp testdata/cp-test.txt multinode-622675-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-622675 ssh -n multinode-622675-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-622675 cp multinode-622675-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2019276691/001/cp-test_multinode-622675-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-622675 ssh -n multinode-622675-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-622675 cp multinode-622675-m03:/home/docker/cp-test.txt multinode-622675:/home/docker/cp-test_multinode-622675-m03_multinode-622675.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-622675 ssh -n multinode-622675-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-622675 ssh -n multinode-622675 "sudo cat /home/docker/cp-test_multinode-622675-m03_multinode-622675.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-622675 cp multinode-622675-m03:/home/docker/cp-test.txt multinode-622675-m02:/home/docker/cp-test_multinode-622675-m03_multinode-622675-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-622675 ssh -n multinode-622675-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-622675 ssh -n multinode-622675-m02 "sudo cat /home/docker/cp-test_multinode-622675-m03_multinode-622675-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.11s)

                                                
                                    
TestMultiNode/serial/StopNode (2.36s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-622675 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-622675 node stop m03: (1.524220303s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-622675 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-622675 status: exit status 7 (413.862481ms)

                                                
                                                
-- stdout --
	multinode-622675
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-622675-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-622675-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-622675 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-622675 status --alsologtostderr: exit status 7 (419.52595ms)

                                                
                                                
-- stdout --
	multinode-622675
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-622675-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-622675-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0918 20:31:22.773995   43805 out.go:345] Setting OutFile to fd 1 ...
	I0918 20:31:22.774230   43805 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 20:31:22.774238   43805 out.go:358] Setting ErrFile to fd 2...
	I0918 20:31:22.774242   43805 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 20:31:22.774441   43805 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19667-7671/.minikube/bin
	I0918 20:31:22.774611   43805 out.go:352] Setting JSON to false
	I0918 20:31:22.774641   43805 mustload.go:65] Loading cluster: multinode-622675
	I0918 20:31:22.774745   43805 notify.go:220] Checking for updates...
	I0918 20:31:22.775019   43805 config.go:182] Loaded profile config "multinode-622675": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0918 20:31:22.775037   43805 status.go:174] checking status of multinode-622675 ...
	I0918 20:31:22.775470   43805 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 20:31:22.775508   43805 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 20:31:22.794833   43805 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34791
	I0918 20:31:22.795288   43805 main.go:141] libmachine: () Calling .GetVersion
	I0918 20:31:22.795946   43805 main.go:141] libmachine: Using API Version  1
	I0918 20:31:22.795976   43805 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 20:31:22.796341   43805 main.go:141] libmachine: () Calling .GetMachineName
	I0918 20:31:22.796532   43805 main.go:141] libmachine: (multinode-622675) Calling .GetState
	I0918 20:31:22.798104   43805 status.go:364] multinode-622675 host status = "Running" (err=<nil>)
	I0918 20:31:22.798120   43805 host.go:66] Checking if "multinode-622675" exists ...
	I0918 20:31:22.798386   43805 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 20:31:22.798422   43805 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 20:31:22.813403   43805 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40457
	I0918 20:31:22.813720   43805 main.go:141] libmachine: () Calling .GetVersion
	I0918 20:31:22.814135   43805 main.go:141] libmachine: Using API Version  1
	I0918 20:31:22.814158   43805 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 20:31:22.814452   43805 main.go:141] libmachine: () Calling .GetMachineName
	I0918 20:31:22.814617   43805 main.go:141] libmachine: (multinode-622675) Calling .GetIP
	I0918 20:31:22.817374   43805 main.go:141] libmachine: (multinode-622675) DBG | domain multinode-622675 has defined MAC address 52:54:00:b9:25:86 in network mk-multinode-622675
	I0918 20:31:22.817778   43805 main.go:141] libmachine: (multinode-622675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:25:86", ip: ""} in network mk-multinode-622675: {Iface:virbr1 ExpiryTime:2024-09-18 21:28:42 +0000 UTC Type:0 Mac:52:54:00:b9:25:86 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:multinode-622675 Clientid:01:52:54:00:b9:25:86}
	I0918 20:31:22.817803   43805 main.go:141] libmachine: (multinode-622675) DBG | domain multinode-622675 has defined IP address 192.168.39.106 and MAC address 52:54:00:b9:25:86 in network mk-multinode-622675
	I0918 20:31:22.817942   43805 host.go:66] Checking if "multinode-622675" exists ...
	I0918 20:31:22.818333   43805 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 20:31:22.818377   43805 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 20:31:22.833820   43805 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46401
	I0918 20:31:22.834280   43805 main.go:141] libmachine: () Calling .GetVersion
	I0918 20:31:22.834812   43805 main.go:141] libmachine: Using API Version  1
	I0918 20:31:22.834835   43805 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 20:31:22.835117   43805 main.go:141] libmachine: () Calling .GetMachineName
	I0918 20:31:22.835275   43805 main.go:141] libmachine: (multinode-622675) Calling .DriverName
	I0918 20:31:22.835430   43805 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0918 20:31:22.835447   43805 main.go:141] libmachine: (multinode-622675) Calling .GetSSHHostname
	I0918 20:31:22.838100   43805 main.go:141] libmachine: (multinode-622675) DBG | domain multinode-622675 has defined MAC address 52:54:00:b9:25:86 in network mk-multinode-622675
	I0918 20:31:22.838513   43805 main.go:141] libmachine: (multinode-622675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:25:86", ip: ""} in network mk-multinode-622675: {Iface:virbr1 ExpiryTime:2024-09-18 21:28:42 +0000 UTC Type:0 Mac:52:54:00:b9:25:86 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:multinode-622675 Clientid:01:52:54:00:b9:25:86}
	I0918 20:31:22.838543   43805 main.go:141] libmachine: (multinode-622675) DBG | domain multinode-622675 has defined IP address 192.168.39.106 and MAC address 52:54:00:b9:25:86 in network mk-multinode-622675
	I0918 20:31:22.838682   43805 main.go:141] libmachine: (multinode-622675) Calling .GetSSHPort
	I0918 20:31:22.838867   43805 main.go:141] libmachine: (multinode-622675) Calling .GetSSHKeyPath
	I0918 20:31:22.839033   43805 main.go:141] libmachine: (multinode-622675) Calling .GetSSHUsername
	I0918 20:31:22.839172   43805 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/multinode-622675/id_rsa Username:docker}
	I0918 20:31:22.914959   43805 ssh_runner.go:195] Run: systemctl --version
	I0918 20:31:22.920717   43805 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0918 20:31:22.937080   43805 kubeconfig.go:125] found "multinode-622675" server: "https://192.168.39.106:8443"
	I0918 20:31:22.937127   43805 api_server.go:166] Checking apiserver status ...
	I0918 20:31:22.937172   43805 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 20:31:22.951013   43805 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1058/cgroup
	W0918 20:31:22.962161   43805 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1058/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0918 20:31:22.962236   43805 ssh_runner.go:195] Run: ls
	I0918 20:31:22.968860   43805 api_server.go:253] Checking apiserver healthz at https://192.168.39.106:8443/healthz ...
	I0918 20:31:22.974988   43805 api_server.go:279] https://192.168.39.106:8443/healthz returned 200:
	ok
	I0918 20:31:22.975011   43805 status.go:456] multinode-622675 apiserver status = Running (err=<nil>)
	I0918 20:31:22.975020   43805 status.go:176] multinode-622675 status: &{Name:multinode-622675 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0918 20:31:22.975035   43805 status.go:174] checking status of multinode-622675-m02 ...
	I0918 20:31:22.975315   43805 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 20:31:22.975350   43805 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 20:31:22.991147   43805 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38465
	I0918 20:31:22.991630   43805 main.go:141] libmachine: () Calling .GetVersion
	I0918 20:31:22.992183   43805 main.go:141] libmachine: Using API Version  1
	I0918 20:31:22.992204   43805 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 20:31:22.992502   43805 main.go:141] libmachine: () Calling .GetMachineName
	I0918 20:31:22.992678   43805 main.go:141] libmachine: (multinode-622675-m02) Calling .GetState
	I0918 20:31:22.994247   43805 status.go:364] multinode-622675-m02 host status = "Running" (err=<nil>)
	I0918 20:31:22.994260   43805 host.go:66] Checking if "multinode-622675-m02" exists ...
	I0918 20:31:22.994554   43805 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 20:31:22.994585   43805 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 20:31:23.010234   43805 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44559
	I0918 20:31:23.010699   43805 main.go:141] libmachine: () Calling .GetVersion
	I0918 20:31:23.011160   43805 main.go:141] libmachine: Using API Version  1
	I0918 20:31:23.011175   43805 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 20:31:23.011455   43805 main.go:141] libmachine: () Calling .GetMachineName
	I0918 20:31:23.011653   43805 main.go:141] libmachine: (multinode-622675-m02) Calling .GetIP
	I0918 20:31:23.014305   43805 main.go:141] libmachine: (multinode-622675-m02) DBG | domain multinode-622675-m02 has defined MAC address 52:54:00:b1:d0:63 in network mk-multinode-622675
	I0918 20:31:23.014742   43805 main.go:141] libmachine: (multinode-622675-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:d0:63", ip: ""} in network mk-multinode-622675: {Iface:virbr1 ExpiryTime:2024-09-18 21:29:39 +0000 UTC Type:0 Mac:52:54:00:b1:d0:63 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:multinode-622675-m02 Clientid:01:52:54:00:b1:d0:63}
	I0918 20:31:23.014766   43805 main.go:141] libmachine: (multinode-622675-m02) DBG | domain multinode-622675-m02 has defined IP address 192.168.39.224 and MAC address 52:54:00:b1:d0:63 in network mk-multinode-622675
	I0918 20:31:23.014923   43805 host.go:66] Checking if "multinode-622675-m02" exists ...
	I0918 20:31:23.015227   43805 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 20:31:23.015262   43805 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 20:31:23.031748   43805 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32853
	I0918 20:31:23.032255   43805 main.go:141] libmachine: () Calling .GetVersion
	I0918 20:31:23.032693   43805 main.go:141] libmachine: Using API Version  1
	I0918 20:31:23.032714   43805 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 20:31:23.033000   43805 main.go:141] libmachine: () Calling .GetMachineName
	I0918 20:31:23.033157   43805 main.go:141] libmachine: (multinode-622675-m02) Calling .DriverName
	I0918 20:31:23.033340   43805 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0918 20:31:23.033367   43805 main.go:141] libmachine: (multinode-622675-m02) Calling .GetSSHHostname
	I0918 20:31:23.035836   43805 main.go:141] libmachine: (multinode-622675-m02) DBG | domain multinode-622675-m02 has defined MAC address 52:54:00:b1:d0:63 in network mk-multinode-622675
	I0918 20:31:23.036265   43805 main.go:141] libmachine: (multinode-622675-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:d0:63", ip: ""} in network mk-multinode-622675: {Iface:virbr1 ExpiryTime:2024-09-18 21:29:39 +0000 UTC Type:0 Mac:52:54:00:b1:d0:63 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:multinode-622675-m02 Clientid:01:52:54:00:b1:d0:63}
	I0918 20:31:23.036293   43805 main.go:141] libmachine: (multinode-622675-m02) DBG | domain multinode-622675-m02 has defined IP address 192.168.39.224 and MAC address 52:54:00:b1:d0:63 in network mk-multinode-622675
	I0918 20:31:23.036456   43805 main.go:141] libmachine: (multinode-622675-m02) Calling .GetSSHPort
	I0918 20:31:23.036642   43805 main.go:141] libmachine: (multinode-622675-m02) Calling .GetSSHKeyPath
	I0918 20:31:23.036790   43805 main.go:141] libmachine: (multinode-622675-m02) Calling .GetSSHUsername
	I0918 20:31:23.036924   43805 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19667-7671/.minikube/machines/multinode-622675-m02/id_rsa Username:docker}
	I0918 20:31:23.118804   43805 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0918 20:31:23.132536   43805 status.go:176] multinode-622675-m02 status: &{Name:multinode-622675-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0918 20:31:23.132569   43805 status.go:174] checking status of multinode-622675-m03 ...
	I0918 20:31:23.132893   43805 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0918 20:31:23.132935   43805 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0918 20:31:23.148650   43805 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44491
	I0918 20:31:23.149164   43805 main.go:141] libmachine: () Calling .GetVersion
	I0918 20:31:23.149722   43805 main.go:141] libmachine: Using API Version  1
	I0918 20:31:23.149742   43805 main.go:141] libmachine: () Calling .SetConfigRaw
	I0918 20:31:23.150097   43805 main.go:141] libmachine: () Calling .GetMachineName
	I0918 20:31:23.150271   43805 main.go:141] libmachine: (multinode-622675-m03) Calling .GetState
	I0918 20:31:23.152127   43805 status.go:364] multinode-622675-m03 host status = "Stopped" (err=<nil>)
	I0918 20:31:23.152142   43805 status.go:377] host is not running, skipping remaining checks
	I0918 20:31:23.152147   43805 status.go:176] multinode-622675-m03 status: &{Name:multinode-622675-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.36s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (39.43s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-622675 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-622675 node start m03 -v=7 --alsologtostderr: (38.801980848s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-622675 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (39.43s)

                                                
                                    
TestMultiNode/serial/DeleteNode (2.32s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-622675 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-622675 node delete m03: (1.799196666s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-622675 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.32s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (179.56s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-622675 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0918 20:41:12.175222   14878 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/addons-815929/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-622675 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m59.026157249s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-622675 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (179.56s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (46.77s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-622675
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-622675-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-622675-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (57.656843ms)

                                                
                                                
-- stdout --
	* [multinode-622675-m02] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19667
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19667-7671/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19667-7671/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-622675-m02' is duplicated with machine name 'multinode-622675-m02' in profile 'multinode-622675'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-622675-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-622675-m03 --driver=kvm2  --container-runtime=crio: (45.65365511s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-622675
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-622675: exit status 80 (207.892214ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-622675 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-622675-m03 already exists in multinode-622675-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-622675-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (46.77s)

                                                
                                    
TestScheduledStopUnix (110.6s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-983524 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-983524 --memory=2048 --driver=kvm2  --container-runtime=crio: (39.00292301s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-983524 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-983524 -n scheduled-stop-983524
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-983524 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0918 20:47:21.943900   14878 retry.go:31] will retry after 132.677µs: open /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/scheduled-stop-983524/pid: no such file or directory
I0918 20:47:21.945079   14878 retry.go:31] will retry after 174.393µs: open /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/scheduled-stop-983524/pid: no such file or directory
I0918 20:47:21.946203   14878 retry.go:31] will retry after 188.044µs: open /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/scheduled-stop-983524/pid: no such file or directory
I0918 20:47:21.947343   14878 retry.go:31] will retry after 355.068µs: open /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/scheduled-stop-983524/pid: no such file or directory
I0918 20:47:21.948471   14878 retry.go:31] will retry after 258.823µs: open /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/scheduled-stop-983524/pid: no such file or directory
I0918 20:47:21.949618   14878 retry.go:31] will retry after 701.491µs: open /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/scheduled-stop-983524/pid: no such file or directory
I0918 20:47:21.950771   14878 retry.go:31] will retry after 740.275µs: open /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/scheduled-stop-983524/pid: no such file or directory
I0918 20:47:21.951891   14878 retry.go:31] will retry after 1.893126ms: open /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/scheduled-stop-983524/pid: no such file or directory
I0918 20:47:21.954074   14878 retry.go:31] will retry after 1.72935ms: open /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/scheduled-stop-983524/pid: no such file or directory
I0918 20:47:21.956370   14878 retry.go:31] will retry after 4.467322ms: open /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/scheduled-stop-983524/pid: no such file or directory
I0918 20:47:21.961636   14878 retry.go:31] will retry after 7.978426ms: open /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/scheduled-stop-983524/pid: no such file or directory
I0918 20:47:21.969901   14878 retry.go:31] will retry after 5.336522ms: open /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/scheduled-stop-983524/pid: no such file or directory
I0918 20:47:21.976196   14878 retry.go:31] will retry after 15.558499ms: open /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/scheduled-stop-983524/pid: no such file or directory
I0918 20:47:21.992500   14878 retry.go:31] will retry after 10.675529ms: open /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/scheduled-stop-983524/pid: no such file or directory
I0918 20:47:22.003824   14878 retry.go:31] will retry after 35.152896ms: open /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/scheduled-stop-983524/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-983524 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-983524 -n scheduled-stop-983524
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-983524
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-983524 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-983524
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-983524: exit status 7 (65.257892ms)

                                                
                                                
-- stdout --
	scheduled-stop-983524
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-983524 -n scheduled-stop-983524
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-983524 -n scheduled-stop-983524: exit status 7 (61.700613ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-983524" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-983524
--- PASS: TestScheduledStopUnix (110.60s)

                                                
                                    
TestRunningBinaryUpgrade (146.98s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.1038858425 start -p running-upgrade-261780 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.1038858425 start -p running-upgrade-261780 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m7.873078315s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-261780 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-261780 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m15.637657351s)
helpers_test.go:175: Cleaning up "running-upgrade-261780" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-261780
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-261780: (1.178388474s)
--- PASS: TestRunningBinaryUpgrade (146.98s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-341744 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-341744 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (81.161179ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-341744] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19667
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19667-7671/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19667-7671/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (117.94s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-341744 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-341744 --driver=kvm2  --container-runtime=crio: (1m57.679078944s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-341744 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (117.94s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (17.17s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-341744 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-341744 --no-kubernetes --driver=kvm2  --container-runtime=crio: (15.467222079s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-341744 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-341744 status -o json: exit status 2 (254.25303ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-341744","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-341744
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-341744: (1.444047355s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (17.17s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (2.31s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.31s)

                                                
                                    
TestNoKubernetes/serial/Start (27.78s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-341744 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-341744 --no-kubernetes --driver=kvm2  --container-runtime=crio: (27.774920594s)
--- PASS: TestNoKubernetes/serial/Start (27.78s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (159.33s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2698222857 start -p stopped-upgrade-293139 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
E0918 20:50:55.247496   14878 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/addons-815929/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:51:12.175447   14878 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/addons-815929/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.2698222857 start -p stopped-upgrade-293139 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m40.891332924s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.2698222857 -p stopped-upgrade-293139 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.2698222857 -p stopped-upgrade-293139 stop: (11.51783792s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-293139 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-293139 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (46.917145729s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (159.33s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-341744 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-341744 "sudo systemctl is-active --quiet service kubelet": exit status 1 (197.317055ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.20s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.26s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.26s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-341744
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-341744: (1.288526402s)
--- PASS: TestNoKubernetes/serial/Stop (1.29s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (22.42s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-341744 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-341744 --driver=kvm2  --container-runtime=crio: (22.415736458s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (22.42s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-341744 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-341744 "sudo systemctl is-active --quiet service kubelet": exit status 1 (214.220168ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

                                                
                                    
TestPause/serial/Start (91.14s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-543700 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-543700 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m31.144621171s)
--- PASS: TestPause/serial/Start (91.14s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.94s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-293139
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.94s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (3.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-543581 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-543581 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (119.328441ms)

                                                
                                                
-- stdout --
	* [false-543581] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19667
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19667-7671/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19667-7671/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0918 20:54:12.625191   55836 out.go:345] Setting OutFile to fd 1 ...
	I0918 20:54:12.625322   55836 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 20:54:12.625332   55836 out.go:358] Setting ErrFile to fd 2...
	I0918 20:54:12.625339   55836 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 20:54:12.625575   55836 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19667-7671/.minikube/bin
	I0918 20:54:12.626374   55836 out.go:352] Setting JSON to false
	I0918 20:54:12.627439   55836 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5797,"bootTime":1726687056,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0918 20:54:12.627575   55836 start.go:139] virtualization: kvm guest
	I0918 20:54:12.630269   55836 out.go:177] * [false-543581] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0918 20:54:12.632858   55836 out.go:177]   - MINIKUBE_LOCATION=19667
	I0918 20:54:12.632875   55836 notify.go:220] Checking for updates...
	I0918 20:54:12.636133   55836 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 20:54:12.637745   55836 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19667-7671/kubeconfig
	I0918 20:54:12.639382   55836 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19667-7671/.minikube
	I0918 20:54:12.640735   55836 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0918 20:54:12.642181   55836 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0918 20:54:12.644210   55836 config.go:182] Loaded profile config "force-systemd-flag-108667": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0918 20:54:12.644323   55836 config.go:182] Loaded profile config "kubernetes-upgrade-878094": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0918 20:54:12.644434   55836 config.go:182] Loaded profile config "pause-543700": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0918 20:54:12.644547   55836 driver.go:394] Setting default libvirt URI to qemu:///system
	I0918 20:54:12.686524   55836 out.go:177] * Using the kvm2 driver based on user configuration
	I0918 20:54:12.687674   55836 start.go:297] selected driver: kvm2
	I0918 20:54:12.687694   55836 start.go:901] validating driver "kvm2" against <nil>
	I0918 20:54:12.687720   55836 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 20:54:12.690013   55836 out.go:201] 
	W0918 20:54:12.691270   55836 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0918 20:54:12.692642   55836 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-543581 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-543581

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-543581

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-543581

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-543581

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-543581

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-543581

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-543581

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-543581

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-543581

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-543581

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-543581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-543581"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-543581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-543581"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-543581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-543581"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-543581

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-543581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-543581"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-543581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-543581"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-543581" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-543581" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-543581" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-543581" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-543581" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-543581" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-543581" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-543581" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-543581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-543581"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-543581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-543581"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-543581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-543581"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-543581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-543581"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-543581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-543581"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-543581" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-543581" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-543581" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-543581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-543581"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-543581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-543581"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-543581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-543581"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-543581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-543581"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-543581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-543581"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 18 Sep 2024 20:53:50 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.39.184:8443
  name: pause-543700
contexts:
- context:
    cluster: pause-543700
    extensions:
    - extension:
        last-update: Wed, 18 Sep 2024 20:53:50 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: pause-543700
  name: pause-543700
current-context: pause-543700
kind: Config
preferences: {}
users:
- name: pause-543700
  user:
    client-certificate: /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/pause-543700/client.crt
    client-key: /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/pause-543700/client.key
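This kubeconfig only knows the pause-543700 cluster and context, which is why every probe above and below against the never-started false-543581 profile reports "context was not found". A hedged sketch of confirming that programmatically with client-go's clientcmd loader (the kubeconfig path is the KUBECONFIG from this run; illustrative only, not part of the suite):

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the same kubeconfig the debug logs dumped above.
	cfg, err := clientcmd.LoadFromFile("/home/jenkins/minikube-integration/19667-7671/kubeconfig")
	if err != nil {
		panic(err)
	}
	fmt.Println("current-context:", cfg.CurrentContext)
	for name := range cfg.Contexts {
		fmt.Println("known context:", name)
	}
	// A profile that was never started (e.g. false-543581) simply has no entry here,
	// hence kubectl's "context was not found for specified context" errors.
	if _, ok := cfg.Contexts["false-543581"]; !ok {
		fmt.Println("no context named false-543581, as expected")
	}
}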

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-543581

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-543581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-543581"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-543581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-543581"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-543581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-543581"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-543581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-543581"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-543581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-543581"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-543581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-543581"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-543581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-543581"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-543581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-543581"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-543581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-543581"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-543581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-543581"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-543581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-543581"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-543581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-543581"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-543581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-543581"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-543581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-543581"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-543581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-543581"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-543581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-543581"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-543581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-543581"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-543581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-543581"

                                                
                                                
----------------------- debugLogs end: false-543581 [took: 2.912574158s] --------------------------------
helpers_test.go:175: Cleaning up "false-543581" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-543581
--- PASS: TestNetworkPlugins/group/false (3.19s)
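The exit status 14 (MK_USAGE) above is the expected outcome: minikube refuses --cni=false when the container runtime is crio, because cri-o needs some CNI to be configured. Purely as an illustration, the same start call with an explicit CNI (bridge here, one of the values minikube accepts) would get past that validation; the binary path and profile name are assumptions carried over from this run:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Request an explicit CNI instead of --cni=false so the crio runtime check passes.
	cmd := exec.Command("out/minikube-linux-amd64", "start", "-p", "false-543581",
		"--memory=2048", "--cni=bridge", "--driver=kvm2", "--container-runtime=crio")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("start failed:", err)
	}
}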

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (105.1s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-331658 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-331658 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (1m45.100636795s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (105.10s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (91.78s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-255556 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
E0918 20:56:12.175478   14878 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/addons-815929/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-255556 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (1m31.78102119s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (91.78s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (58.82s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-828868 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-828868 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (58.822815059s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (58.82s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (10.32s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-331658 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [fd5604f9-f8ae-4012-884d-ff45e1238741] Pending
helpers_test.go:344: "busybox" [fd5604f9-f8ae-4012-884d-ff45e1238741] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [fd5604f9-f8ae-4012-884d-ff45e1238741] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.00732388s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-331658 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.32s)
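The DeployApp step is essentially "apply testdata/busybox.yaml, then poll until the pod labelled integration-test=busybox reports Ready". Outside the harness, roughly the same wait can be expressed with kubectl wait; a small sketch (context name from this run, the 8m timeout mirrors the wait the log reports):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Block until the busybox pod created from testdata/busybox.yaml reports Ready,
	// mirroring the label-based wait the test performs in the default namespace.
	cmd := exec.Command("kubectl", "--context", "no-preload-331658",
		"wait", "--for=condition=Ready", "pod",
		"-l", "integration-test=busybox", "--timeout=8m0s")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("pod never became Ready:", err)
	}
}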

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.08s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-331658 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-331658 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.003346546s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-331658 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.08s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.3s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-828868 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [8d628346-7765-42af-b8af-8026a2f784d7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [8d628346-7765-42af-b8af-8026a2f784d7] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 11.00488871s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-828868 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.30s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (10.28s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-255556 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [829efa8e-5fd5-45b8-9e03-06a6e528b268] Pending
helpers_test.go:344: "busybox" [829efa8e-5fd5-45b8-9e03-06a6e528b268] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [829efa8e-5fd5-45b8-9e03-06a6e528b268] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.004689312s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-255556 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.28s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.97s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-828868 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-828868 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.97s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.99s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-255556 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-255556 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.99s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (643.95s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-331658 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-331658 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (10m43.698996538s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-331658 -n no-preload-331658
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (643.95s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (567.87s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-828868 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-828868 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (9m27.606872131s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-828868 -n default-k8s-diff-port-828868
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (567.87s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (586.64s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-255556 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-255556 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (9m46.37712842s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-255556 -n embed-certs-255556
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (586.64s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (5.34s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-740194 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-740194 --alsologtostderr -v=3: (5.336279077s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (5.34s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-740194 -n old-k8s-version-740194
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-740194 -n old-k8s-version-740194: exit status 7 (64.035642ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-740194 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)
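The "status error: exit status 7 (may be ok)" line reflects that minikube status encodes cluster state in its exit code rather than signalling only failures, so a non-zero exit with the Host reported as Stopped is the expected result right after a stop. A hedged sketch of tolerating that, again with os/exec (the exact exit-code meanings are minikube's and are not asserted here):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// After "minikube stop", status exits non-zero by design; inspect the printed
	// state instead of treating any non-zero code as a hard failure.
	cmd := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{.Host}}", "-p", "old-k8s-version-740194")
	out, err := cmd.CombinedOutput()
	state := strings.TrimSpace(string(out))
	if state == "Stopped" {
		fmt.Println("host is stopped, non-zero exit is expected:", err)
		return
	}
	fmt.Println("host state:", state, "err:", err)
}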

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (43.82s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-560575 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
E0918 21:25:01.286609   14878 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/functional-790989/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-560575 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (43.818583339s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (43.82s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (87.84s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-543581 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-543581 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m27.843660366s)
--- PASS: TestNetworkPlugins/group/auto/Start (87.84s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.11s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-560575 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-560575 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.10631083s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.11s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (10.39s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-560575 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-560575 --alsologtostderr -v=3: (10.392476158s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.39s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-560575 -n newest-cni-560575
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-560575 -n newest-cni-560575: exit status 7 (65.48269ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-560575 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (45.11s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-560575 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-560575 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (44.849846282s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-560575 -n newest-cni-560575
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (45.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (67.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-543581 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-543581 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m7.134640001s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (67.13s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-560575 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.47s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-560575 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-560575 -n newest-cni-560575
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-560575 -n newest-cni-560575: exit status 2 (244.030717ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-560575 -n newest-cni-560575
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-560575 -n newest-cni-560575: exit status 2 (253.096594ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-560575 --alsologtostderr -v=1
E0918 21:26:12.176094   14878 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/addons-815929/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-560575 -n newest-cni-560575
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-560575 -n newest-cni-560575
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.47s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (100.83s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-543581 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-543581 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m40.830386762s)
--- PASS: TestNetworkPlugins/group/calico/Start (100.83s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (105.86s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-543581 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-543581 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m45.854923425s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (105.86s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-543581 "pgrep -a kubelet"
I0918 21:26:34.848226   14878 config.go:182] Loaded profile config "auto-543581": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (14.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-543581 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-sm4bt" [873fe77c-3a02-41ee-96a1-f9849f23ac35] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-sm4bt" [873fe77c-3a02-41ee-96a1-f9849f23ac35] Running
E0918 21:26:48.869369   14878 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/no-preload-331658/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:26:48.875765   14878 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/no-preload-331658/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:26:48.887218   14878 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/no-preload-331658/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:26:48.908717   14878 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/no-preload-331658/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:26:48.950162   14878 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/no-preload-331658/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:26:49.031652   14878 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/no-preload-331658/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 14.004428152s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (14.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-543581 exec deployment/netcat -- nslookup kubernetes.default
E0918 21:26:49.193000   14878 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/no-preload-331658/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestNetworkPlugins/group/auto/DNS (0.16s)
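The DNS assertion is an in-cluster nslookup of kubernetes.default executed through the netcat deployment's dnsutils container. A minimal equivalent outside the harness (context name taken from this run) might be:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Resolve the kubernetes.default service from inside the cluster via the
	// netcat deployment, as the test does.
	cmd := exec.Command("kubectl", "--context", "auto-543581",
		"exec", "deployment/netcat", "--", "nslookup", "kubernetes.default")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("in-cluster DNS lookup failed:", err)
	}
}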

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-543581 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-543581 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E0918 21:26:49.515125   14878 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/no-preload-331658/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (88.05s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-543581 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
E0918 21:27:09.364099   14878 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/no-preload-331658/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-543581 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m28.04817249s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (88.05s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-l2kcg" [5a78dfeb-7ec8-42f0-ba29-4afc51a1bc6e] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004645622s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-543581 "pgrep -a kubelet"
I0918 21:27:20.591841   14878 config.go:182] Loaded profile config "kindnet-543581": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (10.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-543581 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-ncg6k" [59dcc78e-2611-40fb-abe0-f46f61f0e6ba] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-ncg6k" [59dcc78e-2611-40fb-abe0-f46f61f0e6ba] Running
E0918 21:27:28.477865   14878 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/default-k8s-diff-port-828868/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:27:28.484670   14878 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/default-k8s-diff-port-828868/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:27:28.496099   14878 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/default-k8s-diff-port-828868/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:27:28.517553   14878 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/default-k8s-diff-port-828868/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:27:28.558968   14878 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/default-k8s-diff-port-828868/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:27:28.640329   14878 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/default-k8s-diff-port-828868/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:27:28.801654   14878 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/default-k8s-diff-port-828868/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:27:29.123961   14878 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/default-k8s-diff-port-828868/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:27:29.765599   14878 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/default-k8s-diff-port-828868/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:27:29.846439   14878 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/no-preload-331658/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.005273933s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-543581 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-543581 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
E0918 21:27:31.047741   14878 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/default-k8s-diff-port-828868/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-543581 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (81.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-543581 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-543581 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m21.163139163s)
--- PASS: TestNetworkPlugins/group/flannel/Start (81.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-7btnt" [4c45a2cb-5ddf-419f-9b86-18019d79b60d] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.012177206s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-543581 "pgrep -a kubelet"
I0918 21:28:01.355511   14878 config.go:182] Loaded profile config "calico-543581": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (14.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-543581 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-7hgtr" [f1eaf7f7-0184-4ed1-ac96-ce3c59e65299] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0918 21:28:04.362471   14878 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/functional-790989/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:28:09.455359   14878 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/default-k8s-diff-port-828868/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-7hgtr" [f1eaf7f7-0184-4ed1-ac96-ce3c59e65299] Running
E0918 21:28:10.808246   14878 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/no-preload-331658/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 14.005521258s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (14.36s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-543581 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-543581 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-543581 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-543581 "pgrep -a kubelet"
I0918 21:28:18.439904   14878 config.go:182] Loaded profile config "custom-flannel-543581": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-543581 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-k5fcx" [4f8b0f0e-c7d9-4444-b996-2b530d156ce9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0918 21:28:18.761415   14878 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/old-k8s-version-740194/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:28:18.767793   14878 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/old-k8s-version-740194/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:28:18.779182   14878 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/old-k8s-version-740194/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:28:18.800649   14878 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/old-k8s-version-740194/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:28:18.842042   14878 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/old-k8s-version-740194/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:28:18.923754   14878 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/old-k8s-version-740194/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:28:19.085702   14878 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/old-k8s-version-740194/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:28:19.407451   14878 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/old-k8s-version-740194/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:28:20.049469   14878 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/old-k8s-version-740194/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:28:21.331740   14878 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/old-k8s-version-740194/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-k5fcx" [4f8b0f0e-c7d9-4444-b996-2b530d156ce9] Running
E0918 21:28:23.893680   14878 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/old-k8s-version-740194/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.005459691s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-543581 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-543581 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
E0918 21:28:29.015926   14878 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/old-k8s-version-740194/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-543581 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-543581 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-543581 replace --force -f testdata/netcat-deployment.yaml
I0918 21:28:34.038408   14878 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-c6282" [27a871b9-f151-4208-b649-9729b93301cb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-c6282" [27a871b9-f151-4208-b649-9729b93301cb] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.005558972s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (53.62s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-543581 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
E0918 21:28:39.257810   14878 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/old-k8s-version-740194/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-543581 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (53.6208533s)
--- PASS: TestNetworkPlugins/group/bridge/Start (53.62s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-543581 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-543581 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-543581 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-kprkk" [d4066edd-ee97-405e-b51b-dfedbac9fccd] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004609707s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-543581 "pgrep -a kubelet"
I0918 21:29:17.018252   14878 config.go:182] Loaded profile config "flannel-543581": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (11.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-543581 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-7nw5t" [a017a0fb-c420-4858-b3b4-a02269992a61] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-7nw5t" [a017a0fb-c420-4858-b3b4-a02269992a61] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.003545061s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-543581 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-543581 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-543581 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-543581 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (11.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-543581 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-z8g5p" [17126855-a394-476c-8978-15fed823c83b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0918 21:29:32.730134   14878 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/no-preload-331658/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-z8g5p" [17126855-a394-476c-8978-15fed823c83b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.00510278s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-543581 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-543581 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-543581 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.14s)

                                                
                                    

Test skip (37/312)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.31.1/cached-images 0
15 TestDownloadOnly/v1.31.1/binaries 0
16 TestDownloadOnly/v1.31.1/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0
38 TestAddons/parallel/Olm 0
48 TestDockerFlags 0
51 TestDockerEnvContainerd 0
53 TestHyperKitDriverInstallOrUpdate 0
54 TestHyperkitDriverSkipUpgrade 0
105 TestFunctional/parallel/DockerEnv 0
106 TestFunctional/parallel/PodmanEnv 0
143 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
144 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
145 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
146 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
147 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
148 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
149 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
150 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
154 TestGvisorAddon 0
176 TestImageBuild 0
203 TestKicCustomNetwork 0
204 TestKicExistingNetwork 0
205 TestKicCustomSubnet 0
206 TestKicStaticIP 0
238 TestChangeNoneUser 0
241 TestScheduledStopWindows 0
243 TestSkaffold 0
245 TestInsufficientStorage 0
249 TestMissingContainerUpgrade 0
267 TestStartStop/group/disable-driver-mounts 0.14
274 TestNetworkPlugins/group/kubenet 3.15
282 TestNetworkPlugins/group/cilium 3.33
x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:879: skipping: crio not supported
--- SKIP: TestAddons/serial/Volcano (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.14s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-335923" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-335923
--- SKIP: TestStartStop/group/disable-driver-mounts (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-543581 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-543581

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-543581

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-543581

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-543581

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-543581

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-543581

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-543581

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-543581

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-543581

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-543581

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-543581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-543581"

>>> host: /etc/hosts:
* Profile "kubenet-543581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-543581"

>>> host: /etc/resolv.conf:
* Profile "kubenet-543581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-543581"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-543581

>>> host: crictl pods:
* Profile "kubenet-543581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-543581"

>>> host: crictl containers:
* Profile "kubenet-543581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-543581"

>>> k8s: describe netcat deployment:
error: context "kubenet-543581" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-543581" does not exist

>>> k8s: netcat logs:
error: context "kubenet-543581" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-543581" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-543581" does not exist

>>> k8s: coredns logs:
error: context "kubenet-543581" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-543581" does not exist

>>> k8s: api server logs:
error: context "kubenet-543581" does not exist

>>> host: /etc/cni:
* Profile "kubenet-543581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-543581"

>>> host: ip a s:
* Profile "kubenet-543581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-543581"

>>> host: ip r s:
* Profile "kubenet-543581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-543581"

>>> host: iptables-save:
* Profile "kubenet-543581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-543581"

>>> host: iptables table nat:
* Profile "kubenet-543581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-543581"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-543581" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-543581" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-543581" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-543581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-543581"

>>> host: kubelet daemon config:
* Profile "kubenet-543581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-543581"

>>> k8s: kubelet logs:
* Profile "kubenet-543581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-543581"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-543581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-543581"

                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-543581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-543581"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 18 Sep 2024 20:53:50 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.39.184:8443
  name: pause-543700
contexts:
- context:
    cluster: pause-543700
    extensions:
    - extension:
        last-update: Wed, 18 Sep 2024 20:53:50 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: pause-543700
  name: pause-543700
current-context: pause-543700
kind: Config
preferences: {}
users:
- name: pause-543700
  user:
    client-certificate: /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/pause-543700/client.crt
    client-key: /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/pause-543700/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-543581

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-543581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-543581"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-543581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-543581"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-543581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-543581"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-543581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-543581"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-543581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-543581"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-543581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-543581"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-543581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-543581"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-543581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-543581"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-543581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-543581"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-543581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-543581"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-543581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-543581"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-543581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-543581"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-543581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-543581"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-543581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-543581"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-543581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-543581"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-543581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-543581"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-543581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-543581"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-543581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-543581"

                                                
                                                
----------------------- debugLogs end: kubenet-543581 [took: 2.976050655s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-543581" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-543581
--- SKIP: TestNetworkPlugins/group/kubenet (3.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-543581 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-543581

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-543581

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-543581

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-543581

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-543581

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-543581

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-543581

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-543581

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-543581

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-543581

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-543581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-543581"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-543581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-543581"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-543581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-543581"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-543581

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-543581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-543581"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-543581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-543581"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-543581" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-543581" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-543581" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-543581" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-543581" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-543581" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-543581" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-543581" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-543581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-543581"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-543581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-543581"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-543581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-543581"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-543581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-543581"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-543581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-543581"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-543581

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-543581

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-543581" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-543581" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-543581

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-543581

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-543581" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-543581" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-543581" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-543581" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-543581" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-543581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-543581"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-543581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-543581"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-543581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-543581"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-543581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-543581"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-543581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-543581"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19667-7671/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 18 Sep 2024 20:53:50 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.39.184:8443
  name: pause-543700
contexts:
- context:
    cluster: pause-543700
    extensions:
    - extension:
        last-update: Wed, 18 Sep 2024 20:53:50 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: pause-543700
  name: pause-543700
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-543700
  user:
    client-certificate: /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/pause-543700/client.crt
    client-key: /home/jenkins/minikube-integration/19667-7671/.minikube/profiles/pause-543700/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-543581

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-543581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-543581"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-543581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-543581"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-543581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-543581"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-543581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-543581"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-543581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-543581"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-543581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-543581"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-543581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-543581"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-543581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-543581"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-543581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-543581"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-543581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-543581"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-543581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-543581"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-543581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-543581"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-543581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-543581"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-543581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-543581"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-543581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-543581"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-543581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-543581"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-543581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-543581"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-543581" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-543581"

                                                
                                                
----------------------- debugLogs end: cilium-543581 [took: 3.19252909s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-543581" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-543581
--- SKIP: TestNetworkPlugins/group/cilium (3.33s)
